Keeping 140 TCP connections open?

We are currently exploring the most efficient way to communicate between 120-140 embedded hardware devices running the .NET Micro Framework and a server.

Each embedded device must regularly send information to, and request information from, the server in near real time over TCP.

My question is this: is it better to open 140 TCP connections to the server and keep them alive, or to open a new connection for each request to and from a device? Will maintaining 140 open TCP connections overload the server?

When the server detects new data in the database, it needs to push that data to one or more devices (each piece of data is targeted at specific devices). If I keep 140 connections open, I will have to look up the correct connection every time I need to send something, rather than simply connecting to the IP:PORT associated with the new data.

This is perhaps a stupid question, but is it actually possible to hold 140 TCP connections open on a single server port?

Any suggestions and comments appreciated!

+4
3 answers

Usually you can connect far more than 140 "clients" to a server (given a decent network, hardware and RAM)...
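And yes, they can all be accepted on one listening port, since TCP distinguishes each connection by the remote client's IP:PORT. Here is a minimal accept-loop sketch; the class names, port number and backlog value are illustrative assumptions, not taken from the question:

```csharp
// Sketch only: one listening port accepting many concurrent connections.
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class AcceptLoop
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Any, 9000);
        listener.Start(200); // backlog with room for 140+ pending connects

        while (true)
        {
            TcpClient client = listener.AcceptTcpClient();
            // Each device is distinguished by its remote endpoint,
            // even though all of them connected to the same local port.
            Console.WriteLine("Connected: " + client.Client.RemoteEndPoint);
            Task.Factory.StartNew(() => HandleDevice(client));
        }
    }

    static void HandleDevice(TcpClient client)
    {
        // Per-device receive loop would go here.
    }
}
```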

I recommend always testing such things under realistic scenarios (load and so on) before deciding, since there are aspects such as the network (throughput, stability...), the hardware (server RAM, etc.) and the software (what exactly does the server do?) that only you can evaluate.

Depending on your protocol, you may (or even should) add some timeout/reconnect mechanism.

The lookup you have in mind will be very fast: just use a ConcurrentDictionary to store the necessary information with IP:PORT as the key (assuming the server is running on the full .NET 4).
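To make that concrete, here is a minimal sketch of such a registry. The ConnectionRegistry class and its members are illustrative, not an existing API; it assumes the server keeps one TcpClient per device:

```csharp
// Sketch only: a connection registry keyed by "IP:PORT".
using System.Collections.Concurrent;
using System.Net.Sockets;

public class ConnectionRegistry
{
    private readonly ConcurrentDictionary<string, TcpClient> _connections =
        new ConcurrentDictionary<string, TcpClient>();

    // Register (or replace) the live connection for a device endpoint.
    public void Register(string ipAndPort, TcpClient client)
    {
        _connections[ipAndPort] = client;
    }

    // O(1) lookup when new data for a specific device appears in the database.
    public bool TryGetConnection(string ipAndPort, out TcpClient client)
    {
        return _connections.TryGetValue(ipAndPort, out client);
    }

    // Remove a dead connection so it can be re-registered on reconnect.
    public void Remove(string ipAndPort)
    {
        TcpClient removed;
        _connections.TryRemove(ipAndPort, out removed);
    }
}
```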


EDIT (in response to the comments):

Maintaining a TCP/IP connection does not require much processing per client... it does require some memory. I would recommend a small test (with 1-2 clients) to verify that assumption for your specific case.

+2

In general, you are better off keeping the connections open for as long as possible. If every device opens a new connection each time it sends a message, you can end up effectively DoS'ing your own server, since it will accumulate lots of sockets in the TIME_WAIT state, taking up space in its tables.

I worked on a system that had multiple clients talking to a server, and even though the clients could be power-cycled regularly, it was still better to keep the connection open (and re-establish it when it dropped and a new message had to be sent). You may have to write slightly more complex code, but I found it well worth it to reduce the load on the server.

Modern operating systems may have larger buffers than the ones on which I actually ran into this DoS effect, but churning through many short-lived connections like that is a bad idea in principle.

Things can be moderately complex on the client side, especially when the device tends to drop connections transparently to the application, because that means connections get torn down while the application still believes they are open. When we did this, we ended up with fairly complex networking code, since we had to take it for granted that sockets could (and would) fail, and simply establish a new connection and retry sending the message. Once that code works, you push it down into your libraries and forget about it.
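As a rough illustration of that pattern, here is a minimal sketch of a sender that treats a dead socket as normal and reconnects before retrying. It assumes the desktop .NET TcpClient API (on the .NET Micro Framework you would do the same with the raw Socket class), and all names are illustrative:

```csharp
// Sketch only: reconnect-and-retry on send failure.
using System;
using System.Net.Sockets;

public class ResilientSender
{
    private readonly string _host;
    private readonly int _port;
    private TcpClient _client;

    public ResilientSender(string host, int port)
    {
        _host = host;
        _port = port;
    }

    public void Send(byte[] message, int maxAttempts = 2)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            try
            {
                if (_client == null || !_client.Connected)
                {
                    _client = new TcpClient(_host, _port); // (re)connect
                }
                _client.GetStream().Write(message, 0, message.Length);
                return; // sent successfully
            }
            catch (Exception ex)
            {
                if (!(ex is SocketException) && !(ex is System.IO.IOException))
                    throw;
                // The socket died while we thought it was open: drop it
                // and let the next attempt reconnect.
                if (_client != null) { _client.Close(); _client = null; }
                if (attempt == maxAttempts) throw;
            }
        }
    }
}
```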

In practice, our original application had even more complex code, because it dealt with a networking library that was half-hearted about devices going down and tried to resend failed messages itself; sometimes that meant the same message was sent twice. We ended up adding an extra communication layer to avoid the duplication. If you use C# or plain BSD-style sockets, you should not have that problem, I assume; ours was a proprietary library that handled reconnection but caused headaches with its retries and unsuitable default timeouts.
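For what it's worth, such a de-duplication layer can be as simple as a sequence number per message. A minimal sketch under that assumption (all names illustrative, not from the system described above):

```csharp
// Sketch only: tag each message with a sequence id so a retried
// send is not processed twice on the receiving side.
using System.IO;

public static class DedupProtocol
{
    private static long _lastSeen = -1; // highest id processed so far

    // Sender side: prefix the payload with a monotonically increasing id.
    public static byte[] Frame(long sequenceId, byte[] payload)
    {
        using (var ms = new MemoryStream())
        using (var w = new BinaryWriter(ms))
        {
            w.Write(sequenceId);
            w.Write(payload.Length);
            w.Write(payload);
            return ms.ToArray();
        }
    }

    // Receiver side: ignore anything at or below the last id seen.
    public static byte[] Unframe(byte[] frame)
    {
        using (var ms = new MemoryStream(frame))
        using (var r = new BinaryReader(ms))
        {
            long id = r.ReadInt64();
            int len = r.ReadInt32();
            byte[] payload = r.ReadBytes(len);
            if (id <= _lastSeen) return null; // duplicate of a retried send
            _lastSeen = id;
            return payload;
        }
    }
}
```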

+3

If you are talking about a system with hardware devices, I suggest closing the connection each time the client finishes sending its data.

For the client to receive updates from the server, it can wait up to 5 seconds for any data after it has sent its own. If data arrives within that window, process it and close the connection; if not, close the connection and wait until the next data set is ready to send. A rough sketch of such an exchange follows.
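A minimal sketch of one short-lived exchange per cycle, assuming the desktop .NET Socket API (the .NET Micro Framework subset is similar but worth verifying); the buffer size and method names are illustrative:

```csharp
// Sketch only: connect, send, wait up to 5 s for a reply, then close.
using System;
using System.Net.Sockets;

public static class OneShotExchange
{
    public static byte[] SendAndMaybeReceive(string host, int port, byte[] payload)
    {
        using (var socket = new Socket(AddressFamily.InterNetwork,
                                       SocketType.Stream, ProtocolType.Tcp))
        {
            socket.Connect(host, port);
            socket.Send(payload);

            // Wait up to 5 seconds for a reply before giving up.
            socket.ReceiveTimeout = 5000;
            var buffer = new byte[1024];
            try
            {
                int read = socket.Receive(buffer);
                if (read > 0)
                {
                    var reply = new byte[read];
                    Array.Copy(buffer, reply, read);
                    return reply; // process after the socket is closed
                }
            }
            catch (SocketException)
            {
                // Timeout: no update from the server this cycle.
            }
            return null;
        } // the socket is closed here either way; reconnect next cycle
    }
}
```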

This makes scaling much easier. Keeping connections permanently open always consumes resources and, in my opinion, is unnecessary unless this is some kind of life-critical device, such as a heart-rate monitor, an oxygen-supply monitor, etc.

+2

Source: https://habr.com/ru/post/1379278/
