In general, you are better off keeping connections open for as long as possible. If every device opens a new connection each time it sends a message, you can effectively DoS your own server: it ends up with many sockets sitting in the TIME_WAIT state, taking up space in its tables.
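To make that concrete, here is a minimal C# sketch of the connect-per-message anti-pattern (the helper name, host, and port are all made up for illustration):

```csharp
using System.Net.Sockets;
using System.Text;

// Anti-pattern: a brand-new TCP connection for every message. Each call
// pays for a full handshake, and whichever side closes first keeps the
// socket in TIME_WAIT for a couple of minutes, so the tables fill up.
static void SendOneShot(string host, int port, string message)
{
    using var client = new TcpClient(host, port); // connect
    NetworkStream stream = client.GetStream();
    byte[] payload = Encoding.UTF8.GetBytes(message);
    stream.Write(payload, 0, payload.Length);
} // Dispose closes the connection; the socket lingers after this point

SendOneShot("example.com", 9000, "hello"); // illustrative host and port
```

Multiply that by thousands of devices sending frequently and the server's connection table fills up with sockets it cannot reuse yet.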
I worked on a system with a number of clients talking to a server, and even though the clients could be powered on and off regularly, it still worked out better to keep the connection open (and re-establish it whenever it dropped and a new message had to be sent); there is a sketch of this further down. You may have to write somewhat more complex code, but I found it worth it to reduce the load on the server.
Modern operating systems may have larger buffers than the ones on which I actually ran into this DoS effect, but in principle opening lots of short-lived connections like this is still a bad idea.
Things can get relatively complex on the client side, especially when the device can power down transparently to the application, because that means connections get dropped while the application still believes they are open. When we did this we ended up with fairly complex network code: we had to take it as a given that sockets could (and would) fail, in which case we simply established a new connection and tried sending the message again. You push that code down into your libraries and forget about it once it works.
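As a rough illustration of what that library code looks like, here is a C# sketch; the class name and the retry-once policy are my own assumptions, not the original system:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

// Hypothetical wrapper: keep one connection open, and when a send fails,
// throw the dead connection away, reconnect, and retry the same payload.
sealed class PersistentSender : IDisposable
{
    private readonly string _host;
    private readonly int _port;
    private TcpClient? _client;

    public PersistentSender(string host, int port)
    {
        _host = host;
        _port = port;
    }

    public void Send(string message)
    {
        byte[] payload = Encoding.UTF8.GetBytes(message);
        try
        {
            Stream().Write(payload, 0, payload.Length);
        }
        catch (Exception ex) when (ex is SocketException or IOException)
        {
            // The connection died while we thought it was open (device
            // slept, network dropped): discard it, reconnect, retry once.
            _client?.Dispose();
            _client = null;
            Stream().Write(payload, 0, payload.Length);
        }
    }

    // Lazily (re)connects, so a dropped socket is replaced on the next send.
    private NetworkStream Stream()
    {
        _client ??= new TcpClient(_host, _port);
        return _client.GetStream();
    }

    public void Dispose() => _client?.Dispose();
}
```

The application then just calls Send() per message and never sees the reconnects; a real version would add backoff and a retry limit rather than retrying exactly once.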
In practice our original application had even more complex code still, because it was built on a network library that went only halfway towards handling devices powering off: it tried to resend failed messages itself, which sometimes meant the same message was sent twice. We ended up adding an extra layer to the communication protocol to filter out the duplicates. If you use C# or regular BSD-style sockets you should not have that particular problem, I suspect; it was a proprietary library that managed the reconnections but caused headaches with its retries and unsuitable default timeouts.
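The original protocol was proprietary, so I can't show how that deduplication layer actually worked, but the usual trick is to tag every message with an ID and have the receiver remember the IDs it has already seen. A minimal sketch under that assumption (the GUID IDs and the window size are mine):

```csharp
using System;
using System.Collections.Generic;

// Receiver-side duplicate filter: remembers the last N message IDs so a
// resend of an already-delivered message can be recognized and dropped.
sealed class Deduplicator
{
    private readonly HashSet<Guid> _seen = new();
    private readonly Queue<Guid> _order = new();
    private readonly int _capacity;

    public Deduplicator(int capacity = 1024) => _capacity = capacity;

    // Returns true the first time an ID is seen, false for duplicates.
    public bool IsFirstDelivery(Guid messageId)
    {
        if (!_seen.Add(messageId))
            return false; // duplicate from a retry: ignore it

        _order.Enqueue(messageId);
        if (_order.Count > _capacity)       // bound memory: forget old IDs
            _seen.Remove(_order.Dequeue());
        return true;
    }
}
```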