How many requests can a port handle at a time?

I am creating a web application with a login page where many users can log in at the same time, so I need to handle many concurrent requests.

I know that this has already been solved for a number of popular sites, such as Google Talk.

So, I have some questions in my mind.

"How many requests can a port handle at one time?"

How many sockets can a server create? Are there any restrictions?

For example, when we implement client/server communication with TCP socket programming, we pass a port number (an unreserved port number) to the server to create a socket.
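To make the question concrete, here is a minimal sketch of that setup in Java (class and variable names are mine, not from any answer below): the server binds a socket to a port, and a client connects to it using the server's IP address and that port number. Port 0 asks the OS to pick a free unreserved port.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class PortDemo {
    public static void main(String[] args) throws IOException {
        // Bind the server socket; port 0 lets the OS choose a free port.
        try (ServerSocket server = new ServerSocket(0)) {
            int port = server.getLocalPort();

            // A client connects using the server's address and port number.
            try (Socket client = new Socket("127.0.0.1", port);
                 Socket accepted = server.accept()) {
                client.getOutputStream().write('x');
                int received = accepted.getInputStream().read();
                System.out.println(received == 'x'); // prints "true"
            }
        }
    }
}
```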

So if 100,000 requests arrived at one time, how would the port deal with them?

Does it maintain a queue for all these requests, or does it just accept as many requests as its limit allows? If so, what is that limit on queued requests?

Summary: I want to know how a server serves many requests at the same time. I am new to this topic. I know that we connect to a server through its IP address and port number, so I assumed that there is only one port, and that requests from many different clients all arrive at that single port, with the server managing them all. Is that correct?

That is all I want to know. If you can explain this concept in detail, it would be very helpful. Thanks in advance.

4 answers

A port does not process requests; it receives packets. Depending on the server implementation, these packets may be handled by one or by several processes/threads, so in theory there is no limit. In practice, you will always be constrained by bandwidth and processing performance.

If packets arrive at a port faster than they can be processed, they are buffered (by the server, the operating system, or the hardware). If those buffers fill up, congestion can be dealt with by network components (routers, switches) and by the protocols the traffic is based on. TCP, for example, has mechanisms for congestion avoidance and control: http://en.wikipedia.org/wiki/Transmission_Control_Protocol#Congestion_control
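One of those buffers is directly configurable from application code. A sketch, assuming Java's standard `ServerSocket` (the class name `BacklogDemo` is mine): the second constructor argument sets the listen backlog, the OS-level queue of established connections waiting for `accept()`.

```java
import java.io.IOException;
import java.net.ServerSocket;

public class BacklogDemo {
    public static void main(String[] args) throws IOException {
        // The second argument is the listen backlog: how many fully
        // established connections the OS queues before accept() drains them.
        // The OS may silently clamp this value (e.g. to net.core.somaxconn
        // on Linux), so treat it as a hint, not a guarantee.
        int backlog = 128;
        try (ServerSocket server = new ServerSocket(0, backlog)) {
            System.out.println(server.isBound()); // prints "true"
        }
    }
}
```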


This is usually configured in the web server or application server you use. Typically you limit the number of concurrent requests by limiting the number of concurrent worker threads the server uses to serve requests. If more requests arrive than there are threads available to process them, they start to queue. The second thing you usually configure is the size of the socket backlog: when the backlog is full, the server starts responding with "connection refused" to new requests.
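Both knobs described above can be sketched in plain Java (the class name and the specific numbers are illustrative, not from any particular server): a fixed-size thread pool bounds concurrent workers, and the `ServerSocket` backlog bounds the OS connection queue.

```java
import java.io.IOException;
import java.net.ServerSocket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolServer {
    public static void main(String[] args) throws IOException {
        // At most 10 requests are handled concurrently; further accepted
        // connections wait in the executor's queue, and connections beyond
        // the socket backlog (50 here) are refused by the OS.
        ExecutorService workers = Executors.newFixedThreadPool(10);
        try (ServerSocket server = new ServerSocket(0, 50)) {
            System.out.println("listening on " + server.getLocalPort());
            // Accept loop sketch (would run until shutdown):
            // while (true) {
            //     Socket client = server.accept();
            //     workers.submit(() -> handle(client));
            // }
        }
        workers.shutdown();
    }
}
```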


You will then likely be limited by the number of file descriptors the OS supports (on *nix systems) or by the number of concurrent connections your web server supports. The OS maximum on my machine is 75,000.


100,000 concurrent connections should be easily achievable in Java if you use something like Netty.

You must be able to:

  • Accept incoming connections quickly enough. An NIO framework, which is what Netty uses internally, helps enormously here. There is a small queue for incoming requests, so you need to accept them faster than the queue can fill up.
  • Create a connection for each client (this implies some memory overhead for things like connection state, buffers, etc.) - you may need to tune your JVM to have enough free memory for all the connections.

See an article from 2009 for a discussion of achieving 100,000 concurrent connections with approximately 20% CPU usage on a quad-core server.
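The core idea behind Netty's scalability is non-blocking I/O multiplexed through a selector. A minimal sketch using plain `java.nio` (the class name is mine; this is not Netty's API, only the mechanism it builds on): one selector watches many channels from a single thread, instead of dedicating a thread per client.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.ServerSocketChannel;

public class NioSketch {
    public static void main(String[] args) throws IOException {
        // One selector multiplexes many non-blocking channels in a single
        // thread, which is how NIO frameworks scale to many thousands of
        // connections without a thread per client.
        Selector selector = Selector.open();
        ServerSocketChannel server = ServerSocketChannel.open();
        server.bind(new InetSocketAddress(0));
        server.configureBlocking(false);
        server.register(selector, SelectionKey.OP_ACCEPT);
        System.out.println(selector.keys().size()); // prints "1"
        // Event loop sketch (would run until shutdown):
        // while (selector.select() > 0) {
        //     for (SelectionKey key : selector.selectedKeys()) {
        //         ... accept new clients, read/write ready channels ...
        //     }
        // }
        server.close();
        selector.close();
    }
}
```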


Source: https://habr.com/ru/post/1398992/

