Servers and thread models

The following concept worries me: most books and articles describe reliable servers as multithreaded, and the most common design is to start a new thread to serve each new client, i.e., one thread per connection. But how is this implemented in large systems? If a server accepts requests from 100,000 clients, does it really start 100,000 threads? Is that realistic? Are there no limits on the number of threads a server can run? And doesn't the overhead of context switching and synchronization hurt performance? Or is it implemented as a mixture of queues and threads? In that case, is the number of queues fixed? Can someone enlighten me about this and maybe point me to a good link that describes it?

Thanks!

+3
6 answers

A common method is to use thread pools. A thread pool is a collection of already created threads. When a new request arrives at the server, it is assigned a spare thread from the pool. When the request is processed, the thread returns to the pool.
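A minimal sketch of the thread-pool pattern described above: a fixed set of worker threads serves all requests, so the server never creates one thread per client. The handler and client IDs here are illustrative, not from the original answer.

```python
# Thread-pool pattern: N reusable workers serve M >> N requests.
from concurrent.futures import ThreadPoolExecutor

def handle_request(client_id):
    # Placeholder for real work (parse the request, hit the DB, build a reply).
    return f"response for client {client_id}"

# 4 workers serve 100 "clients"; the 4 threads are reused, 100 are never created.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle_request, range(100)))

print(len(results))  # all 100 requests were served
```

When the `with` block exits, the pool waits for outstanding work and shuts the workers down, which is the "thread returns to the pool / pool is torn down" lifecycle in a few lines.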

The pool size is tuned to the workload: if a request spends most of its time blocked on DB IO, a larger pool keeps the server busy, while for CPU-bound work a pool close to the number of cores is usually enough.

Google "thread pool" for more details on the various implementations.

+3

Have a look at SEDA (Staged Event-Driven Architecture): it structures a server as a set of stages connected by queues, each stage with its own small thread pool.
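A hedged sketch of the SEDA idea: each stage is an event queue plus a small pool of worker threads, and stages hand events downstream. The stage names and handlers below are made up for illustration.

```python
# SEDA-style pipeline: stage = inbox queue + worker threads + outbox queue.
import queue
import threading

def run_stage(inbox, handler, outbox, workers=2):
    def worker():
        while True:
            item = inbox.get()
            if item is None:      # shutdown signal
                inbox.put(None)   # propagate to sibling workers
                break
            if outbox is not None:
                outbox.put(handler(item))
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    return threads

parse_q, respond_q, done_q = queue.Queue(), queue.Queue(), queue.Queue()

# Two hypothetical stages: "parse" then "respond".
parse_threads = run_stage(parse_q, lambda raw: raw.upper(), respond_q)
respond_threads = run_stage(respond_q, lambda req: f"reply:{req}", done_q)

for msg in ["get /a", "get /b"]:
    parse_q.put(msg)

replies = sorted(done_q.get() for _ in range(2))
print(replies)  # ['reply:GET /A', 'reply:GET /B']

parse_q.put(None)                 # shut the pipeline down stage by stage
for t in parse_threads:
    t.join()
respond_q.put(None)
for t in respond_threads:
    t.join()
```

Because each stage has its own queue, a slow stage simply backs up its queue instead of stalling the whole server, which is the property SEDA is built around.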

+3

Another option, when the bottleneck is IO rather than computation, is an event-driven design built around select() (or epoll/kqueue): a single thread multiplexes many connections, handling whichever sockets are ready without blocking on any one of them.

This avoids both the per-thread memory cost and the context-switching overhead of a thread-per-connection design.
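A minimal sketch of a select()-based event loop: one thread watches several sockets and echoes back whatever arrives. In-process socketpairs stand in for client connections here, purely for illustration.

```python
# Event-driven IO: one thread, many sockets, select() picks the ready ones.
import select
import socket

# Two in-process socket pairs stand in for two client connections.
pairs = [socket.socketpair() for _ in range(2)]
server_ends = [srv for srv, _cli in pairs]

pairs[0][1].send(b"hello")
pairs[1][1].send(b"world")

echoed = 0
while echoed < 2:
    readable, _, _ = select.select(server_ends, [], [], 1.0)
    for sock in readable:
        data = sock.recv(1024)  # guaranteed not to block: select said it's ready
        sock.send(data)         # echo back to the "client"
        echoed += 1

replies = sorted(cli.recv(1024) for _srv, cli in pairs)
print(replies)  # [b'hello', b'world']

for srv, cli in pairs:
    srv.close()
    cli.close()
```

The same single-threaded loop scales to thousands of sockets; in production code select() is usually swapped for epoll or kqueue (e.g. via Python's `selectors` module), since select() itself degrades with large descriptor sets.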

+3


If you want the classic reference on this trade-off (often framed as "thread per connection" versus "event driven"), search for the C10K problem; it surveys the main server architectures.

+1

Serving 10k clients does not necessarily mean 10k simultaneously active requests; at any given moment only a fraction of the connections have work pending.

So even with 10,000 open connections, a much smaller pool of worker threads is usually enough to keep up.

Depending on the client-side implementation, the 10,000 clients may not each need to keep a TCP connection open; depending on the purpose, the protocol design can significantly improve the implementation's efficiency.

I think the right solution for a high-performance system is highly domain specific, and if you would like a concrete recommendation, you would need to explain more about your problem domain.

0

Source: https://habr.com/ru/post/1761016/
