You have a few problems here.
2) I have a thread pool. When a game client connects to the server, I create a client instance and bind it to one of the threads from the pool. So we have a one-to-many relationship: one thread, many clients. Round-robin is used to pick which thread to bind to.
You did not mention asynchronous I/O in any of your items, so I believe your real bottleneck is not the number of threads but the fact that threads get blocked on I/O. Using truly asynchronous I/O (which is not the same as running a blocking I/O call on another thread) will increase your server's throughput by a huge margin.
3) I use libev to manage all events inside the server. This means that whenever a client instance needs to receive data from the game client over the network, process a request, or send data over the network, it blocks the thread bound to that game client. While it is doing so, other clients bound to the same thread are blocked.
Again, without asynchronous I/O this architecture has very poor server-side throughput (à la Apache). For maximum performance your threads should perform only CPU-bound tasks and should never wait on I/O.
So, the thread pool is the application's bottleneck. To increase the number of simultaneous players who can play without lag, I need to increase the number of threads in the thread pool.
Wrong. Read about the C10K problem.
Now the question: if I increase the number of threads in the thread pool, will server performance increase or decrease (context-switching overhead vs. clients blocked waiting for a thread)?
The rule of thumb that the number of threads should equal the number of cores holds only when your threads perform purely CPU-bound tasks, are never blocked, and are 100% saturated with CPU work. If your threads can also block on locks or I/O, the rule breaks down.
If we look at common server-side architectures, we can work out which design fits best.
Apache-style architecture:
A fixed-size thread pool, dispatching one thread per connection from the connection queue. No asynchronous I/O.
Pros: none.
Cons: extremely poor throughput.
Nginx / Node.js architecture:
A single-threaded, multi-process application using asynchronous I/O.
Pros: a simple architecture that sidesteps multithreading issues. Great for servers serving static data.
Cons: if the processes have to share data, a huge amount of CPU time is burned on serialize-transfer-deserialize between processes. Also, a well-written multi-threaded application can outperform it.
Modern .NET architecture:
A multi-threaded application using asynchronous I/O.
Pros: done right, performance can skyrocket!
Cons: it is somewhat harder to build a multi-threaded application correctly without corrupting your data.
So, to summarize: in your particular case I think you should definitely use asynchronous I/O only, combined with a thread pool whose size equals the number of cores.
If you are on Linux, Facebook's Proxygen can handle everything we have talked about (multi-threaded code with asynchronous I/O). Hey, Facebook uses it!