Websockets have several big downsides in terms of scalability that ajax avoids. Since ajax sends a request, gets a response, and closes the connection (or shortly after), a user sitting idle on the page doesn't consume server resources. Websockets are designed to stream data back to the browser, and to do that they hold a connection open, which ties up server resources. Servers have a limit on how many connections they can keep open at the same time, and depending on your server technology each socket may also tie up a thread. So websockets put more demanding resource requirements on both parties for each connection. You can easily exhaust all of your connections servicing clients, and then new clients can't connect just because users are sitting idle on the page. This is where nodejs, vertx, netty can really help, but even those have upper limits.
Then there is the issue of the underlying socket being stateful, and having to write stateful code on both sides to carry on the conversation, which is not something you have to do with the ajax style because it's stateless. Websockets force you to build a low-level protocol that ajax solves for you. Things like heartbeats, closing idle connections, reconnecting on errors, and so on are vital now. These are things you didn't have to deal with when using AJAX because it was stateless. Heartbeats are very important for the stability of your application and, more importantly, for the health of your server. This is not trivial. In pre-HTTP days we built lots of stateful TCP protocols (FTP, telnet, SSH), and then HTTP came along. And hardly anyone did that any more, because even with its limitations HTTP was surprisingly simpler and more robust. Websockets bring back both the good and the bad of stateful protocols. You'll find that out soon enough if you haven't gotten a dose of it yet.
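To make that concrete, here is a minimal sketch of the kind of connection bookkeeping a stateful socket forces on you. It's illustrative only: the Connection interface and HeartbeatMonitor class are hypothetical stand-ins for whatever your websocket library actually gives you, and the 30-second ping and 60-second idle limit are arbitrary numbers.

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical wrapper around a websocket session.
    interface Connection {
        void sendPing();            // send a heartbeat frame to the peer
        long lastActivityMillis();  // when we last heard from the peer
        void close();               // tear the connection down
    }

    class HeartbeatMonitor {
        private static final long IDLE_LIMIT_MS = 60_000;
        private final ScheduledExecutorService timer =
                Executors.newSingleThreadScheduledExecutor();

        // Ping every 30 seconds and drop the connection if the peer has been
        // silent longer than IDLE_LIMIT_MS. With plain ajax none of this code
        // exists: the connection is gone as soon as the response is written.
        void watch(Connection conn) {
            timer.scheduleAtFixedRate(() -> {
                if (System.currentTimeMillis() - conn.lastActivityMillis() > IDLE_LIMIT_MS) {
                    conn.close();
                } else {
                    conn.sendPing();
                }
            }, 30, 30, TimeUnit.SECONDS);
        }
    }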
If you need real-time streaming, that extra overhead is justified, because polling the server for streaming data is worse. But if all you are doing is UI -> request -> response -> update UI, then ajax is simpler and uses fewer resources, because once the response is sent the conversation is over and no further server resources are used. So I think it's a trade-off, and the architect has to decide which tool fits their problem. AJAX has its place, and websockets have their place.
Update
So your server architecture matters when we talk about threads. If you use a traditional multi-threaded server (or processes), where each socket connection gets its own thread to answer requests, then websockets matter a lot to you. For each connection we have a socket, and eventually the OS will fall over if you have too many of them, and the same goes for threads (more so for processes). Threads are heavier than sockets (in terms of resources), so we try to limit how many threads are running at the same time. That means creating a thread pool, which is just a fixed set of threads shared across all sockets. But once a socket opens, the thread is tied up for the entire conversation. The length of those conversations governs how quickly you can recycle threads for new incoming sockets. The length of your conversations governs how far you can scale. If you saturate this model, it doesn't scale. You have to break the thread-per-socket design.
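Here is what that thread-per-connection model looks like in plain Java, as a toy sketch (port and pool size are arbitrary). Each accepted socket occupies one pooled worker for the whole conversation; once 100 clients are just sitting on the page, the 101st client's work only queues up behind them.

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class BlockingServer {
        public static void main(String[] args) throws IOException {
            ExecutorService pool = Executors.newFixedThreadPool(100);
            try (ServerSocket server = new ServerSocket(8080)) {
                while (true) {
                    Socket client = server.accept();     // one socket per client
                    pool.submit(() -> handle(client));   // one thread tied up per socket
                }
            }
        }

        static void handle(Socket client) {
            try (client) {
                // read/write on blocking streams for as long as the conversation
                // lasts; this worker thread is unavailable until the client goes away
                client.getInputStream().transferTo(client.getOutputStream());
            } catch (IOException ignored) { }
        }
    }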
The request/response model of HTTP makes it very efficient at recycling threads for new sockets. If you are just going to do request/response, use HTTP; it's already built and far easier than reinventing something like it on top of websockets.
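Just to show the shape of that interaction, here is the ajax-style exchange at its barest using Java's built-in HTTP client (the URL is a placeholder): one request, one response, and nothing left behind on either side afterwards.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class RequestResponse {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest
                    .newBuilder(URI.create("https://example.com/api/items"))
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            // once the response is consumed the exchange is over: no heartbeat,
            // no reconnect logic, no per-user thread sitting around on the server
        }
    }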
Since websockets don't have to be request/response like HTTP and can stream data both ways, if your server has a fixed number of threads in its thread pool and you have that same number of websockets tying all of those threads up in active conversations, you cannot serve new clients! You've hit maximum capacity. So the protocol you design matters too when you mix websockets and threads. Your protocol might let you loosen the thread-per-socket-per-conversation model so that people just sitting there don't use up a thread on your server, as in the sketch below.
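As a rough sketch of that "thread per message, not per socket" idea: imagine the network layer (Netty, NIO, whatever you use) decoding websocket frames and handing them to a dispatcher like the hypothetical one below. A worker thread is only occupied while a message is being processed, never while a user just sits on the page. InboundMessage and the handler are made-up placeholders.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class MessageDispatcher {
        // hypothetical decoded websocket frame
        record InboundMessage(String clientId, String payload) { }

        private final ExecutorService workers = Executors.newFixedThreadPool(8);

        void onMessage(InboundMessage msg) {
            // the calling (network) thread returns immediately; a pooled worker
            // handles the message and then goes straight back to the pool
            workers.submit(() -> handle(msg));
        }

        private void handle(InboundMessage msg) {
            System.out.println("processing " + msg.payload() + " for " + msg.clientId());
        }
    }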
This is where single-threaded asynchronous servers come in. In Java we often call this NIO, for non-blocking I/O. It's a different socket API where sending and receiving data doesn't block the calling thread.
With traditional blocking sockets, when you call socket.read() or socket.write() they wait until data has been received or sent before returning control to your program. That means your program is stuck waiting for socket data to come in or go out and can't do anything else. That's why we have threads, so we can do work concurrently (at the same time): send this data to client X while I wait for data from client Y. Concurrency is the name of the game when we talk about servers.
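For reference, this is roughly what the blocking style looks like in Java (a toy echo handler, not from any particular framework): the calling thread is parked inside read() for as long as that one client stays quiet, and can do nothing for anyone else in the meantime.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.Socket;

    class BlockingEcho {
        static void serve(Socket client) throws IOException {
            InputStream in = client.getInputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {              // blocks here waiting on client Y
                client.getOutputStream().write(buf, 0, n);  // echo it straight back
            }
        }
    }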
In a NIO server we use one thread to handle all clients and register callbacks that are notified when data arrives. For example:
socket.read( function( data ) { /* called later, when data actually arrives */ } );
That call to socket.read() returns immediately without reading any data, but the function we supplied is called when data shows up. This design radically changes how you build and architect your code, because if you hang that one thread waiting on something, you can't accept new clients. You only have one thread, so you can't dawdle. You have to keep that thread moving.
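Filling that sketch out with Java's actual NIO API, a bare-bones single-threaded echo server looks roughly like this. It's a minimal illustration, not production code (no partial-write handling, no per-client buffering), but it shows one thread multiplexed across every connected client.

    import java.io.IOException;
    import java.net.InetSocketAddress;
    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.Selector;
    import java.nio.channels.ServerSocketChannel;
    import java.nio.channels.SocketChannel;

    public class NioEchoServer {
        public static void main(String[] args) throws IOException {
            Selector selector = Selector.open();
            ServerSocketChannel server = ServerSocketChannel.open();
            server.bind(new InetSocketAddress(8080));
            server.configureBlocking(false);
            server.register(selector, SelectionKey.OP_ACCEPT);

            ByteBuffer buffer = ByteBuffer.allocate(4096);
            while (true) {
                selector.select();                       // wake up only when some socket has work
                for (SelectionKey key : selector.selectedKeys()) {
                    if (key.isAcceptable()) {
                        SocketChannel client = server.accept();
                        if (client != null) {
                            client.configureBlocking(false);
                            client.register(selector, SelectionKey.OP_READ);
                        }
                    } else if (key.isReadable()) {
                        SocketChannel client = (SocketChannel) key.channel();
                        buffer.clear();
                        int n = client.read(buffer);     // returns immediately, never parks the thread
                        if (n == -1) {
                            client.close();
                        } else {
                            buffer.flip();
                            client.write(buffer);        // echo back what we got
                        }
                    }
                }
                selector.selectedKeys().clear();
            }
        }
    }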
NIO, asynchronous IO, event-driven programming, whatever name it goes by, is a much more complicated system to design, and I wouldn't suggest you try to write one yourself if you're just starting out. Even very senior programmers find it very hard to build robust systems this way. Since you are asynchronous, you can't call blocking APIs. Things like reading data from a database or posting messages to other servers have to be done asynchronously. Even reading/writing the filesystem can slow down your single thread and hurt scalability. Once you go asynchronous, everything has to be asynchronous all the time if you want to keep that one thread moving. That's where it gets tricky, because eventually you hit an API, like a DB driver, that isn't asynchronous, and then you have to bring in more threads at some level. So hybrid approaches are common even in the asynchronous world.
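A common shape for that hybrid is to keep the event loop non-blocking and push any blocking call onto a small dedicated pool. This is only a sketch under assumed names: loadUser stands in for a blocking JDBC query, and handleRequest/onUserLoaded are placeholders for whatever your framework calls.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    class HybridExample {
        // separate pool reserved for blocking work so the event loop never stalls
        private final ExecutorService blockingPool = Executors.newFixedThreadPool(16);

        void handleRequest(String userId) {
            // called on the single event-loop thread; returns immediately
            CompletableFuture
                .supplyAsync(() -> loadUser(userId), blockingPool) // blocking work happens off the loop
                .thenAccept(this::onUserLoaded);                   // continue once the result is ready
        }

        private String loadUser(String userId) {
            // stand-in for a blocking database call that may take tens of milliseconds
            return "user:" + userId;
        }

        private void onUserLoaded(String user) {
            System.out.println("would write " + user + " back to the client here");
        }
    }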
The good news is that there are other solutions, already built on top of these lower-level APIs, that you can use: NodeJS, Vertx, Netty, Apache Mina, Play Framework, Twisted Python, Stackless Python, etc. There might be some obscure library for C++, but honestly I wouldn't bother. Server technology doesn't need the fastest language, because it's I/O-bound more than CPU-bound. If you can stand it, use Java. It has a huge community of code to pull from, and its speed is very close to (and sometimes better than) C++. If you just hate it, use Node or Python.