Does Vert.x have real concurrency for a single verticle?

The question may look like a troll, but it is really about how vert.x handles concurrency, given that a verticle runs in a single dedicated thread.

Look at this simple vert.x HTTP server written in Java:

import org.vertx.java.core.Handler;
import org.vertx.java.core.http.HttpServerRequest;
import org.vertx.java.platform.Verticle;

public class Server extends Verticle {
    public void start() {
        vertx.createHttpServer().requestHandler(new Handler<HttpServerRequest>() {
            public void handle(HttpServerRequest req) {
                req.response().end("Hello");
            }
        }).listen(8080);
    }
}

As far as I understand the docs, this whole file is the verticle. So the start method is called in the verticle's dedicated thread, fine. But in which thread is the request Handler called? If it is called in that same thread, I don't see how this is any better than node.js.

I am quite familiar with Netty, which vert.x's networking/concurrency layer is based on. There, each incoming connection is mapped to a dedicated thread, which scales very well. So... does that mean each incoming connection is also a verticle? But how can a single Server verticle instance then interact with those clients? Actually, I would say this model is just as limited as node.js.

Please help me understand the principles correctly!

Regards, Chris

+6
3 answers

I spoke with someone who is quite involved in vert.x, and he told me that I was mostly right about the "concurrency" problem.

BUT: he pointed me to a section of the docs that I had completely missed, which explains "server scaling" in detail.

The basic concept is that a single verticle gives you only single-core performance. But you can start the vert.x platform with the -instances parameter, which sets how many instances of a given verticle are started. Vert.x does a little magic under the hood so that 10 instances of my server do not try to open 10 server sockets, but actually share a single one. That way vert.x scales horizontally even with single-threaded verticles.
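For concreteness, deploying several instances of the verticle with the vert.x 1.x/2.x command line (the same era as the org.vertx.java API used above) looks roughly like this; treat the exact invocation as a sketch based on the docs of that version:

```shell
# Ask the vert.x platform to deploy 10 instances of the verticle.
# All 10 event loops sit behind the one listening socket that
# vert.x opens under the hood and share the incoming connections.
vertx run Server.java -instances 10
```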

This is a really great concept, and an especially great framework!

+4

As you correctly answered yourself, vert.x really does use asynchronous non-blocking programming (like node.js), so you must not perform blocking operations, because otherwise you stop the whole world (the application).

You can scale servers, as you correctly stated, by creating more instances (n = number of processor cores), each of which listens on the same TCP/HTTP port.

Where it shines compared to node.js is that the JVM itself is multithreaded, which gives you several advantages (at runtime, not even counting Java's type safety, etc.):

  • Multithreaded (cross-verticle) communication: although it is still tied to an actor-like, thread-safe model, it does not require IPC (Inter-Process Communication) to pass messages between verticles; everything happens inside the same process, in the same memory space. This is faster than node.js, which spawns each forked task in a new system process and uses IPC for communication
  • The ability to perform complex and/or blocking tasks within a single JVM process: http://vertx.io/docs/vertx-core/java/#blocking_code or http://vertx.io/docs/vertx-core/java/#worker_verticles
  • JVM HotSpot speed compared to V8 :)
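The first bullet is the key difference. As a toy illustration (plain Java, NOT the vert.x event-bus API; the class name InProcessBus and the addresses are made up for this sketch): two single-threaded "verticles" in the same process can exchange messages through in-memory queues, with no serialization or IPC involved.

```java
import java.util.concurrent.*;

// Toy model (NOT the vert.x API): each "verticle" is a single thread
// draining its own inbox queue. Messages are handed over through shared
// memory, so no IPC is needed -- the point made in the bullet above.
public class InProcessBus {
    static final ConcurrentHashMap<String, BlockingQueue<String>> inboxes =
            new ConcurrentHashMap<>();

    static void register(String address) {
        inboxes.put(address, new LinkedBlockingQueue<>());
    }

    static void send(String address, String msg) {
        inboxes.get(address).add(msg); // plain in-memory hand-off
    }

    public static void main(String[] args) throws Exception {
        register("echo");
        register("reply");

        // "Verticle" 1: echoes whatever arrives in its inbox back to "reply".
        Thread echoVerticle = new Thread(() -> {
            try {
                String msg = inboxes.get("echo").take();
                send("reply", "Echo: " + msg);
            } catch (InterruptedException ignored) { }
        });
        echoVerticle.start();

        // "Verticle" 2 (here: the main thread) sends and awaits the reply.
        send("echo", "Hello");
        String reply = inboxes.get("reply").poll(5, TimeUnit.SECONDS);
        System.out.println(reply); // prints "Echo: Hello"
        echoVerticle.join();
    }
}
```

A node.js cluster would have to push the same message through a pipe between two OS processes; here it is a single queue operation.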
+1

Each verticle is single-threaded; at deployment time, the vert.x subsystem assigns an event loop to that verticle, and all code in that verticle executes on this event loop. Next time you should ask questions at http://groups.google.com/forum/#!forum/vertx . The group is very active, and your question will most likely be answered right away.
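To make the "one event loop per verticle" idea concrete, here is a minimal plain-Java sketch (again, not vert.x internals): a single-threaded executor plays the role of the event loop, and every "handler" submitted to it runs on that one named thread, which is why handler code inside a verticle needs no locks.

```java
import java.util.concurrent.*;

// Minimal sketch (not vert.x internals): a verticle's event loop is,
// conceptually, one dedicated thread draining a task queue. Handlers
// may be submitted from anywhere, but they all execute on that thread.
public class EventLoopSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService eventLoop = Executors.newSingleThreadExecutor(
                r -> new Thread(r, "verticle-event-loop"));

        // Each "handler" just reports which thread it ran on.
        Callable<String> handler = () -> Thread.currentThread().getName();
        String first  = eventLoop.submit(handler).get();
        String second = eventLoop.submit(handler).get();

        System.out.println(first + " / " + second);
        // prints "verticle-event-loop / verticle-event-loop"
        eventLoop.shutdown();
    }
}
```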

0

Source: https://habr.com/ru/post/949985/

