Max. MQTT Connections

I need to create a server farm that can handle 5+ million connections, with 5+ million topics (one per client), and process 300k messages/sec.

I wanted to see what the various message brokers were capable of, so I am currently using two RHEL EC2 instances (r3.4xlarge) so that there are plenty of resources available. So no need to look there: each has 16 vCPUs and 122 GB of RAM, and I never come close to those limits in use.

I can't get past roughly 600 thousand connections. Since there apparently is no O/S restriction (plenty of RAM/CPU/etc.) on either the client or the server, what is limiting me?

I edited /etc/security/limits.conf as follows:

  * soft nofile 20000000
  * hard nofile 20000000
  * soft nproc 20000000
  * hard nproc 20000000
  root soft nofile 20000000
  root hard nofile 20000000
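As a side note, limits.conf is only read at login, so it is worth checking from a fresh shell that the raised descriptor limits actually took effect; a minimal sanity check:

```shell
# Print the per-process open-file limits for the current shell.
# After re-login, both should report the raised nofile value.
ulimit -Sn   # soft nofile limit
ulimit -Hn   # hard nofile limit
```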

I edited the /etc/sysctl.conf file as follows:

  net.ipv4.ip_local_port_range = 1024 65535
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_mem = 5242880 5242880 5242880
  net.ipv4.tcp_tw_recycle = 1
  fs.file-max = 20000000
  fs.nr_open = 20000000
  net.ipv4.tcp_syncookies = 0
  net.ipv4.tcp_max_syn_backlog = 10000
  net.ipv4.tcp_synack_retries = 3
  net.core.somaxconn = 65536
  net.core.netdev_max_backlog = 100000
  net.core.optmem_max = 20480000
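A sketch of applying and spot-checking the sysctl changes (key names as in the file above; requires root):

```shell
# Reload /etc/sysctl.conf and verify the two descriptor ceilings,
# which gate how many sockets the kernel will hand out.
sysctl -p
sysctl fs.file-max fs.nr_open   # both should print 20000000
```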

For Apollo: export APOLLO_ULIMIT=20000000

For ActiveMQ:

  ACTIVEMQ_OPTS="$ACTIVEMQ_OPTS -Dorg.apache.activemq.UseDedicatedTaskRunner=false"
  ACTIVEMQ_OPTS_MEMORY="-Xms50G -Xmx115G"

I created 20 additional private addresses for eth0 on the client and then assigned them like this: ip addr add 11.22.33.44/24 dev eth0
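Adding the 20 secondary addresses one by one is tedious; a minimal sketch that generates the commands (the 11.22.33.0/24 range is just the example address from above, and the .44-.63 span is an assumption - adjust to your subnet, then pipe the output to a root shell to apply):

```shell
# Emit the 20 "ip addr add" commands for the secondary client IPs.
# Apply with:  sh gen_ips.sh | sudo sh
for i in $(seq 44 63); do                       # 11.22.33.44 .. 11.22.33.63
    echo "ip addr add 11.22.33.$i/24 dev eth0"
done
```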

I am FULLY aware of the 65k port limit per source IP, which is why I did the above.

  • For ActiveMQ I received: 574309
  • For Apollo, I got to: 592891
  • For RabbitMQ, I got to 90k, but the logging was terrible and I could not figure out what to do to go higher, although I know it is possible.
  • For HiveMQ, I hit the trial limit of 1000 connections. Waiting for a license.
  • IBM wants the value of my house just to use them - nah!
1 answer

ANSWER: While doing this, I realized I had a typo in my client settings in /etc/sysctl.conf, namely in: net.ipv4.ip_local_port_range

Now I can connect 956,591 MQTT clients to my Apollo server in 188 sec.


Additional information: Trying to isolate whether this was an O/S or a broker connection limitation, I decided to write a simple client/server.

Server:

  // Accept loop: hold every accepted socket in a list so the
  // connections stay open.
  List<Socket> clients = new ArrayList<>();
  ServerSocket server = new ServerSocket(1884);
  while (true) {
      Socket client = server.accept();
      clients.add(client);
  }

Client:

  // Connect in a loop, cycling through the bound source IPs;
  // local port 0 lets the O/S pick a free ephemeral port.
  List<Socket> clients = new ArrayList<>();
  while (true) {
      InetAddress clientIPToBindTo = getNextClientVIP();
      Socket client = new Socket(hostname, 1884, clientIPToBindTo, 0);
      clients.add(client);
  }

With 21 IPs, I would expect the limit to be (65535 - 1024) × 21 = 1,354,731. In practice, I can reach 1,231,734.
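For the record, the arithmetic behind that expected limit (usable ephemeral ports per source IP times the number of source addresses):

```shell
# 65535 - 1024 = 64511 usable ephemeral ports per source IP,
# times 21 addresses (eth0's primary plus the 20 added earlier).
echo $(( (65535 - 1024) * 21 ))   # prints 1354731
```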

  [root@ip ec2-user]# cat /proc/net/sockstat
  sockets: used 1231734
  TCP: inuse 5 orphan 0 tw 0 alloc 1231307 mem 2
  UDP: inuse 4 mem 1
  UDPLITE: inuse 0
  RAW: inuse 0
  FRAG: inuse 0 memory 0

So the socket/kernel/IO side is sorted out.

I can NEVER achieve this with any of the brokers.

Again, these are the kernel settings right after the client/server test.

Client:

  [root@ip ec2-user]# sysctl -p
  net.ipv4.ip_local_port_range = 1024 65535
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_mem = 5242880 5242880 15242880
  net.ipv4.tcp_tw_recycle = 1
  fs.file-max = 20000000
  fs.nr_open = 20000000
  [root@ip ec2-user]# cat /etc/security/limits.conf
  * soft nofile 2000000
  * hard nofile 2000000
  root soft nofile 2000000
  root hard nofile 2000000

Server:

  [root@ ec2-user]# sysctl -p
  net.ipv4.tcp_tw_reuse = 1
  net.ipv4.tcp_mem = 5242880 5242880 5242880
  net.ipv4.tcp_tw_recycle = 1
  fs.file-max = 20000000
  fs.nr_open = 20000000
  net.ipv4.tcp_syncookies = 0
  net.ipv4.tcp_max_syn_backlog = 1000000
  net.ipv4.tcp_synack_retries = 3
  net.core.somaxconn = 65535
  net.core.netdev_max_backlog = 1000000
  net.core.optmem_max = 20480000

Source: https://habr.com/ru/post/984468/

