Many UDP requests lost on a UDP server with Netty

I wrote a simple UDP server with Netty that simply prints received messages (frames) in the logs. To do this, I created a simple frame decoder and a simple message handler. I also have a client that can send multiple requests sequentially and/or in parallel.

When I configure my test client to send, for example, a few hundred requests sequentially with a small delay between them, my Netty server receives them all correctly. But as soon as I increase the number of simultaneous requests in my client (100, for example), combined with sequential sends and a few repetitions, my server starts losing many requests. For example, when I send 50,000 requests, my server receives only about 49,000 even when it uses just a single simple ChannelHandler that prints the received message.

And when I add a simple frame decoder (which prints the frame and copies it into another buffer) in front of this handler, the server processes only half of the requests!

I noticed that, regardless of the number of workers I specify for the NioDatagramChannelFactory, there is always one and only one thread processing the requests (I am using the recommended Executors.newCachedThreadPool() as the other parameter).

I also created another similar simple UDP server based on the DatagramSocket that ships with the JDK, and it handles all requests perfectly, with zero (0) lost! When I send 50,000 requests from my client (with 1,000 threads, for example), I receive all 50,000 requests on my server.
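For reference, here is a minimal sketch of that plain-JDK server (simplified; the port, buffer size, and handling logic below are placeholders, not my real test values):

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.util.Arrays;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class SimpleJdkUdpServer {
        public static void main( String[] args ) throws Exception {
            DatagramSocket lSocket = new DatagramSocket( 9999 ); // placeholder port
            ExecutorService lPool = Executors.newCachedThreadPool();
            byte[] lBuffer = new byte[2048];
            while ( true ) {
                DatagramPacket lPacket = new DatagramPacket( lBuffer, lBuffer.length );
                lSocket.receive( lPacket ); // blocks until a datagram arrives
                // copy before handing off: lBuffer is reused by the next receive()
                final byte[] lData = Arrays.copyOf( lPacket.getData(), lPacket.getLength() );
                lPool.execute( new Runnable() {
                    @Override
                    public void run() {
                        System.out.println( new String( lData ) ); // just dump the frame
                    }
                } );
            }
        }
    }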

Am I doing something wrong when setting up my UDP server using Netty? Or maybe Netty is simply not designed to support such a load? Why is there only one thread used by this pool of cached threads (I noticed that only one thread and always the same one is used when searching in the JMX jconsole and by checking the thread name in the output logs)? I think that if there are more threads where it is used, as expected, the server will be able to easily handle such a load, because I can do it without any problems if I do not use Netty!

See my initialization code below:

    ...
    lChannelfactory = new NioDatagramChannelFactory( Executors.newCachedThreadPool(), nbrWorkers );
    lBootstrap = new ConnectionlessBootstrap( lChannelfactory );
    lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() {
            ChannelPipeline lChannelPipeline = Channels.pipeline();
            lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER",
                    new SimpleUDPPacketDumpDecoder( null ) );
            lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER",
                    new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );
            return lChannelPipeline;
        }
    } );
    bindChannel = lBootstrap.bind( socketAddress );
    ...

And the contents of decode () method in my decoder:

    protected Object decode(ChannelHandlerContext iCtx, Channel iChannel, ChannelBuffer iBuffer) throws Exception {
        ChannelBuffer lDuplicatedChannelBuffer = null;
        sLogger.debug( "Decode method called." );
        if ( iBuffer.readableBytes() < 8 ) return null;
        if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();
        if ( iBuffer.readable() ) {
            sLogger.debug( convertToAsciiHex( iBuffer.array(), iBuffer.readableBytes() ) );
            lDuplicatedChannelBuffer = ChannelBuffers.dynamicBuffer( iBuffer.readableBytes() );
            iBuffer.readBytes( lDuplicatedChannelBuffer );
        }
        return lDuplicatedChannelBuffer;
    }

And the contents of the messageReceived () method in my handler:

    public void messageReceived(final ChannelHandlerContext iChannelHandlerContext, final MessageEvent iMessageEvent) throws Exception {
        ChannelBuffer lMessageBuffer = (ChannelBuffer) iMessageEvent.getMessage();
        if ( outerFrameStatsCollector != null ) outerFrameStatsCollector.incrementNbrRequests();
        if ( lMessageBuffer.readable() ) {
            sLogger.debug( convertToAsciiHex( lMessageBuffer.array(), lMessageBuffer.readableBytes() ) );
            lMessageBuffer.discardReadBytes();
        }
    }
1 answer

You have not configured the ConnectionlessBootstrap instance.

  • You need to configure the following options with suitable values:

    the SO_SNDBUF size, the SO_RCVBUF size, and the ReceiveBufferSizePredictorFactory

     lBootstrap.setOption("sendBufferSize", 1048576); lBootstrap.setOption("receiveBufferSize", 1048576); lBootstrap.setOption("receiveBufferSizePredictorFactory", new AdaptiveReceiveBufferSizePredictorFactory(MIN_SIZE, INITIAL_SIZE, MAX_SIZE)); 

    Check the DefaultNioDatagramChannelConfig class for more details.

  • The pipeline executes everything on the Netty I/O worker thread. If the worker thread is overloaded, it delays the selector event loop and becomes a bottleneck for reading from / writing to the channel. You should add an execution handler to the pipeline, as shown below; this frees the worker thread to do its own work.

     ChannelPipeline lChannelPipeline = Channels.pipeline();
     lChannelPipeline.addFirst("execution-handler", new ExecutionHandler(
             new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576)));
     // add the rest of the handlers here
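Putting both points together, here is a sketch of how the bootstrap setup from the question could look. The buffer sizes, predictor bounds (64 / 2048 / 65536), and pool sizes are illustrative values to tune for your own load, not exact recommendations:

    lChannelfactory = new NioDatagramChannelFactory( Executors.newCachedThreadPool(), nbrWorkers );
    lBootstrap = new ConnectionlessBootstrap( lChannelfactory );

    // larger socket buffers let the kernel queue bursts instead of dropping datagrams
    lBootstrap.setOption( "sendBufferSize", 1048576 );
    lBootstrap.setOption( "receiveBufferSize", 1048576 );
    lBootstrap.setOption( "receiveBufferSizePredictorFactory",
            new AdaptiveReceiveBufferSizePredictorFactory( 64, 2048, 65536 ) );

    // one shared ExecutionHandler, so handler work runs off the I/O worker thread
    final ExecutionHandler lExecutionHandler = new ExecutionHandler(
            new OrderedMemoryAwareThreadPoolExecutor( 16, 1048576, 1048576 ) );

    lBootstrap.setPipelineFactory( new ChannelPipelineFactory() {
        @Override
        public ChannelPipeline getPipeline() {
            ChannelPipeline lChannelPipeline = Channels.pipeline();
            lChannelPipeline.addFirst( "execution-handler", lExecutionHandler );
            lChannelPipeline.addLast( "Simple UDP Frame Dump DECODER",
                    new SimpleUDPPacketDumpDecoder( null ) );
            lChannelPipeline.addLast( "Simple UDP Frame Dump HANDLER",
                    new SimpleUDPPacketDumpChannelHandler( lOuterFrameStatsCollector ) );
            return lChannelPipeline;
        }
    } );
    bindChannel = lBootstrap.bind( socketAddress );

Note that the ExecutionHandler instance is created once and shared, so every pipeline dispatches to the same thread pool. As a sanity check (a small sketch, assuming the bindChannel from the question), you can also read the effective buffer sizes back after binding, since the OS may clamp the requested values:

    // e.g. Linux caps SO_RCVBUF at net.core.rmem_max unless the sysctl is raised
    DatagramChannelConfig lConfig = (DatagramChannelConfig) bindChannel.getConfig();
    System.out.println( "effective receiveBufferSize = " + lConfig.getReceiveBufferSize() );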
