Tornado, Nginx, Apache ab - apr_socket_recv: Connection reset by peer (104)

I am running nginx and Tornado on c1.medium instances.

When I run ab, I get the output below. nginx does not hold up. I have tried tuning the nginx config file, to no avail. If I hit a single Tornado port directly, bypassing nginx, for example:

    http://127.0.0.1:8050/pixel?tt=ff

then it is fast (see the very bottom). So this looks like an nginx problem — how can I solve it? The nginx conf file is also below.

    root@ip-10-130-167-230:/etc/service# ab -n 10000 -c 50 http://127.0.0.1/pixel?tt=ff
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking 127.0.0.1 (be patient)
    Completed 1000 requests
    Completed 2000 requests
    Completed 3000 requests
    Completed 4000 requests
    Completed 5000 requests
    Completed 6000 requests
    Completed 7000 requests
    Completed 8000 requests
    Completed 9000 requests
    apr_socket_recv: Connection reset by peer (104)
    Total of 9100 requests completed

It should be blazing fast, but it is not.

I have applied the following kernel parameters:

    ulimit is at 100000

    # General gigabit tuning:
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    net.ipv4.tcp_syncookies = 1
    # this gives the kernel more memory for tcp
    # which you need with many (100k+) open socket connections
    net.ipv4.tcp_mem = 50576 64768 98152
    net.core.netdev_max_backlog = 2500
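For reference, these values are typically placed in `/etc/sysctl.conf` and applied with `sysctl` (a sketch of the standard workflow, not part of the original question; the file path is the conventional default):

```shell
# Apply everything in /etc/sysctl.conf without rebooting
sudo sysctl -p /etc/sysctl.conf

# Or set a single value on the fly
sudo sysctl -w net.core.netdev_max_backlog=2500

# ulimit is per-process/shell: raise the open-file limit for this shell,
# then start the servers from it (or set it in /etc/security/limits.conf)
ulimit -n 100000
```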

Here is my nginx conf:

    user www-data;
    worker_processes 1;  # 2*number of cpus
    pid /var/run/nginx.pid;
    worker_rlimit_nofile 32768;

    events {
        worker_connections 30000;
        multi_accept on;
        use epoll;
    }

    http {
        upstream frontends {
            server 127.0.0.1:8050;
            server 127.0.0.1:8051;
        }

        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # Only retry if there was a communication error, not a timeout
        # on the Tornado server (to avoid propagating "queries of death"
        # to all frontends)
        proxy_next_upstream error;

        server {
            listen 80;
            server_name 127.0.0.1;

            ## For Tornado
            location / {
                proxy_pass_header Server;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Scheme $scheme;
                proxy_pass http://frontends;
            }
        }
    }
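After editing the conf, it can be validated and reloaded with the standard nginx commands (added here for completeness, not from the original question):

```shell
# Check the configuration file for syntax errors before applying it
sudo nginx -t

# Reload worker processes gracefully, without dropping live connections
sudo nginx -s reload
```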

If I run ab bypassing nginx, hitting Tornado directly:

    root@ip-10-130-167-230:/home/ubuntu/workspace/rtbopsConfig/rtbServers/rtbTornadoServer# ab -n 100000 -c 1000 http://127.0.0.1:8050/pixel?tt=ff
    This is ApacheBench, Version 2.3 <$Revision: 655654 $>
    Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
    Licensed to The Apache Software Foundation, http://www.apache.org/

    Benchmarking 127.0.0.1 (be patient)
    Completed 10000 requests
    Completed 20000 requests
    Completed 30000 requests
    Completed 40000 requests
    Completed 50000 requests
    Completed 60000 requests
    Completed 70000 requests
    Completed 80000 requests
    Completed 90000 requests
    Completed 100000 requests
    Finished 100000 requests

    Server Software:        TornadoServer/2.2.1
    Server Hostname:        127.0.0.1
    Server Port:            8050

    Document Path:          /pixel?tt=ff
    Document Length:        42 bytes

    Concurrency Level:      1000
    Time taken for tests:   52.436 seconds
    Complete requests:      100000
    Failed requests:        0
    Write errors:           0
    Total transferred:      31200000 bytes
    HTML transferred:       4200000 bytes
    Requests per second:    1907.08 [#/sec] (mean)
    Time per request:       524.363 [ms] (mean)
    Time per request:       0.524 [ms] (mean, across all concurrent requests)
    Transfer rate:          581.06 [Kbytes/sec] received

    Connection Times (ms)
                  min  mean[+/-sd] median   max
    Connect:        0  411 1821.7      0   21104
    Processing:    23   78  121.2     65    5368
    Waiting:       22   78  121.2     65    5368
    Total:         53  489 1845.0     65   23230

    Percentage of the requests served within a certain time (ms)
      50%     65
      66%     69
      75%     78
      80%     86
      90%    137
      95%   3078
      98%   3327
      99%   9094
     100%  23230 (longest request)

Meanwhile, the nginx error log contains:

    2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
    2012/05/16 20:48:32 [error] 25111#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
    2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
    2012/05/16 20:53:48 [error] 28905#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
    2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8051/", host: "127.0.0.1"
    2012/05/16 20:55:35 [error] 30180#0: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 127.0.0.1, server: 127.0.0.1, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8050/", host: "127.0.0.1"
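Those `connect() failed (111: Connection refused)` entries mean nginx could not reach a Tornado backend on 8050/8051 at that moment. A quick way to confirm that both upstream ports are actually listening (these commands are my suggestion, not part of the original post):

```shell
# Show listening TCP sockets and filter for the two upstream ports
ss -ltn | grep -E ':(8050|8051)'

# Probe each backend directly; both should print HTTP status 200
curl -s -o /dev/null -w '%{http_code}\n' 'http://127.0.0.1:8050/pixel?tt=ff'
curl -s -o /dev/null -w '%{http_code}\n' 'http://127.0.0.1:8051/pixel?tt=ff'
```

If only 8050 answers, a second Tornado instance needs to be started on 8051, or the dead upstream removed from the `frontends` block.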

Output when using the -v 10 option to ab:

    GIF89a
    LOG: Response code = 200
    LOG: header received:
    HTTP/1.1 200 OK
    Date: Wed, 16 May 2012 21:56:50 GMT
    Content-Type: image/gif
    Content-Length: 42
    Connection: close
    Etag: "d5fceb6532643d0d84ffe09c40c481ecdf59e15a"
    Server: TornadoServer/2.2.1
    Set-Cookie: rtbhui=867bccde-2bc0-4518-b422-8673e07e19f6; Domain=rtb.rtbhui.com; expires=Fri, 16 May 2014 21:56:50 GMT; Path=/
2 answers

I had the same problem running ApacheBench against a Sinatra application served by WEBrick. I found the answer here.

The problem is actually in ApacheBench itself (the `ab` tool that ships with Apache), not in your server.

The error has been fixed in newer versions of Apache. Try downloading one here.
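Before reinstalling, it is worth checking which build is in use; and `ab` also accepts `-r`, which tells it not to exit on socket receive errors (flags taken from ab's own usage text; this tip is mine, not part of the original answer):

```shell
# Print the installed ApacheBench version
ab -V

# -r: don't abort the whole run on a socket receive error,
# so a single reset connection is counted instead of killing the test
ab -r -n 10000 -c 50 'http://127.0.0.1/pixel?tt=ff'
```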


I had the same problem; searching the logs for more information, I found the following lines:

    Oct 15 10:41:30 bal1 kernel: [1031008.706185] nf_conntrack: table full, dropping packet.
    Oct 15 10:41:31 bal1 kernel: [1031009.757069] nf_conntrack: table full, dropping packet.
    Oct 15 10:41:32 bal1 kernel: [1031009.939489] nf_conntrack: table full, dropping packet.
    Oct 15 10:41:32 bal1 kernel: [1031010.255115] nf_conntrack: table full, dropping packet.

In my particular case, the conntrack module was loaded by iptables, because the same server also runs a firewall.

One fix is to unload the conntrack module; another, simpler one is to add these two rules to the firewall policy:

    iptables -t raw -I PREROUTING -p tcp -j NOTRACK
    iptables -t raw -I OUTPUT -p tcp -j NOTRACK
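An alternative to disabling tracking entirely is to keep conntrack but enlarge its table (my addition, assuming a kernel recent enough to expose the `net.netfilter` sysctls; the limit value is an example and should be sized to available RAM):

```shell
# See how full the connection-tracking table currently is
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_max

# Raise the limit so bursts of short-lived benchmark connections fit
sudo sysctl -w net.netfilter.nf_conntrack_max=262144
```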

Source: https://habr.com/ru/post/915945/

