100% CPU usage after installing logstash

I completed this tutorial on installing the Logstash / Elasticsearch / Kibana stack on my Ubuntu server. I changed the Logstash configuration to verify that everything works locally before shipping logs from other machines. So I have a single node running Elasticsearch, Kibana, and Logstash, configured as follows:

input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
output {
  elasticsearch {
    host => localhost
  }
}

Everything works as far as I can tell from Kibana, but I have a background process eating 100% of a CPU. top tells me it is a Java process running under the logstash user. sudo service logstash stop does not stop the process. I also tried removing the web service afterwards, without success. I don't know where to look; any help is appreciated.

+5
2 answers

The DigitalOcean tutorial puts nginx in front of Kibana, listening on port 80. Logstash ships with logstash-web, which also wants to listen on port 80.

Since Ubuntu uses Upstart, any attempt to kill the Java processes will fail: they keep respawning according to /etc/init/logstash*.conf. The high CPU usage comes from the fact that Logstash burns a lot of CPU at startup and normally settles down after a few seconds; but because it dies (unable to bind to port 80) and keeps being respawned, it appears to consume resources constantly.

If you have the same problem I did, look at the PIDs and you will notice that they keep changing. You should also see Address already in use - bind - Address already in use at the end of /var/log/logstash/logstash.log.
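A quick way to confirm the respawn loop. This is a sketch assuming the stock package paths and that the java processes belong to the logstash user; adjust to your install:

```shell
# Each run should print a different PID if Upstart keeps respawning the job
pgrep -u logstash java
sleep 5
pgrep -u logstash java

# The bind failure should show up at the end of the log
tail -n 20 /var/log/logstash/logstash.log | grep "Address already in use"
```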

So we just need to disable logstash-web. On Ubuntu this can be done with:

$ echo manual | sudo tee /etc/init/logstash-web.override

To stop logstash-web without rebooting, we use

$ sudo stop logstash-web
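To verify the fix took effect (standard Upstart commands; the expected output is typical for Ubuntu's Upstart but may vary by version):

```shell
# Upstart should now report the job as stopped, e.g. "logstash-web stop/waiting"
status logstash-web

# The override file should contain the single word "manual"
cat /etc/init/logstash-web.override
```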

+21

You can kill the Logstash processes with skill -u logstash. Then launch Logstash in the foreground with increased verbosity.
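For example, something like the following. The paths are typical for the Debian/Ubuntu Logstash 1.x package but are assumptions; check where your install put the binary and configs:

```shell
# Kill the supervised instances first (note: Upstart will respawn them
# unless the job is stopped or overridden)
sudo skill -u logstash

# Run in the foreground as the logstash user with verbose logging;
# use --debug instead of --verbose for even more detail
sudo -u logstash /opt/logstash/bin/logstash agent \
  -f /etc/logstash/conf.d/ --verbose
```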

If you temporarily change the output to just stdout, what do you see?
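A minimal debugging config for that, keeping the input from the question (the rubydebug codec is standard in Logstash 1.x):

```
input {
  file {
    path => "/var/log/syslog"
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }
}
```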

Note that you will probably get connections to other nodes: saying host => localhost does not mean you will only get traffic on port 9300 of the loopback interface (I suggest testing with tcpdump on lo and eth0, or whatever is appropriate). So check your firewall, and maybe take the firewall down temporarily.
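For instance (interface names are examples; substitute your own, see `ip link`):

```shell
# Watch Elasticsearch transport traffic on loopback and on the external NIC
sudo tcpdump -n -i lo   port 9300
sudo tcpdump -n -i eth0 port 9300
```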

Also note that localhost may resolve to an IPv6 address; you can say 127.0.0.1 instead.

The elasticsearch output is documented in the Logstash docs.

You do not say whether you are using embedded Elasticsearch or not; the default is false, so I assume you are not.

I remember hitting this problem in my own deployment, where Logstash and Elasticsearch were on the same host and collided on port 9300; I resolved it by having Logstash use port 9301 (bind_port).

I suggest you also set "cluster". The default "protocol" is "node", which means Logstash will try to join the cluster (although not as a data node); you can try changing it to "transport" or "http" and observe how the behavior changes.
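In Logstash 1.x those options all live on the elasticsearch output. A sketch combining the suggestions above; the cluster name is an example and must match your own elasticsearch.yml:

```
output {
  elasticsearch {
    host     => "127.0.0.1"
    cluster  => "my-cluster"   # should match cluster.name in elasticsearch.yml
    protocol => "transport"    # or "http"; the default "node" joins the cluster
    # With the default "node" protocol, avoid the 9300 collision instead:
    # bind_port => 9301
  }
}
```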

I found it very useful to watch the network traffic closely while investigating the behavior.

FWIW, I found "The Logstash Book" very worthwhile (and cheap).

0
