As for Redis, it acts as a buffer in case logstash and/or elasticsearch are down or slow. If you use full logstash or logstash-forwarder as a shipper, it will detect when logstash is unavailable and stop sending logs (remembering where it left off, at least for a while).
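To make the broker setup concrete, here is a minimal sketch of the two-stage pipeline: a shipper pushing into a Redis list and a central indexer popping from it. Hostnames and the key name are placeholders, and option names vary a little between logstash versions:

```
# shipper.conf -- runs on the log source
output {
  redis {
    host      => "redis.example.com"   # placeholder broker host
    data_type => "list"
    key       => "logstash"            # placeholder list key
  }
}

# indexer.conf -- runs on the central logstash box
input {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "logstash"
  }
}
output {
  elasticsearch {
    host => "es.example.com"           # placeholder; "hosts" on newer versions
  }
}
```

If elasticsearch slows down, events simply pile up in the Redis list until the indexer catches up.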
So, in a pure logstash / logstash-forwarder environment, I see little reason to use a broker like redis.
Where a broker does become important is for sources that don't care about the status of the receiver and don't buffer on their side. syslog, snmptrap, and others fall into this category. Since your sources include syslog, I would stand up a broker in your setup.
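For those sources, the trick is to keep a thin, always-up listener in front of the broker so nothing is lost while the heavier indexing pipeline is down. A sketch, assuming a dedicated lightweight logstash instance (the port and key are placeholders; binding to 514 needs root):

```
# syslog-shipper.conf -- a thin logstash whose only job is to accept
# syslog and push raw events into redis as fast as possible
input {
  syslog {
    port => 514
  }
}
output {
  redis {
    host      => "redis.example.com"
    data_type => "list"
    key       => "syslog"
  }
}
```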
Redis is a RAM-hungry application, and the amount of memory you give it determines how long an outage further down the pipeline you can ride out. On a 32 GB server (shared with logstash), how much memory would you give to redis? How big is your average document? How many documents would it take to fill that memory? How long does it take to generate that many documents? In my experience, redis fails hard when memory fills up, but that may just be me.
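A back-of-the-envelope example with assumed numbers: give redis 16 GB of the 32, and with ~1 KB average events that is roughly 16 million buffered events; at 2,000 events/s you would absorb a bit over two hours of downstream outage (16,000,000 / 2,000 = 8,000 s ≈ 2.2 h). Whatever split you pick, cap it explicitly so redis can't starve logstash on the shared box:

```
# redis.conf -- the 16gb figure is an assumed split, not a recommendation
maxmemory 16gb
maxmemory-policy noeviction   # fail writes loudly instead of silently evicting logs
```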
Logstash is a CPU-intensive process, since that is where all the filters run.
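grok is usually the hot spot, because it runs regular expressions against every event. A typical filter block (the stock pattern shown is just illustrative):

```
filter {
  grok {
    # regex matching on every event is where the CPU goes
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
```

If one instance can't keep up, logstash's `-w` flag adds filter workers before you need to add whole machines.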
Regarding elasticsearch cluster size, @magnus has already pointed you to some information that might help. Starting with 64 GB machines is fine; scale horizontally from there as needed.
You should have two client (non-data) nodes that are used as the access point for inserts (dispatching requests to the correct data node) and for searches (handling the "reduce" phase with the data returned from the data nodes). Two of these in a failover configuration is a good start.
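In elasticsearch of that era, a client node is just a node with both roles switched off; a sketch of the relevant elasticsearch.yml lines on the two access-point boxes:

```
# elasticsearch.yml on the client nodes: hold no data, never become
# master; they only route requests and run the reduce phase of searches
node.master: false
node.data: false
```

The 64 GB data nodes keep the defaults (both true), or `node.master: false` if you later add dedicated masters.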
Two kibana machines give you redundancy, and putting them in a failover configuration is good too. In my experience, nginx was used more with kibana3; I don't know whether people still use it with kibana4 or have moved to Shield.
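If you do put nginx in front, failing over between the two kibana4 instances is only a few lines (hostnames are placeholders; 5601 is kibana4's default port):

```
upstream kibana {
    server kibana1.example.com:5601;
    server kibana2.example.com:5601 backup;  # used only if the first is down
}

server {
    listen 80;
    location / {
        proxy_pass http://kibana;
    }
}
```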
Hope this helps.