Which webserver / mod / technique should be used to serve everything from memory?

I have a large set of lookup strings from which I construct my responses.

I think IIS with ASP.NET would allow me to keep static lookup tables in memory, which I could use to answer requests quickly.

Are there also non-.NET solutions that can do the same?

I looked at FastCGI, but as I understand it, it starts X processes, each of which can handle Y requests. And processes are, by definition, isolated from each other. I could configure FastCGI to use only one process, but does that have scalability implications?

Does anything using PHP or any other interpreted language fail for the same reason, since it is also tied to CGI or FastCGI?

I understand that memcached may be an option, although it requires an extra (local) socket connection, which I would prefer to avoid, since everything in-process in memory should be much faster.

The solution can run on Windows or Unix; it does not really matter. The only thing that matters is that it will serve a lot of requests (100/sec now, growing to 500/sec within a year), and I want to reduce the number of web servers needed to handle them.

The current solution is built with PHP and memcached (with the occasional hit on the SQL server). Although it is fast (for PHP, anyway), Apache has real problems once it passes 50 requests/sec.

I have put a bounty on this question, since I have not seen enough answers yet to make a well-informed choice.

At the moment I am considering either ASP.NET or FastCGI with C(++).

+4
2 answers

It sounds like you should use an in-memory key-value data store such as Redis. If you intend to use several web servers in the future, then you should definitely use a centralized in-memory store. Redis is especially well suited to this scenario because it supports advanced data structures such as lists, sets, and sorted sets. It is also pretty fast: it can do 110,000 SETs/second and 81,000 GETs/second on an entry-level Linux box. Check the benchmarks. If you go down this route, I have a C# Redis client that will simplify access.
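
For illustration, a minimal sketch of the per-request lookup against a local Redis instance from C#. I am assuming a ServiceStack.Redis-style client here, so the namespace and method names are an assumption, not the only way to do it:

    using System;
    using ServiceStack.Redis;   // assumed client library

    class LookupDemo
    {
        static void Main()
        {
            // Connect once and reuse; Redis listens on localhost:6379 by default.
            using (var redis = new RedisClient("localhost"))
            {
                // Populate the lookup table (e.g. at deploy time or on a schedule).
                redis.SetValue("lookup:somestring", "precomputed-answer");

                // Per request: a single GET against the local Redis instance.
                Console.WriteLine(redis.GetValue("lookup:somestring"));
            }
        }
    }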

To use shared memory, you need an application server that is "always running" in the same process. In that case you can use static classes or a shared application cache. The most popular application servers are the Java servlet containers (such as Tomcat) and ASP.NET.
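
A rough sketch of the static-class approach (the class name, loading mechanism, and choice of ConcurrentDictionary are illustrative, not a prescribed design):

    using System.Collections.Concurrent;
    using System.Collections.Generic;

    // Lives for the lifetime of the worker process; fill it once at
    // startup (e.g. from Application_Start in Global.asax).
    public static class LookupTable
    {
        static readonly ConcurrentDictionary<string, string> Map =
            new ConcurrentDictionary<string, string>();

        public static void Load(IEnumerable<KeyValuePair<string, string>> rows)
        {
            foreach (var row in rows)
                Map[row.Key] = row.Value;
        }

        // Called per request: a pure in-process hash lookup, with no
        // socket hop or serialization overhead.
        public static string Find(string key)
        {
            string value;
            return Map.TryGetValue(key, out value) ? value : null;
        }
    }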

Ultimately, moving from disk to in-memory access is where the significant gains come from; if this level of performance is important to you, then I don't think you want to consider an interpreted language. There is always overhead in handling a request: network I/O, parsing the protocol, dispatching a worker thread, and so on. Compared with the total time needed to complete a request, the difference between a shared in-memory store (on the same host) and in-process memory is negligible.

+5

First, let me try to think through your direct questions with you:

- For the performance you are aiming for, I would say that requiring shared-memory access for the lookup tables is overkill. For example, the memcached developers on expected performance: "On a fast machine with very high speed networking (or local access), memcached can easily handle 200,000 requests per second."

- Right now you are probably limited by CPU time, since you generate every page dynamically. If at all possible: cache, cache, cache! Cache your front page and rebuild it only once a minute or once every five minutes. For logged-in users, cache the pages they may visit in their session. To give an idea of the numbers: where 50 requests/sec is not too bad for dynamic pages, a reverse proxy such as Varnish can serve thousands of static pages per second on a rather mediocre server. My best tip would be to look into setting up a reverse proxy using Varnish or Squid; see the sketch after this list for how the application side cooperates.

- If you still need to generate many pages dynamically, use a PHP accelerator (opcode cache) to avoid having to compile the PHP code every time a script is executed. According to Wikipedia, that alone is a 2-10x performance gain.

- mod_php is the fastest way to run PHP.

- Besides using FastCGI, you could write an Apache module and keep your data in a memory space shared with the web server itself. This can be very fast. However, I have never heard of anyone doing this for performance, and it is a very inflexible solution.
- If you move toward more static content or go down the FastCGI path: lighttpd is faster than Apache.

- Still not fast enough? In-kernel web servers such as TUX can be very fast.
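
Since you are considering ASP.NET, here is one way the reverse-proxy tip can be made concrete on the application side: the backend only has to emit standard Cache-Control headers, and Varnish or Squid will serve the cached copy. A minimal sketch (the handler name, the one-minute TTL, and the rendering stub are illustrative):

    using System;
    using System.Web;

    public class FrontPageHandler : IHttpHandler
    {
        public void ProcessRequest(HttpContext context)
        {
            // Tell any reverse proxy (Varnish, Squid) that this response
            // may be cached for 60 seconds: the backend then renders the
            // page at most once a minute, regardless of traffic.
            context.Response.Cache.SetCacheability(HttpCacheability.Public);
            context.Response.Cache.SetMaxAge(TimeSpan.FromSeconds(60));

            context.Response.ContentType = "text/html";
            context.Response.Write(RenderFrontPage());
        }

        public bool IsReusable { get { return true; } }

        // Placeholder for the expensive dynamic rendering.
        static string RenderFrontPage() { return "<html>...</html>"; }
    }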


Second: you are not the first to face this challenge, and fortunately some of the bigger fish are kind enough to share their "tricks" with us. I suppose this goes beyond the scope of your question, but it can be genuinely inspiring to see how these guys have solved their problems, so I decided to share the material I know of. Check out this presentation on Facebook's architecture and this presentation, "Building Scalable Web Services", which contains some of Flickr's design notes.

Facebook also lists an impressive set of tools it has developed and contributed, and shares notes on its architecture. Some of their performance-enhancing tricks:
- performance-enhancing tweaks to memcached, such as memcached-over-UDP;
- HipHop, a PHP-to-optimized-C++ compiler; Facebook engineers claim a 50% reduction in CPU usage;
- implementing computationally intensive services in a "faster language" and gluing everything together with Thrift.

+1
