The article " http-implementation of fasthttp in golang " from husobee mentions:
Well, this is much better for several reasons:
- The worker pool model is a zero-allocation model, since the workers are already initialized and ready to serve, whereas the stdlib implementation has to allocate memory for a new goroutine for every connection with `go c.serve()`.
- The worker pool model is easier to tune, since you can increase or decrease the buffer size of the number of work units you can accept, as opposed to stdlib's fire-and-forget model (see the worker-pool sketch after this list).
- The worker pool model keeps handlers connected to the server through a channel: if the server needs to, say, shut down, it can communicate that to the workers much more easily than in the stdlib implementation.
- A better signature for handler functions, since it takes only a context that includes both the request and the writer the handler needs. This is better than the standard library, where all you get is the response writer and the request... The work in go1.7 to put a context inside the request is pretty much a hack to give people what they really want (a context) without breaking anyone (compare the two signatures in the second sketch below).
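The following is a minimal sketch of the worker-pool model described in the bullets above, not fasthttp's actual implementation: a fixed set of workers pulls accepted connections from a buffered channel, so no goroutine is allocated per connection, the buffer size bounds how much work can queue (throttling), and a quit channel lets the server signal every worker to stop. The port, worker count, and buffer size are illustrative.

```go
package main

import (
	"fmt"
	"net"
)

func worker(conns <-chan net.Conn, quit <-chan struct{}) {
	for {
		select {
		case c := <-conns:
			// This goroutine is reused for every connection it receives;
			// a real server would read and parse the HTTP request here.
			fmt.Fprint(c, "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
			c.Close()
		case <-quit:
			return // server asked all workers to stop
		}
	}
}

func main() {
	ln, err := net.Listen("tcp", ":8080")
	if err != nil {
		panic(err)
	}
	conns := make(chan net.Conn, 100) // buffer size = how many connections may queue
	quit := make(chan struct{})

	// Workers are started once, up front: no per-request goroutine allocation.
	for i := 0; i < 8; i++ {
		go worker(conns, quit)
	}

	for {
		c, err := ln.Accept()
		if err != nil {
			close(quit) // the channel lets the server tell every worker to shut down
			return
		}
		conns <- c // blocks when the buffer is full, throttling new work
	}
}
```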
In general, it is simply better to write a server with a worker pool model for serving requests, as opposed to just spawning a "thread" per request with no ability to throttle out of the box.
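And here is a small side-by-side of the two handler signatures mentioned in the last bullet: fasthttp's handler receives a single `*fasthttp.RequestCtx` that carries both the request and the response writer, while net/http's handler receives them separately, with the go1.7 context tucked inside the request. The listen addresses and handler bodies are illustrative.

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/valyala/fasthttp"
)

// fasthttp style: one argument holds both the request and the response.
func fastHandler(ctx *fasthttp.RequestCtx) {
	ctx.SetStatusCode(fasthttp.StatusOK)
	fmt.Fprintf(ctx, "hello from %s", ctx.Path())
}

// net/http style: writer and request are separate; the context rides along
// inside the request rather than appearing in the signature.
func stdHandler(w http.ResponseWriter, r *http.Request) {
	_ = r.Context() // go1.7+ context, attached to the request
	w.WriteHeader(http.StatusOK)
	fmt.Fprintf(w, "hello from %s", r.URL.Path)
}

func main() {
	go fasthttp.ListenAndServe(":8081", fastHandler)
	http.HandleFunc("/", stdHandler)
	http.ListenAndServe(":8080", nil)
}
```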