Heroku, H12 and long-running upload timeouts

Overview:

I have a photo booth that takes photos and sends them to my web application. The web application then stores the user data and posts the image to the user's Facebook profile / fan page.

My web application runs Ruby on Rails on Heroku's Cedar stack.

Flow:

  • My webapp receives a photo from the photo booth via an HTTP POST, as a web form.
  • The booth waits for a server response. If the upload fails, it sends the image again.
  • The webapp's response only starts after the Facebook upload is complete.

Problems:

The webapp responds to the photo booth only after all processing has completed. This often takes more than 30 seconds, which causes Heroku to raise an H12 - Request Timeout.

Solutions?

Keep the request alive while the file is uploading (return some response data to prevent Heroku's H12 from firing - https://devcenter.heroku.com/articles/http-routing#timeouts ). Is that possible? How can I achieve it in Ruby?
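One way to keep the request alive might look roughly like this. This is a hypothetical sketch, not tested against the actual app: it assumes Rails 4's `ActionController::Live`, and the controller name and the `upload_to_facebook` helper are made up. Heroku's router allows 30 s for the first byte and then a rolling 55 s window between bytes, so periodically flushing a byte can keep the connection open while the upload runs in a separate thread.

```ruby
# Hypothetical sketch: stream filler bytes while the Facebook upload runs in
# a background thread, so Heroku's router never sees 30 s of silence.
class PhotosController < ApplicationController
  include ActionController::Live # enables response.stream (Rails 4+)

  def create
    response.headers["Content-Type"] = "text/plain"

    # upload_to_facebook is a hypothetical helper standing in for the real upload.
    worker = Thread.new { upload_to_facebook(params[:photo]) }

    # Thread#join(5) returns nil until the thread finishes; emit one byte
    # roughly every 5 seconds in the meantime to reset the router's window.
    response.stream.write(" ") while !worker.join(5)

    response.stream.write("OK")
  ensure
    response.stream.close
  end
end
```

The downside is that the booth receives a 200 status before the upload's outcome is known, so success/failure has to be signaled in the streamed body rather than the status code.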

Switch to Unicorn + nginx and enable the upload module (so the dyno only receives the request after the upload has completed - see "Unicorn + Rails + Large Uploads"). Is that actually possible?

Use a timeout gem. But that would make my long-running uploads fail, so the photos would never be posted to Facebook, right?
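As a toy illustration of that concern (a pure-Ruby stand-in using the stdlib Timeout module, not the actual app code): a timeout kills the work mid-flight, so the request finishes but the photo never reaches Facebook.

```ruby
require "timeout"

# Demonstrates why a request timeout is not a fix here: the slow upload is
# simply aborted, and the :posted result is never reached.
def upload_with_deadline(seconds, upload_takes:)
  Timeout.timeout(seconds) do
    sleep(upload_takes) # stand-in for the slow Facebook upload
    :posted
  end
rescue Timeout::Error
  :aborted # the upload was killed before completing
end
```

So a timeout gem only trades an H12 for a silently dropped upload.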

Change the architecture: upload directly to S3, run a worker that checks the bucket for newly uploaded images, processes them, and posts them to Facebook. This may be the best option, but it takes a lot of time and effort. I can do it in the long run, but right now I am looking for a quick solution.
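For reference, the direct-to-S3 variant could be sketched like this (assumptions: the aws-sdk-s3 gem, AWS credentials in the environment, and an illustrative bucket/key naming scheme):

```ruby
require "aws-sdk-s3"     # assumes the aws-sdk-s3 gem is installed
require "securerandom"

# Generate a presigned URL that the photo booth can PUT the image to
# directly, so the upload never occupies a dyno. Bucket name is made up.
s3  = Aws::S3::Resource.new(region: "us-east-1")
obj = s3.bucket("photo-booth-uploads")
        .object("incoming/#{SecureRandom.uuid}.jpg")

url = obj.presigned_url(:put, expires_in: 15 * 60) # valid for 15 minutes

# Hand `url` back to the booth in the POST response; a separate worker
# process later lists the bucket for new keys and posts them to Facebook.
```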

Other ideas?

2 answers

Additional information on this issue.

From Rap Genius: http://rapgenius.com/Lemon-money-trees-rap-genius-response-to-heroku-lyrics

Ten days ago, prompted by a minor problem serving our compiled JavaScript, we started running a bunch of ab benchmarks. We noticed that the numbers we were getting were consistently worse than the numbers Heroku and their analytics partner New Relic were reporting. For a static artist page, for example, Heroku reported an average response time of 40 ms; our own tools said 6330 ms. What could explain such a huge difference?

"Requests are queueing at the dyno level," a Heroku engineer told us. "Once a request reaches the dyno it is served quickly (which is why the Rails logs look fast), but the overall time is longer because of time spent waiting in the queue."

Queueing at the dyno level? What?

From Heroku: https://blog.heroku.com/archives/2013/2/16/routing_performance_update

Over the past couple of years, Heroku customers have occasionally reported unexplained latency on Heroku. There are many causes of latency, some of which have nothing to do with Heroku, but until this week we failed to see a common thread among these reports. We now know that our routing and load-balancing mechanism on the Bamboo and Cedar stacks has created latency issues for our Rails customers, which manifested in several ways, including:

  • Unexplained, high latency for some requests
  • Mismatch between the reported queuing and service time metrics and observed reality
  • Discrepancies between documented and observed behavior

For applications running on the Bamboo stack, the root cause of these problems is the nature of routing on Bamboo combined with the gradual, horizontal expansion of the routing cluster. On the Cedar stack, the root cause is that Cedar is optimized for concurrent request routing, while some frameworks, such as Rails, are not concurrent in their default configuration.

We want Heroku to be the best place to build, deploy and scale web and mobile applications. In this case, we have not lived up to that promise. We failed to:

  • Properly document how routing works on the Bamboo stack
  • Understand the degradation of service our customers were experiencing and take corrective action
  • Identify and correct the confusing metrics reported at the routing layer and displayed by third-party tools
  • Clearly communicate the product strategy for our routing service
  • Provide customers with a path to upgrade from non-concurrent applications on Bamboo to concurrent Rails applications on Cedar
  • Deliver on Heroku's promise of letting you focus on developing your app while we worry about the infrastructure

We are immediately taking the following actions:

  • Improving our documentation so that it accurately reflects how our service works on both the Bamboo and Cedar stacks
  • Removing incorrect and confusing metrics reported by Heroku or partner services such as New Relic
  • Adding metrics that let customers determine the impact of queuing on application response times
  • Providing additional tools that developers can use to augment our latency and queuing metrics
  • Working to better support concurrent-request Rails applications on Cedar

The remainder of the blog post explains the technical details and history of our routing infrastructure, the design decisions we made along the way, the mistakes we made, and what we think the path forward is.

1) You can use Unicorn as your application server and set the timeout after which the Unicorn master kills a worker to a few seconds more than you need. Here is an example setup showing a 30-second timeout.
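A minimal config/unicorn.rb along those lines might look like this (a sketch; the values are illustrative, and note that Heroku's router gives up at 30 s regardless, so raising Unicorn's timeout above that does not buy extra time for the response):

```ruby
# config/unicorn.rb - sketch, values are illustrative
worker_processes 3   # concurrent workers per dyno
timeout 30           # master kills any worker busy longer than 30 s
preload_app true     # load the app once in the master, then fork workers
```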

nginx does not run on Heroku, so that is not an option.

2) Changing the architecture would also work well, though I would rather choose an option where the upload traffic does not hit my own server at all, for example TransloadIt. They can receive the images, put them on S3 for you, and perform custom transformations, crops, and so on, without you having to add extra dynos because your processes are blocked handling file uploads.

Addendum: 3) Another architectural change would be to handle only the receiving part in the controller action and push the Facebook upload to a background job (using, for example, Sidekiq).
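A minimal sketch of that split, assuming the sidekiq gem; the `Photo` model and `FacebookClient` wrapper are hypothetical names, not part of the original question:

```ruby
# Sketch assuming the sidekiq gem is installed and Redis is available.
class FacebookUploadJob
  include Sidekiq::Worker
  sidekiq_options retry: 5 # re-attempt failed Facebook uploads automatically

  def perform(photo_id)
    photo = Photo.find(photo_id)          # hypothetical ActiveRecord model
    FacebookClient.post_photo(photo.file) # hypothetical Facebook API wrapper
  end
end

# In the controller action: persist the photo, enqueue, respond immediately.
# The booth gets its response in milliseconds; the upload happens later.
#
#   photo = Photo.create!(file: params[:photo])
#   FacebookUploadJob.perform_async(photo.id)
#   head :ok
```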


Source: https://habr.com/ru/post/1435969/

