How to run InfluxDB on Heroku?

Is it possible, and if so, how? I would like to access it from Heroku's existing infrastructure.

Do I need a Procfile? From what I understand, InfluxDB is just a standalone binary written in Go, so it should not be hard to deploy. I just wonder how to deploy it, because I do not think I understand all the inputs to a Heroku deployment.

+6
2 answers

Heroku Dynos should not be used to deploy a database application such as InfluxDB.

Dynos are ephemeral servers. Data is not preserved across dyno restarts and cannot be shared with other dynos. In fact, any database application deployed to a dyno is pretty much useless. That's why Heroku databases (like Postgres) are all add-ons. InfluxDB should be set up on a different platform (such as AWS EC2 or a VPS), since no Heroku add-on for it is available.


However, you can deploy InfluxDB to a Heroku dyno.

To get started, it is important to understand the concept of a "slug". Slugs are containers (similar to Docker images) that hold everything needed to run a program on the Heroku infrastructure. To deploy InfluxDB, you must build an InfluxDB slug.* There are two ways to build a slug for a Go program:

  • Build a slug directly from the Go executable, as described here.**
  • Build a slug from the source code using the Heroku Go buildpack (explained below).

To build a slug from source with the buildpack, first clone the InfluxDB GitHub repository. Then add a Procfile to the root of the repo, which tells Heroku which command to run when the dyno starts.

 echo 'web: ./influxd' > Procfile 

The Go buildpack requires all dependencies to be included in the repository. Use the godep dependency tool to save them into a vendored directory.

 go get github.com/tools/godep
 godep save

Then commit the changes above to the git repository.

 git add -A .
 git commit -m dependencies

Finally, create a new application and tell Heroku to compile it with the Go buildpack.

 heroku create -b https://github.com/kr/heroku-buildpack-go.git
 git push heroku master
 heroku open  # open the newly created InfluxDB instance in the browser

Heroku will display an error page. The error appears because Heroku's web process type requires the application to listen for incoming requests on the port given by the $PORT environment variable; otherwise, Heroku kills the dyno. The InfluxDB API and admin panel listen on ports 8086 and 8083, respectively.
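To see why the dyno was killed, tail its logs with the Heroku CLI:

 heroku logs --tail  # shows the web dyno being killed for not binding to $PORT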

Unfortunately, InfluxDB does not allow setting these ports from environment variables, only through the configuration file ( /etc/config.toml ). A small bash wrapper script can write the correct port into the configuration file before starting InfluxDB.
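A minimal sketch of such a wrapper is shown below; the config file path, the port key it rewrites, and the -config flag are assumptions that vary between InfluxDB versions, so adjust them to match your setup. The Procfile would then point at the wrapper (for example web: ./run.sh) rather than at ./influxd directly.

 #!/usr/bin/env bash
 # run.sh -- wrapper sketch: patch the API port into the config, then start InfluxDB.
 # Assumes the config file contains a line like "port = 8086" for the API listener.
 set -e
 CONFIG=config.toml
 sed -i "s/port = 8086/port = ${PORT}/" "$CONFIG"  # use the port Heroku assigned
 exec ./influxd -config="$CONFIG"                  # flag name may differ between versions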

Another problem: Heroku exposes only one port per dyno, so the API and the admin panel cannot both be reachable from the Internet at the same time. A smart reverse proxy could work around this using Heroku's X-Forwarded-Port request header.

Bottom line: do not use Heroku dynos to run InfluxDB.


* This means the benefits of a standalone Go executable are lost when deploying to Heroku, since it has to be recompiled for the Heroku stack.

** Building a slug directly from the InfluxDB executable does not work, because there is no built-in way to make it listen on the port Heroku assigns via the $PORT environment variable.

+6

I'd like to think almost anything is possible on a Heroku dyno with a custom buildpack, but there are some considerations when hosting with Heroku:

  • ops, e.g. backups and monitoring (these require installing additional services, opening extra ports, etc., which Heroku can get in the way of)
  • dyno size, and if you need a larger dyno, cost becomes a problem; going the IaaS route gets you more bang for your buck
  • other dyno "features", e.g. disk ephemerality

I would highly recommend either hosted InfluxDB or rolling your own on a VPS, either of which your existing Heroku-based applications can point to. It then helps to keep these instances as close to each other as possible (i.e., in the same region or, if possible, peered with each other), assuming you need low latency between the DB and the application stack.

+1
