How to synchronize market data on an ongoing basis and display it as historical time series data

http://pubapi.cryptsy.com/api.php?method=marketdatav2

I would like to synchronize market data on an ongoing basis (for example, from Cryptsy and other exchanges). I would like to regularly show the latest buy/sell prices of the relevant orders on these exchanges as a historical time series.

What underlying database should I use to store the received data, and to display or chart any parameter from it as historical time series data?
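To make the "ongoing basis" part concrete, here is a minimal polling sketch, assuming Python with the `requests` library; the `lasttradeprice` and `lasttradetime` field names reflect the historical Cryptsy marketdatav2 response and may need adjusting for other exchanges:

```python
import time
import requests

API_URL = "http://pubapi.cryptsy.com/api.php?method=marketdatav2"

def fetch_ticks():
    """Fetch the latest trade price for every market (one polling cycle)."""
    payload = requests.get(API_URL, timeout=10).json()
    markets = payload.get("return", {}).get("markets", {})
    for label, market in markets.items():
        yield {
            "market": label,                           # e.g. "LTC/BTC"
            "price": float(market["lasttradeprice"]),  # assumed field name
            "time": market["lasttradetime"],           # assumed field name
        }

if __name__ == "__main__":
    while True:                       # poll on an ongoing basis
        for tick in fetch_ticks():
            print(tick)               # replace with a database write
        time.sleep(60)                # one data point per market per minute
```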

3 answers

I would suggest you look at a database designed to handle time series data. One that comes to mind is InfluxDB. This question takes a more general look at time series databases.
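A minimal write sketch, assuming the Python `influxdb` client and a local InfluxDB 1.x instance; the `marketdata` database name and measurement layout are assumptions:

```python
from influxdb import InfluxDBClient

client = InfluxDBClient(host="localhost", port=8086, database="marketdata")
client.create_database("marketdata")  # idempotent if it already exists

def write_tick(market, price, timestamp):
    """Store one last-trade price as a time series point."""
    point = {
        "measurement": "last_trade_price",
        "tags": {"market": market},           # e.g. "LTC/BTC"
        "time": timestamp,                    # RFC3339 or epoch timestamp
        "fields": {"price": float(price)},
    }
    client.write_points([point])

# Charting a parameter then becomes a time-bounded query per market, e.g.:
# SELECT price FROM last_trade_price WHERE market = 'LTC/BTC' AND time > now() - 1d
```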


I think this requires more detail about the requirement. It simply says "I need to synchronize time series data." What is the scenario? What are the data source and the destination?

Option 1.

If we are only talking about a data synchronization problem between two data stores, the simplest solution is the CouchDB family of NoSQL databases (CouchDB, Couchbase, Cloudant).

All of them are based on CouchDB, and one way or another they provide data replication at the data center level (XDCR, cross data center replication). This way you can copy the data to another CouchDB in another data center, or even to a CouchDB on mobile devices.
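A minimal sketch of triggering such replication through CouchDB's standard `/_replicate` endpoint, assuming Python with `requests`; the node addresses, credentials, and database names are hypothetical:

```python
import requests

COUCH = "http://localhost:5984"  # local CouchDB node (assumed)

# Continuously replicate the local 'marketdata' database to a node in
# another data center; both database names here are hypothetical.
resp = requests.post(
    f"{COUCH}/_replicate",
    json={
        "source": "marketdata",
        "target": "http://remote-dc.example.com:5984/marketdata",
        "continuous": True,   # keep the target in sync as new docs arrive
    },
    auth=("admin", "password"),
)
print(resp.json())
```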

Hope this will be helpful for you.

Option 2

Another approach is data integration. You can synchronize data with an ETL batch job: a batch worker periodically copies data to the destination. This is the most common way to replicate data to another destination. There are many tools along these lines, for example Pentaho ETL (Kettle), Spring Integration, and Apache Camel.
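As a rough illustration of the batch idea (not of any particular tool above), a periodic copy job might look like the following; the SQLite files, the `ticks` table, and the five-minute interval are all assumptions:

```python
import sqlite3
import time

def copy_new_rows(src_path="source.db", dst_path="destination.db"):
    """One batch run: copy rows the destination has not seen yet."""
    src, dst = sqlite3.connect(src_path), sqlite3.connect(dst_path)
    dst.execute(
        "CREATE TABLE IF NOT EXISTS ticks "
        "(id INTEGER PRIMARY KEY, market TEXT, price REAL, ts TEXT)"
    )
    last_id = dst.execute("SELECT COALESCE(MAX(id), 0) FROM ticks").fetchone()[0]
    rows = src.execute(
        "SELECT id, market, price, ts FROM ticks WHERE id > ?", (last_id,)
    ).fetchall()
    dst.executemany("INSERT INTO ticks VALUES (?, ?, ?, ?)", rows)
    dst.commit()
    src.close()
    dst.close()
    return len(rows)

if __name__ == "__main__":
    while True:                 # a scheduler (cron, etc.) would normally drive this
        print(f"copied {copy_new_rows()} rows")
        time.sleep(300)         # run the batch every 5 minutes
```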

If you provide me with a more detailed scenario, I can help in more detail.

Enjoy-Terry


I think MongoDB is a good choice. Here's why:

  • You can scale easily and thus store a huge amount of data. By choosing an appropriate shard key, you can even place the shards close to the exchange they follow, to improve speed should that become a problem.
  • Replica sets offer automatic failover to another node, addressing what could otherwise implicitly become a problem.
  • Using the TTL feature, data can be automatically deleted once its TTL expires, effectively creating a round-robin database (see the sketch after this list).
  • Both the aggregation framework and map/reduce will be useful.
  • There are several free classes at MongoDB University that will help you avoid the most common mistakes.
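A minimal sketch of the TTL and aggregation points above, assuming Python with `pymongo`; the database, collection, and field names are assumptions:

```python
import datetime
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")
ticks = client["marketdata"]["ticks"]   # assumed database/collection names

# TTL index: documents are removed ~30 days after their 'ts' timestamp,
# giving the "round-robin database" behaviour mentioned above.
ticks.create_index([("ts", ASCENDING)], expireAfterSeconds=30 * 24 * 3600)

# Insert one tick (this would be called from the polling loop).
ticks.insert_one({
    "market": "LTC/BTC",
    "price": 0.0123,
    "ts": datetime.datetime.utcnow(),
})

# Aggregation framework: average price per market over the stored window.
pipeline = [
    {"$group": {"_id": "$market", "avg_price": {"$avg": "$price"}}},
    {"$sort": {"avg_price": -1}},
]
for row in ticks.aggregate(pipeline):
    print(row)
```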
