CouchDB Scaling and Performance

I am considering implementing a CouchDB server to provide ad-hoc search over some of the metadata that we store for an internal business operation.

We store a number of "attributes" for "jobs" in our internal process, such as size, source, submission date, and URLs.

This is all well and good in our relational database, but our users would like to build lists of similar jobs by entering "search criteria", much like a Google search. So a user should be able to say "show me all the jobs that are larger than XXX and submitted after YYY" and get back a list of descriptions and URLs.
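
For concreteness, here is a minimal sketch of the kind of CouchDB map function that could back such a search (call the view "by_submitted"). The field names (type, submitted, size, description, url) are assumptions based on the attributes described above, not names from any actual schema:

    // A single CouchDB view can only be range-queried along one key
    // order, so the submission date becomes the key and the size is
    // carried in the value, to be filtered on the client.
    function (doc) {
      if (doc.type === "job" && doc.submitted) {
        emit(doc.submitted, {
          size: doc.size,
          description: doc.description,
          url: doc.url
        });
      }
    }

An alternative is a composite key such as [doc.submitted, doc.size]; that still only gives a contiguous range on the leading element, but it keeps both criteria inside the index.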

That sounds like a perfect fit for CouchDB, and from what I have researched, it looks like it will work well.

My question is how well it will scale on appropriate hardware. We have 150-200 million such documents, with 11-30 attributes per document; the metadata for each is no more than a few kilobytes.
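
As a rough sanity check on raw volume (assuming ~2 KB of metadata per document, an assumed average rather than a measured one): 200,000,000 documents × 2 KB ≈ 400 GB of raw metadata, before any view indexes, which store their own copies of the emitted keys and values on top of that.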

Initially I am assuming a quad-core (VM) server will serve this for testing, but I will need to scale it to support 100-250 concurrent users.

I know I can do this with most DB servers, but I'm looking for something that supports ad-hoc queries (nothing more than REST over HTTP is fine; we have our own search tools).
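
To illustrate how little client plumbing that implies, here is a sketch of querying the hypothetical "by_submitted" view from above using Node's built-in fetch (Node 18+); the database name "jobs" and design document "search" are made up for the example:

    // CouchDB expects view keys JSON-encoded in the query string;
    // startkey covers the "submitted after YYY" half of the search.
    async function jobsSubmittedAfter(date, minSize) {
      const startkey = encodeURIComponent(JSON.stringify(date));
      const url = "http://localhost:5984/jobs/_design/search/_view/by_submitted"
                + "?startkey=" + startkey;
      const { rows } = await (await fetch(url)).json();
      // The "larger than XXX" half is filtered client side.
      return rows.filter((r) => r.value.size > minSize);
    }

    // e.g. jobsSubmittedAfter("2011-06-01", 1048576).then(console.log);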

Does anyone have experience setting up CouchDB and using it for workloads at this level?

1 answer

The Erlang foundation of CouchDB is built for massive concurrency, so serving 100-250 simultaneous users is not the part I would worry about.

Have you thought about how your ad-hoc criteria will map onto views? Each view is a precomputed index along a single key order, so arbitrary combinations of attributes may need several views or some client-side filtering.

Although the server itself is Erlang, the views are written in JavaScript and everything goes in and out as JSON over HTTP, so it should sit comfortably behind your own search tools.

Test with a representative slice of your data first; building views over 150-200 million documents takes time.
