Database Architecture Using Redis + Node.js

Following some other SO questions, I am developing a website monitoring application as a pet project to learn more about Node.js + Redis.

I planned for users to add URLs, which I store in a Redis SET. Every minute I read the SET, execute an HTTP GET request for each URL, and print the response.

Everything seems to be working fine, however I have a couple of questions:

  • Given that a Redis SET does not allow duplicate members (which prevents me from executing a request to the same URL twice), how do I handle the case when a user removes a URL from his account while another user still has the same URL? Can I keep an INCR counter on the URL key so that I know how many users have that URL in their accounts?

  • Given that I execute an HTTP request every minute, and I want to use Redis to save the results (response time, up/down, etc.), what is the best way to save all this data in Redis (the results of the requests to each URL every minute)? Should I store each response under a unique Redis key?

  • To show results to the user in real time, what's the best way to request results and analyze them in real time?

Thanks for the help.


I think you should start by prototyping the commands in redis-cli . I would also recommend Simon Willison's very good article explaining Redis.

Given that a Redis SET does not allow duplicate members (which prevents me from making a request to the same URL twice), how can I handle the case when a user removes a URL from his account, but another user still has the same URL? Can I keep an INCR counter on the URL key so that I know how many users have that URL in their accounts?

I would use SADD + INCR for this.

 SADD urls http://www.google.com
 INCR http://www.google.com

To remove http://www.google.com , I would simply do:

 DECR http://www.google.com
 # only if the DECR returns 0 should you then remove it from the SET:
 SREM urls http://www.google.com
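The reference-counting scheme above can be sketched in Node.js. This is a minimal in-memory illustration of the same semantics (SADD/INCR on add, DECR/SREM on remove), not a live Redis client; with a client such as node-redis the equivalent calls would go over the wire instead of touching local data structures.

```javascript
// In-memory stand-ins for the Redis SET "urls" and the per-URL counters.
const urls = new Set();
const counts = new Map();

function addUrl(url) {
  urls.add(url);                                // SADD urls <url>
  counts.set(url, (counts.get(url) || 0) + 1);  // INCR <url>
}

function removeUrl(url) {
  const n = (counts.get(url) || 0) - 1;         // DECR <url>
  counts.set(url, n);
  if (n <= 0) {                                 // only the last user triggers SREM
    urls.delete(url);                           // SREM urls <url>
    counts.delete(url);
  }
}

addUrl('http://www.google.com');    // user A monitors it
addUrl('http://www.google.com');    // user B monitors it too
removeUrl('http://www.google.com'); // user A removes it; still monitored
console.log(urls.has('http://www.google.com')); // true
removeUrl('http://www.google.com'); // user B removes it; gone
console.log(urls.has('http://www.google.com')); // false
```

Because INCR and DECR are atomic in Redis, two users adding or removing the same URL concurrently will not corrupt the count, which is exactly why this pattern works there.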

Given that I execute an HTTP request every minute, and I want to use Redis to save the results (response time, up/down, etc.), what is the best way to save all this data in Redis (the results of the requests for each URL every minute)?

I would use a unique key for each URL and write the data back to Redis as JSON ( JSON.stringify(obj) ) using MSET .

 MSET data:http://www.google.com "{json for google}" data:http://www.yahoo.com "{json for yahoo}" 
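For example, each minute's check result can be serialized before being written under its data:&lt;url&gt; key. The field names here (url, up, responseTimeMs, checkedAt) are an assumed shape for illustration, not anything Redis or the answer prescribes:

```javascript
// Serialize one check result to the JSON string stored under data:<url>.
// The field names are illustrative; pick whatever your UI needs.
function serializeResult(url, up, responseTimeMs) {
  return JSON.stringify({
    url,
    up,                    // true if the HTTP GET succeeded
    responseTimeMs,        // measured round-trip time
    checkedAt: Date.now(), // timestamp of this check
  });
}

const payload = serializeResult('http://www.google.com', true, 123);
console.log(payload);
```

Note that MSET overwrites the previous value, so a single data:&lt;url&gt; key keeps only the latest sample; if you want per-minute history, one alternative is appending each JSON string to a Redis list per URL, or including a timestamp in the key.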

To show the results to the user in real time, what is the best way to request results and analyze them in real time?

I would fetch the results via MGET and parse the JSON ( obj = JSON.parse(jsonString) ).
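A sketch of the read side: MGET returns one JSON string per key (or null for keys with no value), which you parse and summarize for the user. The reply array below is a simulated stand-in for the Redis call, and the summary shape is an assumption:

```javascript
// Parse the array of JSON strings that MGET returns (null for missing keys)
// and compute a simple real-time up/down summary.
function summarize(mgetReplies) {
  const results = mgetReplies
    .filter((s) => s !== null)  // missing keys come back as null
    .map((s) => JSON.parse(s)); // obj = JSON.parse(jsonString)
  const upCount = results.filter((r) => r.up).length;
  return {
    total: results.length,
    up: upCount,
    down: results.length - upCount,
  };
}

// Simulated MGET reply for two monitored URLs plus one empty key:
const replies = [
  '{"url":"http://www.google.com","up":true,"responseTimeMs":120}',
  '{"url":"http://www.yahoo.com","up":false,"responseTimeMs":0}',
  null,
];
console.log(summarize(replies)); // { total: 2, up: 1, down: 1 }
```

For "real time" in the browser, you would run this on each polling tick (or push the summary over a WebSocket); the parsing itself is cheap enough to do every minute for many URLs.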


Source: https://habr.com/ru/post/1336300/
