What is the best way to handle simultaneous commands in a concurrent system?

I am developing an SVN-like system (which basically has put / get / list / delete commands) that should accept commands from multiple clients. My idea was to put all received commands into a thread-safe structure, similar to a queue, and execute them sequentially in another thread.
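A minimal sketch of that design (Python used for illustration; the command names come from the question, everything else is invented): client handlers enqueue commands into a thread-safe queue, and a single worker thread executes them one at a time, so no two commands ever touch the repository state concurrently.

```python
import queue
import threading

commands = queue.Queue()   # thread-safe FIFO; no extra locking needed

results = []

def execute(cmd, arg):
    results.append((cmd, arg))      # stand-in for the real put/get/list/delete logic

def worker():
    while True:
        cmd, arg = commands.get()   # blocks until a command arrives
        if cmd == "stop":           # sentinel used to shut the worker down
            break
        execute(cmd, arg)
        commands.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()

# Any number of client-handler threads can enqueue safely:
commands.put(("put", "projectA/main.c"))
commands.put(("list", "projectA"))
commands.put(("stop", None))
t.join()
```

Because `queue.Queue` handles its own locking, the client-facing threads never need to coordinate with each other directly.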

Later, with more time, I could develop some clever mechanism to determine whether two or more of the queued commands can be started at the same time, so that I can speed up the process (for example, if two commands in the queue operate on different projects, there are no race conditions / shared-data problems).
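One hypothetical way to sketch that refinement (all names invented): since commands on different projects share no state, keep one lock per project, so only same-project commands are serialized while different projects proceed in parallel.

```python
import threading
from collections import defaultdict

project_locks = defaultdict(threading.Lock)  # lazily creates one lock per project
registry_lock = threading.Lock()             # guards the defaultdict itself

done = []                                    # list.append is atomic in CPython

def run_command(project, cmd):
    with registry_lock:                      # get-or-create this project's lock
        lock = project_locks[project]
    with lock:                               # serialize only within one project
        done.append((project, cmd))          # stand-in for the real file work

threads = [threading.Thread(target=run_command, args=a) for a in
           [("projA", "put"), ("projB", "get"), ("projA", "delete")]]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The per-project lock is the coarsest granularity at which the question's "different projects can't conflict" assumption holds; finer-grained (per-file) locking is possible but reintroduces the deadlock concerns discussed below.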

Now I am faced with the question of whether this is really the best solution to the problem. I could simply lock the files as I use them, but I am afraid this may lead to deadlocks in some specific cases (although, tbh, I still could not come up with a specific case where that would happen).
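For reference, the classic way to rule out deadlocks with per-file locks (a standard technique, not something stated in the question) is to always acquire locks in a single global order, e.g. sorted by file path. Two commands can then never each hold a lock the other is waiting for. A sketch with invented names:

```python
import threading

file_locks = {path: threading.Lock()
              for path in ("a.txt", "b.txt", "c.txt")}

def with_files(paths, action):
    ordered = sorted(paths)          # one global ordering for every caller
    for p in ordered:
        file_locks[p].acquire()
    try:
        action()                     # all needed files are locked here
    finally:
        for p in reversed(ordered):  # release in reverse order
            file_locks[p].release()

log = []
with_files(["b.txt", "a.txt"], lambda: log.append("commit 1"))
with_files(["c.txt", "b.txt"], lambda: log.append("commit 2"))
```

A deadlock requires a cycle in the waits-for graph; sorting the acquisition order makes such a cycle impossible, regardless of which files each command touches.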

This project is for a network security class, so this SVN-like system is just the foundation on which I will later build my security work. I would not say performance is of paramount importance (but avoiding deadlocks IS!); the goal is correctness of the algorithm.

How do you approach this situation?

1 answer

I have done several projects of a similar nature. In one iteration, we copied files on the server before transferring them to the client, and in the opposite direction we wrote to a temporary file before overwriting the primary file. It was a mess. Do not do that! A typical Internet connection these days is not much slower than a regular disk, and a typical disk transfer is usually slower than an average LAN connection. Besides, aren't you storing (usually small) source code files here anyway?

File locks should be enough to protect the files. Every mainstream library these days supports locking files at the operating-system level. (Look at the constructor overloads for FileStream in .NET if you don't believe me.) When you need to write data, you try to acquire write locks on all the necessary files; if that is not possible, you return a timeout error to the user. One system I worked on had persistent read locks, so if you wanted people to be able to read from a specific file all day, you could do that with a flag. If plain file locks are not sufficient, have a look at the .NET ReaderWriterLock class.
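The "acquire write locks on all needed files, otherwise return a timeout error" policy can be sketched like this (Python thread locks standing in for the OS-level file locks the answer describes; all names are invented):

```python
import threading

file_locks = {"a.txt": threading.Lock(), "b.txt": threading.Lock()}

def try_write(paths, timeout=0.1):
    acquired = []
    try:
        for p in sorted(paths):      # fixed order also prevents deadlock
            if not file_locks[p].acquire(timeout=timeout):
                return "timeout error"   # caller reports this to the user
            acquired.append(p)
        # ... all locks held: safe to write the files here ...
        return "ok"
    finally:
        for p in reversed(acquired):
            file_locks[p].release()
```

Because each acquisition has a timeout, a command that cannot get every lock fails fast and cleanly instead of blocking forever, which matches the questioner's "correctness over performance" goal.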

I see no advantage in manually queuing the put / get / delete requests. That is the server's job, not yours. You do not want to hold all your queued data in RAM, and you definitely don't want to manage your own upload threads if you can avoid it. (We once had to write our own IIS 7 handler to upload files larger than 2 GB. It was a PITA.) Are you familiar with the concurrency and instancing models in WCF? (If not, you can start here: http://msdn.microsoft.com/en-us/library/ms731193.aspx .) That is the stack I know, but I believe other server frameworks have similar configuration options.

I could see some benefit in caching the list results to save a potential disk hit in this scenario. In one of the projects I worked on, we cached NT login permissions. It ended up being a pain and a liability, and completely unnecessary. I think the existing frameworks for managing server-side sessions and permissions are enough for this.


Source: https://habr.com/ru/post/1402508/

