Is fopen() thread-safe on Linux?

If I use fopen() to open the same file from multiple threads and write data to it, should I use a mutex to make sure the data does not get garbled?

+6
4 answers

If two threads open the same file with fopen(), each of them gets an independent file stream (FILE *), backed by an independent file descriptor that refers to the same file. Each thread can write through its own stream, but the final contents of the file depend on where each stream writes and when it is flushed; the results are unpredictable unless you control where each thread writes. The simplest approach is to make sure both threads use the same file stream, though you will probably still need to coordinate between the threads. Note that POSIX requires the C stdio functions to provide coordinated access to a file stream; see flockfile(), which imposes the requirement that:

All functions that reference (FILE *) objects, except those with names ending in _unlocked, shall behave as if they use flockfile() and funlockfile() internally to obtain ownership of these (FILE *) objects.

If you open the file in append mode in both threads, then each write will safely land at the end of the file, but you still have to worry about flushing the data before the buffer fills up, so that a partially written line is not flushed on its own.

+11

As far as I know, you should use mutexes.

I have not tried this in C, but in Java, if you open a file from more than one thread, both threads can write to it and the file ends up garbled.

So I expect the situation in C to be the same as in Java.

+1

fopen() is reentrant, and you can have as many descriptors pointing to the same file as you like.

What you get when reading from or writing to a file through several descriptors is not a matter of thread safety, but of concurrent access to the file, which in most cases (except when the file is read-only) produces unpredictable results.

+1

With the code below you can open a file through an OpenFile object backed by a shared map of per-file mutexes; you can open several files, and writes to each file happen sequentially. I think the code could still be improved, for example by adding time-based synchronization and by evicting unused files from the map to keep the cache small.

Any suggestions are welcome.

    #include <fstream>
    #include <iostream>
    #include <map>
    #include <memory>
    #include <mutex>
    #include <string>
    using namespace std;

    class OpenFile {
        string fileName;
        static map<string, unique_ptr<mutex>> fmap;
        static mutex mapMutex;  // guards fmap itself

        mutex& fileMutex() const {
            // Look up this file's mutex; guard the map while we do it.
            lock_guard<mutex> lck(mapMutex);
            return *fmap.find(fileName)->second;
        }

    public:
        explicit OpenFile(const string& file) : fileName(file) {
            // Register a mutex for this file name the first time it is opened.
            lock_guard<mutex> lck(mapMutex);
            if (fmap.find(file) == fmap.end())
                fmap.emplace(file, make_unique<mutex>());
        }

        void writeToFile(const string& str) const {
            // Serialize all writers to this file.
            lock_guard<mutex> lck(fileMutex());
            ofstream ofile(fileName, ios::app);
            ofile << "Writing to the file " << str << endl;
        }

        string ReadFile() const {
            lock_guard<mutex> lck(fileMutex());
            string line;
            ifstream ifile(fileName);
            getline(ifile, line);
            return line;
        }

        OpenFile() = delete;
        OpenFile& operator=(const OpenFile&) = delete;
    };

    // Static members need out-of-class definitions.
    map<string, unique_ptr<mutex>> OpenFile::fmap;
    mutex OpenFile::mapMutex;
+1

Source: https://habr.com/ru/post/946058/

