Recommended FHS test/installation workflow for Linux?

I am in the process of switching to Linux for development, and I am puzzled by how to maintain good FHS compliance in my programs.

For example, on Windows I know that all the resources my program needs (bitmaps, audio data, etc.) can be found via paths relative to the executable, so whether I run the program from my development directory or from its installed location (for example, somewhere under "Program Files"), it can find all of its files.

Now, on Linux, I see that the executable usually goes under /usr/local/bin and its resources under /usr/local/share. (And truthfully, I'm not even sure about that.)

For convenience (for example, version control), I would like to have all the files related to the project under one path: for example, project/src for the sources and project/data for the resource files.

Is there any standard or recommended way that lets me simply rebuild the binary for testing and use the files in the project/data directory, while still being able to find the files when they are installed under /usr/local/share?

I thought, for example, of setting up a symbolic link in /usr/local/share pointing to my resource directory and then just hardcoding that path inside my program, but that feels pretty hacky and not very portable.

I also considered a setup script that copies all the resources to /usr/local/share every time I change or add one, but that does not feel like a good way to do it either.

Can someone tell me, or point me to somewhere that explains, how this problem is usually solved?

Thanks!

3 answers

For convenience (for example, version control), I would like to have all the files related to the project under one path: for example, project/src for the sources and project/data for the resource files.

You can organize your source tree however you like; it need not bear any resemblance to the FHS layout used for the installed software.

I see that the executable usually goes under /usr/local/bin and its resources under /usr/local/share. (And truthfully, I'm not even sure about that.)

The standard prefix is /usr; /usr/local is for, well, "local installations," as the FHS specification puts it.

Is there a standard or recommended way that lets me simply rebuild the binary for testing and use the files in the project/data directory

Sure. Running ./configure --datadir=$PWD/share, for example, is a way to point your build at the data files in the source tree (substitute the appropriate path), and using something like -DDATADIR='"${datadir}"' in AM_CFLAGS makes the value known to the (presumably C) code. (All this assumes you are using autoconf/automake; similar options may be available in other build systems.)

This kind of hardcoding is what is used in practice, and it is enough. Having a hard-coded path is no obstacle to development builds inside your own working copy, and final builds (made by the packager) will simply use the standard FHS paths.


You can simply check a few places. For example, first check whether there is a data directory in the directory from which you are currently running the program. If there is, just keep using it. If not, try /usr/local/share/yourproject/data, and so on.

For development/testing you use the data directory in the project folder, and for deployment you use the one in /usr/local/share/. Of course, you can probe even more places (e.g. /usr/share).

The main requirement of this method is that you have a function that builds the correct path for every file-system call. Instead of fopen("data/blabla.conf", "w"), use something like fopen(path("blabla.conf"), "w"). path() builds the correct path from the base directory determined by the directory tests when the program starts. For instance, if that base path was /usr/local/share/yourproject/data/, the string returned by path("blabla.conf") would be "/usr/local/share/yourproject/data/blabla.conf", a nice absolute path.

This is how I do it. HTH.


My preferred solution in such cases is to use a configuration file, together with a command-line option that overrides its location.

For example, the configuration file for a fully deployed application named myapp might live in /etc/myapp/settings.conf, and part of it might look like this:

 ...
 confdir=/etc/myapp/
 bindir=/usr/bin/
 datadir=/usr/share/myapp/
 docdir=/usr/share/doc/myapp/
 ...

Your application (or a launcher script) can parse this file to determine where to find the rest of its files.

I believe you can reasonably assume in your code that the location of the configuration file is fixed at /etc/myapp, or at any other location chosen at compile time. You then provide a command-line option to override that location:

 myapp --configfile=/opt/myapp/etc/settings.conf ... 

It may also make sense to have options for some of the directory paths, so the user can easily override individual configuration-file settings. This approach has several advantages:

  • Your users can easily relocate the application, simply by moving the files, changing the paths in the configuration file, and then using e.g. a wrapper script to invoke the main application with the corresponding --configfile option.

  • You can easily support the FHS, as well as any other layout scheme you need.

  • During development, your test suite can use a specially created configuration file, with the paths pointing wherever you need them.

Some people advocate probing the system at runtime to resolve such problems. I usually suggest avoiding such solutions, for at least the following reasons:

  • It makes your program non-deterministic. You can never tell at a glance which configuration file it is picking up, especially if there are several versions of the application on your system.

  • With a broken installation, the application may happily keep running anyway, and so will the user, unaware that anything is wrong. In my opinion, the application should look in one specific, well-documented location and abort with an informative message if it cannot find what it is looking for.

  • It is very unlikely that you will get it right every time. There will always be unexpected rare environments or corner cases that the application will not handle.

  • This behavior runs contrary to the Unix philosophy. Even command shells that check multiple locations do so because a file may exist in each of those places, and each one needs to be processed.

EDIT:

This method is not backed by any formal standard that I know of, but it is a common solution in the Unix world. Most major daemons (e.g. BIND, sendmail, postfix, INN, Apache) look for their configuration file in a specific location but allow you to override it and, through the file, every other path.

It basically allows the system administrator to implement whatever scheme they want, or to set up several parallel installations, but it also helps during testing. This flexibility is what makes it a best practice, if not an actual standard.


Source: https://habr.com/ru/post/1334874/
