Is it good practice to build an application out of multiple .exe files?

In a book - I think it was Eric S. Raymond's "The Art of Unix Programming" - I read something to the effect that applications should be created by combining several small tools.

So I would like to know: is it a good idea to develop a Windows application by creating one small .exe for each task?

For example, say you have a document management system. It could be split up like this:

  • .exe for searching and displaying documents (GUI)
  • .exe to index a document (place documents in a database)
  • .exe to delete documents
  • etc.

Do you think this would be a good idea, or should it all go into one large .exe with several DLLs (which is how most applications I have seen are built)?
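To make the idea concrete, here is a minimal sketch in Python (purely illustrative: the file names, database schema, and command-line interface are all invented) of what the "index a document" task might look like as its own standalone tool:

    # index_tool.py - the "index a document" task as its own executable.
    # Hypothetical sketch: a real schema and argument set would differ.
    import sqlite3
    import sys

    def index_document(db_path: str, doc_path: str) -> None:
        """Record a document's path in the database shared by all the tools."""
        conn = sqlite3.connect(db_path)
        conn.execute("CREATE TABLE IF NOT EXISTS documents (path TEXT PRIMARY KEY)")
        conn.execute("INSERT OR REPLACE INTO documents (path) VALUES (?)", (doc_path,))
        conn.commit()
        conn.close()

    if __name__ == "__main__":
        if len(sys.argv) != 3:
            sys.exit("usage: index_tool.py <database> <document>")
        index_document(sys.argv[1], sys.argv[2])

The search and delete tools would follow the same pattern, meeting the indexer only at the shared database, so each one could be run, tested, and replaced independently.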

What would be the pros and cons?

+4
2 answers

To my mind:

Yes and no. Several factors decide whether to break a task out into its own .exe. In the grand scheme of things, you need to decide whether the task at hand really warrants its own executable. The big factor is whether it serves a purpose independent of your central application. Beyond that, you also need to decide whether your application should live in a single package or be more of a suite of utilities that can be used independently.

From the user's point of view, people do not like having to open several programs at once. Going back to your example, here is how I would do it. I would have one main .exe covering:

  • searching and displaying documents (the GUI)
  • deleting documents
  • etc.

And then I would add a separate .exe that acts as a background indexer, handling the second piece of functionality you listed.
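A rough sketch of that layout in Python (the name indexer.exe and its --watch flag are invented here, standing in for whatever background indexer you actually build):

    # The main GUI process starts the indexer as a separate background
    # process and only meets it again at the shared database.
    import subprocess

    def start_background_indexer() -> subprocess.Popen:
        # Popen returns immediately, so the GUI is never blocked by indexing.
        return subprocess.Popen(["indexer.exe", "--watch", "incoming/"])

    indexer = start_background_indexer()
    # ... run the GUI event loop here ...
    indexer.terminate()  # stop the indexer when the main application exits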

Really, this is something to evaluate on a case-by-case basis, keeping sensible user interface design in mind from several angles.

Edit: You may also run into situations where alternative methods of separation are more suitable (tabs, dialog boxes, etc., for example).

+1

There are really a lot of pros and cons to consider around the granularity of components, services, deployable units, and so on.

In the traditional Unix command-line world, we manipulated data as text files: each line was a record, possibly with fields separated by commas, tabs, or spaces. That made it possible to develop many small utilities, and we had a lot of fun with "cut", "join", "tr", "grep", "sed", "head", and so on. Personally, I still make sure I have these tools in my Windows environments.

Why did this work so well? The common underlying file format: rows and columns. We could add new utilities and know they would integrate, and we never had to change an old utility because a new one was added. There was also no graphical interface; we expected to work at the command line, simply running pipelines of tasks to get the job done.
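As a small illustration of that composition, here is a sketch in Python (the log file name and its comma-separated layout are assumed) wiring two of those classic tools into a pipeline, the equivalent of grep ERROR app.log | cut -d, -f1 in the shell:

    # Build the pipeline "grep ERROR app.log | cut -d, -f1" from Python.
    # Assumes grep and cut are on the PATH (as noted above, they can be
    # installed on Windows too).
    import subprocess

    grep = subprocess.Popen(["grep", "ERROR", "app.log"],
                            stdout=subprocess.PIPE)
    cut = subprocess.Popen(["cut", "-d,", "-f1"],
                           stdin=grep.stdout, stdout=subprocess.PIPE)
    grep.stdout.close()  # let grep receive SIGPIPE if cut exits first
    first_fields, _ = cut.communicate()
    print(first_fields.decode())

Because every tool reads and writes the same row-and-column format, either stage can be swapped for a different utility without touching the other.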

Now, what do you do when things get more complicated, when you want to offer graphical interfaces? The simple approach: bite the bullet and release a single application. New feature, new release of the application. That may not be so bad: when you change some aspect of the user interface, you often need to change file formats or APIs as well, so other parts have to change with it. It is much easier to release a single, consistent (and possibly even tested) whole.

However, as the volume of user interface grows, it becomes quite disruptive to release everything at once. Hence the emergence of component models such as OSGi, used by Eclipse, so that a single user interface can be assembled from many separately developed and independently evolving parts.

+2

Source: https://habr.com/ru/post/1334924/

