In my opinion, setting up an automated build server is the cleanest solution to what you are trying to achieve, with the added benefit of ... continuous integration! (I'll touch on CI a bit later.)
There are many tools available. @Clifford already mentioned CMake, but some others:
- Hudson (open source)
- CruiseControl (open source)
- TeamCity (commercial, but with a rather generous free edition that allows up to 3 build agents and 20 build configurations). The corporate version of TeamCity is what my company uses, so my answer will be focused on it, since that's what I know, but the concepts will most likely apply to the other tools as well.
So, first of all, I'll explain what we do and suggest how it might work for you. I'm not suggesting this is *the* accepted way of doing things, but it has worked for us. As I said, we use TeamCity for our build server. Each software project is added to TeamCity and build configurations are set up for it. A build configuration tells TeamCity when to build, how to build, and where the project's SCM repository is located. We use two build configurations per project: one we call "integration", which watches the project's SCM repository and runs an incremental build whenever a check-in is detected; the other, which we call "nightly", runs at a set time every night and performs a completely clean build.
By the way, a brief note on SCM. For this to work cleanly, I think each project's SCM should follow a stable-trunk topology. If your developers work from their own branches, you will probably need separate build configurations for each developer, which I think gets unnecessarily messy. We gave our build server its own SCM user account, but with read-only access.
So, when a build is triggered for a particular build configuration, the server grabs the latest files from the repository and hands them to a "build agent", which performs the build using the project's build script. We use Rake to script our builds and automated testing, but you can use whatever you like. The build agent can live on the same PC as the server, but in our case it's a separate machine, because our build server is hosted centrally with the IT department while we need the build agent to sit physically with my team (for automated on-target testing). The toolchains you use are therefore installed on your build agent.
How can this work for you?
Suppose you work at TidyDog and you have two projects on the go:
- "PoopScoop" targets a PIC18F, is compiled with the C18 compiler, and has its own trunk in your SCM at //PoopScoop/TRUNK/
- "PoopBag" targets a ColdFire, is compiled with GCC, and has its own trunk at //PoopBag/TRUNK/
The compilers needed to build all your projects are installed on your build agent (we'll call it TidyDogBuilder), which may be the same PC as the build server or a separate machine, depending on your situation. Each project has its own build script (e.g. //PoopScoop/Rakefile.rb and //PoopBag/Rakefile.rb ) that handles source-file dependencies and invokes the appropriate compilers. For example, you could cd into //PoopScoop/ on the command line, type rake , and the build script would take care of compiling the PoopScoop project for you.
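To make that concrete, here is a minimal sketch of what //PoopScoop/Rakefile.rb might look like. The compiler name ( mcc18 ) and its flags are illustrative assumptions, not C18's actual command line; the point is that Rake's file rules give you incremental, dependency-aware compilation.

```ruby
# Minimal sketch of a project Rakefile. "mcc18" and its flags are
# placeholders -- substitute your real compiler invocation.
require "rake"
extend Rake::DSL  # makes task/rule/namespace available in a plain script

CC      = "mcc18"                       # assumed compiler executable
SOURCES = Rake::FileList["Source/*.c"]  # every C file in the project
OBJECTS = SOURCES.ext(".o")             # one object file per source

# Rebuild a .o only when its .c is newer -- this is what makes the
# "integration" build incremental rather than clean-every-time.
rule ".o" => ".c" do |t|
  sh "#{CC} #{t.source} -fo=#{t.name}"  # hypothetical flags
end

namespace :incremental do
  desc "Incrementally compile PoopScoop"
  task build: OBJECTS do
    puts "Built #{OBJECTS.size} object file(s)"
    # linker step elided
  end
end

task default: "incremental:build"
```

With the default task wired up like this, typing plain rake on the command line runs the same incremental build the server will later invoke as rake incremental:build .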
Then you set up the build configurations on the build server. For example, the build configuration for PoopScoop specifies which SCM tool you are using and where the repository is (e.g. //PoopScoop/TRUNK/ ), which build agent to use (e.g. TidyDogBuilder), where to find the build script and how to invoke it (e.g. //PoopScoop/Rakefile.rb , called with rake incremental:build ), and which event triggers a build (e.g. a check-in to //PoopScoop/TRUNK/ ). The idea is that if someone commits a change to //PoopScoop/TRUNK/Source/Scooper.c , the build server detects the change, grabs the latest versions of the source files from the repository, and hands them to the build agent, which compiles them using the build script; the server then emails every developer whose changes were in that build with the build result.
If your projects need to be compiled for several targets, you simply extend the project's build script to handle this (for example, you could have commands like rake build:PIC18 or rake build:Coldfire ) and set up a separate build configuration on the build server for each target.
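One way to structure those per-target commands is a table of targets driving generated tasks. This is a sketch; the toolchain executable names here are assumptions standing in for your real cross-compilers.

```ruby
# Sketch: one task per target under a build: namespace. The compiler
# executables are placeholder names for your real cross-toolchains.
require "rake"
extend Rake::DSL

TOOLCHAINS = {
  "PIC18"    => "mcc18",        # assumed C18 command
  "Coldfire" => "m68k-elf-gcc", # assumed ColdFire GCC cross-compiler
}

namespace :build do
  TOOLCHAINS.each do |target, compiler|
    desc "Build the project for the #{target} target"
    task target do
      puts "Building for #{target} with #{compiler}"
      # sh "#{compiler} ..."  # real compile/link steps elided
    end
  end
end
```

Each build configuration on the server then just calls rake build:PIC18 or rake build:Coldfire , and adding a target is one new entry in the table.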
Continuous integration
With this system in place, you've enabled continuous integration. Modify the build scripts to run your unit tests as well as compile the project, and you are automatically testing after every check-in. The motivation is to catch problems as early as possible, while you are still developing, rather than getting surprises at verification time.
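In build-script terms this can be as simple as giving the server-facing task a test prerequisite. The task names below are assumptions and the bodies are stubs; the shape is what matters.

```ruby
# Sketch: the CI entry point depends on both compiling and testing, so
# every triggered build also runs the unit tests. Bodies are stubs.
require "rake"
extend Rake::DSL

task :compile do
  puts "compiling..."           # compiler invocation elided
end

task :unit_test do
  puts "running unit tests..."  # your test runner goes here
end

desc "What the build server invokes on every check-in"
task ci: [:compile, :unit_test]
```

Point the build configuration at rake ci instead of the bare build task and a failing test breaks the build just like a compile error would.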
Final thoughts
- How much developers without the full toolchain on their workstations are inconvenienced depends on the kind of work they do most often. If it were me, and my work was mostly low-level with a lot of hardware interaction, not having the compilers on my workstation would annoy me. If, on the other hand, I mainly worked at the application level and could stub out the hardware dependencies, it might not be such a problem.
- TeamCity has a plugin for Eclipse with a pretty interesting feature: personal builds. A developer can submit a pending changelist to any build configuration, which means they can trigger a build of proposed code on the build server without actually committing it to SCM. We use this to trial changes against our unit tests and static analysis, since our expensive test tools are installed only on the build agent.
- Regarding access to build artifacts while on the go, I agree that something like a VPN into your intranet is probably the easiest option.