How to install and use compilers for embedded C on an external server?

Short question
Is there an accepted way to run the compilers and linkers for firmware projects on a remote server while still being able to program and debug the software on the local computer?

Note: I know that each IDE will be different, so I want to define a general workflow for this task, assuming the IDE can work with the .o / .elf files produced on a remote server.

Areas of concern
1) Connecting to a Windows virtual machine.
2) How / when to transfer the source code to the server for building.

Background
Each microprocessor family that our software development team works with requires its own compiler, IDE, and programmer. Over time, this creates many difficulties to overcome.

1) Each developer requires his own, often expensive license.
2) To pick up a project started by another developer, extra care is required to make sure that all the compiler options are the same.
3) To support outdated software, older compilers may be needed that conflict with the currently installed compiler.
... and this list has no end.

Edit: 7-10-2011 1:30 PM CST
1) The compilers I'm talking about are really cross-compilers.
2) A short list of processor families this system would ideally support: Motorola ColdFire, PIC, and STM8.
3) Our ColdFire compiler is a variant of GCC, but we must support several versions of it. All of the other targets use proprietary compilers that do not offer floating licenses.
4) To address littleadv's answer: what I would like is an external build server.
5) We are currently using a combination of SVN and Git hosted in an online repository for version control. This is actually how I imagined transferring files to the build server.
6) We are stuck on Windows for most compilers.

I now believe that the way forward is an external build server. There are still a few obstacles. I assume that we will have to transfer the source files to the server using version control software. Seeing how several products require access to the same compilers, having a compiler instance for each project does not seem practical.

Would it be advisable to create a repository for each compiler, containing folders for build, source, include, output, etc., and then have scripts on the user's end that take care of moving files from the IDE's file structure into the structure the compiler needs? This approach would avoid bloating the project repositories and would give a record of how many times each compiler has been used. Thanks for all the great answers!

+6

5 answers

In my opinion, implementing an automated build server is the cleanest solution to what you are trying to achieve, with the added benefit of... continuous integration! (I will touch on CI a bit later.)

There are many tools you could use. @Clifford already mentioned CMake, but some others:

  • Hudson (open source)
  • CruiseControl (open source)
  • TeamCity (commercial, but it has a rather generous free version that allows up to 3 build agents and 20 build configurations). The enterprise version of TeamCity is what my company uses, so my answer will be focused on it, since that is what I know, but the concepts will most likely apply to the other tools as well.

So, first of all, I'll try to explain what we do and suggest how it might work for you. I'm not suggesting this is the one accepted way of doing things, but it has worked for us. As I said, we use TeamCity for our build server. Each software project is added to TeamCity, and build configurations are set up for it. A build configuration tells TeamCity when to build, how to build, and where your project's SCM repository is located. We use two different build configurations for each project: one we call "integration", which monitors the project's SCM repository and runs an incremental build whenever a check-in is detected, and another we call "nightly", which runs at a specific time every night and performs a completely clean build.

By the way, just a brief note on SCM: for this to work cleanly, I think the SCM for each project should follow a stable-trunk topology. If your developers work on their own branches, you will probably need separate build configurations for each developer, which I think would get unnecessarily messy. We gave our build server its own SCM user account, but with read-only access.

So when a build is triggered for a particular build configuration, the server grabs the latest files from the repository and hands them to a "build agent", which performs the build by running the build script. We used Rake to script our builds and automated testing, but you can use anything. The build agent can live on the same PC as the server, but in our case we have a separate machine, because our build server is hosted centrally with the ICT department, while we need our build agent to physically sit with my team (for automated on-target testing). So the toolchains you use get installed on your build agent.

How can this work for you?

Suppose you work at TidyDog and you have two projects on the go:

  • "PoopScoop" is based on a PIC18F target compiled using the C18 compiler, has its own trunk located in your SCM, in //PoopScoop/TRUNK/
  • "PoopBag" is based on the ColdFire target compiled with GCC, has its own trunk, located in //PoopBag/TRUNK/

The compilers needed to build all the projects are installed on your build agent (we will call it TidyDogBuilder); whether that is the same PC as the build server or a separate box depends on your situation. Each project has its own build script (e.g. //PoopScoop/Rakefile.rb and //PoopBag/Rakefile.rb) that handles the source-file dependencies and invokes the appropriate compilers. For example, you could go to //PoopScoop/ on the command line, type rake, and the build script would take care of compiling the PoopScoop project, as sketched below.
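As a rough illustration, a minimal Rakefile for PoopScoop might look something like this (the folder layout and the mcc18 / mplink command lines are hypothetical placeholders, not verified C18 toolchain syntax):

    # Rakefile.rb -- minimal sketch; commands and flags are placeholders.
    SOURCES = FileList['Source/*.c']
    OBJECTS = SOURCES.ext('.o')

    # Compile each .c file into an object file.
    rule '.o' => '.c' do |t|
      sh "mcc18 -I=Include -fo=#{t.name} #{t.source}"
    end

    desc 'Build the PoopScoop firmware'
    task :build => OBJECTS do
      sh "mplink #{OBJECTS.join(' ')} /o Output/PoopScoop.cof"  # link step
    end

    task :default => :build

Because each object is a file task, Rake recompiles only the sources that changed, which is what keeps the incremental "integration" builds cheap.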

You then set up the build configurations on the build server. The build configuration for PoopScoop, for example, specifies which SCM tool you are using and where the repository lives (e.g. //PoopScoop/TRUNK/), which build agent to use (e.g. TidyDogBuilder), where to find the build script and how to invoke it (e.g. //PoopScoop/Rakefile.rb, called with rake incremental:build), and which event triggers a build (e.g. a check-in to //PoopScoop/TRUNK/). The idea is that if someone commits a change to //PoopScoop/TRUNK/Source/Scooper.c, the build server detects it, grabs the latest versions of the source files from the repository, and hands them to the build agent, which compiles them using the build script and then e-mails the build result to every developer who has changes in that build.

If your projects need to be compiled for several targets, you simply extend the project's build script to handle this (e.g. you could have commands like rake build:PIC18 or rake build:Coldfire) and configure a separate build configuration on the build server for each target; one possible layout is sketched below.
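A hypothetical way to lay out such per-target tasks is a Rake namespace, one task per target (the compiler driver names here are invented placeholders):

    # Per-target build tasks (sketch; compiler names are placeholders).
    TARGETS = {
      'PIC18'    => 'mcc18',         # hypothetical C18 driver
      'Coldfire' => 'm68k-elf-gcc',  # hypothetical GCC cross-compiler
    }

    namespace :build do
      TARGETS.each do |target, cc|
        desc "Build for the #{target} target"
        task target.to_sym do
          FileList['Source/*.c'].each { |src| sh "#{cc} -c #{src}" }
        end
      end
    end
    # Invoked as:  rake build:PIC18   or   rake build:Coldfire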

Continuous integration

So with this system in place, you are set up to perform continuous integration. Modify the build scripts to run your unit tests as well as compile the project, and you get automated unit testing after every change, as sketched below. The motivation is to catch problems as early as possible, while you are developing, rather than being surprised at integration time.
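For instance, the build script could chain a unit-test task onto the build, so that the "integration" configuration runs both (the test-runner path is an invented placeholder):

    # Sketch: run the unit tests after every compile.
    desc 'Run the unit tests against the freshly built code'
    task :test => :build do
      sh 'Output/run_unit_tests'   # hypothetical on-host test runner
    end

    # What the "integration" build configuration would invoke:
    task 'incremental:build' => [:build, :test]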

Final thoughts

  • How much it bothers developers not to have the whole toolchain installed locally will depend on the kind of work they do most often. If it were me, and my work was mostly low level, interacting a lot with the hardware, having no compilers on my workstation would annoy me. If, on the other hand, I worked mainly at the application level and could stub out the hardware dependencies, it might not be such a problem.
  • TeamCity has a plugin for Eclipse with a pretty neat feature: personal builds. Developers can kick off a build of a pending change list against any build configuration, which means a developer can have the build server build proposed code without actually committing it to SCM. We use this to trial changes against our unit tests and static analysis, since our expensive test tools are installed only on the build agent.
  • Regarding access to build artifacts while on the go: I agree, something like a VPN into your intranet is probably the easiest option.
+5

One part of your question that has not been addressed much in the other answers (at least as I write this) is transferring the files to the build server. I can offer some experience here, as my own development process is close enough to that part of your situation.

In my case, I use the Unison utility to mirror a section of my home directory on my development laptop into a section of my home directory on the build server. From a programmer's point of view, Unison is basically a wrapper around rsync, with a stored set of checksums used to determine whether the files at either end of the connection have been modified. It uses this to synchronize in both directions: any changes I make locally are pushed to the remote end, and vice versa. The usual mode of operation asks for confirmation of every transfer; you can disable this, but I find it convenient as a check that I have changed what I think I changed.

So my normal workflow (a scripted sketch follows the list):

  • Edit files locally on my development machine.
  • Run unison to synchronize those changes to the build server.
  • Over an ssh connection to the build server, run the compiler.
  • Also in that ssh session, run the program, producing an output file. (This, of course, is slightly different from your situation.)
  • Run unison again to sync the modified output file back to my development machine. (In your case, you would sync back the compiled program.)
  • Examine the output on my development machine.
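The middle steps are easy to wrap in a script. A minimal sketch in Ruby, assuming a Unison profile named work and a build host named buildserver (both names are invented for illustration):

    #!/usr/bin/env ruby
    # remote_build.rb -- sync, build remotely, sync the results back.

    def run(*cmd)
      system(*cmd) or abort "failed: #{cmd.join(' ')}"
    end

    run 'unison', 'work', '-batch'                        # push local edits
    run 'ssh', 'buildserver', 'cd work/project && rake'   # compile remotely
    run 'unison', 'work', '-batch'                        # pull outputs back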

This is not as fast as an edit-compile cycle on a local machine, but I find it still fast enough not to be annoying. And it is much lighter-weight than using source control as an intermediary; you don't have to check in every change you make and record it for posterity.

In addition, Unison has the advantage of working well across platforms: you can use it on Windows (most easily with Cygwin, though that is not required), and it can tunnel through an SSH connection (if you run an SSH server on your Windows machine), talk to its own socket-based service running on the build server, or simply use Windows file sharing and treat the build server as a "local" filesystem. (Or you could put the files on a Linux-based Samba server in the server farm and mount them in your Windows virtual machines, which might be easier than keeping local files in the virtual machines.)

Edit: Actually, this is a sort of variation on option 2 in littleadv's discussion of file transfer; it replaces editing files directly on the server over Samba / NFS. And it works quite well alongside that approach. I find that having a local file cache is ideal and avoids network delays when working remotely, but other engineers at my company prefer something like sshfs (and, of course, on-site editing over Samba or NFS is fine). It all looks the same from the build server's point of view.

+2

I'm not sure I understand what you mean, but I will try to answer what I think the question is :-)

First of all, you are talking about cross-compilers, right? You compile code on one system that is meant to run on another system.

Secondly, you are looking for a “floating” license model, instead of having a separate compiler license for each developer.

Thirdly, you want to set up a build machine where everyone compiles, rather than each developer compiling on his own machine.

These problems are not related to one another. I will try to cover each:

  • Cross-compilers - some of them are free, some are licensed. Some come with an IDE, some are command-line compilers that can be integrated into Eclipse / VS / SlickEdit / vi / whatever else. I don't know which ones you are using, so let's look at Tornado (the VxWorks compiler) as an example. It has its own terrible IDE but can be integrated into others (I used SlickEdit with the project files directly; it works like a charm). The Tornado compiler requires a license and offers several different models, so we will use it for the next two points as well.

  • Floating licenses - Tornado, for example, can come with a one-license-per-installation model or with a floating license that hands licenses out on request. If you use a build machine, you will need either a single license (in which case you can only run one instance at a time, which defeats the purpose) or a floating license so that several instances can run at a time. Some cross-compilers / libraries require no licenses at all (e.g. the various flavors of GCC).

  • Build machine - I have experience both with VxWorks Tornado installed as a dedicated compiler on my own PC and with a shared build machine setup.

For a build machine, you need a way to get the code onto it for building. This is your problem #2:

2) How / when to transfer the source code to the server for building.

Compiling over a network share is not a good idea, since network latency will make compile times unbearable. Instead, do one of the following:

  • Install source control on the build machine and have developers deliver code to it through source control. This poses the risk of cluttering source control with half-done check-ins, so we did the following instead:

  • All developers checked their files out directly on the build machine (a large UNIX system with plenty of memory), edited the files over a network share (Samba or NFS), which was fine over a 1 Gb LAN, and compiled locally on the build machine. Some would edit directly on the Unix system using vi / emacs / the Unix version of the Tornado IDE, which I hated. This also answers your problem #1:

1) Connecting to a Windows virtual machine.

(It does not have to be a Windows virtual machine; Linux can also work if you have a cross-compiler that runs on Linux and targets your embedded system.)

Hope this helps.

+1

I am not an expert here, but would it be possible to create a virtual machine image for each target environment? A developer could then run the required virtual machine image on his local machine.

0

It sounds like you are trying to do a distributed build, except with only one build server. That might be possible with something like CMake. However, although this may economize your use of compiler licenses, for many toolchains a single-user license also (or in some cases exclusively) covers the debugger; saving compiler seats may not be particularly useful if your developers cannot debug and test their code.

One solution might be to use a single machine with the necessary development tools and target hardware, and access it from several workstations via Remote Desktop, or even via Telnet if you only need a command-line interface. Remote Desktop only allows one user at a time, but multiple simultaneous users would in many cases be contrary to the EULA anyway.

A DIY solution is to create your own TCP/IP application that accepts commands and build files, runs the local compiler or linker, and returns the results (build log and/or object files, etc.); a minimal sketch follows.
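A bare-bones sketch of such a service in Ruby (the port, the one-file-per-connection protocol, and the cross-gcc name are all invented for illustration; a real service would need authentication and proper error handling):

    #!/usr/bin/env ruby
    # build_service.rb -- toy compile server.
    require 'socket'
    require 'tempfile'

    server = TCPServer.new(9123)                # hypothetical port
    loop do
      client = server.accept
      src = Tempfile.new(['unit', '.c'])
      src.write(client.read)                    # client sends one .c file, then EOF
      src.close
      obj = src.path.sub(/\.c\z/, '.o')
      log = `m68k-elf-gcc -c #{src.path} -o #{obj} 2>&1`  # hypothetical cross-gcc
      client.puts($?.success? ? 'OK' : 'FAIL')
      client.puts(log)                          # return the build log
      client.close
    end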

Perhaps you can avoid the problem altogether by moving your entire development over to open-source tools such as GCC or SDCC; whether that is possible will depend on which targets you need to support.

0

Source: https://habr.com/ru/post/892427/

