Creating a compiled application using Docker

I am creating a server written in C++ and want to deploy it using Docker with docker-compose. What is the "right way" to do this? Should I call make from the Dockerfile, or build manually, upload the binaries to some server, and then COPY them in the Dockerfile?

4 answers

I had difficulty automating our build with docker-compose, and I ended up using docker build for everything:

Three layers for building:

run → develop → build

Then I copy the build outputs into the 'deploy' image:

run → deploy

Four layers in play:

run
  • Contains all the packages needed to run the application - e.g. libsqlite3-0
develop
  • FROM <projname>:run
  • Contains the packages needed to build
    • e.g. g++, cmake, libsqlite3-dev
  • Dockerfile runs any external builds
    • e.g. steps to build boost-python3 (not in the package manager repositories)
build
  • FROM <projname>:develop
  • Contains the source
  • Dockerfile runs the internal build (the code that changes frequently)
  • Built binaries are copied out of this image for use in deploy
deploy
  • FROM <projname>:run
  • Build output is copied into the image and installed
  • ENTRYPOINT (or CMD) is used to launch the application
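As a rough sketch of the layering (base image, package names, and binary paths here are illustrative, not the original setup), the four Dockerfiles might look like:

```dockerfile
# run/Dockerfile - runtime packages only
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0 \
 && rm -rf /var/lib/apt/lists/*

# develop/Dockerfile - toolchain on top of run
#   FROM <projname>:run
#   RUN apt-get update && apt-get install -y g++ cmake libsqlite3-dev
#   ... steps to build boost-python3 from source ...

# build/Dockerfile - compile the frequently changing code
#   FROM <projname>:develop
#   COPY . /src
#   RUN cd /src && cmake . && make

# deploy/Dockerfile - install the build output on the slim run base
#   FROM <projname>:run
#   COPY myserver /usr/local/bin/myserver
#   ENTRYPOINT ["/usr/local/bin/myserver"]
```

(The deploy Dockerfile assumes the binary was first copied out of a <projname>:build container into the deploy build context, e.g. with docker cp.)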

The folder structure is as follows:

    .
    ├── run
    │   └── Dockerfile
    ├── develop
    │   └── Dockerfile
    ├── build
    │   ├── Dockerfile
    │   └── removeOldImages.sh
    └── deploy
        ├── Dockerfile
        └── pushImage.sh

Setting up the build server means running:

    docker build -f run/Dockerfile -t <projname>:run .
    docker build -f develop/Dockerfile -t <projname>:develop .

Every time we do a build, this happens:

    # Execute the build
    docker build -f build/Dockerfile -t <projname>:build .
    # Install build outputs
    docker build -f deploy/Dockerfile -t <projname>:<version> .
    # If successful, push the deploy image to Docker Hub
    docker tag <projname>:<version> <projname>:latest
    docker push <projname>:<version>
    docker push <projname>:latest

I point people to the Dockerfiles as documentation on how to build/run/install the project.

If a build fails and the log output is insufficient for investigation, I can run /bin/bash in <projname>:build and poke around to see what went wrong.


I put together a GitHub repository around this idea. It works well for C++, but you could probably use it for anything.


I had not investigated this feature, but @TaylorEdmiston pointed out that my pattern here is very similar to multi-stage builds, which I did not know about when I came up with it. It looks like a more elegant (and better-documented) way to achieve the same thing.


I recommend developing, building, and testing fully in the container itself. This embraces the Docker philosophy that the development environment is the same as the production environment; see The modern developer's workstation on MacOS with Docker.

This is especially true for C++ applications, which usually have dependencies on shared libraries/object files.

I don't think a standardized development process for developing, testing, and deploying C++ applications in Docker exists yet.

To answer your question, the way we do it now is to treat the container as the development environment and enforce a set of practices in the team, such as:

  1. Our code base (apart from configuration files) always lives on a shared volume (on the local machine), version-controlled with Git
  2. Shared/dependent libraries, binaries, etc. always live in the container
  3. Build and test in the container, and before committing the image, clean out unneeded object files, libraries, etc., and make sure the docker diff changes are as expected
  4. Changes/updates to the environment, including shared libraries and dependencies, are always documented and communicated to the team
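For point 1, a hypothetical docker-compose.yml that keeps the code base on a shared volume while everything else stays in the container might look like this (service name, image, and paths are illustrative):

```yaml
# Illustrative sketch: code stays on the host (under Git), tools stay in the image.
services:
  dev:
    image: myproj:develop
    volumes:
      - ./src:/workspace/src   # shared volume: code base lives on the local machine
    working_dir: /workspace
    command: /bin/bash
    tty: true
```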

The way I would do this is to run the build outside of your container and copy only the build output (your binary and any libraries it needs) into the container. You can then upload the container to a container registry (either a hosted one or one you run yourself) and pull it from that registry onto your production machines. The flow would look like this:

  • build the binary
  • test / sanity-check the binary itself
  • build the container image with the binary
  • test / sanity-check the container image with the binary
  • upload to the container registry
  • deploy to staging / test / QA, pulling from the registry
  • deploy to prod, pulling from the registry

Since it is important to test before deploying to production, you want to test exactly what you are going to deploy, so you don't want to extract or modify the Docker image in any way after building it.

I would not run the build inside the container you plan to deploy to prod, since the container would then contain all sorts of extra artifacts (such as temporary build outputs, tooling, etc.) that you don't need in production, needlessly bloating the container image with things you won't use at deploy time.
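A minimal sketch of such a deploy-only image, assuming a hypothetical binary myapp built outside Docker (the binary name and runtime package are illustrative; libsqlite3-0 is the kind of runtime dependency mentioned earlier in the thread):

```dockerfile
# Deploy-only image: no compilers, no build tools, only runtime dependencies.
FROM debian:stable-slim
RUN apt-get update && apt-get install -y --no-install-recommends libsqlite3-0 \
 && rm -rf /var/lib/apt/lists/*
# The binary was built and tested outside the container, then copied in.
COPY myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]
```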


While the solutions presented in the other answers (and in particular the suggestion by Misha Brukman, in the comments to this answer, to use one Dockerfile for development and one for production) would have been considered idiomatic at the time of writing, it should be noted that the problems they try to solve, in particular cleaning up the build environment to reduce image size while still using the same container environment in development and production, have effectively been solved by multi-stage builds, which were introduced in Docker 17.05.

The idea is to split the Dockerfile into two parts: one based on your favorite development environment (such as a full-fledged Debian base image), which takes care of building the binaries you want to deploy at the end of the day, and another that simply runs those built binaries in a minimal environment such as Alpine.

This way you avoid possible inconsistencies between the development environment and production, which blueskin points out in one of the comments, while ensuring that your production image is not polluted with development tools.

The documentation provides the following example of a multi-stage build of a Go application, which you can translate to a C++ development environment (with the caveat that Alpine uses musl, so you have to be careful when building in your development environment).

    FROM golang:1.7.3
    WORKDIR /go/src/github.com/alexellis/href-counter/
    RUN go get -d -v golang.org/x/net/html
    COPY app.go .
    RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

    FROM alpine:latest
    RUN apk --no-cache add ca-certificates
    WORKDIR /root/
    COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
    CMD ["./app"]
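A hypothetical C++ adaptation of the same pattern might look like the following; the file name, image tags, and the choice of a Debian-based runtime (to sidestep the musl caveat) are illustrative assumptions, not part of the official example:

```dockerfile
# Stage 1: build with a full toolchain
FROM gcc:12 AS builder
WORKDIR /src
COPY main.cpp .
# Static linking keeps the runtime image free of libstdc++/glibc concerns.
RUN g++ -O2 -static -o app main.cpp

# Stage 2: minimal runtime image with only the built binary
FROM debian:stable-slim
WORKDIR /root/
COPY --from=builder /src/app .
CMD ["./app"]
```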

Source: https://habr.com/ru/post/1239444/

