The best hardware/software solution for parallel builds?

We have a Linux-based build system in which a build consists of many different targets (each with different drivers and feature sets enabled), all built from a single main source tree.

Rather than reworking the build system itself to be more parallel, we just want to find the best way to run the builds for all of these targets at the same time. I'm not sure how to get maximum performance out of this.

I considered the following possible solutions:

  • Many individual build machines. Disadvantages: many copies of the common source, or working from a (slow) shared disk. More systems to maintain.
  • Fewer multiprocessor machines (perhaps two quad-cores each) with fast striped RAID storage. Disadvantages: I'm not sure how this will scale. Disk I/O seems likely to be the bottleneck, and I don't know how well Linux handles SMP these days.
  • The same kind of SMP machine, but running a hypervisor or Solaris 10 under VMware. Is this silly, or would it give some scheduling benefits? Disadvantage: it doesn't solve the storage bottleneck.

I'm going to sit down and experiment with these options, but I wanted to check whether I've missed anything. Thanks!
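For the single-big-machine option, the per-target builds can already be run concurrently with plain shell job control, one build directory per target so outputs don't collide. A minimal sketch, where the target names and the build step are hypothetical stand-ins for the real build:

```shell
#!/bin/sh
# Sketch: build several targets from one source tree in parallel,
# each into its own build directory.
set -e

TARGETS="target_a target_b target_c"   # hypothetical target names

build_one() {
    target=$1
    mkdir -p "build/$target"
    # Stand-in for the real per-target build, e.g. something like:
    #   make O="build/$target" "${target}_defconfig" && make O="build/$target" -j4
    echo "built $target" > "build/$target/result.txt"
}

for t in $TARGETS; do
    build_one "$t" &          # run each target's build in the background
done
wait                          # block until every background build finishes

ls build
```

With a real build step, the useful degree of parallelism here is limited by cores and, as noted above, by disk bandwidth.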

3 answers

Take a look at Icecream. It's distcc as improved by SUSE.

It distributes compile jobs across the machines on your network, which is exactly this use case.
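As a rough sketch of how Icecream is typically wired in (the daemon names are real Icecream binaries, but the wrapper path is distribution-specific and the job count is illustrative):

```shell
# On one host, run the scheduler; on every build host, run the daemon:
#   icecc-scheduler -d
#   iceccd -d
#
# Then point the build at the icecc compiler wrappers. The -j count can
# exceed the local core count, since compile jobs are farmed out to the
# network. The wrapper directory below is the Debian/Ubuntu location and
# may differ on other distributions.
export PATH=/usr/lib/icecc/bin:$PATH
make -j32
```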


Whichever hardware/software combination you choose, I'd avoid building from a slow shared disk: keep one master checkout of the source and distribute it to the individual build machines by rsyncing it. A CI server such as Hudson can then schedule the per-target builds and collect the results.


If you're using make, the -j flag already runs multiple jobs in parallel, so a single multi-core (SMP) machine can keep all of its cores busy on one build. Be aware that -j tends to expose missing dependencies in makefiles: a build that works serially may break in parallel until those are fixed.

In other words, before investing in a farm of machines, check whether your makefiles can simply be run with -j on one bigger box, and measure where the bottleneck actually is.
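A minimal, self-contained demonstration of make -j with independent targets (the makefile and target names are invented for the sketch):

```shell
#!/bin/sh
# Sketch: three independent targets built concurrently by make -j.
set -e

# Write a tiny makefile; the \t supplies the tab that recipes require.
printf 'all: a.txt b.txt c.txt\n\n%%.txt:\n\techo made $@ > $@\n' > Makefile.demo

# -j4 allows up to four jobs at once; with correct dependency
# declarations the result is identical to a serial build.
make -f Makefile.demo -j4

cat a.txt b.txt c.txt
```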


Source: https://habr.com/ru/post/1710353/

