I agree with ethimu: you are talking about the concept of a Single System Image (SSI). Besides the OpenMosix project, there have been several commercial implementations of the same idea (one modern example is ScaleMP). This is not a new idea.
I just want to expand on some of the technical aspects of SSI.
Basically, the reason this is not done is that performance is generally either completely unpredictable or terrible. There is a concept in computer systems known as NUMA (Non-Uniform Memory Access), which basically means that the cost of accessing different parts of memory is not uniform. It applies to large systems where processors may be accessing memory attached to other sockets, and to cases where memory is accessed remotely over a network (as in SSI). Normally the operating system tries to compensate for this by laying out programs and data in memory so that a program can run as quickly as possible: code and data get placed in the same NUMA region, and the program is scheduled on the nearest possible CPU.
However, when you run large applications (trying to use all the memory in your SSI), there is little the operating system can do to reduce the impact of remote memory fetches. MySQL has no idea that accessing page 0x1f3c will cost 8 nanoseconds while accessing page 0x7f46 will stall it for hundreds of microseconds, possibly milliseconds, while the memory is pulled in over the network. This means that applications that are not NUMA-aware perform terribly (seriously, really badly) in such an environment. As far as I know, most modern SSI products rely on the fastest interconnects available (like InfiniBand) between machines just to reach even passable performance.
That is why frameworks that expose the true cost of accessing data to the programmer (e.g. MPI, the Message Passing Interface) have gained more traction than SSI or DSM (Distributed Shared Memory) approaches. With MPI the programmer can actually optimize the application around its communication costs, whereas an SSI environment hides those costs behind ordinary memory accesses, so there is simply no way to optimize for it.
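For contrast, here is a minimal MPI sketch showing what "exposing the cost" means in practice: the data transfer is an explicit call the programmer writes and can minimize, rather than a hidden page fault. The buffer size and ranks are arbitrary illustrations; compile with `mpicc` and run with something like `mpirun -np 2 ./a.out`:

```c
/* Sketch: explicit message passing with MPI. The remote transfer happens
 * exactly where MPI_Send/MPI_Recv appear, so its cost is visible to the
 * programmer instead of being buried in a remote memory access. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double payload[1024] = {0};

    if (rank == 0) {
        /* The network cost of moving this buffer is paid here, explicitly. */
        MPI_Send(payload, 1024, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(payload, 1024, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received the buffer explicitly\n");
    }

    MPI_Finalize();
    return 0;
}
```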