Store more than 3 GB of video frames in memory on a 32-bit OS

At work, we have an application that plays back 2K (2048 × 1556 px) OpenEXR film sequences. It works well, except that with sequences over 3 GB (quite common), it has to unload old frames from memory, despite the fact that all machines have 8-16 GB of memory (which is addressable via the Linux BIGMEM kernel support).

Frames have to be cached in memory for real-time playback. The OS is a several-year-old 32-bit Fedora distro (and for the foreseeable future it cannot be upgraded to 64-bit). The address-space limit is 3 GB per process.

Basically, is there a way to cache more than 3 GB of data in memory? My initial thought was to spread the data over multiple processes, but I have no idea whether that is feasible.

+4
5 answers

How about creating a RAM disk and loading the file onto it... assuming the RAM disk handles the BIGMEM stuff for you.

Alternatively, you could use multiple processes: each process loads a view of the file as a shared memory segment, and the player process then maps the segments in turn as needed.
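A minimal sketch of what the loader side of that could look like with POSIX shared memory; the segment name, chunk size, and the frame-decoding step are illustrative assumptions, and error handling is trimmed:

```cpp
// Loader process: decode one chunk of frames into a named shared
// memory segment. Built on POSIX shm_open + mmap; link with -lrt
// on older glibc. Names and sizes are made up for illustration.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main() {
    const size_t kChunkSize = 512u * 1024 * 1024;  // 512 MB per segment
    int fd = shm_open("/frames_chunk_0", O_CREAT | O_RDWR, 0600);
    if (fd < 0) return 1;
    if (ftruncate(fd, kChunkSize) != 0) return 1;

    void* p = mmap(nullptr, kChunkSize, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) return 1;

    // ... decode EXR frames into p here. These pages live in kernel
    // tmpfs memory (which a PAE/BIGMEM kernel can place in high
    // memory), not solely inside this process's 3 GB address space.
    munmap(p, kChunkSize);
    close(fd);  // the segment persists until shm_unlink()
    return 0;
}
```

The player would then shm_open() each "/frames_chunk_N" read-only and mmap() only the segment it is currently playing, keeping its own mappings under the 3 GB ceiling while the total cached data exceeds it.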

+2

One possibility might be to use mmap. You could map/unmap different chunks of your data in and out of the same virtual address range. You can only have one set mapped in at a time, but as long as there is enough physical memory, the data should stay resident in the page cache.
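A rough sketch of that windowing idea, assuming the frames have been concatenated into one big cache file; the path and window size are illustrative:

```cpp
// Cycle fixed-size read-only views of a large file through the
// process's limited virtual address space. Pages that were touched
// stay in the kernel page cache even after munmap(), so remapping a
// previously played window is cheap.
// Compile with -D_FILE_OFFSET_BITS=64 for files over 2 GB on 32-bit.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main() {
    const off_t kWindow = 256 * 1024 * 1024;  // 256 MB view at a time
    int fd = open("/var/cache/frames.bin", O_RDONLY);  // illustrative path
    if (fd < 0) return 1;

    struct stat st;
    if (fstat(fd, &st) != 0) return 1;

    for (off_t off = 0; off < st.st_size; off += kWindow) {
        off_t remaining = st.st_size - off;
        size_t len = remaining < kWindow ? (size_t)remaining : (size_t)kWindow;
        void* view = mmap(nullptr, len, PROT_READ, MAP_SHARED, fd, off);
        if (view == MAP_FAILED) return 1;
        // ... hand frames inside `view` to the playback loop ...
        munmap(view, len);  // frees address space, not the cached pages
    }
    close(fd);
    return 0;
}
```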

+3

I assume you can modify the application. If so, the easiest approach is to start the application several times (once for each 3 GB chunk of video), so that each instance holds one piece of the video, and to use another program to synchronize them so that each takes control of the framebuffer (or other video output) in turn.

The synchronization will be a bit messy, perhaps, but it can be simplified if each application instance gets its own framebuffer and the synchronization program simply points the video controller at the correct framebuffer between frames when it is time to switch to the next instance.
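One way the hand-off could be sketched is baton passing with POSIX named semaphores. The semaphore names, the instance-numbering scheme, and play_chunk() are all hypothetical, and a launcher (not shown) is assumed to have created the semaphores with instance 0's initialized to 1:

```cpp
// Each player instance blocks on its own semaphore, plays its chunk,
// then posts the next instance's semaphore. Link with -lrt/-lpthread.
#include <fcntl.h>
#include <semaphore.h>
#include <cstdlib>
#include <string>

// Hypothetical hook: blit this instance's 3 GB worth of frames.
void play_chunk() { /* ... */ }

int main(int argc, char** argv) {
    if (argc < 3) return 1;
    int id    = std::atoi(argv[1]);   // this instance's index
    int total = std::atoi(argv[2]);   // total number of instances
    int next  = (id + 1) % total;

    // Semaphores "/player_0".."/player_N-1" pre-created by a launcher.
    sem_t* mine   = sem_open(("/player_" + std::to_string(id)).c_str(), 0);
    sem_t* theirs = sem_open(("/player_" + std::to_string(next)).c_str(), 0);
    if (mine == SEM_FAILED || theirs == SEM_FAILED) return 1;

    sem_wait(mine);    // wait for our turn at the framebuffer
    play_chunk();
    sem_post(theirs);  // hand control to the next instance
    return 0;
}
```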

+1

My, what an interesting problem :)

(EDIT: Oh, I just read Rob's RAM drive post... I got all excited about this problem... but since I have a little more to offer, I won't delete this.)

Would it be possible to...

  • set up a RAM drive of several gigabytes, and then
  • modify the program to do all of its frame reads from that "disk"?

I'd guess the RAM disk part is the problem, as the maximum RAM disk size will be OS- and filesystem-dependent. You might need to create several RAM disks and have your code jump between them. Or maybe you could set up a RAID-0 stripe set over multiple RAM disks. Or, if there are still OS limitations and you can afford to drop a couple grand (4k?), set up a hardware RAID-0 stripe set with some of those new blazingly fast solid-state drives. Or...
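For the plain RAM disk route on Linux, a tmpfs mount is the usual mechanism. A minimal sketch of the read side follows; the mount point, size, and file-naming scheme are assumptions for illustration. Note that tmpfs pages can sit in PAE high memory on a BIGMEM kernel, so the cache as a whole is not confined to any single process's 3 GB:

```cpp
// Reads one frame back out of an assumed tmpfs RAM disk, e.g. one
// mounted (as root) with:
//   mount -t tmpfs -o size=6g tmpfs /mnt/framecache
// The path and naming convention here are hypothetical.
#include <cstdio>
#include <vector>

std::vector<char> load_frame(int n) {
    char path[256];
    std::snprintf(path, sizeof(path), "/mnt/framecache/frame_%06d.exr", n);
    std::vector<char> buf;
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return buf;                 // empty vector signals a miss
    std::fseek(f, 0, SEEK_END);
    long size = std::ftell(f);
    if (size > 0) {
        buf.resize((size_t)size);
        std::fseek(f, 0, SEEK_SET);
        if (std::fread(buf.data(), 1, buf.size(), f) != buf.size())
            buf.clear();
    }
    std::fclose(f);
    return buf;
}

int main() {
    std::vector<char> frame = load_frame(0);
    return frame.empty() ? 1 : 0;
}
```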

Fun, fun, fun.

Be sure to follow up!

+1

@dbr said:

There is a review machine with a ridiculous fibre-channel RAID array that can play 2K films straight off the array without trouble. The problem is the artist workstations, so it wouldn't be one $4,000 RAID array, it would be hundreds.

Well, if you can live with the ~30 GB limit, then maybe a single 36 GB SSD would suffice? Those run around $1,000 US, I think, and they may have enough data rate. That's pretty good, and possibly cheaper than a naive RAM-based approach. Smaller sizes are available too. If ~60 GB would be enough, you could probably get away with a JBOD array of two for double the price and skip the RAID controller. Be sure to look at the higher-end SSD options, though; the low end is littered with glorified memory sticks. :P

0
