I don't know of anyone who has done this and made it publicly available. Just my gut feeling, it doesn't sound very promising.
As Martinus points out, some compression algorithms are highly serial. Block compression algorithms, such as LZW, can be parallelized by encoding each block independently. Zipping a large file tree can be parallelized at the file level.
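To make the block-level idea concrete, here is a minimal sketch in Python using `zlib` (deflate rather than LZW, since that is what the standard library provides): the input is split into fixed-size blocks and each block gets its own independent compression stream, so the blocks can be processed by a thread pool in parallel. The helper names and the 64 KiB block size are my own illustration, not from the original answer.

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

def compress_blocks(data: bytes, block_size: int = 64 * 1024) -> list:
    """Split the input into fixed-size blocks and compress each one independently."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    # Each block is a self-contained deflate stream, so the work is
    # embarrassingly parallel across blocks.
    with ThreadPoolExecutor() as pool:
        return list(pool.map(zlib.compress, blocks))

def decompress_blocks(compressed: list) -> bytes:
    """Decompress each block and concatenate; blocks are independent."""
    return b"".join(zlib.decompress(b) for b in compressed)
```

The trade-off is the usual one for block compression: independent blocks cost some ratio (no dictionary sharing across block boundaries) in exchange for parallelism.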
However, none of these is truly SIMD-style parallelism (Single Instruction, Multiple Data), and they are not massively parallel.
GPUs are basically vector processors: you can execute hundreds or thousands of ADD instructions in lock step, running programs that have very few data-dependent branches.
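A small sketch of what "lock step, few data-dependent branches" means in practice, using NumPy as a stand-in for a vector unit (the function names are mine, for illustration): the branchy version takes a different path per element, while the branchless version applies the same min/max operations to every lane at once.

```python
import numpy as np

def clamp_branchy(xs, lo, hi):
    # Scalar style: one data-dependent branch per element.
    out = []
    for x in xs:
        if x < lo:
            out.append(lo)
        elif x > hi:
            out.append(hi)
        else:
            out.append(x)
    return out

def clamp_branchless(xs, lo, hi):
    # Vector style: identical MIN/MAX instructions applied to every
    # lane in lock step -- the pattern GPUs and SIMD units execute well.
    return np.minimum(np.maximum(xs, lo), hi)
```

On a GPU, divergent branches force the hardware to execute both sides of the branch for the whole warp, which is why the branchless form is the one that maps well.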
Compression algorithms in general fit more of an SPMD (Single Program, Multiple Data) or MIMD (Multiple Instruction, Multiple Data) programming model, which is better suited to multi-core CPUs.
Video compression algorithms can be accelerated by GPGPU processing, for example CUDA, only to the extent that there is a very large number of pixel blocks being cosine-transformed or convolved (for motion detection) in parallel, and the IDCT or convolution subroutines can be expressed with branchless code.
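To illustrate where that block-level parallelism comes from, here is a sketch of a JPEG/MPEG-style blockwise 8x8 DCT in NumPy (my own toy implementation, not taken from any codec): every 8x8 block is transformed independently by the same two matrix multiplies, so thousands of blocks can run in parallel with no branches in the inner kernel.

```python
import numpy as np

def dct_matrix(n: int = 8) -> np.ndarray:
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def blockwise_dct(frame: np.ndarray) -> np.ndarray:
    """Apply an 8x8 2D DCT to each block of a frame whose sides are multiples of 8."""
    C = dct_matrix(8)
    out = np.empty(frame.shape, dtype=float)
    for i in range(0, frame.shape[0], 8):
        for j in range(0, frame.shape[1], 8):
            # Each block is independent of every other block: this is the
            # massively parallel, branchless part a GPU can exploit.
            out[i:i + 8, j:j + 8] = C @ frame[i:i + 8, j:j + 8] @ C.T
    return out
```

On a GPU the two Python loops would disappear: one thread block per 8x8 pixel block, all running the same branchless kernel.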
GPUs also favor algorithms with high numeric intensity (the ratio of math operations to memory accesses). Algorithms with low numeric intensity (for example, adding two vectors) can be massively parallel and SIMD-friendly, but still run slower on the GPU than the CPU, because they are memory bound.
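A rough back-of-the-envelope way to see this (the numbers below are my own idealized estimates, ignoring caches and instruction overhead): count floating-point operations per byte moved to and from memory. Vector addition barely does any math per byte; a naive matrix multiply does a lot.

```python
def numeric_intensity(flops: int, bytes_moved: int) -> float:
    """Ratio of math operations to memory traffic, in FLOPs per byte."""
    return flops / bytes_moved

# Vector add c = a + b over n float32 elements:
# n additions, 3*n*4 bytes moved (read a, read b, write c).
n = 1_000_000
vec_add = numeric_intensity(n, 3 * n * 4)            # ~0.08 FLOPs/byte: memory bound

# Naive m x m matrix multiply: 2*m**3 FLOPs,
# 3*m**2*4 bytes touched (idealized, assuming perfect reuse).
m = 1000
matmul = numeric_intensity(2 * m**3, 3 * m**2 * 4)   # ~167 FLOPs/byte: compute bound
```

This is the intuition behind roofline-style analysis: below the machine's balance point (FLOPs per byte of bandwidth), memory bandwidth, not the ALUs, sets the speed limit, so a GPU's thousands of ALUs sit idle.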
Die in Sente Jan 20 '09 at 22:41