It depends on the compression standard. Modern standards such as H.264 are specified strictly and come with reference decoders, and any conformant decoder must produce output that is bit-for-bit identical to the reference decoder's (barring implementation bugs, of course). Older video codecs such as MPEG-4 Part 2 do not pin down every step of the decoding process in full detail, so different implementations may produce slightly different output: it looks essentially the same, but with small rounding differences. With inter-frame codecs, where later frames are predicted from earlier ones, such rounding errors can accumulate as drift.
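One way to see the bit-exactness claim in practice is to hash the decoded frames from two separate decoder runs. Below is a minimal sketch, assuming ffmpeg is installed and that an H.264 bitstream named `input.h264` (a placeholder) exists; it compares a single-threaded and a multi-threaded decode, which must match for a bit-exact standard like H.264. Substitute a second decoder binary in place of one of the runs to compare independent implementations.

```python
import hashlib
import subprocess

def decoded_sha256(extra_args):
    """Decode to raw YUV on stdout and hash the pixel data."""
    cmd = ["ffmpeg", "-v", "quiet", *extra_args, "-i", "input.h264",
           "-f", "rawvideo", "-pix_fmt", "yuv420p", "-"]
    raw = subprocess.run(cmd, capture_output=True, check=True).stdout
    return hashlib.sha256(raw).hexdigest()

# For a conformant decoder the thread count must not change the output,
# so these two hashes should be identical.
print(decoded_sha256(["-threads", "1"]) == decoded_sha256(["-threads", "4"]))
```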
For audio codecs, a conformant decoder should likewise come close to the reference signal, allowing for some implementation and rounding differences; audio conformance is typically defined as an allowed error relative to the reference output rather than bit-exact equality.
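To illustrate that "comes close" behavior, here is a sketch of a tolerance-style comparison between two implementations of the same audio standard. It assumes an FFmpeg build that ships both the fixed-point `mp3` and floating-point `mp3float` decoders; `input.mp3` is a placeholder. The two decodes should agree to within a few least-significant bits rather than exactly.

```python
import array
import subprocess

def decode_s16(decoder):
    """Decode with the named decoder to signed 16-bit little-endian PCM."""
    cmd = ["ffmpeg", "-v", "quiet", "-c:a", decoder, "-i", "input.mp3",
           "-f", "s16le", "-"]
    return array.array("h", subprocess.run(cmd, capture_output=True,
                                           check=True).stdout)

fixed, flt = decode_s16("mp3"), decode_s16("mp3float")
n = min(len(fixed), len(flt))
# A small maximum difference (a few LSBs) is the expected rounding-level
# disagreement between two conformant decoders.
print("max sample difference:",
      max(abs(a - b) for a, b in zip(fixed[:n], flt[:n])))
```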
In most cases, the quality-for-speed trade-off is made in the encoder, but some decoders also offer options that deviate from the standard to speed up decoding at the cost of not producing the exactly correct image.
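FFmpeg exposes decoder-side options of exactly this kind. The sketch below enables two of them; the option names are real FFmpeg options, while the input file is a placeholder. Skipping the H.264 in-loop deblocking filter decodes faster but the result is no longer the spec-exact picture.

```python
import subprocess

subprocess.run([
    "ffmpeg", "-v", "quiet",
    "-skip_loop_filter", "all",   # skip deblocking: faster, not bit-exact
    "-flags2", "+fast",           # allow non-spec-compliant speedup tricks
    "-i", "input.h264",
    "-f", "null", "-",            # decode and discard, e.g. for benchmarking
], check=True)
```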
So it all depends on which codec standards you use (specifically, whether they are written precisely enough that independent implementations can be bit-exact) and on the actual decoder implementations.