Before we talk about video, let’s talk about the shipping industry. Gross oversimplification to follow — see The Box: How the Shipping Container Made the World Smaller and the World Economy Bigger by Marc Levinson for more.
Before the shipping container [link] was standardized, moving goods by ship, rail, and truck was a serious hassle. Loose cargo had to be manually unpacked and repacked every time it moved from one mode of transportation to another. Cargo stored in boxes or crates could be moved by machine, but because every crate might have a different size, shape, weight, or loading point, not every machine could move every crate.
When the modern shipping container was standardized across the industry, you could use a universal set of machines like cranes over a universal set of transportation modes like ships, rail cars, and trucks to simplify loading, unloading, and transport. Any shipping yard anywhere in the world could easily work with cargo by using tools built around a common standard.
This exists in the world of digital media, too. MXF (Material Exchange Format) and MOV (QuickTime movie) are called container formats. A video application that supports a specific container format can use commonly available standards and libraries to manipulate the containers — open the file, seek to a specific point in time, play, fast-forward, rewind, etc. There is no need to reinvent the wheel by writing a new file format or new media-manipulation libraries.
That said, just as with the shipping container, what’s inside a media container may require additional tools. While all MOV files share the same container structure, the audio and video contents inside may be stored with any of a number of different codecs [link].
While the container provides for standardized file-handling and navigation, the codec is what actually allows you to represent the image or sound as stored bits and bytes, and then interpret those bits and bytes to get an image or sound.
Lossy video compression is all about compromise: you’re throwing away information in order to make the file smaller. The more you throw away, the smaller the file gets — but the less image quality you retain. (A third consideration, decode complexity, also plays a part.) Generally speaking, if you want a good-looking file, it’s going to be very large. If you want a very small file, it’s not going to look as good.
On to your specific question: IMX50 is a standard-definition digital video format based on MXF-wrapped MPEG-2 video data, with a constant data rate of 50 megabits per second and intraframe compression only.
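To put that data rate in perspective, here’s a back-of-the-envelope sketch of how data rate translates into file size (video stream only; the one-minute duration is just an assumed example, and audio and container overhead are ignored):

```python
# Rough file-size estimate from the video data rate alone
# (ignores audio tracks and container overhead).
def video_size_bytes(data_rate_mbps: float, duration_seconds: float) -> float:
    """Approximate video stream size in bytes: megabits/s -> bytes."""
    return data_rate_mbps * 1_000_000 * duration_seconds / 8

# IMX50 runs at a constant 50 Mb/s, so one minute of video is roughly:
size = video_size_bytes(50, 60)
print(f"{size / 1_000_000:.0f} MB per minute")  # prints "375 MB per minute"
```

The same arithmetic shows the compromise described above: halve the data rate and you halve the file size, at the cost of image quality.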
H.264 is a video codec (like MPEG-2 in the IMX50 example above). It does not define a format like IMX50 does. It’s highly scalable, with many different profiles available (depending on your playback device) and supporting many resolutions, frame rates, and data rates. There’s no reason why H.264 can’t yield a visual result comparable to your IMX50 encode.
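As a concrete starting point for that experimentation, an H.264 encode can be produced with a common open-source tool like ffmpeg. The command below is a hypothetical sketch, not a recipe from this post: the input filename is a placeholder, and the quality settings are assumptions you’d tune for your material.

```shell
# Hypothetical example: transcode an MXF source to H.264 in an MP4 container.
# "input.mxf" is a placeholder; -crf 18 is a high-quality (larger-file)
# setting, and -preset slow trades encode time for compression efficiency.
ffmpeg -i input.mxf -c:v libx264 -preset slow -crf 18 -pix_fmt yuv420p output.mp4
```

Lower `-crf` values mean higher quality and larger files, which is exactly the size-versus-quality compromise described above.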
However, video compression is some mix of art and science, and you’ll have to spend some time learning and experimenting.
Walter Soyka
Principal & Designer at Keen Live
Motion Graphics, Widescreen Events, Presentation Design, and Consulting
RenderBreak Blog – What I’m thinking when my workstation’s thinking
Creative Cow Forum Host: Live & Stage Events