As resolutions continue to rise, there is a growing need for data compression to deliver video files faster without affecting their quality. Here are a few codecs that are commonly used in the industry.
While some applications use uncompressed video, in general that means very high bitrates which are difficult to handle. Basic HD runs at 1.5Gb a second; if we were to move to high dynamic range 4K video at 120 frames a second, we would be looking at a native data rate of around 18Gb a second.
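As a back-of-the-envelope check on those figures, the raw bitrate is just pixels × frame rate × bits per pixel. The sampling parameters below are illustrative assumptions (10-bit 4:2:2, i.e. 20 bits per pixel); the quoted 1.5Gb/s and 18Gb/s figures also include link overheads such as SDI blanking, so the active-picture numbers come out slightly different.

```python
# Back-of-the-envelope uncompressed bitrates. Assumptions (illustrative):
# HD is 1920x1080 at 25 frames/s; Ultra HD is 3840x2160 at 120 frames/s;
# both use 10-bit 4:2:2 sampling, i.e. 20 bits per pixel.

def uncompressed_bitrate(width, height, fps, bits_per_pixel):
    """Raw active-picture bitrate in bits per second."""
    return width * height * fps * bits_per_pixel

hd = uncompressed_bitrate(1920, 1080, 25, 20)
uhd = uncompressed_bitrate(3840, 2160, 120, 20)

print(f"HD:  {hd / 1e9:.2f} Gb/s")   # ~1.04 Gb/s active picture
print(f"UHD: {uhd / 1e9:.2f} Gb/s")  # ~19.91 Gb/s active picture
```

Either way, the order of magnitude is the point: a move to high-frame-rate Ultra HD multiplies the raw data rate by well over a factor of ten.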
The solution is to compress the video. Data compression is a well-known technique, and we see it every day in zip files, for example. But that sort of compression doesn't work for video. First, we need to know how big the output files will be, because we plan our infrastructures around bitrates. Second, we need the compression time to be precisely predictable, because we need a new picture 25 times a second (or, in Ultra HD, 120 times a second).
Dedicated teams of mathematicians have developed techniques to compress video in predictable ways. These fall into three categories:
Mathematically lossless: when decoded, the files are identical to the original (zip files are mathematically lossless)

Visually lossless: when decoded, the files appear to an experienced viewer to be the same

Lossy: some degradation and artefacts are created in the encoding and decoding, the level of which depends on the application.
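The distinction between the first and last categories can be sketched in a few lines, using zlib (the same algorithm family behind zip files) and a synthetic line of 8-bit video samples. The "lossy" step here is a deliberately crude quantisation for illustration, not any real codec:

```python
import zlib

# One synthetic line of 8-bit video samples.
line = bytes(range(256)) * 8              # 2048 bytes of sample data

# Mathematically lossless: the decoded data is bit-identical to the original.
packed = zlib.compress(line)
assert zlib.decompress(packed) == line

# Lossy (crude illustration only): throw away the low 4 bits of each sample.
# The data becomes more compressible, but it is no longer identical.
quantised = bytes(b & 0xF0 for b in line)
assert quantised != line                  # degradation has been introduced
```

Video codecs make the lossy trade-off far more intelligently than this, of course; the point is simply that once information is discarded, no decoder can recover the original bits.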
We are probably most familiar with MPEG-2 and MPEG-4, the codecs which bring digital television to our homes. On premium channels, with a healthy bit budget, these look good, although some artefacts are clearly visible when you know what to look for. Sports pitches look like felt when the camera moves, or areas of similar colour break up into bands, for example. These are lossy compression schemes.
They are also asymmetric compression schemes. MPEG-4/H.264 (the ITU standard number) is designed for one-to-many distribution: a broadcaster to millions of receivers. The encoding process can be complex because it is only done once, and a cost of, say, $25,000 for an encoder is a relatively trivial sum for the broadcaster.
The decoding, however, has to happen in every television receiver or set-top box. The manufacturers of those devices are under pressure to keep the factory prices down, so the stream needs to be capable of being decoded in a chip costing $5 or less. The complex encoding allows simple decoding.
The natural successor to the MPEG family is H.265, sometimes called HEVC (High Efficiency Video Coding). This set out with the aim of doubling the compression efficiency of MPEG-4: achieving the same visual quality at half the bitrate. In time, this has allowed 4K video to be compressed to no more than twice the bitrate of HD.
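The arithmetic behind that last claim is worth spelling out. The 8Mb/s starting figure below is purely an illustrative assumption, not taken from any standard:

```python
# A rough sketch of why 4K HEVC lands at "twice or less" the bitrate of HD.
# Assumed starting point (illustrative only): HD delivered in H.264 at 8 Mb/s.
hd_h264_mbps = 8.0

hd_hevc_mbps = hd_h264_mbps / 2      # HEVC: same quality at half the bits
uhd_hevc_mbps = hd_hevc_mbps * 4     # 4K has four times the pixels of HD

print(uhd_hevc_mbps)                 # 16.0 -> twice the HD H.264 rate
```

In practice the scaling is better than this naive pixel count suggests, since larger pictures contain more redundancy for the codec to exploit, which is how "twice or less" is achieved.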
The MPEG family, including H.265, has two disadvantages, though. The first is that its compression schemes are based on a mathematical technique, the discrete cosine transform, which has been refined so extensively over the 20 years or so of MPEG schemes that it is hard to see how it can advance much further. The second is that its intellectual property is owned by a body which needs to charge royalties on each device which uses it: MPEG and H.265 devices each carry a licence cost.
The search is on for newer compression schemes that can find wide acceptance. One of the first to emerge was JPEG2000, which, as the name suggests, is a development of the JPEG still picture format. The JPEG2000 standard was developed to include a motion element, so it is well suited to video. It also processes images of up to 4,000 pixels square, so it can take even 4K video in a single tile.
JPEG2000 uses a different mathematical technique: wavelet compression. This has the advantage that it degrades very gently when under pressure, unlike MPEG, which breaks up into visible blocks. Mild JPEG2000 compression provides good, visually lossless streams, and is now widely used for applications like IP contribution circuits.
The downside is that JPEG2000 is not royalty-free either. In point-to-point applications like contribution circuits, though, paying a licence for just two devices may well be economically practical.
Two other, more recent compression standards are moving into prominence. They offer the advantages of more recent design (and so can exploit the latest processing power), better compression capabilities and, perhaps most important, no royalty payments.
The first is VC-2. This started life as a research project led by the BBC, which produced a codec called Dirac. Like JPEG2000, this is based on wavelet compression, and was finalised in 2008. Later that year, an I-frame-only version of the codec (one which does not need to compare successive video frames) was developed, called Dirac Pro, and this was passed to SMPTE for ratification, becoming VC-2.
One of the challenges with professional codecs is that, as content moves around production and post-production, material may be encoded, decoded then recoded a number of times. VC-2 is particularly resilient to recoding. The design can be implemented in software, keeping costs down for the technology, and encode times are measured in lines rather than frames, making for low latency. Taken together, you can build around 10 channels of VC-2 for the cost of a single JPEG2000 channel.
The newest compression scheme on the block is the work of the Tico Alliance. This is a grouping of a large number of major manufacturers, including EVS, Grass Valley, Imagine Communications, Nevion and Ross, which have come together with the specific aim of creating the next-generation compression scheme. The alliance aims for a high-performance, low-latency codec which provides the performance needed for live IP-based production.
Tico has been demonstrated to deliver visually lossless processing at 4:1 compression and mathematically lossless performance at lower compression ratios. It too is robust for multiple generations of encoding, and is designed for implementation in FPGA chips with no external memory for very fast processing.
Because it is new, it is designed for resolutions from HD up to 4K and 8K, including high dynamic range and high frame rate Ultra HD systems. Finally, the aim of the alliance is to create a widely recognised and freely available standard which is not burdened by complex licensing and royalties, and will be readily interoperable between vendors.
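That 4:1 visually lossless figure translates directly into link budgets. The raw rates below are nominal SDI payload figures used for illustration, not numbers from the alliance:

```python
# What 4:1 visually lossless compression means for link planning.
# Assumed nominal raw rates (illustrative): HD over 3G-SDI, 4K over 12G-SDI.
raw_rates_gbps = {"HD (3G-SDI)": 3.0, "4K (12G-SDI)": 12.0}
ratio = 4  # the visually lossless ratio demonstrated for Tico

for name, raw in raw_rates_gbps.items():
    print(f"{name}: {raw:g} Gb/s raw -> {raw / ratio:g} Gb/s compressed")

# At 4:1, a 12 Gb/s 4K signal drops to 3 Gb/s, so it fits comfortably
# inside the 10 Gb/s Ethernet links common in live IP production.
```

This is the practical attraction for IP-based facilities: multiple compressed 4K signals can share infrastructure that could not carry even one uncompressed feed.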
In the immediate future, it looks like H.265 will maintain the MPEG tradition of being the delivery standard, and it is certainly capable of making 4K to the home a practical proposition. JPEG2000 already has a place in providing very high quality contribution links, over dedicated circuits and over commodity telco IP fibre.
For broadcast infrastructures, at least at present, there is a choice of two royalty-free codecs. VC-2 is already an SMPTE standard; Tico comes from a broad alliance of vendors working in coalition to create a future-proof, widely recognised codec. These represent a strong platform for the future.
Source: CABSAT