BY DAVID AUSTERBERRY - Penton Media



THE VIDEO CODING HANDBOOK BroadcastEngineering

Sony's recent release of multi-codec cameras (the F5 and F55) is a reminder that there is no such thing as the ideal codec. Not only do the cameras support RAW and existing codecs, but Sony also introduced the XAVC codec to give additional capability and 4K compressed recording.

The codec the Director of Photography (DoP) chooses ultimately depends on a number of issues, particularly the program genre. An observational documentary is less demanding on final picture quality than prime-time drama.

Codecs and workflow: Making the optimum choice

In an ideal world, one codec would be used from camera to master control. That would avoid concatenation of coding artifacts. However, in most applications it is just not practical. In any workflow there are touchpoints:
1) camera files
2) editing files
3) delivery broadcast transmission master
4) playout server files

Each of these may use a different codec. For example: shoot HD at MPEG-2 4:2:2, 50Mb/s, long GOP; edit and finish at DNxHD 145; deliver as HDCAM-SR; and ingest to the playout server as 4:2:0 long GOP at 20Mb/s. This is decoded to HD-SDI for the master control switcher and keying, and then finally encoded to HD at a nominal 6Mb/s variable bit rate in a statistical multiplexer.

HD VTRs

The first generation of HD camcorders based on videotape had a choice of DVCPRO100 or HDCAM recording. With data rates of 100 and 140Mb/s respectively, not only was compression a necessity, but the images were also downsampled from 1920 to 1440 samples per line to lower the bandwidth. HDCAM-SR allowed recording at 440 or 880Mb/s and resolved many of the quality issues.

Before considering the compression format, what is being compressed?

Camera data

How much data does a sensor create? Consider high frame rate recording at 60fps (progressive). A three-chip HD camera (1920 x 1080), with a 16-bit sample, generates 99.5Mb per frame, a stream of 6Gb/s, or 746MB of data every second.

A 4K (3840 x 2160) single-chip Bayer array generates 133Mb per frame, or 8Gb/s.

Few sensors have a bit depth of 16, so these numbers can be reduced for the 14 bits typical of cameras, but that is still 5.2Gb/s for the three-chip HD camera.
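The arithmetic above is easy to verify. This is a quick sketch (here Mb means 10^6 bits and MB means 10^6 bytes, matching the article's figures):

```python
# Sketch: verifying the sensor data-rate arithmetic (Mb = 10**6 bits, MB = 10**6 bytes).
def frame_bits(width: int, height: int, samples_per_pixel: int, bit_depth: int) -> int:
    """Bits generated per frame for a given raster and sample format."""
    return width * height * samples_per_pixel * bit_depth

# Three-chip HD camera (one sample per pixel per chip), 16-bit samples, 60p
hd = frame_bits(1920, 1080, 3, 16)
print(hd / 1e6)           # Mb per frame: 99.5328
print(hd * 60 / 1e9)      # Gb/s at 60fps: ~5.97
print(hd * 60 / 8 / 1e6)  # MB/s: ~746.5

# 4K single-chip Bayer array: one 16-bit sample per photosite
bayer = frame_bits(3840, 2160, 1, 16)
print(bayer / 1e6)        # Mb per frame: ~132.7
print(bayer * 60 / 1e9)   # Gb/s: ~7.96
```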

Single-sensor color imaging

As DoPs demand Super 35mm-sized sensors, more cameras are now using the single-chip design. A three-chip camera with a Super 35mm-sized sensor would be very large. It would also have a long back focus, meaning that regular cine lenses would be unsuitable. It is more cost-effective to use a single sensor with a color filter array (CFA) for large-sensor designs. However, it is accepted that the resolution of a single sensor with a Bayer CFA, with a good demosaicing algorithm, is around 30 percent lower than that of a three-chip imager of the same pixel dimensions.

The Sony F55 supports RAW, MPEG-2, MPEG-4 SStP and XAVC recording.

Automation can be the element that takes media workflow to the next level of efficiency. Image courtesy Harris.

For HD, a single-chip camera needs a resolution of around 2300 x 1300 to equal a three-chip 1920 x 1080 imager. It should be noted that many single-chip cameras have sensors exceeding this size, so they should achieve good resolution in HD.

The single-chip camera can ease the issues of moving data to the post house. If the raw data from the chip is recorded, rather than the demosaiced RGB or YUV signals, then there can be a considerable saving in data. Table 1 shows some numbers for a 14-bit sensor shooting 60p. A three-sensor design outputs 5.2Gb/s against only 1.74Gb/s from a RAW sensor. Note the RAW signal is a lower bit rate than a sub-sampled 4:2:2 8-bit signal.

In most cameras, the sub-sampled YUV signal is truncated to 10 or 8 bits, then encoded using one of the MPEG standards at 1080i 25/29.97 or 720p 50/59.94, I-frame or long GOP. For example: AVC-I at 100Mb/s, or XDCAM at 50Mb/s. Such data rates are similar to existing SD rates, so they are easily handled by networks and storage. Converting from linear to log sampling helps, saving two bits for a given dynamic range, with 10-bit log considered equivalent to 12-bit linear. Many digital cinema shoots use log coding.

The goal of one codec

But what of the target of using one codec all the way through to master control?

To improve the performance of their NLEs, Apple and Avid developed edit-specific codecs: ProRes and DNxHD. For good performance, the NLE requires minimal rendering to display clips on the timeline; the editor's work will be slowed by a codec that requires considerable rendering. Moore's Law will fix the rendering issues over time, but 4K waits in the wings with four times the data rate. Edit workstations always seem to be short on power!

Adding the edit codec immediately introduces two transcodes, at the input and output of post.

One route popular with camera manufacturers is to include onboard recording in a compressed, 8-bit format, and to supply an uncompressed output for one of the many after-market recorders.

Some of these recorders now support encoding directly to DNxHD and ProRes, neatly circumventing the need for coding or transcoding during ingest to the NLE.

Different needs

Most camera codecs use one of the ISO/IEC MPEG family: MPEG-2, MPEG-4 Part 2 or AVC. Some DSLRs record Motion JPEG to ease the internal processing requirements. To meet the conflicting requirements of productions for different genres, camera manufacturers are offering more choices. These include:
• RAW output
• uncompressed, RGB or YUV, log and linear, 12, 10 or 8 bits
• compressed

The next step is the introduction of high-efficiency video coding (HEVC). Most likely this will first appear as a delivery codec, to stream 4K to home viewers and lower resolutions to mobile devices.

Looking at all this, a multi-codec camera adds choices to support different workflows. One camera can create large files for grading, or small files for a straight-through edit. One camera can be used for different production genres. If a camera has an uncompressed output, HD-SDI or HDMI, then after-market field recorders achieve much the same aim.

Although one codec from acquisition to the air server works for news and documentaries, for other genres that require a higher picture quality, transcoding becomes a necessity.

A director has all the choices available, but choosing the right one needs a joint decision from the DoP, digital intermediate technician, colorist and editor to achieve the optimum picture quality within the many constraints of production, not least being cost. BE

Table 1. Uncompressed data rates at 1920 x 1080, 60fps

Format          W     H     Bit depth   Bits per frame   Mb/s @ 60fps
HD 444          1920  1080  14          87,091,200       5,225
HD 444 10-bit   1920  1080  10          62,208,000       3,732
HD 422 10-bit   1920  1080  10          41,472,000       2,488
HD 422 8-bit    1920  1080  8           33,177,600       1,990
HD 420 8-bit    1920  1080  8           24,883,200       1,492
RAW HD          1920  1080  14          29,030,400       1,741
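Table 1's rows can be reproduced from the usual average sample counts per pixel. This is a sketch, assuming full-raster sampling (4:4:4 carries three samples per pixel, 4:2:2 averages two, 4:2:0 averages 1.5, and Bayer RAW one):

```python
# Sketch: recomputing Table 1 (1920 x 1080 at 60fps).
# Average samples per pixel: 4:4:4 -> 3, 4:2:2 -> 2, 4:2:0 -> 1.5, Bayer RAW -> 1.
formats = [
    ("HD 444 14-bit", 14, 3.0),
    ("HD 444 10-bit", 10, 3.0),
    ("HD 422 10-bit", 10, 2.0),
    ("HD 422 8-bit", 8, 2.0),
    ("HD 420 8-bit", 8, 1.5),
    ("RAW HD 14-bit", 14, 1.0),
]
W, H, FPS = 1920, 1080, 60
for name, bits, samples in formats:
    bits_per_frame = int(W * H * samples * bits)
    print(f"{name}: {bits_per_frame:,} bits/frame, {bits_per_frame * FPS / 1e6:,.0f} Mb/s")
```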

The ARRI Alexa can encode to DNxHD and ProRes with suitable options installed.


Codec evolution: Panasonic's AVC-Ultra is the latest iteration of AVC.
BY STEVE MULLEN

There is no doubt that we are in the midst of a rapid evolution of codec design. Traditional codecs, some might call them legacy codecs, are gaining evolutionary improvements. These codecs include HDCAM, AVC-Intra 50 and 100, as well as AVCHD 1.0. This article will, after a brief overview of AVC-Intra and ProRes 422 as well as the new sensors that drive codec evolution, focus on AVC-Ultra. There are five flavors of ProRes 422 in comparison to uncompressed video. (See Table 1.)

Although ProRes 422 codecs are 10-bit codecs, they may carry 12-bit data values. However, they vary in terms of color space and compression ratios. ProRes 4444 has additional functionality: The first three 4's indicate that the codec is capable of carrying either RGB values or luminance plus two chroma components, with all three values present for each pixel. The fourth 4 indicates that an alpha value can be carried along with each pixel. When cameras record ProRes 4444, the fourth value is not present, making the data stream simply 4:4:4.

The advantage of the ProRes proxy codec is best experienced in Final Cut Pro X. When you import any type of data, you have the option of automatically, in the background, creating a ProRes 422 or proxy version of the original file. You then edit the 10-/12-bit 4:2:2 proxy video, which allows real-time editing of almost any format on almost any Mac. During export, the original file is used as the source of all image data.

AVCHD has evolved to version 2, which has two new features: the ability to record at frame rates of 50fps or 60fps, and the ability to record at 28Mb/s at these higher frame rates. To date, the AVCHD specification has not been enhanced to support Quad HD or 4K2K images. For this reason, cameras such as the JVC HMQ10 record Quad HD in generic AVC/H.264. Using Level 5.1 or Level 5.2, 24fps or 60fps, respectively, can be recorded.

Panasonic's AVC-Intra is available in two formats: a 50Mb/s codec and a 100Mb/s codec. AVC-Intra records a complete range of frame rates: at 1920 x 1080, 23.98p, 25p, 29.97p, 50i and 59.94i; at 1280 x 720, 23.98p, 25p, 29.97p, 50p and 59.94p. The characteristics of the two flavors differ. (See Table 2.)
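Table 2 notes that both AVC-Intra flavors fix the size of every frame, so the per-frame bit budget follows directly from the nominal bit rate. A quick sketch:

```python
# Sketch: fixed per-frame bit budget implied by AVC-Intra's constant frame size.
# (Nominal rates from the text; exact stream overhead is ignored.)
for codec, mbps in (("AVC-Intra 50", 50), ("AVC-Intra 100", 100)):
    for fps in (23.98, 25.0, 29.97, 59.94):
        bits_per_frame = mbps * 1e6 / fps
        print(f"{codec} at {fps}fps: {bits_per_frame / 1e6:.2f} Mb/frame")
```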

Codec parameters

All codecs have a similar set of parameters. These include image resolution, image composition (single frame versus two fields), de-Bayered versus raw (progressive-only), image frame rate or field rate, color sampling (4:4:4, 4:2:2, 4:1:1 or 4:2:0), RGB versus YCrCb, compression ratio, and bit depth.
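These parameters can be gathered into a single record. The class below is a hypothetical illustration, and the rate it computes covers active picture only (no blanking or interface overhead):

```python
from dataclasses import dataclass

# Sketch: the codec parameters listed above as one record, plus the
# uncompressed data rate they imply (active picture only).
SAMPLES_PER_PIXEL = {"4:4:4": 3.0, "4:2:2": 2.0, "4:1:1": 1.5, "4:2:0": 1.5}

@dataclass
class CodecParams:
    width: int
    height: int
    frame_rate: float   # frames per second
    sampling: str       # "4:4:4", "4:2:2", "4:1:1" or "4:2:0"
    bit_depth: int

    def uncompressed_mbps(self) -> float:
        samples = self.width * self.height * SAMPLES_PER_PIXEL[self.sampling]
        return samples * self.bit_depth * self.frame_rate / 1e6

hd422 = CodecParams(1920, 1080, 29.97, "4:2:2", 10)
print(round(hd422.uncompressed_mbps()))  # ~1243 Mb/s of active picture
```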

Data rates: uncompressed and Apple ProRes at 1920 x 1080, 29.97fps (Mb/s)

4:4:4 formats:
Uncompressed 12-bit 4:4:4    2237
ProRes 4444 (no alpha)       330

4:2:2 formats:
Uncompressed 10-bit 4:2:2    1326
ProRes 422 (HQ)              220
ProRes 422                   147
ProRes 422 (LT)              102
ProRes 422 (Proxy)           45

Table 1: ProRes 422 formats

Page 5: BY DAVID AUSTERBERRY - Penton Media

4 broadcastengineering.com

THE VIDEO CODING HANDBOOK BroadcastEngineering

Traditional codecs employ bit depths of either 8 or 10 bits. The number of bits used for recording is independent of the number of bits output by the sensor's analog-to-digital converter.

Nevertheless, a camera's dynamic range is a function of sensor performance (low noise is critical), the number of A/D bits and the number of codec recording bits. Each stop requires a doubling of sensor output voltage, and each bit represents a doubling of voltage. Therefore, a 12-bit A/D has the potential to capture a 12-stop dynamic range.

As a camera's bit depth increases, the smoothness of the camera's gray scale increases and banding is reduced. (See Figure 1.) Therefore, the A/D and post-A/D processing traditionally have more bits than necessary to capture the sensor's dynamic range, thereby realizing the sensor's potential.
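The banding argument is just exponential level counts; a short sketch:

```python
# Sketch: gray levels per bit depth. The step between adjacent levels is the
# quantization band that becomes visible as banding at low bit depths.
for bits in (1, 2, 3, 8, 10, 12):
    levels = 2 ** bits
    print(f"{bits}-bit: {levels} levels, step = 1/{levels - 1} of full scale")
```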

Both ProRes 4444 and AVC-Ultra can provide 12-bit sample depth. Alternatively, data can be converted to log values; in this case, 16 bits can be represented by only 10 bits. Thus, when looking at bit-depth specifications, it's important to know whether it's log data.

Consider an illumination range of 18 stops. Assuming older sensor technology, at best only 12 stops can be captured by the sensor. However, these 12 stops are not all usable. At low illumination, several stops are lost because of high levels of noise. Likewise, at high illumination, several stops are lost to clipping under extreme light levels. The effective dynamic range is only about six stops. (See Figure 2.)

In Figure 2, the brown diagonal line shows a perfectly linear gamma. In order for a video signal to be displayed correctly on a monitor, a nonlinear gamma must be applied to the signal from the A/D. In the HD world, it's called Rec. 709 (red curve). This curve provides the video image that we are used to looking at. When video will be transferred to film, a lower-contrast video image is required (blue curve). The "X" marks the point where the filmic curve yields a brighter mid-tone image that reduces apparent contrast.

Consider a contemporary sensor. (See Figure 3.) The illumination range remains the same at 18 stops.

Table 2: AVC-Intra formats

AVC-Intra 50:
• Nominally 50Mb/s; the size of each frame is fixed
• CABAC entropy coding only
• 1920 x 1080 formats are High-10 Intra Profile, Level 4
• 1280 x 720 formats are High-10 Intra Profile, Level 3.2
• 4:2:0 chrominance sampling
• 60Hz video: frames are horizontally downscaled by 3/4 (1920 x 1080 is scaled to 1440 x 1080, while 1280 x 720 is scaled to 960 x 720); 50Hz video is not downscaled
• 10-bit luma and chroma

AVC-Intra 100:
• Nominally 100Mb/s; the size of each frame is fixed
• CAVLC entropy coding only
• 1920 x 1080 formats are High 4:2:2 Intra Profile, Level 4.1
• 1280 x 720 formats are High 4:2:2 Intra Profile, Level 4.1
• 4:2:2 chrominance sampling
• Frames are not downscaled
• 10-bit luma and chroma

Figure 1. Grayscale smoothness as a function of bit depth: 1-bit, 2-bit and 3-bit quantization of the range Vmin to Vmax

Figure 2. Legacy video sensor and processing: an 18-stop illumination range, a 12-stop sensor range and a six-stop usable sensor range; "X" marks the filmic curve's mid-tone point

Figure 3. Contemporary cinema sensor and processing: an 18-stop illumination range, a 15-stop sensor range and a 12-stop usable sensor range; "Y" and "Z" mark the log curve's crossover points

Page 6: BY DAVID AUSTERBERRY - Penton Media

5 broadcastengineering.com

THE VIDEO CODING HANDBOOK BroadcastEngineering

The potential sensor range, however, has increased to 15 stops. Because of improved technology, fewer stops are lost to noise and bright-light clipping. Thus, the sensor is able to capture a usable 12-stop dynamic range.

Once again, the brown diagonal shows a linear gamma curve, and the red curve shows Rec. 709 gamma. To record a 12-stop signal, a 12-bit codec can be employed. Alternatively, some cinema cameras apply a logarithmic gamma (green curve) to the sensor data. At point "Y," the logarithmic curve yields a brighter picture that reduces apparent contrast. Likewise, at point "Z," the logarithmic curve yields a darker picture that also reduces apparent contrast.

This explains why a logarithmic image looks so much "flatter" than a Rec. 709 image. After log conversion, only 10 bits are required to carry the 12-stop signal range.
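The bit saving from log coding can be illustrated with a toy allocation. The 64-levels-per-stop figure below is an assumption for illustration, not a camera specification:

```python
import math

# Sketch: linear coding must double its code values for every additional stop,
# while log coding spends a roughly constant number of code values per stop.
stops = 12                   # usable dynamic range from the text
levels_per_stop = 64         # assumed precision kept inside the darkest stop

# Linear: the darkest stop sits at the bottom of the scale, so preserving
# 64 levels there forces the full scale up to 64 * 2**(stops-1) levels.
linear_bits = math.ceil(math.log2(levels_per_stop * 2 ** (stops - 1)))

# Log: every stop gets the same 64 levels.
log_bits = math.ceil(math.log2(levels_per_stop * stops))

print(linear_bits, log_bits)  # 17 vs 10: log carries the range in far fewer bits
```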

AVC-Ultra

Today's sophisticated sensors demand a recording system that is capable of carrying a much higher-quality image. For this reason, Panasonic has announced AVC-Ultra. AVC-Ultra is backward compatible with AVC-Intra, which means an AVC-Ultra decoder can decompress all of Panasonic's P2 codecs. AVC-Ultra offers several quality levels. (See Table 3.)

The Panasonic AVC-Ultra family defines three new encoding parameters from the MPEG-4 Part 10 standard. Unlike the Intra codecs, Ultra codecs can utilize the AVC/H.264 4:4:4 Predictive Profile.

AVC-Intra Class 50 and 100 are extended to Class 200 and Class 4:4:4. The Class 200 mode extends the bit rate to 226Mb/s for 1080/23.98p, while Class 4:4:4 extends the possible resolution from 720p to 4K, with value depths of 10 and 12 bits. It's possible Class 4:4:4 at 10 or 12 bits with a 4K frame size will be employed in the 4K camera Panasonic showcased at NAB2012. The Class 4:4:4 bit rate varies between 200Mb/s and 440Mb/s depending on resolution, frame rate and bit depth.

There is also a new 8-bit AVC-Proxy mode that enables offline edits of 720p and 1080p video at bit rates varying between 800kb/s and 3.5Mb/s.

Both Class 200 and Class 4:4:4 are intra-frame codecs. Although Panasonic has always promoted intra-frame encoding, its new AVC-LongG is an inter-frame codec. AVC-LongG enables compression of video resolutions up to 1920 x 1080 at 23.98p, 25p and 29.97p. Amazingly, 4:2:2 color sampling with 10-bit pixel depth can be recorded at data rates as low as 25Mb/s. BE

Steve Mullen is the owner of DVC. He can be reached via his website at http://home.mindspring.com/~d-v-c.

Table 3: AVC-Ultra formats

           | Class 4:4:4            | Class 200              | AVC-LongG          | AVC-Proxy
Bit rate   | 200Mb/s to 400Mb/s     | 226Mb/s @ 1080/23.98p  | As low as 25Mb/s   | 800kb/s to 3.5Mb/s
Frame size | 720p, 1080p, 2K, 4K    | 720p, 1080p, 1080i     | 720p, 1080p, 1080i | 720p, 1080p
Bit depth  | 10- to 12-bit at 4:4:4 | 10-bit at 4:2:2        | 10-bit at 4:2:2    | 8-bit at 4:2:0
Codec type | Intra                  | Intra                  | Inter              | Inter


AVC-I: There are benefits to using an intra-based codec for broadcast contribution.
BY PIERRE LARBIER

Broadcast contribution applications like newsgathering, event broadcasting and content exchange currently benefit from the wide availability of high-speed networks. These high-bandwidth links open the way to higher video quality and to distinctive operational requirements, such as lower end-to-end delays or the ability to store the content for further editing.

Because only lighter video compression is needed, the complexity of common long-GOP codecs can be avoided, and simpler methods like intra-only compression can be considered. These techniques compress pictures independently, which is highly desirable when low latency and error robustness are of major importance. Several intra-only codecs, like JPEG 2000 or MPEG-2 Intra, are available today, but they might not meet all broadcasters' needs.

AVC-I, which is simply an intra-only version of H.264/AVC compression, offers a significant bit-rate reduction over MPEG-2 Intra while keeping the same advantages in terms of interoperability. AVC-I was standardized in 2005, but broadcast contribution products supporting it were not launched until 2011. Therefore, it may be seen as a brand-new technology, and studies have to be performed to evaluate whether it matches currently available technologies in operational use cases.

Why intra compression?

Video compression exploits spatial and temporal redundancies to reduce the bit rate needed to transmit or store video content. When exploiting temporal redundancies, predicted pixels are found in already-decoded adjacent pictures, while spatial prediction is built from pixels found in the same picture. Long-GOP compression makes use of both methods; intra-only compression is restricted to spatial prediction.

Long-GOP approaches are more efficient than intra-only compression, but they also have distinctive disadvantages:
• Handling picture dependencies may be complex when seeking in a file. This makes editing a long-GOP file a complex task.
• Any decoding error might spread from a picture to the following ones and span a full GOP. This means that a single transmission error can affect decoding for several hundred milliseconds of video and, therefore, be very noticeable.
• Encoding and decoding delay might be increased using long-GOP techniques compared to intra-only because of compression-tool complexity.
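The second bullet can be quantified with a quick sketch; the GOP lengths below are assumed typical values, not figures from the article:

```python
# Sketch: worst-case visible duration of a single transmission error in
# long-GOP video - corruption can persist until the next I-frame arrives.
fps = 29.97
for gop_length in (12, 15, 30):  # assumed, typical GOP sizes in frames
    worst_case_ms = gop_length / fps * 1000
    print(f"GOP of {gop_length} frames: up to {worst_case_ms:.0f} ms affected")
```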

Another problem inherent to long-GOP compression relates to video quality that varies significantly from picture to picture. For example, Figure 1 depicts the PSNR along the sequence ParkJoy when encoding it in long-GOP and in intra-only mode. While the quality of the long-GOP pictures is always higher than that of their intra-only counterparts, it varies considerably. On the other hand, the quality of consecutive intra-only coded pictures is much more stable.

Therefore, intra-only compression might be a better choice than long-GOP when:
• enough bandwidth is available on the network;
• low end-to-end latency is a decisive requirement;
• streams have to be edited; and
• the application is sensitive to transmission errors.

Several intra-only codecs are currently available to broadcasters to serve the needs of contribution applications:
• MPEG-2 Intra: This version of MPEG-2 compression is restricted to the use of I-frames, removing P-frames and B-frames.
• JPEG 2000: This codec is a significantly more efficient successor to JPEG that was standardized in 2000.
• VC-2: Also known as Dirac Pro, this codec was designed by BBC Research and was standardized by SMPTE in 2009. Like JPEG 2000, it uses wavelet compression.

Engineers have a variety of video coding tools available. Choosing a "best fit" involves making choices between multiple performance factors.

Older codecs like MPEG-2 Intra benefit from a large base of interoperable equipment but lack coding efficiency. On the other hand, more recent formats like JPEG 2000 are more efficient but are not interoperable. Consequently, there is a need for a codec that is at the same time efficient and also ensures interoperability between equipment from various vendors.

What is AVC-I?

AVC-I designates a fully compliant variant of the H.264/AVC video codec restricted to the intra toolset. In other words, it is just plain H.264/AVC using only I-frames. But some form of uniformity is needed in order to ensure interoperability between equipment provided by various vendors. Therefore, ISO/ITU introduced a precise definition in the form of profiles (compression toolsets) in the H.264/AVC standard.

H.264/AVC intra profiles

Provision for using only I-frame coding was introduced in the second edition of the H.264/AVC standard with the inclusion of four specific profiles: High 10 Intra profile, High 4:2:2 Intra profile, High 4:4:4 Intra profile and CAVLC 4:4:4 Intra profile. They can be described as simple sets of constraints over profiles dedicated to professional applications. Table 1 gives an overview of the main limitations introduced by these profiles.

Because the intra profiles are defined as reduced toolsets of commonly used H.264/AVC profiles, they don't introduce new features, technologies or even stream syntax. Therefore, AVC-I video streams can be used within systems that already support standard H.264/AVC video streams. This enables the use of file containers like MPEG files or MXF, transports like MPEG-2 TS or RTP, audio codecs like MPEG Audio or Dolby Digital, and many metadata standards.

AVC-I video quality

Many academic papers have compared the coding efficiency of H.264/AVC in intra-only mode versus other intra codecs. But those performance comparisons are carried out using objective metrics like PSNR or SSIM. (Structural SIMilarity, when referred to as the SSIM Index, is based on measuring three components, luminance similarity, contrast similarity and structural similarity, and combining them into a result value.) It is important to realize that PSNR or SSIM may not reflect actual visual perception. Consequently, studies published to date do not necessarily reflect the visual experience of a given codec in the context of broadcast contribution.
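For reference, PSNR, the objective metric mentioned above, is straightforward to compute. This is a sketch for 8-bit frames with a synthetic, illustrative error pattern:

```python
import numpy as np

# Sketch: peak signal-to-noise ratio (PSNR) in dB between two 8-bit frames.
def psnr(reference: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((reference.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Illustrative use: perturb a quarter of the pixels by one code value.
ref = np.full((1080, 1920), 128, dtype=np.uint8)
dec = ref.copy()
dec[::2, ::2] += 1
print(round(psnr(ref, dec), 1))  # 54.2 dB for this synthetic error
```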

For this reason, we have performed a visual evaluation of various intra codecs intended for broadcast contribution applications. The tests involved a range of products that could encode and decode AVC-I and MPEG-2 Intra at up to 150Mb/s, across multiple vendors, as well as reference software.

Figure 1. PSNR along the ParkJoy sequence: long-GOP versus intra-only compression

Table 1. The H.264/AVC intra profiles (IDR = Instantaneous Decoder Refresh; CAVLC = Context Adaptive Variable Length Coding)

High 10 Intra (based on the High 4:2:2 profile, which targets mainly contribution applications with up to 4:2:2 10-bit pixels): all pictures are IDR (no P or B pictures); limited to the 4:2:0 chroma format (no 4:2:2 chroma format).

High 4:2:2 Intra (based on the High 4:2:2 profile): all pictures are IDR (no P or B pictures).

High 4:4:4 Intra (based on the High 4:4:4 Predictive profile, which targets mainly archiving applications with up to 4:4:4 14-bit pixels): all pictures are IDR (no P or B pictures).

CAVLC 4:4:4 Intra (based on the High 4:4:4 Predictive profile): all pictures are IDR (no P or B pictures); only CAVLC entropy coding.


This investigation was done by expert viewers on a large set of test sequences representative of high-definition broadcast contribution content, mostly interlaced.

The outcome of this evaluation is that two codecs are most suitable for high-bit-rate intra use: AVC-I and JPEG 2000. The detail level appears to be about the same with both codecs at bit rates ranging from 50Mb/s to 150Mb/s. This confirms that the coding efficiency of AVC-I and JPEG 2000 is close. However, the coding artifacts are different.

AVC-I and JPEG 2000 artifacts

Below 100Mb/s, a problematic defect was observed with both codecs: Pictures can exhibit an annoying flicker. This issue is caused by temporal instability in the coding decisions, amplified by noise. It seems to appear below 85Mb/s with JPEG 2000 and below 75Mb/s with AVC-I, and it worsens as the bit rate decreases. At 50Mb/s and below, the flicker is extremely problematic, and it was felt that the video quality was too low for high-quality broadcast contribution applications, even when the source is downscaled to 1440 x 1080 or 960 x 720.

Around 100Mb/s, both codecs perform well, even on challenging content. Pictures are flicker-free, and coding artifacts are difficult to notice. However, noise or film grain looks low-pass filtered, and its structure sometimes seems slightly modified. Even so, this was not felt to be an important issue.

All these defects become less visible as the bit rate is increased. But while AVC-I picture quality rises uniformly, some JPEG 2000 products may still exhibit blurriness artifacts, even at 180Mb/s. Using available JPEG 2000 contribution pairs, a bit rate at which compression is visually lossless on all high-definition broadcast content was not found. On the other hand, some AVC-I encoders appeared visually lossless at 150Mb/s, even when encoding grainy content like movies.

Bit rates in contribution

The subjective analysis of an actual AVC-I implementation on various broadcast contribution content allows us to categorize its usage according to the available transmission bandwidth. Table 2 presents findings for the 1080i25 and 720p50 high-definition formats.

Because AVC-I does not make use of temporal redundancies, 30Hz content (1080i30 or 720p60) is more difficult to encode than 25Hz material. To achieve the same perceived video quality level, bit rates have to be raised by 20 percent.
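Applying that 20 percent uplift to the 25Hz thresholds of Table 2 gives approximate 30Hz figures. This is a sketch derived from the article's rule of thumb, not measured data:

```python
# Sketch: scaling the 25Hz quality thresholds (Table 2) by 20 percent for
# 30Hz content, as the text suggests. All values in Mb/s.
thresholds_25hz = {"acceptable": 75, "good": 90, "excellent": 110, "visually lossless": 150}
thresholds_30hz = {name: round(rate * 1.2) for name, rate in thresholds_25hz.items()}
print(thresholds_30hz)
```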

Table 2. AVC-I bit rate versus quality for 1080i25 and 720p50 content

Bit rate in AVC-I   | Remarks
≤ 50Mb/s            | Video quality is too low for high-quality broadcast contribution applications
50Mb/s - 75Mb/s     | Acceptable on low-noise sources but poor on most sequences
75Mb/s - 90Mb/s     | Acceptable
90Mb/s - 110Mb/s    | Good
110Mb/s - 150Mb/s   | Excellent
≥ 150Mb/s           | Visually lossless

Conclusion

The availability of high-speed networks for contribution applications enables broadcasters to use intra-only video compression codecs instead of the more traditional long-GOP formats. This allows them to benefit from distinctive advantages: low encoding and decoding delays; more constant video quality; easy editing when the content is stored; and lower sensitivity to transmission errors. However, currently available intra-only video codecs require one to choose between interoperability and coding efficiency.

AVC-I, being just the restriction of standard H.264/AVC to intra-only coding, avoids these difficult compromises. It is more efficient than other available intra-only codecs but, more importantly, it benefits from the strong standardization efforts that permitted H.264/AVC to replace MPEG-2 in many broadcast applications.

Finally, a subjective study across a range of products from multiple vendors identified specific coding artifacts that may occur and confirmed the visual superiority of AVC-I over MPEG-2 and JPEG 2000 when measured at high bit rates. BE

Pierre Larbier is CTO of ATEME.



JPEG 2000, from master to archive: The codec provides useful features for broadcasters.
BY JEAN-BAPTISTE LORENT AND FRANÇOIS MACÉ

Today's broadcasters are looking for the highest image quality, flexible delivery formats, interoperability and standardized profiles for interactive video transport and workflows. They also have a vested interest in a common high-end format to archive, preserve and monetize the avalanche of video footage generated globally.

This is the story behind the rapid adoption of JPEG 2000 compression in the contribution chain. Standardized broadcast profiles were adopted in 2010 to match current industry needs (JPEG 2000 Part 1 Amendment 3, Profiles for Broadcast Application, ISO/IEC 15444-1:2004/Amd 3), ensuring this wavelet-based codec's benchmark position in contribution.

In parallel, these broadcast profiles have also filled the industrywide need for compression standards to archive and create mezzanine formats, allowing transcoding to a variety of media distribution channels. The ongoing standardization process of the Interoperable Master Format (IMF) by SMPTE, based on JPEG 2000 profiles, brings the adoption full circle.

The U.S. Library of Congress, the French Institut National de l'Audiovisuel (INA) and several Hollywood studios have selected the codec for the long-term preservation of a century of audiovisual content.

JPEG 2000 is different from other video codecs. MPEG and other DCT-based codecs have been designed to optimize compression efficiency in order to deliver video to viewers via a pipe with limited bandwidth. JPEG 2000, with its wavelet transform algorithm, brings features not only for image compression efficiency, but also to give the user better control and flexibility throughout the image-processing chain. The codec provides unique features that are not available in any other compression method.

JPEG 2000 under the spotlight

JPEG 2000 is based on the discrete wavelet transform (DWT) and uses scalar quantization, context modeling, arithmetic coding and post-compression rate allocation. JPEG 2000 provides random access (i.e., involving minimal decoding) down to the block level in each sub-band, thus making it possible to decode a region, a low-resolution version or a low-quality version of the image without having to decode it as a whole.
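The multi-resolution access described above falls directly out of the wavelet structure. A single-level Haar transform, a simpler relative of the 5/3 and 9/7 filters JPEG 2000 actually uses, shows how a quarter-size approximation appears as one sub-band:

```python
import numpy as np

# Sketch: one level of a 2D Haar wavelet transform. The LL sub-band is a
# half-width, half-height approximation of the frame; LH/HL/HH hold detail.
def haar2d(img: np.ndarray):
    img = img.astype(np.float64)
    lo = (img[:, ::2] + img[:, 1::2]) / 2   # horizontal averages
    hi = (img[:, ::2] - img[:, 1::2]) / 2   # horizontal differences
    ll = (lo[::2] + lo[1::2]) / 2           # low-resolution approximation
    lh = (lo[::2] - lo[1::2]) / 2
    hl = (hi[::2] + hi[1::2]) / 2
    hh = (hi[::2] - hi[1::2]) / 2
    return ll, lh, hl, hh

frame = np.arange(16, dtype=np.float64).reshape(4, 4)
ll, lh, hl, hh = haar2d(frame)
print(ll.shape)  # (2, 2): a quarter-size decode without touching the detail bands
```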

JPEG 2000 is a true improvement in functionality, providing lossy and lossless compression, progressive and parsable code streams, error resilience, region of interest (ROI), random access and other features in one integrated algorithm. (See Figure 1.)

In video applications, JPEG 2000 is used as an intraframe codec, so it closely matches the production workflow, in which each frame of a video is treated as a single unit. In Hollywood, its ability to compress frame by frame has made the technology popular for digital intermediate coding. If the purpose of compression is the distribution of essence and no further editing is expected, long-GOP MPEG is typically preferred.

Broadcast processes

JPEG 2000 brings valuable features to the broadcast process, including ingest, transcoding, captioning, quality control and audio track management. Its inherent properties fully qualify it for creating high-quality intermediate masters.

When JPEG 2000 is used as an intraframe video codec, each frame is treated as a single unit, making it well suited for production applications.

Post-production workflows consist of several encoding/decoding cycles. JPEG 2000 preserves the highest quality throughout this process, and no blocking artifacts are created. Moreover, the technology supports all common bit depths, whether 8, 10, 12 bits or higher.

JPEG 2000 enables images to be compressed in lossy, visually lossless or mathematically lossless modes for various applications. (See Figure 2.) Additionally, its scalability allows a "create once, use many times" approach to service a wide range of user platforms.

The technology also enables improved editing: Even at the highest bit rates, its intrinsic flexibility makes it user-friendly on laptop and workstation editing systems, albeit with a limited number of full-bit-rate real-time video tracks. Improving computing hardware is certain to increase the number of real-time layers.

Because JPEG 2000 is an intraframe codec, errors cannot propagate over multiple frames, and the video signal can be cut at any point for editing or other purposes.

Easy transcoding appeals to high-end applications whose workflows benefit greatly from transcoding to an intermediate version. JPEG 2000 ensures a clean and quick operation when bit rate is at a premium. Professional viewers have labeled correctly transcoded 1080p JPEG 2000 files compressed at 100Mb/s as "visually identical" to the original 2K footage. Furthermore, the wavelet-based JPEG 2000 compression does not interfere with the final, usually DCT-based, broadcast formats.
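Some back-of-envelope arithmetic puts that claim in context. Assuming 10-bit 4:2:2 1080p at 30 frames/s (the frame rate is an assumption, not stated above), 100Mb/s corresponds to only a modest compression ratio:

```python
# Rough arithmetic behind the "visually identical at 100Mb/s" observation,
# assuming 10-bit 4:2:2 1080p at 30 frames/s (assumed figures, not from the text).
width, height, fps = 1920, 1080, 30
bits_per_pixel = 2 * 10            # one luma + one (shared) chroma sample per pixel, 10 bits each
uncompressed_bps = width * height * bits_per_pixel * fps   # ~1.24 Gb/s of active video
ratio = uncompressed_bps / 100e6
assert 12 < ratio < 13    # ~12:1 -- a gentle ratio for an intraframe codec
```

At roughly 12:1, an intraframe wavelet codec has plenty of headroom, which is why viewers struggle to distinguish the result from the source.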

Last but not least, several standards specify in detail how the JPEG 2000 video stream should be encapsulated in a number of widely adopted containers such as MXF or MPEG-2 TS.

Professional wireless video transmission

Wireless transmission in broadcast is often challenged to improve its robustness. Uncompressed HD wireless transmission is seen as complex: Even if a 1080p60 (3Gb/s) transmission were possible wirelessly, it would be quite difficult to add the necessary FEC and encryption to the data stream. Of all the compression algorithms available in the market, JPEG 2000 is seen as one of the top contenders, for the following reasons.

JPEG 2000 is inherently more error-resilient than MPEG codecs. The codestream can be configured so that the most important data (the lowest-frequency data, which contains the most visually significant information) is located at the front, while successively higher-frequency, less important data is placed toward the back. Using appropriate FEC techniques, the lower-frequency data can be strongly protected while less protection is applied to the higher-frequency data, as errors in the higher-frequency bands have much less effect on the displayed image quality.
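That unequal-error-protection idea can be sketched with a toy repetition code (a deliberately crude stand-in for real FEC schemes such as Reed-Solomon; the subband payloads are invented labels, not codestream data):

```python
from collections import Counter

def repetition_encode(data: bytes, copies: int) -> list:
    """Send `copies` identical copies of the payload (a crude FEC stand-in)."""
    return [bytearray(data) for _ in range(copies)]

def repetition_decode(copies: list) -> bytes:
    """Recover the payload by per-byte majority vote across the copies."""
    return bytes(Counter(c[i] for c in copies).most_common(1)[0][0]
                 for i in range(len(copies[0])))

# Unequal error protection: the low-frequency portion of the codestream
# gets three copies; the high-frequency tail is sent once, unprotected.
low_band, high_band = b"LL subband (visually critical)", b"HH subband (fine detail)"
protected = repetition_encode(low_band, 3)
unprotected = bytearray(high_band)

# Simulate channel errors hitting one copy of each portion.
protected[1][0] ^= 0xFF
unprotected[0] ^= 0xFF

assert repetition_decode(protected) == low_band   # critical data survives the hit
assert bytes(unprotected) != high_band            # detail is lost, image degrades gracefully
```

The point is the asymmetry: redundancy is spent where an error would be most visible, which the front-loaded JPEG 2000 codestream makes straightforward.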

Also, as in contribution applications, the low latency of JPEG 2000 would be practically impossible to match in wireless systems using MPEG with long-GOP coding.

Long-term preservation

Broadcasters and video archivists are looking for long-term digital preservation on disk. In most cases, the source material is not digital, but film that needs to be scanned or high-quality analog videotape. A destination digital format must then be selected.

Key requirements often include reducing the storage costs of uncompressed video while still maintaining indefinite protection from loss or damage. Moreover, the format should preferably enable digitized content to be exploited, which means providing flexibility (workflows again) and security. For these reasons, several studies and user reports identify JPEG 2000 as the codec for audiovisual archiving.

Several reasons make JPEG 2000 a codec of choice for audiovisual archiving:
• The JPEG 2000 standard can be used with two different wavelet filters: the 9/7 wavelet filter, which is irreversible, and the 5/3 wavelet filter, which is fully reversible. The 5/3 wavelet filter offers pure, mathematically lossless compression that reduces storage requirements by 50 percent on average while still allowing the exact original image information to be recovered. The 9/7 wavelet filter can encode in lossy or visually lossless modes.
• The scalability, allowing proxy extraction and multiple quality layers, is of huge interest for easing client browsing and retrieval, or transcoding and streaming.
• JPEG 2000 is an open standard that supports every resolution, color depth, number of components and frame rate.
• JPEG 2000 is license- and royalty-free.

Figure 1. Many resolutions and different picture-quality files can be derived from a single JPEG 2000 master: lower-quality versions (automatic bandwidth adaptation), spatial zone extractions (automatic cropping, pan and scan) and lower-resolution versions (automatic proxy and scaling), from 4K down to HD and SD.

Figure 2. Support for lossless (pure or visually) or lossy compression gives the broadcaster more options.

The future

Several initiatives are pushing the industry beyond today's HD: NHK Super Hi-Vision (also called 8K and UHDTV), the Higher Frame Rates in Cinema initiative by James Cameron and Peter Jackson (up to 120fps), 16-bit color depth, and the numerous manufacturers now offering 4K technology.

The need for efficient codecs has gained significant traction in the industry. The future of JPEG 2000 is bright, as it is an open standard that requires less power, consumes less space in hardware implementations and generally delivers greater scalability, flexibility and visual quality than other codecs. An increasing number of manufacturers, broadcasters and producers are using JPEG 2000 implementations to adapt today's industry to these new challenges. BE

Jean-Baptiste Lorent and François Macé are product managers at intoPIX.


The line of compression schemes is stretching out. Very soon, we could potentially have everything from MPEG-2 through to HEVC in use in a single program chain. All this complicates workflows and calls for careful planning to avoid unnecessary transcoding.

Do we need all these compression standards? Well, yes. As picture resolutions increase, the demand for higher-efficiency compression will only increase in step. MPEG-2 started out for standard-definition applications but has been stretched for HD acquisition and delivery.

MPEG-4 was going to be the answer to everything, from small phones up to movie screens. That has only worked by adding a new Part for each new compression scheme. Video started out as Part 2, Visual Objects. That didn't prove much more efficient than MPEG-2, so AVC was born as MPEG-4 Part 10. However, the demands of mobile video and the advent of 4K have led to the need for an even more efficient codec than AVC, and that has come to fruition as HEVC, or H.265.

Where does that leave camera designers? One benchmark in the specification is record time. In the days of tape, shooters came to expect three or four hours of record time on a tape; that's probably one day's work. Wind forward a few years, and writing camera files to memory cards gives a record time somewhere between 30 minutes and two hours, depending on how much compression you use. Some cameras have multiple card slots to give longer record times.

That's going to mean a handful of cards to manage and offload to backed-up disk storage each day. I can hear the film guys thinking "a luxury — we had ten-minute reels. We had to stop, change reels and check the gate before we were off again."

Will cameras use HEVC coding?
BY DAVID AUSTERBERRY

After a period when record times were restricted, the availability of solid-state memory cards, with 128GB and larger capacities, has eased those restrictions.

Camera vendors will design whatever gets the best pictures to sell their cameras. But that has led to all manner of coding schemes and compression formats, and then there is the matter of containers or wrappers. The rise of the single sensor has added a further choice: raw or coded.

Camera designers have to adopt a codec format that meets a number of sometimes conflicting requirements. First, it must meet the quality expectations for the camera, for its price and format. Second, it must not be power-hungry. Third, the data rate must be as low as possible to ease demands on the camera storage cards. And fourth, sometimes a little overlooked, it must be compatible with popular NLEs.

The low data rate demands indicate an efficient codec design, but the more recent the compression format, the more processing power is needed, immediately conflicting with the low-power requirement. Hence the popularity of MPEG-2 long after AVC was released. This is where the big engineering compromise comes in. If the camera has an adequate internal compression format, then uncompressed or raw data can be made available via SDI or HDMI connectors for users who want more of the sensor information. The external recorder has become a common sight, especially with single large-sensor cameras. Many external recorders also allow encoding into an edit format like DNxHD or ProRes, speeding the ingest process in post.

For the broadcaster, all this choice gives flexibility at the production stage, but does not lead to standardization in the workflow. The edit bay must deal with this plethora of formats, a far cry from the days of two primary tape formats, the Betacam family and the DV family. Even a format like AVC I-frame encoding comes in two flavors: Panasonic's AVC-Intra (and AVC-Ultra) plus Sony's XAVC. The former is High 4:2:2 profile, level 4.1; the latter is level 5.2. So much for interoperability.

The drive to support 4K is one reason Sony has adopted level 5.2, as lower levels only support up to 2K resolution, and Panasonic has introduced AVC-Ultra to support higher data rates.
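The level difference comes down to the maximum frame size, counted in 16x16 macroblocks, that each level permits. A small sketch, with MaxFS values transcribed from the H.264 specification's level table (the helper function itself is invented for illustration), shows why 4K needs the higher level:

```python
# MaxFS (macroblocks per frame) per H.264 level, from the spec's Table A-1.
MAX_FS = {"4.1": 8192, "5.1": 36864, "5.2": 36864}

def fits(level: str, width: int, height: int) -> bool:
    """True if a frame of the given resolution fits the level's MaxFS limit."""
    mbs = -(-width // 16) * -(-height // 16)   # ceil-divide to whole macroblocks
    return mbs <= MAX_FS[level]

assert fits("4.1", 1920, 1080)        # 8160 MBs: HD squeaks into level 4.1
assert not fits("4.1", 4096, 2160)    # 34560 MBs: 4K blows past the 4.1 limit
assert fits("5.2", 4096, 2160)        # but fits comfortably at level 5.2
```

Note that MaxFS is only one of several per-level constraints (bit rate and decoded-picture-buffer limits also apply), but it alone rules 4K out of level 4.1.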

The codec you use will be dictated by the camera you choose, so interoperability will remain an issue.


Editing the AVC formats does require a recent NLE workstation, as it demands considerable processing resources. Many editors prefer to work with DNxHD or ProRes, transcoding everything at ingest, which can ease the demands on the power of the workstation.

Many productions are using GoPro cameras for point-of-view shots, but they are not edit-friendly and require a transcode before ingest. For example, the Hero 3 uses AVC level 5.1 long GOP to keep files small in the camera. It also uses an MPEG wrapper, so it may require rewrapping to MXF or QuickTime at the transcode stage. It's just another process that forms part of post-production.

Will there ever be a single codec for cameras? I think not. The requirements of each programming genre are so different. Compare newsgathering with a high-end VFX shoot: One needs small files for backhaul; the other needs as much of the original sensor information as possible. And what of HEVC? So far, it looks set to see application as a distribution codec. The processing resources for encoding do not make it practical for current camera electronics, but maybe one day, if we get to 4K 3-D newsgathering, who knows? BE


At the 2013 NAB Show, the proliferation of next-generation MPEG compression technology, namely HEVC-compatible (aka H.265) software and hardware encoders and decoders, gave the impression that the technology necessary to move and then display large data files containing the highest-quality HD (1080p/60) and 4K (3840 x 2160 pixels for delivery, or 4096 x 2160 pixels for production) content was ready to be deployed. This would clearly improve the value of bandwidth-constrained networks. However, the lack of an industry standard, the need for significantly more processing power to accurately compress such files and the ongoing move toward the current state-of-the-art (and standardized in January) AVC (H.264) technology to distribute full HD 1080p files would seem to make High Efficiency Video Coding (HEVC) a far-off reality.

This is not to say that it won't happen, but don't hold your breath; most people we spoke to at the show said it's at least five years off. The benefit to multiplatform delivery associated with HEVC is said to be a 50-percent improvement in bit-rate efficiency compared with the Advanced Video Coding (AVC) scheme, while maintaining the same image quality or better.
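As rough, illustrative arithmetic (the operating points here are assumptions, not measured figures), the 50-percent claim implies that a 4K HEVC service could fit in roughly twice the bandwidth of a 1080p AVC service:

```python
# Back-of-envelope check of the claimed 50-percent saving, using assumed
# operating points rather than measured data.
avc_1080p = 8.0                 # Mb/s, an illustrative AVC 1080p operating point
hevc_1080p = avc_1080p * 0.5    # the 50-percent efficiency claim
hevc_4k = hevc_1080p * 4        # naive scaling: 4K has 4x the pixels of 1080p
assert hevc_4k == avc_1080p * 2 # a 4K HEVC service ~ twice a 1080p AVC service
```

The naive pixel-count scaling actually overstates the cost, since larger resolutions tend to compress a little more efficiently per pixel, which is exactly what makes HEVC interesting for 4K delivery.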

MPEG HEVC compression not ready for primetime
BY MICHAEL GROTTICELLI

Like H.264 before it, HEVC is the latest version of the MPEG standard. It uses the same idea of recognizing the difference in motion between frames and finding near-identical areas within a single frame. With HEVC, these similarities are subtracted from subsequent frames, and whatever is left in each partial frame is mathematically transformed to reduce the amount of data needed to store each frame. This results in smaller files with nearly the same quality as the 4K original, for example, which makes them easier to distribute and saves operators content storage costs.
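The prediction-plus-residual idea can be demonstrated with a toy Python sketch, using zlib as a crude stand-in for the transform and entropy-coding stages (the "frames" are synthetic byte strings, not real video):

```python
import random
import zlib

rng = random.Random(0)
# Reference frame: noise-like, so it is essentially incompressible on its own.
prev = bytes(rng.randrange(256) for _ in range(4096))

# Next frame: identical except for a handful of changed bytes ("motion").
curr = bytearray(prev)
for i in range(0, 4096, 512):
    curr[i] = (curr[i] + 3) % 256

# Temporal prediction: code only the difference from the reference frame.
residual = bytes((c - p) % 256 for c, p in zip(curr, prev))

# The mostly-zero residual compresses far better than the raw frame.
assert len(zlib.compress(residual)) < len(zlib.compress(bytes(curr)))
```

Real codecs add motion search, block transforms and arithmetic coding on top, but the core gain comes from the same place: the residual carries far less information than the frame itself.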

At the NAB Show, companies such as Elemental Technologies, Ericsson, Fraunhofer Institute, Harmonic, Media Excel, Motorola, Rovi Corp. and Vanguard Video showed hardware and software solutions (in prototype form) and side-by-side demonstrations of H.265 and H.264 encoding to compare image quality results. In all cases, the H.265 decode looked visibly better, even at bit rates as low as 5Mb/s (at the Fraunhofer exhibit booth). Each seemingly challenged the others to a compression/image-quality comparison.

Elemental, which showed the ability to encode 1080p/60 content to HEVC in real time, went so far as to issue a highly public "HEVC Throwdown." The contest dared competitors and customers to bring content on a USB drive, have it encoded in H.265 by Elemental, and then see it simultaneously streamed live (at 500Kb/s) to a tablet device and at 1080p resolution to a high-definition television. Participants were encouraged to visit other video processing companies at NAB to request the same demonstration of capabilities. It wasn't clear how many participated, but it was a small number.

As image resolution and video screen size continue to increase, and viewers increasingly demand their content on multiple devices, engineers will need to use a variety of video coding tools to serve each device or channel.

Elemental Technologies hosted an "HEVC Throwdown" at NAB to compare compression results across multiple delivery platforms.

"Those that only had DVD sources weren't able to attempt the challenge because we didn't have input for that, but those that had USB drives were able to load onto our server and see the transcode process at real time (say, 30fps), and now they have those transcoded files for evaluation in their labs," said Keith Wymbs, vice president of marketing at Elemental.

The need for HEVC encoding is clear, but at what cost?

Because multichannel operators must process hundreds of files simultaneously to accommodate the different display devices, the need for real-time HEVC encoding (where the money in sponsorship is) is key. Most of the companies at the show demonstrated the ability to process on-demand files in real time, but live encoding in real time requires more processing and is harder to accomplish elegantly.

It was said that HEVC requires 10 times more processing than AVC on the encode side, and two to three times more for the decode. That foreshadows the need for sophisticated parallel-processing algorithm designs using multiple general-purpose programmable architectures (GPUs and CPUs) at the same time. Off-the-shelf workstations won't cut it.
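The shape of that parallelism can be sketched with Python's thread pool. This is not a decoder, just an illustration of how HEVC's tiles (independently decodable frame regions, a real feature of the standard) map onto concurrent workers; the placeholder function stands in for the per-tile decoding work:

```python
from concurrent.futures import ThreadPoolExecutor

def decode_tile(tile_id: int) -> str:
    # Stand-in for entropy decoding + reconstruction of one independent tile.
    return f"tile-{tile_id}-decoded"

tiles = range(8)   # a frame split into 8 independent tiles
with ThreadPoolExecutor(max_workers=4) as pool:
    frame = list(pool.map(decode_tile, tiles))

assert frame == [f"tile-{i}-decoded" for i in range(8)]
```

HEVC also defines wavefront parallel processing for row-level concurrency; both tools exist precisely so implementations can spread the 10x workload across many cores.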

"We are now successfully encoding HEVC in software in real time, but it's not easy or cheap to do at this point," said Dustin Encelewski, director of product marketing at Elemental Technologies, which has released a whitepaper on HEVC/H.265. "There's also the issue of a lack of a standard, so each vendor is doing its own software and hardware and experimenting to find the right solution that will work for customers. At the end of the day, the technology has to be configured in such a way that makes it practical and cost-effective to use. As an industry, we're not there yet."

Addressing legacy infrastructure

At the show, Ericsson unveiled its AVP 4000 encoder platform for the delivery of TV services over all networks. The unit uses Ericsson's in-house-developed programmable video processing chip, which it said addresses multiple applications, regardless of codec, resolution or network. (Ericsson launched what it called the world's first HEVC/H.265 encoder, the 5500 HEVC, at IBC last fall.)

"By addressing all applications, codecs, resolutions and profiles, the AVP 4000 single platform addresses legacy equipment while also making it easy to integrate, expand, re-purpose, repair and upgrade," said Matthew Goldman, senior vice president of technology in Ericsson's Solution Area TV group.

He added that customers can use the AVP 4000 for H.264 and JPEG 2000 encoding today and then migrate to HEVC via a simple software upgrade.

"This lowers the overall cost of ownership and makes deployments easier to invest in," he said.

Goldman said many of the same issues that affected the AVC standard in its early deployments will delay full-scale HEVC deployments. HEVC will initially find a home in B2B backhaul and contribution applications.

"At this point, the tricky part is real-time encoding of HEVC, and we believe it can't be done well in software alone," Goldman said. "We want to be in control of our own destiny. That's why Ericsson, for the first time ever, has developed its own compression chip. It's a purpose-built, easily programmable design that includes all our own algorithms that have been developed over years of lab and real-world testing. The Main 10 profile is key to this."

Telcos Verizon in the U.S. and TELUS in Canada are now experimenting with the new AVP 4000 to support future TV platforms with high-quality pictures and services. Therein lies the attraction for operators and other types of content distributors. Better image quality (Ericsson said its is 4x better) often results in more discerning viewers.

Advanced data rate reduction

Harmonic showed a demo in its booth of its ProMedia Live real-time and ProMedia Xpress file-based transcoders using HEVC to compress Ultra HD (4K and higher) content, supporting a variety of real-world scenarios. The point was to show that Ultra HD could be delivered at bandwidths currently used for HD by using HEVC technology instead of AVC. Ultra HD content was compressed with HEVC at 3Mb/s to 7Mb/s using a Broadcom Ultra HD HEVC chipset decoder. Alongside this, a software decoder from NTT Docomo was shown to demonstrate Ultra HD HEVC decoding for the multiscreen delivery market.

"The demo outlined the benefits of Harmonic's superior preprocessing when upconverting HD content for display on an Ultra HD screen," said Ian Trow, senior director of emerging technology and strategy at Harmonic. "The benefits of the (Harmonic) ProMedia adaptive bit-rate multiscreen platform used for HD compression yield significant quality gains when compared to legacy HD content similarly upconverted to Ultra HD."

Ericsson's AVP 4000 encoder employs the company's new compression chip.

Fraunhofer HHI showed a new real-time H.265/MPEG-HEVC video compression engine, in software, that it said it developed with leading players in mobile technology and consumer electronics. At NAB, the research institute's exhibit booth included a demonstration of HEVC decoding of 4K content at 5Mb/s (the lowest bit rate with good quality that I saw on the show floor).

"At Fraunhofer, we are dedicated to developing technologies and standards that not only solve the issues faced by the industry today, but anticipate the needs of the future," said Dr. Thomas Schierl, head of the multimedia communications group at Fraunhofer HHI. "Our role in the development of HEVC and the real-time software decoder showcases our ability to develop innovations that are ahead of the market to enhance the digital media workflow of today and tomorrow."

Fraunhofer HHI’s HEVC real-time software decoder uses an advanced multi-threaded architecture, which the company says makes it very efficient at low bit rates (with low latency).

As part of its HEVC real-time decoder demo, Fraunhofer HHI, part of the Fraunhofer Digital Cinema Alliance, also hosted a series of test suites for HEVC-compliant decoder chips and set-top boxes.

Targeting multiplatform delivery

The Motorola Mobility division of Motorola also showed HEVC encoding and decoding, using a Vanguard software encoder/decoder, at its 2013 NAB Show booth. The company's stated goal was to highlight how the compression efficiency of HEVC facilitates the delivery of high-quality video over bandwidth-constrained networks to multiple platforms. One demo focused on a real-time HEVC encoder delivering streaming content to a Google Nexus 10 tablet for real-time decoding and playback. A second demo showcased real-time HEVC HTTP live streaming to an Apple iPad. A third featured an IP set-top box decoding HEVC.

While the talk of this year's NAB Show was 4K delivery, the technology is not yet ready for real-world deployments. There's too much legacy equipment that continues to offer operators "good enough" quality using AVC and even previous-generation MPEG-2 technology. That's right: MPEG-2 still works well for most types of HD content delivery, whether to TVs, the Internet or mobile devices.

There was talk that several content distributors will experiment with HEVC compression at the upcoming World Cup soccer tournament in Brazil next summer. As TV screens get larger, the need for better compression is clear. But at what cost?

Then, of course, the industry has to settle on a standard way of H.265 coding, which apparently is part of the ATSC 3.0 specification now being developed and planned for initial deployments in 2016. [Patent negotiations are occurring right now. The Joint Collaborative Team on Video Coding of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group (MPEG) is working on it. A Recommendation ITU-T H.265 is now being considered for ratification.] Once a standard is established, vendors will build products to support it. Satellite- and fiber-based contribution should benefit the most in the short term.

Vanguard's HEVC software-based encoder, now in version 265, provides a powerful toolset for offline encoding that is ideally suited for cloud-based encoding implementations.

The consensus is that there's still a lot of improvement being made to the MPEG compression platform, so the delivery of HD, 4K and even higher-quality content to support widespread consumer consumption is possible. It's just a matter of how long it will take. BE