MPEG-H AUDIO ALLIANCE - The Next-Generation System for Interactive and Immersive Sound

MPEG-H Audio at NAB 2015

MPEG-H Audio Alliance Live Sports Remote Broadcast Demonstration

Fraunhofer booth SU 3714 in the South Upper Hall.

  • Demonstration with today’s broadcast equipment adapted for MPEG-H.
  • Live mixing by broadcast professionals producing a realistic sports show.
  • Complete broadcast network from remote truck to home.
  • Interaction with an on-screen display to adjust audio elements.

MPEG-H Audio offers viewers the ability to turn up or down particular audio elements in a program - such as dialogue or sound effects - as they prefer.
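To illustrate how this element-level interactivity might work in a receiver, here is a minimal Python sketch: it applies a viewer-chosen gain offset to each audio element, clamped to a producer-defined range, before mixing. The element names, gain ranges, and mixing step are hypothetical and do not represent the MPEG-H decoder API.

```python
import numpy as np

# Hypothetical program: each element is a mono PCM stem plus the gain range (in dB)
# within which the producer allows the viewer to adjust it.
elements = {
    "dialogue": {"pcm": np.zeros(48000), "min_db": -6.0, "max_db": 9.0},
    "effects":  {"pcm": np.zeros(48000), "min_db": -12.0, "max_db": 3.0},
}

def apply_user_gain(element, requested_db):
    """Clamp the viewer's requested offset to the allowed range and apply it."""
    gain_db = min(max(requested_db, element["min_db"]), element["max_db"])
    return element["pcm"] * 10.0 ** (gain_db / 20.0)

# The viewer turns dialogue up 6 dB and effects down 10 dB; the adjusted stems
# are then summed into the rendered output.
mix = apply_user_gain(elements["dialogue"], 6.0) + apply_user_gain(elements["effects"], -10.0)
```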

The MPEG-H Audio Alliance of Fraunhofer IIS, Qualcomm Technologies, Inc., and Technicolor will showcase the world's first live broadcast demonstration of their new immersive and interactive TV audio system currently proposed for ATSC 3.0 and being developed for over-the-top streaming video services. The demonstration will include:

  • A complete live broadcast signal chain using MPEG-H Audio, from a sports remote, to network operations, to a local affiliate, and then broadcast to a consumer STB for playback. Broadcasters may walk into the NOC and talk to the Alliance's broadcast engineers about how MPEG-H has been integrated into the plant of "The MPEG Network", or visit the local affiliate WMPG-TV and hear a local promo produced in immersive sound.
  • Mixing by our A1s live on the air, just as they do for shows today, except with static objects for dialogue and dynamic objects for sound effects, and, of course, immersive sound. Broadcasters will be able to speak directly with them about their views on using dynamic object panning on the air and how it affects their workload.
  • Playback of a wide range of content formats - the demo will feature content stored on the video server in 5.1, stereo, 5.1 + 4H, and even 7.1 + 4H and HOA. All of this is seamlessly combined with frame-accurate cuts in the audio during automation playout, emission encoding, and decoding in the home.
    All done using existing, unmodified broadcast equipment, except for:
    • Jünger Audio MPEG-H Audio Monitoring and Authoring Units that adapt today's 5.1 consoles for immersive audio production and allow operations to audition MPEG-H audio in the plant as it will be reproduced on viewers' devices. Of course, since the audio signals are carried as embedded PCM on SDI, broadcasters' existing audio panels can be used for confidence monitoring or cues, as well as for stereo or 5.1 monitoring.
    • New AVC video encoders and decoders that include MPEG-H Audio. While the Alliance plans on having commercial hardware at future shows, at NAB it will use Fraunhofer reference designs. We chose H.264 for convenience in this demonstration - MPEG-H Audio will work with H.265/HEVC or other video codecs.
  • Editing of content using standard production tools - Media Composer, Premiere Pro, Pro Tools, etc.

In addition to the live sports show, visitors will also be able to hear Hollywood movies, broadcast in MPEG-H from the original theatrical or near-field mixes. And, of course, they will be able to hear Fraunhofer's 3D Soundbar prototype, demonstrating the vision of bringing immersive sound to mainstream consumers with a decor-friendly installation that is as easy as unboxing and plugging in.

There will also be a Samsung pre-production prototype TV on display decoding MPEG-H Audio, where visitors will be able to adjust the audio elements of a sports show to their preference.

MPEG-H may operate within broadcast plants in two modes: Fixed Mode, using fixed channel assignments and loudness levels as is today's practice, and Control Track Mode, for more complex programming with several channel assignments and audio modes as well as dynamic objects. At NAB, the most advanced mode, Control Track Mode, will be shown, operating through the entire chain with dynamic objects, modes from stereo to HOA and 7.1 + 4H, many different channel assignments, and agile loudness metadata, without any need for "Dolby E or non-audio mode" configuration of the SDI audio channels.
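To make the distinction between the two modes concrete, here is a small, purely illustrative Python sketch: Fixed Mode behaves like one constant metadata record, while Control Track Mode carries per-frame metadata alongside the PCM so the channel assignment, object count, and loudness can change frame-accurately. The field names and values are hypothetical and do not reflect the actual control track syntax.

```python
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    channel_config: str    # e.g. "stereo", "5.1", "7.1+4H", "HOA"
    loudness_lkfs: float   # loudness value carried with the content
    dynamic_objects: int   # dynamic objects active in this frame

# Fixed Mode: one constant configuration, as in today's practice.
FIXED_MODE = FrameMetadata(channel_config="5.1", loudness_lkfs=-24.0, dynamic_objects=0)

# Control Track Mode: metadata travels frame by frame and may change at any
# video frame boundary, e.g. a stereo promo cutting into a 7.1 + 4H show.
control_track = [FrameMetadata("stereo", -24.0, 0)] * 2 + [FrameMetadata("7.1+4H", -24.0, 4)] * 2

for frame_number, meta in enumerate(control_track):
    # Downstream equipment reads the per-frame metadata instead of relying on a
    # fixed, plant-wide channel assignment.
    print(frame_number, meta.channel_config, meta.loudness_lkfs, meta.dynamic_objects)
```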

The demonstration will run on a true broadcast schedule under automation control, repeating at :00, :20, and :40 minutes after the hour, and you'll hear local commercials inserted during a network break and join. Of course, in the living room visitors will be able to hear pre-recorded content on demand and experience the "MPEG Movie Channel" as well as The MPEG Network.

 

MPEG-H Audio Alliance Live Scene-Based Audio Demonstration

Qualcomm Technologies, Inc. booth S201LMR in the South Upper Hall.

Live capture, interactivity, and flexible rendering in an end-to-end, real-time broadcasting workflow with Scene Based Audio (HOA).

This demo will show how the various benefits of Scene Based Audio, also known as HOA, fit into current TV broadcasting practices. Advantages of HOA include live capture, flexible/adaptive rendering, and various interactive features that can be made available to consumers. Come to the demo to see how truly immersive sound can be captured live and transported/distributed through only 7 SDI channels (one more than current 5.1-channel delivery) to be rendered to any number of loudspeakers. We will show every stage of production, from live capture, through transport across a TV plant (NOC to affiliate), and finally through an emission encoder (MPEG-H) to consumers.
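For readers unfamiliar with how a single scene-based (HOA) signal can feed "any number of loudspeakers", the sketch below decodes a first-order ambisonic scene to two different layouts with a simple pseudo-inverse decoder. This is a minimal illustration using assumed layouts and a textbook decoder design, not the MPEG-H renderer, and it uses first order rather than the higher orders carried in the demo.

```python
import numpy as np

def sh_first_order(az, el):
    """First-order spherical harmonics (ACN ordering, SN3D scaling) for a direction in radians."""
    return np.array([
        1.0,                        # W
        np.sin(az) * np.cos(el),    # Y
        np.sin(el),                 # Z
        np.cos(az) * np.cos(el),    # X
    ])

def decoder_matrix(speaker_dirs):
    """Pseudo-inverse ("mode-matching") decoder: one output row per loudspeaker."""
    Y = np.stack([sh_first_order(az, el) for az, el in speaker_dirs])  # shape (L, 4)
    return np.linalg.pinv(Y).T                                         # shape (L, 4)

# The same 4-channel scene decoded to two different layouts.
scene = np.random.randn(4, 48000)                                     # 1 s of a first-order scene
quad = [(np.radians(a), 0.0) for a in (45, 135, -135, -45)]
five_main = [(np.radians(a), 0.0) for a in (30, -30, 0, 110, -110)]   # 5.1 mains, no LFE

quad_feeds = decoder_matrix(quad) @ scene        # (4, 48000) loudspeaker signals
five_feeds = decoder_matrix(five_main) @ scene   # (5, 48000) loudspeaker signals
```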

MPEG-H Audio Alliance Scene Based Audio Info Session

Scene Based Immersive Audio (HOA) for next-generation broadcasting - benefits and characteristics.

Mon. April 13 | 1:00 PM - 3:00 PM | N239 (open to all)
Moderator/Chair: Rich Chernock
Sponsor: Qualcomm

Description:

Channel-, Object-, and Scene-based audio represent three approaches to broadcast audio. Channel-based audio is by far the most pervasive format and is almost synonymous with audio signals. Object-based audio has arguably always been used in the early stages of offline mixing in production studios, and it has recently gained popularity as a cinema format. Scene-based audio is an approach that allows the creation of soundscapes using both audio objects and live distributed microphone feeds. While the origins of Scene-based audio can be traced back to the 1970s, the current state of the technology has evolved significantly - providing substantial benefits for next-generation broadcasting. The recently ratified MPEG-H standard includes support for Scene-based audio (as well as the other two formats) - allowing for a highly efficient means to store and transmit the format independently or in combination with the other two formats. This session will describe Scene-based audio - its benefits and characteristics, how it is used to create content, and how it fits into existing US broadcasting frameworks. A panel of industry experts will provide valuable insight into the use of the format.
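As a concrete (and deliberately simplified) illustration of the scene-based idea, the Python sketch below encodes two hypothetical sources - a panned object and a spot-microphone feed - into one set of first-order spherical-harmonic coefficient signals. The directions, signals, and first-order truncation are assumptions for illustration only; they are not drawn from the MPEG-H specification.

```python
import numpy as np

def encode_first_order(mono, az, el):
    """Encode a mono signal at (azimuth, elevation), in radians, into 4 coefficient channels (ACN/SN3D)."""
    y = np.array([
        1.0,                        # W
        np.sin(az) * np.cos(el),    # Y
        np.sin(el),                 # Z
        np.cos(az) * np.cos(el),    # X
    ])
    return y[:, None] * mono[None, :]            # shape (4, samples)

fs = 48000
commentary = np.random.randn(fs)                 # hypothetical dialogue object at front centre
crowd_mic = np.random.randn(fs)                  # hypothetical spot mic, left and slightly elevated

scene = (encode_first_order(commentary, az=0.0, el=0.0)
         + encode_first_order(crowd_mic, az=np.radians(60), el=np.radians(20)))

# 'scene' describes the soundfield itself, with no loudspeaker layout attached;
# a renderer can map it to headphones, a soundbar, 5.1, or a with-height layout.
```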

Presentations:

  1. What is Scene Based Audio - Nils Peters (Qualcomm).
  2. Scene Based Audio from the perspective of an Oscar winning mixer - Ben Wilkins (Technicolor).
  3. Equipment for Scene Based Audio - Gary Elko (mhAcoustics).
  4. How Scene Based Audio fits into current and next generation broadcasting workflow - Merrill Weiss and Deep Sen (Merrill Weiss Group & Qualcomm).

The presentations will be followed by a Q&A panel session.

URL: https://www.qualcomm.com/mpeg-h-scene-based-audio
