Streaming Video 2.0: An Innovative Approach

(CORDIS) — Streaming video is everywhere these days: in your home, in your office and, increasingly, on your smartphone. But it is far from perfect. Whether being used for entertainment, telepresence or video calling, streaming video remains bandwidth-hungry and error-prone. EU-funded researchers are making the viewing experience smoother, crisper and easier on any device.

Thanks in large measure to video-sharing websites and video-calling applications, streaming video has undergone explosive growth in recent years. It already accounts for around half of all internet traffic and is forecast to exceed 80 percent within the next few years.

And while much of this will still be watched on PCs and televisions in homes and offices, a great deal more will be viewed on an increasingly wide variety of mobile devices: everything from the two-inch screen of the smallest smartphone to a nine-inch tablet or a 21-inch laptop. More significantly, much of it will be delivered wirelessly, over Wi-Fi, WiMAX or, especially, 3G and 4G mobile networks, and it will come in a wide range of shapes and sizes, with different formats, codecs, frame rates, qualities and so on.

This mix of heterogeneous devices, delivery methods and formats for different multimedia content is the greatest challenge facing engineers as they try to meet rising demand for audiovisual applications and services from increasingly mobile users.

It’s a tall order, but one that researchers working in the EU-funded ‘Optimisation of Multimedia over wireless IP links via X-layer design’ (Optimix) project have shown can be fulfilled by a fundamental rethink of the way streaming video is coded and delivered.

The Optimix researchers developed innovative solutions to enable enhanced video streaming in a point-to-multipoint context for IP-based wireless heterogeneous systems. Tests showed that the resulting improvements in video quality are substantial, to the point where, in many cases, viewers were unable to differentiate between the streamed video and the original video source.

“Any user should benefit from a seamless experience of multimedia applications and services. The integrity of the transmitted media should be maintained anywhere and anytime,” said Roberta Fracchia of the Optimix project, a programme manager at Thales Communications and Security in France. “Connected consumers will also expect to move content freely and easily between their devices and share content with friends. They will require that content can be controlled and played back in a future-proof format on any device, regardless of the device used to capture, store or edit the content.”

“Today’s approach relies on traditional separation techniques, but focusing on services delivered over homogeneous networks will not meet the ongoing demands to maintain the required quality of service for users who have different needs and requirements,” Dr. Fracchia notes.

Making the best of network and device resources

Ever since the first video was streamed, separation between the coding, the transmission medium and the end device on which it is viewed has been standard practice. The approach works fine over homogeneous point-to-point networks (such as from a server to a home PC), but it cannot meet the demands of a point-to-multipoint delivery environment in which networks, devices and content are increasingly heterogeneous.

In grossly simplified terms, streaming video under the separation approach involves two independent codes: a source code that compresses the content into a format such as MPEG for transmission, and a channel code, applied at the physical layer, that corrects errors such as those caused by noise or fluctuations in bandwidth. Typically, source coding and channel coding are performed separately: essentially, the left hand does not know what the right hand is doing.
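
As a concrete, if toy-sized, illustration of that separation, the Python sketch below keeps the two stages completely independent: zlib compression stands in for the source codec and a three-fold repetition code for the channel code. Both codes and the sample data are inventions for this sketch, not what real streaming systems use.

```python
import zlib

def source_encode(frame_bytes: bytes) -> bytes:
    # Source coding: compress the content, knowing nothing about the channel.
    return zlib.compress(frame_bytes)

def channel_encode(payload: bytes) -> bytes:
    # Channel coding: a crude repetition code (each byte sent three times),
    # knowing nothing about the content it protects.
    return bytes(b for byte in payload for b in (byte, byte, byte))

def channel_decode(received: bytes) -> bytes:
    # Majority vote over each group of three repeated bytes.
    out = bytearray()
    for i in range(0, len(received), 3):
        a, b, c = received[i:i + 3]
        out.append(a if a in (b, c) else b)
    return bytes(out)

def source_decode(payload: bytes) -> bytes:
    return zlib.decompress(payload)

frame = b"example video frame data" * 10
sent = channel_encode(source_encode(frame))
# ... transmission may corrupt 'sent' here ...
assert source_decode(channel_decode(sent)) == frame
```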

The upshot is that the quality of the streamed video is not always optimal for the device you are viewing it on, because the source knows nothing about the device or the network capabilities; nor can you seamlessly pass the video stream from one device to another, such as from your home PC to your mobile device when you go out. For that to happen, each ‘hand’ needs to know what the other is doing.

A first step in that direction was taken through the refinement of a technique called Joint Source Channel Coding/Decoding (JSCC/D) by the Phoenix project, a predecessor to Optimix. The Phoenix researchers developed a comprehensive joint source and channel coding and decoding system able to adjust coding on the fly, offering either better quality for the same bandwidth or the same quality over lower bandwidth.
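
A minimal Python sketch of that kind of on-the-fly adjustment is given below; the SNR thresholds and the bit-budget splits are invented for illustration and do not reflect the Phoenix project's actual JSCC/D algorithms.

```python
def pick_coding(snr_db: float, budget_kbps: int) -> dict:
    """Toy joint source/channel rate split over a fixed bit budget:
    a noisy channel shifts bits towards error protection (FEC),
    a clean channel shifts them towards video quality."""
    if snr_db < 10:        # poor channel: heavy protection, coarser video
        fec_overhead = 0.50
    elif snr_db < 20:      # moderate channel
        fec_overhead = 0.25
    else:                  # clean channel: light protection, best video
        fec_overhead = 0.10
    source_kbps = int(budget_kbps * (1 - fec_overhead))
    return {"source_kbps": source_kbps, "fec_kbps": budget_kbps - source_kbps}

print(pick_coding(snr_db=8, budget_kbps=1000))   # {'source_kbps': 500, 'fec_kbps': 500}
print(pick_coding(snr_db=25, budget_kbps=1000))  # {'source_kbps': 900, 'fec_kbps': 100}
```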

The Optimix researchers took that joint approach even further by making all the major elements in the transmission chain – from video coding and networking modules to the MAC layer and physical layer – communicate together through the use of joint controllers at the server side and mobile unit observers at the client side.

On the server side, streaming is controlled by a Master Application Controller, while at each base station a Base Station Controller adapts the data flow to local conditions. These are coupled with a Triggering Engine that communicates between the server, base stations and clients.

By jointly optimising all the transmission layers, the controllers tune the multimedia stream parameters to deliver the best performance achievable for each receiver, making efficient use of the available network and device resources.
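
As a rough sketch of what such cross-layer coordination might look like, the hypothetical controller below fuses feedback from the application, network and physical layers into one set of stream parameters. Every name, metric and threshold here is an assumption made for the illustration, not the Optimix controllers themselves.

```python
from dataclasses import dataclass

@dataclass
class LayerReport:
    """Feedback gathered from one layer of the transmission chain."""
    layer: str    # e.g. "APP", "NET" or "PHY" (hypothetical labels)
    metrics: dict

def joint_controller(reports: list[LayerReport]) -> dict:
    """Fuse cross-layer feedback into one parameter set, instead of
    letting each layer adapt in isolation."""
    m = {r.layer: r.metrics for r in reports}
    params = {"bitrate_kbps": 2000, "fec_overhead": 0.1, "frame_rate": 30}
    if m["PHY"]["snr_db"] < 12:           # weak radio link: protect harder
        params["fec_overhead"] = 0.3
    if m["NET"]["loss_rate"] > 0.02:      # congested path: back off bitrate
        params["bitrate_kbps"] = 1000
    if m["APP"]["screen_height"] <= 480:  # small screen: spend fewer frames
        params["frame_rate"] = 24
    return params

print(joint_controller([
    LayerReport("PHY", {"snr_db": 9.5}),
    LayerReport("NET", {"loss_rate": 0.05}),
    LayerReport("APP", {"screen_height": 480}),
]))  # {'bitrate_kbps': 1000, 'fec_overhead': 0.3, 'frame_rate': 24}
```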

“The increased communication capabilities foreseen for future mobile devices are combined with widely differing specifications and parameters regarding their multimedia presentation capabilities (e.g. multimedia processing power, screen size or audio capabilities). Such parameters will determine the multimedia quality that the mobile device is able to achieve,” Dr. Fracchia says.

Tests using content encoded with the MPEG-4 standard demonstrated remarkable improvements following the implementation of the Optimix system. The researchers looked in particular at ‘peak signal-to-noise ratio’ (PSNR), the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation, measured in decibels (dB).
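
Concretely, PSNR is computed as 10·log10(MAX²/MSE), where MAX is the largest possible pixel value (255 for 8-bit video) and MSE is the mean squared error between the original and received frames. A small self-contained Python example, using made-up pixel values:

```python
import math

def psnr_db(original: list[int], received: list[int], max_value: int = 255) -> float:
    """PSNR = 10 * log10(MAX^2 / MSE), in decibels."""
    mse = sum((o - r) ** 2 for o, r in zip(original, received)) / len(original)
    if mse == 0:
        return float("inf")  # identical frames: no corrupting noise at all
    return 10 * math.log10(max_value ** 2 / mse)

clean = [100, 120, 140, 160]   # hypothetical original pixel values
noisy = [102, 118, 143, 158]   # the same pixels after lossy transmission
print(round(psnr_db(clean, noisy), 1))  # ~40.9 dB
```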

Using the Master Application Controller by itself, 90 percent of frames had a PSNR above 25dB, while a fully optimised scheme resulted in 90 percent of frames above 30dB – far better than traditional streaming techniques. Test viewers at Kingston University in London and the University of Budapest who were asked to watch both the original source video and the streamed version often could not tell any difference between them.

“Results showed that with all the considered video sequences the advantages of the Optimix solutions are evident,” Dr. Fracchia says.

On the basis of their work, the Optimix researchers are contributing to several audiovisual standards bodies, including the Joint Collaborative Team on Video Coding (JCT-VC) of ISO/IEC MPEG and ITU-T VCEG, and the Internet Engineering Task Force (IETF).

They have also launched a follow-up project, Concerto, which focuses on optimising wireless video streaming for healthcare applications, where reliability, video quality and delay are critical to ensuring an accurate medical diagnosis.

Optimix received research funding of EUR 3.71 million under the Networked Media strand of the European Commission’s Seventh Framework Programme (FP7).
