
Report

Teaser, summary, work performed and final results

Periodic Reporting for period 2 - ImmersiaTV (Immersive Experiences around TV, an integrated toolset for the production and distribution of immersive and interactive content across devices.)

Teaser

The majority of European TV consumers now watch TV programs in a multi-display environment. Second screens (mostly smartphones, tablets or laptops) are generally used to check information not directly related to the events in the TV content being watched. As a result, the...

Summary

The majority of European TV consumers now watch TV programs in a multi-display environment. Second screens (mostly smartphones, tablets or laptops) are generally used to check information not directly related to the events in the TV content being watched. As a result, the attention of the audience is generally divided between these different streams of information. Broadcasters have tried to orchestrate all these different rendering platforms to complement each other consistently. However, their success has been limited due, at least in part, to the very different formats in which information is delivered (web-based texts, mobile apps, traditional broadcast television, etc.).

In this context, the arrival of immersive head-mounted displays on the consumer market introduced new possibilities, but also new challenges. Immersive displays impose radically different requirements compared to traditional broadcast TV and social media. They require a constant, frequently refreshed, omnidirectional audiovisual stream that integrates sensorimotor information: at a minimum, the visual perspective rendered must change consistently with changes in head position and rotation. In addition, immersive displays challenge the conventions of traditional audiovisual language. For example, cuts between shots, which constitute the very basic fabric of traditional cinematic language, do not work well in immersive displays. From the user's perspective, omnidirectional TV offers a new experience and a different way of engaging with audiovisual content.
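As an illustration of this sensorimotor coupling (a minimal sketch, not taken from the ImmersiaTV toolset; the function names, frame size and field-of-view values are assumptions), the lines below map a head-mounted display's yaw and pitch to the region of an equirectangular 360 frame that would fill the viewport:

    def viewport_center(yaw_deg, pitch_deg, frame_w, frame_h):
        # An equirectangular frame spans 360 deg horizontally and 180 deg vertically,
        # so head yaw/pitch map linearly to pixel coordinates.
        u = (yaw_deg % 360.0) / 360.0 * frame_w
        v = (90.0 - pitch_deg) / 180.0 * frame_h
        return u, v

    def viewport_window(yaw_deg, pitch_deg, frame_w, frame_h,
                        hfov_deg=90.0, vfov_deg=90.0):
        # Approximate the user's field of view as a crop window around that centre;
        # a real renderer would reproject onto a sphere rather than crop.
        cx, cy = viewport_center(yaw_deg, pitch_deg, frame_w, frame_h)
        w = hfov_deg / 360.0 * frame_w
        h = vfov_deg / 180.0 * frame_h
        return cx - w / 2.0, cy - h / 2.0, w, h

    # Head turned 30 deg to the right and tilted 10 deg up, on a 3840x1920 frame.
    print(viewport_window(30.0, 10.0, 3840, 1920))

Because yaw and pitch change with every head movement, a window like this has to be recomputed for every rendered frame, which is what makes the constant, frequently refreshed stream mentioned above necessary.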

To address this new context, ImmersiaTV has explored new forms of digital storytelling and broadcast production by putting omnidirectional video at the center of the creation, production and distribution of content, delivering an all-encompassing experience that integrates the specificities of immersive displays within the contemporary living room. We have proposed a form of broadcast omnidirectional video that offers end-users a coherent audiovisual experience across head-mounted displays, second screens and the traditional TV set, instead of having their attention divided across them. This new experience seamlessly integrates with, and further augments, traditional TV and second-screen consumer habits. In other words: the audience is still able to watch TV sitting on the couch, or tweet comments about it. However, the audience is also able to use immersive displays to feel as if it were inside the audiovisual stream.

The primary goal of ImmersiaTV has been to create an end-to-end toolset for the creation of multiscreen immersive experiences, addressing the different phases of content creation: ideation, production, distribution and consumption. This resulted in the five main project objectives described here:

Objective 1, to create a new immersive cinematographic language.

Objective 2, to adapt the production pipeline.

Objective 3, to re-design the distribution chain.

Objective 4, to maximize the quality of the end-user and professional-user experience.

Objective 5, to maximize the market impact of the ImmersiaTV solutions and to ensure ImmersiaTV has a determining impact on the European and global audiovisual market.

Work performed

ImmersiaTV has created a platform for omnidirectional video content production and delivery that offers end-users a coherent audiovisual experience across head-mounted displays, second screens and the traditional TV set, instead of having their attention divided across them. The work performed in the project has resulted in an end-to-end toolset that enables professionals to create and distribute 360 video experiences: omnidirectional video capture, immersive production tools for live and on-demand content supporting omnidirectional and directive videos, adaptive content coding and delivery, synchronization mechanisms, and omnidirectional video players. As a result of the work done in the project on content ideation, different immersive experiences have been created. ImmersiaTV planned to run three pilots: the first focusing on pre-recorded content, the second on live content, and the third, conceived as an iteration of the previous two, bringing the total to four different content typologies. This variation in content typologies helped the consortium understand the particularities of each scenario and the production and technical requirements corresponding to each production.
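The synchronization mechanism mentioned above can be illustrated with a generic sketch (not the project's actual implementation; the shared clock source, drift tolerance and rate values are assumptions): every player derives its target media position from a shared wall-clock reference and slightly speeds up or slows down when it drifts away from it.

    import time

    class SyncedPlayer:
        # Illustrative inter-device playout sync: all devices agree on the wall-clock
        # instant at which media position 0 started (distributed over the network)
        # and steer their local playback towards that common timeline.

        def __init__(self, start_wallclock, tolerance=0.040):
            self.start_wallclock = start_wallclock  # shared reference instant (seconds)
            self.tolerance = tolerance              # allowed drift before correcting (s)
            self.position = 0.0                     # current local media position (s)
            self.rate = 1.0                         # current playback rate

        def target_position(self):
            # Where the shared timeline says playback should be right now.
            return time.time() - self.start_wallclock

        def adjust_rate(self):
            drift = self.position - self.target_position()
            if abs(drift) <= self.tolerance:
                self.rate = 1.0   # close enough: play at normal speed
            elif drift > 0:
                self.rate = 0.97  # ahead of the timeline: slow down slightly
            else:
                self.rate = 1.03  # behind the timeline: speed up slightly
            return self.rate

    player = SyncedPlayer(start_wallclock=time.time() - 12.0)  # joined 12 s into the programme
    player.position = 11.5                                     # local playback is 0.5 s behind
    print(player.adjust_rate())                                # -> 1.03 (catch up)

Large drifts, for example after a stall, would typically be corrected with a seek rather than a rate change.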

Final results

Although ImmersiaTV is an innovation action type of project, relevant progress beyond the state of the art has been achieved in different components of the content chain:

-Capture: the Videostitch partner delivered one of the first 360 cameras on the market that embedded a stitching process and a binaural audio microphone, providing standard H.264 and H.265 output streams (unlike other solutions available at the time, such as Nokia's Ozo, which required proprietary software to manage and edit the output streams). In addition, the other capture partners (IMEC-Uni Hasselt and Azilpix) improved the usability, capacity and transcoding features of the Studio.One system. Studio.One is a black-box video capture solution that can manipulate multiple video streams (conventional, panoramic and 360) and provides a set of functionalities (image manipulation, transcoding, camera sync, etc.) that facilitates, among other things, the use of multiple 360 cameras, not only for the creation of 360 content but also for the creation of directive content derived from 360 capture devices.

-Production: i2CAT, PSNC and CGY have researched and developed new production tools for multiscreen environments that allow the seamless integration of directive and omnidirectional content in editing or live production scenarios. To the best of our knowledge, there are still no other tools that can achieve such results. Both tools allow content creators to design multiscreen interactive and immersive experiences.

-Encoding: EPFL developed a new methodology for saliency estimation in 360 content that can improve compression efficiency for this type of content by taking into account the relative importance of different areas of a video frame when observed by human subjects (a generic sketch of this idea is given after this list). The results were presented at ICME 2017 (https://ieeexplore.ieee.org/document/8026231/).

-Distribution: i2CAT has analyzed different omnidirectional projections and tested them in different content scenarios. A novel 360 streaming method has been implemented and evaluated. It is based on dividing the cube (in a cubic projection of the 360 video) into two (H.264) tiles, adaptively streaming them based on the user's viewpoint, and playing them out in a synchronized manner in web-based players (a sketch of the tile-selection logic is also given after this list). The results were presented at the NOSSDAV workshop in 2018 (https://dl.acm.org/citation.cfm?id=3210456&dl=ACM&coll=DL).
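Regarding the encoding result above: the EPFL method itself is described in the cited ICME paper and is not reproduced here, but the general idea of letting a saliency map bias bit allocation can be sketched as follows (a hypothetical illustration; the function name, block grid and QP values are assumptions):

    import numpy as np

    def qp_from_saliency(saliency, base_qp=32, max_offset=6):
        # saliency: per-block values in [0, 1], higher = more likely to be looked at.
        # Salient blocks get a lower quantization parameter (finer quantization, more
        # bits); offsets are centred so the average QP stays close to base_qp.
        saliency = np.clip(saliency, 0.0, 1.0)
        offsets = max_offset * (saliency.mean() - saliency)
        return np.round(base_qp + offsets).astype(int)

    # A 2x3 grid of blocks where the centre of the frame attracts most attention.
    saliency = np.array([[0.1, 0.8, 0.2],
                         [0.1, 0.9, 0.2]])
    print(qp_from_saliency(saliency))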
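The viewport-adaptive, two-tile streaming described under Distribution can similarly be sketched (again illustrative; the tile layout, bitrates and 90-degree threshold are assumptions): the tile covering the current viewing direction is requested at a high bitrate and the other tile at a low one.

    def select_tile_bitrates(yaw_deg, front_tile_center_deg=0.0,
                             high_kbps=8000, low_kbps=1500):
        # Two H.264 tiles obtained by splitting a cube-projected 360 video, e.g.
        # front/left/right faces vs. back/top/bottom faces. The tile facing the
        # viewer gets the high bitrate; the other gets the low one.
        delta = (yaw_deg - front_tile_center_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= 90.0:
            return {"front_tile": high_kbps, "back_tile": low_kbps}
        return {"front_tile": low_kbps, "back_tile": high_kbps}

    # Viewer looking 150 degrees away from the front tile: the back tile is upgraded.
    print(select_tile_bitrates(150.0))

Whenever the head orientation crosses the tile boundary, the player switches which tile it downloads at high quality and keeps the two streams frame-synchronized at playout.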

Website & more info

More info: http://www.immersiatv.eu.