Exhibition Information
NHK (Japan Broadcasting Corporation)
Science & Technology Research Laboratories
http://www.nhk.or.jp/strl/open2016/en/
I would like to express my gratitude for your cooperation with and support of the Japan Broadcasting Corporation (NHK).
NHK Science & Technology Research Laboratories (STRL) was established in Setagaya Ward, Tokyo, in 1930, five years after the start of radio broadcasting in Japan, as a center for broadcasting technology research, and it embarked on research into TV broadcasting. Since then, STRL has continuously been at the forefront of cutting-edge broadcasting technologies and has contributed to the development of broadcasting technology both in Japan and around the world through its pioneering efforts on satellite broadcasting, HDTV, and digital broadcasting services.
To implement broadcasting and services of the highest standard by 2020, the year of the Tokyo Olympic and Paralympic Games, NHK is actively working on the creation of services that can offer viewers new value. Test broadcasting over satellite channels of Super Hi-Vision, the next generation of broadcasting delivering a strong sense of presence, on which STRL began R&D in 1995, is scheduled to start on August 1 of this year. This will be an important step toward the launch of broadcasting to the public in 2018.
Furthermore, STRL is looking ahead to the media environment in 20 years' time, when Super Hi-Vision is expected to be widely used. We plan to focus on research on technologies to deliver more enriched and extensive content to everyone, including the elderly and people with disabilities, through diverse channels using broadcasting and the Internet, as well as on new broadcasting services such as 3D television.
This year's NHK STRL Open House marks its 70th anniversary. It introduces you to 27 of our latest research achievements, featuring five technologies: the ever-evolving "Super Hi-Vision," "Internet utilization technology" to provide new broadcasting services, "smart production" technology for producing content by utilizing user-friendly technologies, "3D television" for providing natural 3D images, and "Future devices" that support advanced broadcasting systems. We are making progress toward our goal of creating broadcasting and services that open up new possibilities.
I sincerely hope that you will continue to support our activities in the future.
May 2016
Toru Kuroda
Director of NHK Science & Technology Research Laboratories
Greetings
Floor MAP
[Floor-map graphic: tour route through the 1F and B1F exhibition areas and the 7F auditorium. Zones: A Super Hi-Vision (with Experience Zone and 8K Super Hi-Vision Theater), B Internet Technology for Future Broadcast Services, C Smart Production, D 3D Television, E Future Devices. Also marked: entrance hall, auditorium (lectures, special presentations, research presentations), lounges, questionnaires, guided-tour reception (Sat./Sun.), craft workshop (Sat./Sun.), live video (Thu.), NHK Museum of Broadcasting, NHK Engineering System, the Engineering Administration Department, restrooms, and elevators.]
Integral 3D Television
3D Television
Depth-compressed Expression Technology
Device Technologies for Future 3D Display
Optical Beam Steering Device
Future Image Sensor Technologies
Solid-state Image Sensor Overlaid with Photoelectric Conversion Layer
Organic Image Sensors
Elemental Technologies for Sheet-type Displays
Solution-processed Oxide TFTs (Thin-film Transistors)
Inverted OLED (Organic Light-emitting Diode)
Driving Technology for Enhanced Video Image Quality and Longer Lifetime
Magnetic Nanowire Memory
Utilization and Development of NHK’s Technologies
Live Streaming Service for Smartphones
The Time Has Come to Launch Super Hi-Vision Broadcasting
IEEE Milestones Awarded for NHK's Technical Achievements
8K-HDR Live Program Production
New Development of Super Hi-Vision
Technologies to Realize a "New Television Experience" Spread by the Internet
Internet Technology for Future Broadcast Services
Developments Toward Full-featured Super Hi-Vision
New User Experience by Hybridcast for a Live Sport Program
Encryption Scheme for Privacy Protection
Full-resolution Single-chip 8K Camera System
Video Distribution Technologies Adapted to Diverse Viewing Style
Content Search Behaviors on Time Shift Zapping System
Holographic Memory for Archival Use
Scene Text Detection to Automatically Generate Metadata for Videos
Smart Production
Three-dimensional Sound Production Equipment
Studio Robot for Joint Performance with CG
Retransmission Technology of Super Hi-Vision Satellite Broadcasting for Cable TV Networks
Automatic Sign Language Animation System to Express Weather Warnings
Advanced CG Sign Language Technology
8K/4K Video Coding System with Super-resolution Reconstruction
News Service with Reading Assistance
Next-generation Terrestrial Broadcasting Systems
Haptic Presentation Technology for Conveying 3D Shapes of Objects
Delivery Technology Using MMT for 8K Super Hi-Vision
Three-dimensional Information Analysis for Live Sports Graphics
8K Super Hi-Vision Wireless Links for Program Contribution
Experience Full-featured 8K Super Hi-Vision
Let’s Move and Watch! Integral 3D Quiz
Immerse Yourself in a 3D Sound System
Augmented TV Ultra-reality Meter
Future Video Coding Technologies
Super-resolution Technique for Full-featured 8K Video
Exhibition List
[Exhibit-code map legend for 1F and B1F: booths A1–A10 and posters A-P1–A-P3 (A: Super Hi-Vision), B1–B3 and B-P1–B-P2 (B: Internet Technology for Future Broadcast Services), C1–C6 and C-P1 (C: Smart Production), D1–D2 and D-P1–D-P2 (D: 3D Television), E1–E3 and E-P1–E-P5 (E: Future Devices), F1–F4, and experience exhibits T1–T4.]
Open House 2016
STRL's latest research and development are centered on 8K Super Hi-Vision, Internet technology for future broadcast services, and 3D television. These three focal points are complemented by additional research, including the development of smart production technologies for more viewer-friendly broadcasting services and the creation of future devices aimed at improving broadcast equipment. These areas of study are part of our effort to remain relevant and to evolve further as public media in appropriate, state-of-the-art ways.
The exhibit at the entrance hall is an overall introduction to how STRL foresees the future of broadcast services. Visitors are invited to explore the various exhibition booths that follow, which illustrate each of the elemental technologies in detail.
Entrance Hall Exhibition
[Entrance-hall diagram: evolving to "public media" — 8K test satellite broadcasting (2016), 8K satellite broadcasting, and widespread 8K adoption, followed by construction of a 3D Television system around 2030; supported by Super Hi-Vision (higher quality), Internet Technology for Future Broadcast Services (advanced functions), Smart Production, 3D Television, and Future Devices.]
Outline
Features
Future plans
The History and Future of 8K Super Hi-Vision
We will continue our R&D to enhance Super Hi-Vision quality by improving the production system, the reproduction system and display, transmission and various other devices.
※ OLED: Organic Light-Emitting Diode. An organic device that emits light through the recombination of holes and electrons.
● Full-featured 8K Super Hi-Vision
We are showcasing the technology behind our full-featured 8K Super Hi-Vision, which features a frame frequency of 120 Hz, a wide color gamut, HDR (high dynamic range), and 22.2 multichannel sound.
● Next-generation terrestrial broadcasting
We are exploring how to deliver Super Hi-Vision to homes over terrestrial broadcasting. We are studying ways of sending more information than the current ISDB-T allows, as well as a combination of next-generation modulation and video encoding technologies.
● Sheet-type OLED※ display
Large, ultrathin, and ultralight displays will be the key to encouraging viewers to enjoy Super Hi-Vision at home. We are working on a novel sheet-type OLED display that will make the immersive images of Super Hi-Vision part of daily life.
August 1st, 2016, will mark the start of 8K and 4K Super Hi-Vision test satellite broadcasting in Japan. We plan to begin satellite broadcasting by 2018 and achieve widespread adoption by 2020, the year of the Olympic and Paralympic Games in Tokyo, while we continue our attempts to bring an even better, full-featured Super Hi-Vision to the world.
Bringing "Ultimate TV" to the home
A: New Development of Super Hi-Vision
[Timeline graphic, 1995–2020: start of research on an ultra high-definition TV system (1995); screening at Expo 2005, Aichi, Japan; production system for 22.2 multichannel sound; international transmission experiment over satellite and IP networks at IBC2008 in Amsterdam; live public viewings of the 2012 London Olympic Games; 145-inch plasma display; international standardization; satellite broadcasting experiment; transmission experiment via terrestrial waves; image sensor with a frame frequency of 120 Hz; 85-inch LCD; live transmission of NHK's year-end song festival program from Tokyo to Osaka; test satellite broadcasting (2016, Rio Olympic Games); satellite broadcasting (2018); Tokyo Olympic Games and widespread adoption (2020) — leading to full-featured 8K Super Hi-Vision with a full-featured, full-resolution 8K camera, 8K HDR LCD, large-capacity transmission, next-generation terrestrial broadcasting, and large, ultrathin, ultralight sheet-type displays.]
Super Hi-Vision
Outline
Internet Technology for Future Broadcast Services
The Internet is adding a whole new dimension to our lives with new services and possibilities. We at STRL are working on technologies to bring broadcasting and the Internet together, creating newly evolved "television" services that assist and enrich our everyday lives.
Toward a "New TV Experience"
B: Internet Technology for Future Broadcast Services
Features
Future plans
How the Internet can bring new services to "television"
We will develop a number of elemental technologies to realize these innovative services.
● See Exhibit B1 for details.
● Media unifying technology to provide access to content anytime and anywhere
To allow viewers to watch the programs they want, regardless of what they are doing and without complex procedures, we are developing a system that takes viewers' circumstances into account so that it can automatically select and offer the appropriate service from choices such as over-the-air broadcasting, internet streaming, or video-on-demand.
● Content matching technology that allows television to enrich our lives
We hope to turn viewing into something that offers insight and motivation in different situations of everyday life. Hence, we are exploring the best ways to offer relevant content and services depending on what viewers are doing or where they are.
[Diagram: content and information services catering to various everyday situations — the broadcaster and other service providers deliver programs and related information on top of existing broadcast services; media unifying technology provides easy access to content regardless of the viewer's situation, while content matching technology provides information in accordance with user activities.]
Outline
Smart Production
[Diagram: "public media" connecting society and viewers — big-data analysis, video and speech recognition, and estimation of information about the shooting environment feed production, delivering accurate and immediate information to everyone through sign language, subtitles, simplified commentary, audio description, and tactile services (example on-screen text: "It is sunny again today," 16:45).]
Features
Future plans
To enhance our programming and expand our human-friendly services, we will continue our research on elemental technologies such as recognition, analysis, and content production, which will lead to the development of new systems.
● Gathering and analyzing societal information for program production
Program producers need to gather and make use of the massive and diverse information around us. We are working on technologies for automatically detecting objects, on-screen text, and speech in filmed footage, and for estimating shooting conditions such as camera positions and lighting. Analyzing such information can help identify social phenomena and assist in program production.
● Delivering information to every member of society
We convey audio information through subtitles and sign language, and visual information through audio description and tactile devices. We also offer information in simpler Japanese expressions for non-native Japanese speakers.
We are working on technologies such as video and speech recognition along with big-data analysis to immediately and accurately convey a wide range of societal information, and on various accessibility technologies that aim to provide information to every member of society, including those with differing needs.
Content production taking full advantage of a wide range of societal information
C: Smart Production
Outline
3D Television
Features
Future plans
R&D Activities to Create More Natural 3D Images
● 3D Video System
We are attempting to establish an integral 3D system that uses lens arrays to produce natural 3D images. We are mainly focusing on the capturing, encoding, and display technologies needed to create a high-quality 3D video system.
● Evaluation and Improvement of 3D Images
We are exploring a new 3D evaluation technology based on human perception. Our aim is to define the requirements for integral 3D images to appear more natural and to incorporate these findings into our future system designs.
● Device Technologies for Future 3D Displays
Our research includes optical beam steering devices to improve the viewing zone and resolution of integral 3D displays, as well as ultrahigh-density display devices that broaden the viewing zone of holographic 3D images.
We will continue to improve our 3D video technologies with the intention of establishing a practicable system by around 2030.
We have been working on developing three-dimensional television that can be enjoyed without special glasses. Our research includes improving video systems, establishing quality evaluation technology, and creating higher performance display devices.
Creating more natural three-dimensional images
D: 3D Television
[Diagram: research on three-dimensional television. 3D Video System (3D capturing, encoding, 3D display): an integral 3D system in which a camera with a lens array captures the object and a display with a lens array reproduces the 3D image. Evaluation and Improvement of 3D Images (quality evaluation, quality improvement): subjective and biological measures rate images as natural or unnatural. Device Technologies for Future 3D Displays (optical beam steering devices, ultrahigh-density display devices): control the direction and shapes of light beams, also targeting holographic images.]
Outline
Features
Future plans
Exhibit outlining 8K HDR live program production
※ 1 HLG (Hybrid Log-Gamma): Major parameters for HLG are specified in ARIB STD-B67, a domestic standard published in July 2015. Domestic standards of video coding and multiplexing formats for Super Hi-Vision broadcasting also support the HLG system.
※ 2 SDR (Standard Dynamic Range): The standard dynamic range specified in Recommendation ITU-R BT.709.
●The 8K HDR liquid crystal display compatible with HLG was jointly developed with Sharp Corporation.
We will establish a method of HDR 8K video production using the HLG system and contribute to HDR program production for 8K broadcasting.
● Hybrid Log-Gamma (HLG)※1 system suitable for broadcasting
The HLG system is an HDR television standard jointly developed by NHK and the BBC (British Broadcasting Corporation). It is suitable for TV broadcasting because it is highly compatible with existing TV infrastructure (SDR)※2 and capable of handling images with a much wider range of contrast.
● 8K program production equipment for making HDR broadcasting a reality
We have developed 8K cameras supporting the HLG system and an 8K display with high brightness and a high contrast ratio. Conventional devices can be used for the rest of the production equipment.
● HDR live video production
The HLG format is a specification for the conversion from light to electrical signals during the capture of images. It allows easy video adjustment because it handles video signals in the same way as conventional program production, making it suitable for 8K HDR live program production, in which multiple cameras are switched back and forth to capture images. Its compatibility with SDR also allows the effective utilization of existing content and equipment.
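The shape of the HLG transfer curve can be illustrated directly from the parameters published in ARIB STD-B67: below a crossover point the signal follows an SDR-compatible square-root law, and above it a logarithmic law compresses the highlights. A minimal sketch in Python:

```python
import math

# HLG OETF constants from ARIB STD-B67 (also in ITU-R BT.2100)
A = 0.17883277
B = 0.28466892
C = 0.55991073

def hlg_oetf(e: float) -> float:
    """Map normalized scene light e in [0, 1] to an HLG signal value."""
    if e <= 1.0 / 12.0:
        return math.sqrt(3.0 * e)          # SDR-compatible gamma region
    return A * math.log(12.0 * e - B) + C  # logarithmic highlight region

print(round(hlg_oetf(1.0 / 12.0), 4))  # 0.5 -- crossover point
print(round(hlg_oetf(1.0), 4))         # 1.0 -- peak scene light maps to peak signal
```

The square-root segment is what keeps HLG pictures watchable on existing SDR displays; only the highlights land in the logarithmic segment.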
We have developed an HDR television technology that expands the range of brightness that can be shown in TV images, with the aim of using it for 8K Super Hi-Vision broadcasting. This exhibit presents live program production using 8K cameras, an 8K display and an 8K program production system that support the HDR television standard.
[Diagram: 8K HDR live program production — an 8K HDR three-sensor camera and an 8K HDR single-chip compact camera capture the object; video adjustment, an 8K waveform monitor, and 8K video switching enable easy adjustment of the imaging level (brightness) during production; output goes to a recorder (HDR/SDR) and an 8K display with HDR, expressing images with a wider range of contrast.]
Super Hi-Vision
A 1 High-dynamic-range (HDR) television for 8K broadcasting
8K-HDR Live Program Production
Outline
Features
Future plans
System diagram
[System diagram: production equipment for full-featured 8K alongside HDTV equipment — a 120-Hz timecode generator and a synchronization signal generator drive 8K/120-Hz compression recorders via IP synchronization units (synchronization over IP) and HDTV equipment via an analog synchronization unit (analog sync signal); U-SDI※3 signals with an inserted 120-Hz timecode pass through an 8K/120-Hz signal router to an 8K waveform monitor, a timecode separator (120-Hz timecode reader, 60-Hz timecode display), a resolution and color-gamut converter feeding an HDTV recorder, and a 17-inch 8K/120-Hz LCD.]
While continuing to develop production equipment for full-featured 8K, we will build and verify a production environment using such equipment. We will also incorporate the 120-Hz timecode into a working system and study the practicality of a synchronization system using IP technology.
●The compression recorder is jointly developed with Tokyo Electron Device Ltd.
●The 17-inch liquid crystal display is jointly developed with Japan Display Inc.
※ 1 Full-featured 8K: A state-of-the-art format providing 8K full resolution (33 megapixels each for RGB), a 120-Hz frame frequency, a wide color gamut that covers most real surface colors, 12-bit depth, HDR (high dynamic range), and a 22.2 multichannel sound system.
※ 2 Timecode: A time-information label added to each video frame.
※ 3 U-SDI (Ultrahigh-definition Signal/Data Interface): An interface that can transmit all kinds of UHDTV video signals over a single cable.
※ 4 PTP (Precision Time Protocol): An IEEE 1588 standard protocol for precisely synchronizing clocks over an IP network.
● 8K production equipment for full-featured 8K
We have developed 8K devices including a compression recorder using a removable memory package, a four-input four-output signal router, a waveform monitor, a timecode separator, and a 17-inch liquid crystal display.
● 120-Hz frame frequency timecode and sync signal over IP
We are working on the standardization of a new timecode supporting the 120-Hz frame frequency of full-featured 8K. The timecode can also be read by existing devices because of its compatibility with the conventional timecode supporting a 60-Hz frame frequency. We have also adopted a PTP※4 synchronization system that can coordinate clocks precisely using IP technology.
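As an illustration of how such backward compatibility can work (a hypothetical sketch, not the format under standardization): pair up 120-Hz frames so that each pair shares one conventional 60-Hz frame number, and carry the half-frame distinction in a sub-frame flag that legacy 60-Hz readers can simply ignore.

```python
# Hypothetical backward-compatible 120-Hz timecode: two 120-Hz frames share
# one 60-Hz frame number, distinguished by a trailing sub-frame bit.

def timecode_120(frame_index: int) -> str:
    """Format a 120-Hz frame index as HH:MM:SS:FF.s (s = sub-frame bit)."""
    sub = frame_index % 2   # extra bit, invisible to 60-Hz devices
    f60 = frame_index // 2  # conventional 60-Hz frame counter
    ff = f60 % 60
    total_seconds = f60 // 60
    hh, rem = divmod(total_seconds, 3600)
    mm, ss = divmod(rem, 60)
    return f"{hh:02d}:{mm:02d}:{ss:02d}:{ff:02d}.{sub}"

print(timecode_120(0))    # 00:00:00:00.0
print(timecode_120(1))    # 00:00:00:00.1
print(timecode_120(121))  # 00:00:01:00.1
```

A 60-Hz device that reads only the HH:MM:SS:FF portion sees a valid conventional timecode throughout.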
We are developing production equipment for full-featured 8K Super Hi-Vision※1. This exhibit demonstrates an example of a production system using equipment supporting 120-Hz frame frequency, which we have developed while considering compatibility with HDTV production systems and the conventional timecode※2.
Super Hi-Vision
For a smooth migration to 120-Hz frame frequency systems
A 2 Developments Toward Full-featured Super Hi-Vision
Outline
Future plans
Features
Outline of full-resolution single-chip 8K camera system
[Diagram: the camera head connects to the camera control unit (CCU) over a single HDTV hybrid camera cable, with U-SDI output from the CCU. Camera head: full-resolution 8K shooting with a single-chip imaging method, less than 1/7 of the conventional weight, compatible with cinema and 35 mm full-frame lenses. CCU: compact 3U size (13.3 cm in height), real-time processing of 100-Gbps video signals, capable of HDR (high-dynamic-range) capturing.]
※ 1 Single-chip imaging: A method of obtaining RGB color images (video) using a single image sensor.
※ 2 Wavelength division multiplexing: A technology for transmitting multiple optical signals with different wavelengths over a single optical fiber.
※ 3 U-SDI (Ultrahigh-definition Signal/Data Interface): An interface standardized in Recommendation ITU-R BT.2077 and ARIB STD-B58, etc., that can handle the uncompressed transmission of full-resolution, 120-Hz frame frequency 8K video over a single cable.
We aim to make the camera system operational at a 120-Hz frame frequency to enable the shooting of full-specification 8K video. We will also continue R&D to further downsize the unit and improve the image quality.
● Full-resolution 8K shooting with a single-chip camera
Single-chip imaging※1, which does not need an optical color-separation prism, is suitable for a compact camera. By using a 133-megapixel image sensor, we have achieved the first single-chip camera that can capture full-resolution 8K video. The camera can also use a wide variety of cinema and 35 mm full-frame lenses because the optical format of the 133-megapixel image sensor is the same as that of 35 mm full-frame.
● Lightweight camera head
We have developed a camera head weighing only 6 kg, less than one-seventh the weight of a conventional three-chip full-resolution 8K camera. Using a compact, built-in wavelength division multiplexing※2 signal transmission device, it can transmit 100-Gbps output video signals over a single HDTV hybrid camera cable.
● Compact, multifunction camera control unit
We have developed a camera control unit (CCU) with many capabilities, such as real-time processing of 100-Gbps high-speed signals, correction of the lateral chromatic aberration of the lens, and HDR (high dynamic range) capturing. The CCU outputs a U-SDI signal※3 compliant with international standards.
We are conducting research to achieve a practical 8K Super Hi-Vision camera. We have developed a portable camera system that can capture full-resolution (RGB 4:4:4) 8K video with 33 megapixels each for red (R), green (G) and blue (B) by using a single image sensor.
Super Hi-Vision
With high mobility for full-resolution 8K video shooting
A 3 Full-resolution Single-chip 8K Camera System
Outline
Features
Future plans
[Photo: holographic memory drive (W: 300 mm, H: 215 mm, D: 900 mm) with its optical unit and wavefront compensation unit — high-density recording using a blue-violet laser and accurate data reproduction by wavefront compensation.]
We aim to improve the characteristics of the holographic recording prototype drive as well as further increase the density and data transfer rate for its practical use as 8K archival equipment.
●This research is jointly being conducted with Hitachi, Ltd. and Hitachi-LG Data Storage, Inc.
※ Photopolymer: A photosensitive resin.
● Reproduction of compressed 8K video
Compressed 8K video signals can be recorded and reproduced in a holographic memory. We have integrated laser and optical parts and a disk medium into a single holographic memory prototype drive package for stable recording and reproduction.
● Compensation for optical distortion using wavefront control
While the photopolymer※ material used for the disk is expected to retain recorded data for more than 50 years, it suffers optical distortion due to vibrations and to changes in the temperature and volume of the recording medium. To compensate for the distortion, we have developed a technology to control the wavefront of the reference beam, enabling accurate data reproduction.
● Increased recording density by shortening the laser beam wavelength
The recording density is inversely proportional to the square of the laser beam wavelength. Instead of the green laser beam (with a wavelength of 532 nm) that we previously used, we are now using a blue-violet laser beam (with a wavelength of 405 nm) and a suitable material for the recording medium. This has increased the recording density.
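The gain from the shorter wavelength follows directly from that inverse-square relationship:

```python
# Recording density scales as 1 / wavelength^2, so switching from a
# green (532 nm) to a blue-violet (405 nm) laser gives:
green_nm = 532.0
blue_violet_nm = 405.0

density_gain = (green_nm / blue_violet_nm) ** 2
print(round(density_gain, 2))  # 1.73, i.e. about 73% more data per unit area
```

That is roughly a 1.7-fold increase in areal density from the wavelength change alone, before any contribution from the new recording material.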
We are conducting research on high-density holographic memory for the long-term archiving of 8K Super Hi-Vision video. This exhibit displays a holographic recording prototype drive equipped with a technology to improve the quality of reproduced data and presents a reproduced video of compressed 8K signals recorded on its disk medium.
Super Hi-Vision
For long-term storage of 8K Super Hi-Vision video
A 4 Holographic Memory for Archival Use
Outline
Three-dimensional sound production equipment
Features
Future plans
[Diagram: three-dimensional sound production equipment. An upmixing preprocessor generates 22.2 ch sound from 2 ch or 5.1 ch sound material. A 3D reverberator with a reverberation library (studio, hall, church) adds 22.2 ch reverberation to material without reverberation and can extend reverberation up to double its original length. A 22.2 ch mixing system feeds a loudness meter that measures the loudness of program sound (level meter, loudness level, e.g. −24.0) and an "ultra-reality" meter that analyzes sound signals to assess and quantify spatial impressions (sense of reality, evocative qualities, etc.). The 22.2 ch layout around the display for 8K images comprises a top layer of 9 channels, a middle layer of 10 channels, a bottom layer of 3 channels, and 2 LFE (low-frequency effect) channels (LFE1, LFE2).]
We will bring each device into practical use in preparation for 8K broadcasting.
※ International standard: Recommendation ITU-R BS.1770-4; domestic standard: ARIB TR-B32 ver. 1.4.
● Upmixing preprocessor
We have developed software to generate 22.2 ch sound from 2 ch or 5.1 ch sound material. This enables the easy utilization of existing sound material for 22.2 ch sound production.
● Improved functionality of the 3D reverberator
A reverberator is a device that adds the reverberation heard in places such as studios and halls to sound material. With our new technology to extend the reverberation time while maintaining the character of the captured 3D reverberation, we have doubled the variable range of the reverberation time, which is important for adjusting the spatial impression.
● Loudness meter for evaluating program sound
We have developed a loudness meter for 22.2 ch sound that is compliant with the international and domestic standards※, which were revised last year. In addition to measuring how loud a sound subjectively appears to the human ear (i.e., the loudness level), the device is equipped with an "ultra-reality" meter (Poster Exhibit A-P1) that can assess and quantify spatial impressions such as the sense of reality, spaciousness, and evocative qualities.
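The core of a BS.1770-style loudness measurement is simple: per-channel mean-square power is weighted, summed, and mapped to a logarithmic LKFS value. The sketch below omits the standard's K-weighting pre-filter and gating stages, so its numbers differ slightly from those of a compliant meter:

```python
import math

# Simplified BS.1770-style loudness: weighted sum of channel mean squares,
# mapped to LKFS. K-weighting and gating from the full standard are omitted.

def loudness_lkfs(channels, weights=None):
    """channels: one list of samples per channel, values in [-1, 1]."""
    if weights is None:
        weights = [1.0] * len(channels)  # BS.1770 weights surround channels at 1.41
    total = 0.0
    for samples, g in zip(channels, weights):
        mean_square = sum(s * s for s in samples) / len(samples)
        total += g * mean_square
    return -0.691 + 10.0 * math.log10(total)

# A full-scale 997-Hz sine on one channel (one second at 48 kHz)
tone = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(round(loudness_lkfs([tone]), 2))  # -3.7
```

In the full standard the −0.691 offset compensates the K-filter's gain near 1 kHz, which this unfiltered sketch does not apply.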
We are researching technologies to produce high-quality 22.2 multichannel sound (22.2 ch sound) more easily in preparation for 8K broadcasting. This exhibit displays a new upmixing preprocessor as well as a 3D reverberator and loudness meter with enhanced functions.
Super Hi-Vision
Producing 22.2 multichannel sound more easily and effectively
A 5 Three-dimensional Sound Production Equipment
Outline
Features
Future plans
Cable TV retransmission system for Super Hi-Vision (8K/4K) satellite broadcasting
[Diagram: cable TV retransmission system for Super Hi-Vision (8K/4K) satellite broadcasting. At the cable TV station, received 8K satellite MMT/TLV signals are divided across multiple channels and modulated (e.g. a frequency combination of 256QAM, 256QAM, and 64QAM) in compliance with domestic and international standards. Over the cable TV transmission path, the subscriber's receiver demodulates the channels and recombines them by channel bonding, displaying the same service as 8K satellite broadcasting.]
We will conduct experiments in cooperation with relevant institutions toward realizing the practical cable TV retransmission of Super Hi-Vision satellite broadcasting.
※ 1 MMT (MPEG Media Transport): An international standard multiplexing scheme supporting media transport through various transmission paths.
※ 2 TLV (Type Length Value): A transmission signal format for efficiently transmitting IP packets (variable-length packets) over broadcasting channels.
● 8K retransmission on existing cable TV
Super Hi-Vision satellite broadcasting signals received at a cable TV station are divided into multiple channels for transmission and then recombined correctly at homes. In this way, they can be delivered to subscribers without any changes to the existing cable TV transmission paths. Here we demonstrate how 8K signals in the MMT※1 and TLV※2 formats, which will be used for 8K test broadcasting, are received at an actual cable TV station via satellite, retransmitted, and then demodulated and recombined for reproduction at the exhibition site.
● Channel bonding technology compliant with domestic and international standards
We standardized the transmission scheme using multiple channels, leading to a domestic standard issued by the Japan Cable Television Engineering Association in 2015. The ITU-T has also published international recommendations with specifications consistent with the domestic standard.
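The division-and-recombination idea can be sketched abstractly (a toy model, not the actual standardized framing): packets are distributed across the bonded channels with sequence numbers at the station and merged back into their original order at the receiver.

```python
# Toy channel-bonding model: fixed-order TLV packets are split round-robin
# across channels and losslessly recombined. (Slot headers, error correction,
# and per-channel capacities are omitted.)

def divide(packets, n_channels):
    """Split a packet stream round-robin across n_channels."""
    channels = [[] for _ in range(n_channels)]
    for i, pkt in enumerate(packets):
        channels[i % n_channels].append((i, pkt))  # keep sequence numbers
    return channels

def bond(channels):
    """Recombine per-channel streams into the original order."""
    merged = sorted((i, pkt) for ch in channels for i, pkt in ch)
    return [pkt for _, pkt in merged]

stream = [f"tlv{i}" for i in range(7)]
assert bond(divide(stream, 3)) == stream  # lossless split and recombine
```

The sequence numbers play the role that slot identifiers play in the real scheme: they let the receiver restore ordering regardless of per-channel timing.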
We are studying ways to distribute Super Hi-Vision (8K/4K) satellite broadcasting to homes on cable TV. This exhibit presents a system for retransmitting 8K satellite signals on cable TV by using a channel bonding technology that is compliant with both domestic and international standards.
Super Hi-Vision
Super Hi-Vision retransmission experiments using commercial cable networks
A 6 Retransmission Technology of Super Hi-Vision Satellite Broadcasting for Cable TV Networks
Outline
Features
Future plans
Configuration example of the 8K/4K video coding system
[Diagram: configuration example of the 8K/4K video coding system. Broadcast station: 8K video is image-reduced to 4K and encoded (4K-resolution video coding); the locally decoded video is passed through super-resolution reconstruction modes 0–63, an optimization decision selects the best mode per block, and the selection is sent as coded supplementary data. A 4K receiver simply decodes the 4K video. An 8K receiver decodes the 4K video and the supplementary data, then reproduces 8K video by interlayer prediction with selective switching among super-resolution reconstruction modes 0–63.]
● Increased efficiency by interlayer prediction using super-resolution reconstruction
This system encodes and transmits low-resolution video reduced from the original high-resolution video. At the transmission side, it also super-resolves the locally decoded low-resolution video in different reconstruction modes and transmits the block-wise optimal mode selection as supplementary data. The system can achieve a higher compression ratio than conventional scalable video coding because it can generate high-resolution video from only a small amount of supplementary data for super-resolution reconstruction.
● Hardware-oriented computation algorithms
Taking hardware implementation into consideration, we devised appropriate computation methods for super-resolution reconstruction and for the optimization that decides the best super-resolution mode. We also employed a parallel-series architecture with simple image processing templates to facilitate integrated-circuit implementation, in anticipation of installation in receivers.
● Scalable transmission scheme
This system allows viewers to watch the 4K video as is on ordinary TV sets. It can also provide 8K video by adding supplementary data amounting to only about 3 to 10% of the 4K video stream. The supplementary data contains video feature values for each frame as well as switching information for super-resolution reconstruction; the feature values are used to synchronize the decoded video and the supplementary data.
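The block-wise mode selection described above can be illustrated with a toy model (the mode family here is invented for illustration; the real system uses a set of super-resolution reconstruction filters): the encoder tries every mode on each locally decoded block and signals only the index of the best one.

```python
# Toy model of block-wise super-resolution mode selection: only the winning
# mode index per block becomes supplementary data.

def upscale(block, mode):
    """Toy 'reconstruction' family: each mode applies a different gain."""
    gain = 1.0 + mode * 0.01
    return [min(255.0, p * gain) for p in block]

def choose_modes(low_res_blocks, original_blocks, n_modes=64):
    """Return, for each block, the mode index minimizing squared error."""
    modes = []
    for low, orig in zip(low_res_blocks, original_blocks):
        errors = []
        for m in range(n_modes):
            rec = upscale(low, m)
            errors.append(sum((a - b) ** 2 for a, b in zip(rec, orig)))
        modes.append(min(range(n_modes), key=errors.__getitem__))
    return modes

low = [[100.0, 120.0], [50.0, 60.0]]
orig = [[105.0, 126.0], [50.0, 60.0]]  # first block is 5% brighter
print(choose_modes(low, orig))          # [5, 0]
```

Because only an index (here 6 bits for 64 modes) is sent per block rather than a residual image, the supplementary stream stays small relative to the base-layer video.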
We will conduct demonstration experiments on an overall system including transmission systems such as channel coding, modulation and demodulation processes and also contribute to future video coding standardization.
● This research is partially being performed under the auspices of the "Research and Development of Technology
Encouraging Effective Utilization of Frequency for Ultra High Definition Satellite and Terrestrial Broadcasting System"
program, which is funded by the Ministry of Internal Affairs and Communications.
※ 1 Super-resolution reconstruction: A technique to enhance the resolution of an image by supplementing edges and fine-grained texture.
※ 2 Interlayer prediction: A technique to predict a high-resolution image from a low-resolution image.
We are conducting research into a high-efficiency video coding technique for the simultaneous service of video with different resolutions. The use of super-resolution reconstruction※1 technique enables accurate 4K-to-8K interlayer prediction※2. This will achieve the simultaneous service of 8K and 4K video simply by sending a small amount of supplementary data together with the compressed 4K video.
Super Hi-Vision
Efficient simultaneous service of ultra-high-definition video with different resolutions
8K/4K Video Coding System with Super-resolution ReconstructionA 7
Outline
Features
Future plans
Fundamental technologies under consideration for next-generation terrestrial broadcasting
[Figure: Compared with current digital terrestrial broadcasting in a 6 MHz channel (13 segments), a new signal structure under consideration uses 35 segments, about a 5% increase in occupied bandwidth, and an SFN with space-time coding delivers a video service with Super Hi-Vision quality. Transmission parameters are under consideration.]
While working to resolve technical issues in the implementation, we will develop the next-generation terrestrial broadcasting system that can address a wide variety of needs.
● A part of this research is being performed under the auspices of the "Research and Development of Technology
Encouraging Effective Utilization of Frequency for Ultra High Definition Satellite and Terrestrial Broadcasting System"
program, which is funded by the Ministry of Internal Affairs and Communications.
※ 1 LDPC (Low-Density Parity Check) codes: Block codes defined by low-density parity-check matrices.
※ 2 SFN (Single-Frequency Network): A broadcasting network configuration in which multiple stations transmit radio waves on the same channel to form a coverage area.
※ 3 Space-time coding: A method to convert a single signal into multiple different signals by simple processing. It allows the receiver to make full use of radio waves from multiple transmitting stations.
● Increasing frequency usage efficiency compared with current digital terrestrial broadcasting
We have achieved more efficient frequency usage by using a new signal structure that minimizes the guard band, guard interval, and other overhead.
● Error-correcting codes with superior decoding characteristics
We are using LDPC codes※1 for error correction; by employing parallel processing and other means, they enable efficient decoding even for long codes. This has allowed us to significantly improve error-correction performance compared with the convolutional codes used in current digital terrestrial broadcasting.
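Full LDPC decoding is beyond a short sketch, but the defining property named in footnote ※1 (a low-density parity-check matrix) can be shown in a few lines: a received word is a valid codeword exactly when every parity check is satisfied. The tiny matrix below is a toy example, not an actual broadcast code.

```python
# Toy parity-check illustration: a word c is a valid codeword iff
# H * c = 0 (mod 2). H here is a made-up sparse ("low-density") matrix.

H = [
    [1, 1, 0, 0, 1, 0, 0, 0],
    [0, 1, 1, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 0, 1, 0],
    [1, 0, 0, 1, 0, 0, 0, 1],
]

def syndrome(word):
    """One parity bit per check row; all zeros means a valid codeword."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

valid = [0, 0, 0, 0, 0, 0, 0, 0]      # the all-zero word passes every check
corrupted = [1, 0, 0, 0, 0, 0, 0, 0]  # one bit error trips checks 1 and 4
print(syndrome(valid))      # [0, 0, 0, 0]
print(syndrome(corrupted))  # [1, 0, 0, 1]
```

A nonzero syndrome tells the decoder which checks failed, which is what iterative LDPC decoding exploits to locate and correct errors.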
● Study on a more stable single-frequency network (SFN)
Current digital terrestrial broadcasting uses an SFN※2 to make efficient use of limited frequencies. To form a stable SFN for next-generation terrestrial broadcasting even when using high-order modulation, we are verifying the effectiveness of applying space-time coding※3 through field experiments.
With the aim of achieving terrestrial Super Hi-Vision broadcasting, we are developing a broadcasting system with more efficient frequency usage than current digital terrestrial television broadcasting. This exhibit displays the fundamental technologies we are studying such as modulation schemes and error correction codes.
Super Hi-Vision
Toward the realization of terrestrial Super Hi-Vision broadcasting
Next-generation Terrestrial Broadcasting SystemsA 8
Outline
Features
Future plans
MMT-based transmission for various channels and shared use of a receiver
[Figure: At the broadcast station, video, audio, closed captions, and applications are multiplexed with MMT and carried over satellite broadcasting, next-generation terrestrial broadcasting, cable TV, the current Internet, and future broadband. A shared receiver for broadcasting and broadband combines receiving units for each of these channels, so the same program reaches the viewer whichever path it takes.]
We will conduct experiments using various channels, including the Internet widely used today and future broadband, to demonstrate advanced services for 8K broadcasting.
※ 1 MMT (MPEG Media Transport): An ISO/IEC standard media transport protocol for media delivery in heterogeneous environments. MMT-based broadcasting systems are specified in Recommendation ITU-R BT.2074. MMT is also specified as an element of the management and protocol layer of ATSC 3.0, the next-generation terrestrial broadcasting standard under development in the U.S.
※ 2 Multicast technology: A technology to efficiently send data to multiple terminals simultaneously over an IP network.
※ 3 Baseband transmission technology: A scheme that transmits digital broadcasting as digital signals without modulation or demodulation, making it suitable for optical transmission.
● Shared receiver for broadcasting and broadband
We are aiming to realize a shared receiver for broadcasting and broadband that allows viewers to select programs without having to consider their delivery channels. MMT has already been adopted as the media transport protocol for Super Hi-Vision (8K/4K) satellite broadcasting, for which test broadcasting begins in 2016. When MMT is also used for cable TV and for delivery over the Internet, most parts of the receiver can be shared.
● MMT-based advanced services
MMT can transmit video, audio, and various information for advanced services through broadcasting and broadband in the same way. It enables scene commentaries to be multiplexed into the broadcast stream, and it enables multi-angle videos delivered separately over broadband to be presented in synchronization with the broadcast program.
● 8K multichannel delivery technologies
Future high-speed broadband will enable the transmission of multichannel 8K programs. We have developed 8K multichannel delivery technologies using multicast technology※2 and baseband transmission technology※3.
We are continuing research into 8K Super Hi-Vision delivery technology using MMT※1. This exhibit presents a shared receiver that can receive programs transmitted through either broadcasting or broadband. It also presents MMT-based advanced services and 8K multichannel delivery technology using future broadband, such as optical fibers supporting up to 10 Gbps transmission.
● A part of this research is being performed under the auspices of the "Research and Development of Technology
Encouraging Effective Utilization of Frequency for Ultra High Definition Satellite and Terrestrial Broadcasting System"
program, which is funded by the Ministry of Internal Affairs and Communications.
Super Hi-Vision
Toward more attractive 8K broadcast services
Delivery Technology Using MMT for 8K Super Hi-VisionA 9
Outline
Features
Future plans
Operation outline of 8K Super Hi-Vision FPU
[Figure: A millimeter-wave FPU provides short- to medium-distance high-speed transmission, a microwave FPU and antenna provide long-distance transmission to the broadcasting station, and a 1.2-GHz/2.3-GHz-band FPU for mobile relays links a mobile station to a base station.]
Towards practical applications of the FPUs, we will continue to perform various experiments, including verification of their practicality for outdoor use.
● The research on the mobile relay FPU is being conducted as a government-commissioned project from the Ministry of
Internal Affairs and Communications, titled "R&D on highly effi cient frequency usage for the next-generation program
contribution transmission."
※ 1 FPU (Field Pick-up Unit): A portable wireless transmission device for program contribution, used for outdoor relays and reporting footage.
※ 2 MIMO (Multiple-Input Multiple-Output): A wireless transmission scheme that uses multiple antennas for both transmission and reception.
※ 3 OFDM (Orthogonal Frequency Division Multiplexing): A transmission scheme that arranges multiple mutually orthogonal carriers on the frequency axis.
● Millimeter-wave FPU for high-speed transmission
This FPU can provide a transmission rate in excess of 200 Mbps by using the 42-GHz band, which offers a wide bandwidth. It achieves high-speed transmission by combining wideband radio transmission technology exploiting the 125 MHz bandwidth with dual-polarized MIMO※2 technology.
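Back-of-the-envelope arithmetic shows how bandwidth and dual-polarized MIMO combine into such a rate. The source states only the 125 MHz bandwidth, the two polarizations, and a rate above 200 Mbps; the modulation order, code rate, and OFDM overhead below are purely illustrative assumptions.

```python
# Illustrative link-rate arithmetic for a wideband MIMO FPU.
# Only bandwidth and stream count come from the exhibit; the rest are
# assumed example parameters, not the FPU's actual configuration.

bandwidth_hz  = 125e6  # stated occupied bandwidth
bits_per_sym  = 2      # assumed QPSK
code_rate     = 0.5    # assumed FEC code rate
ofdm_overhead = 0.8    # assumed fraction left after guard interval/pilots
streams       = 2      # dual-polarized MIMO: two spatial streams

rate_bps = bandwidth_hz * bits_per_sym * code_rate * ofdm_overhead * streams
print(f"{rate_bps / 1e6:.0f} Mbps")  # 200 Mbps
```

With higher-order modulation or a higher code rate, the same bandwidth yields correspondingly more than 200 Mbps.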
● Microwave FPU for long-distance transmission
This FPU, which uses the same channels as the 6- to 7-GHz-band Hi-Vision FPU, enables long-distance 8K relays. It achieves a transmission rate of approximately 200 Mbps by using ultra-multilevel OFDM※3 technology and dual-polarized MIMO technology.
● Antenna for microwave FPUs
We have developed a portable parabolic antenna, a transmission antenna for helicopters, and a high-gain receiving antenna mounted on a rotator, all supporting the dual-polarized MIMO technology used in microwave FPUs.
● Transmission technologies for mobile relay FPUs
We are researching transmission technologies using the 1.2-GHz/2.3-GHz bands so that 8K video can be transmitted without interruption in mobile relays, such as those used to broadcast road races. We have prototyped 4×4 MIMO-OFDM transmission devices that can adaptively change transmit and receive beams according to the propagation channel.
We are developing wireless transmission equipment (FPUs)※1 for live broadcasts of 8K Super Hi-Vision. This exhibit shows FPUs that use radio waves in the millimeter-wave and microwave frequency bands, together with 8K transmission technology for an FPU system for mobile relay broadcast programs.
Super Hi-Vision
For live transmission of 8K video signals
8K Super Hi-Vision Wireless Links for Program ContributionA 10
Super Hi-Vision
Experience Zone
Experience Full-featured 8K Super Hi-Vision
Immerse Yourself in a 3D Sound System
T 1
T 2
Come and experience our full-featured 8K Super Hi-Vision system, with high-resolution images, a wide color gamut, high dynamic range (HDR), high bit depth, a high frame rate, and three-dimensional sound.
Full-featured 8K Super Hi-Vision provides a three-dimensional sound environment created by the 22.2 multichannel system, giving an immersive experience like never before. We are also introducing another unique function of Super Hi-Vision: the ability to enhance the audibility of dialogue.
A-P1
A-P2
A-P3
Assessing spatial impressions and evocative qualities by sound signals
Achieving even higher efficiency than HEVC/H.265
Using spatio-temporal processing to generate high-quality video
Ultra-reality Meter
Future Video Coding Technologies
Super-resolution Technique for Full-featured 8K Video
We are working on an objective, comprehensive method to evaluate acoustic attributes such as sense of reality, spaciousness, and evocative qualities by analyzing sound signals. Our newly developed "ultra-reality meter" can assess various spatial impressions from the sound signals, taking into account factors such as the audio channel configuration (e.g., 22.2ch or 5.1ch) and the reverberation time.
Toward next-generation terrestrial broadcasting, we are investigating new video coding technologies with even higher efficiency than HEVC/H.265. Learn about our recent progress and the latest updates on technologies under discussion at leading international standardization bodies.
We are proposing a new spatio-temporal super-resolution technique to help incorporate 4K ultra-high-definition television (UHDTV) and digital cinema videos into 8K Super Hi-Vision. Our technique is especially effective because it exploits properties of 4K UHDTV and digital cinema video, such as their high self-similarity, together with the characteristics of human vision.
Outline
Features
Future plans
Concept of connecting between user's life and content
[Figure: In the user's (viewer's) living space, media unifying technology absorbs differences in delivery method and viewing environment, and content matching technology connects program information with user activities. Broadcasters and other service providers share program data and program-related information and content through data linkage, providing content suited to user activities with a viewing method suitable for the user environment, alongside conventional TV broadcasting.]
We will push forward with R&D in cooperation with broadcasters, manufacturers and telecommunications carriers.
● This exhibit is jointly presented with the IPTV Forum, TV Asahi Corporation, Tokyo Broadcasting System Television, Inc.
and Fuji Television Network, Inc.
※ 1 Simultaneous online broadcasting: A service to deliver programs over the Internet simultaneously with TV broadcasting.
※ 2 Hybridcast Technical Specifications ver. 2.0: IPTV Forum Japan's standard STD-0010 broadcast-internet coordination system specifications ver. 2.0 and STD-0011 HTML5 browser specifications ver. 2.1.
● Media unifying technology
We are studying technologies to automatically select an appropriate viewing method by considering how an intended program is distributed (e.g., broadcast channel, internet server address, delivery format) and the user's viewing environment (e.g., equipment in use, network connection). This will allow viewers to enjoy programs regardless of the media type, be it broadcasting, simultaneous online broadcasting※1, or VOD (video on demand).
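The selection logic just described can be sketched as a simple priority decision. The route names, preference order, and environment flags below are assumptions for illustration, not the actual selection algorithm under study.

```python
# Sketch of "media unifying": pick how to fetch a program from what is
# available and what the viewer's environment supports. Route names and
# priority order are hypothetical.

PRIORITY = ["broadcast", "simulcast", "vod"]  # assumed preference order

def select_route(available_routes, environment):
    """available_routes: dict route -> locator (channel or URL).
    environment: capability flags of the viewer's device and network."""
    for route in PRIORITY:
        if route not in available_routes:
            continue
        if route == "broadcast" and not environment.get("tuner"):
            continue  # no tuner: fall back to internet delivery
        if route != "broadcast" and not environment.get("network"):
            continue  # no network connection: internet routes unusable
        return route, available_routes[route]
    return None

routes = {"broadcast": "ch-101", "vod": "https://example.com/prog.mpd"}
print(select_route(routes, {"tuner": False, "network": True}))
# ('vod', 'https://example.com/prog.mpd')
```

The point of the design is that the viewer names only the program; the receiver resolves the delivery path.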
● Content matching technology
We are aiming to provide services that connect topics provided by broadcasters to new discoveries and activities in various scenes of daily life. We are studying ways to provide program-related information and content in accordance with user activities (e.g., place, situation), as well as technologies that give service providers other than broadcasters the program data they need to offer program-related services.
● Advanced Hybridcast
We are extending Hybridcast functions to create diverse services that connect broadcasting and the Internet. Hybridcast Technical Specifications ver. 2.0※2 add enhanced functions such as interaction between broadcasting and VOD and linkage between TV and mobile devices.
We are conducting R&D on a "new television experience" closely related to everyday life by making the best use of Internet technologies. This exhibit shows technologies to provide programs and information in accordance with various scenes in the user’s daily life based on data on their viewing environment and behaviors.
Internet Technology for Future Broadcast Services
Creating new broadcast services using Internet technologies
Technologies to Realize a "New Television Experience" Spread by the InternetB 1
Outline
Features
Future plans
New services utilizing technologies to synchronize broadcasts and live tracking data
[Figure: Live footage is broadcast from the venue while frequent live tracking data is delivered over the Internet. At the viewer's home, a prototype Hybridcast receiver with a broadcast-internet synchronizing API converts reference clocks between broadcasts and the Internet, and a collaborating tablet with HTML5-enriched graphics displays the motion and detailed information of players not shown on TV, for more fun and more information.]
We will continue studying various aspects of synchronized services on Hybridcast as well as efficient data distribution methods, and we will keep working toward more sophisticated Hybridcast services and their practical application.
※ 1 Tracking data: Data on the motion of players and the ball at the stadium, automatically tracked and converted into numbers in real time.
※ 2 Hybridcast Technical Specifications ver. 2.0: IPTV Forum Japan's standards STD-0010 "Integrated Broadcast-Broadband System Specification" ver. 2.0 and STD-0011 "HTML5 Browser Specification" ver. 2.1.
※ 3 UTC (Coordinated Universal Time): The global standard clock.
● Highly precise synchronization of broadcasts and Internet content by Hybridcast
A feature specified in the Hybridcast Technical Specifications ver. 2.0※2 allows applications to acquire broadcast reference clocks. Our latest technology for coordinating the clocks of various media components can accurately synchronize UTC※3-based Internet content with the broadcast program and visualize them together.
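The core of such clock coordination is converting a broadcast presentation timestamp into UTC so that UTC-timestamped internet content can be scheduled against the running program. The sketch below uses the common 90 kHz MPEG presentation clock and a made-up anchor pair; it illustrates the conversion only, not Hybridcast's actual API.

```python
# Sketch of reference-clock conversion between a broadcast presentation
# clock and UTC. The anchor values are invented for illustration.

PTS_HZ = 90_000  # MPEG presentation timestamps tick at 90 kHz

def pts_to_utc(pts, anchor_pts, anchor_utc):
    """anchor: a broadcast PTS observed at a known UTC instant (seconds).
    Any later PTS maps to UTC by counting elapsed 90 kHz ticks."""
    return anchor_utc + (pts - anchor_pts) / PTS_HZ

anchor = (900_000, 1_463_000_000.0)  # PTS 900000 was presented at this UTC time
utc = pts_to_utc(900_000 + 90_000, *anchor)  # one second of PTS later
print(utc - 1_463_000_000.0)  # 1.0
```

Once every media component can be placed on the UTC axis this way, an application can display tracking-data animation at exactly the right moment of the live footage.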
● Live tracking data for a richer user experience
The combination of Hybridcast's synchronization function with live tracking data delivered over the Internet brings a new user experience. For example, the motion and detailed data of players not covered in the broadcast video can be visualized as animation corresponding to the live soccer footage.
● Delivery of live and real-time data by cloud servers
Real-time tracking data produced at the game field can be delivered directly to receivers over the Internet through cloud servers. Our distribution tests show that various information can be reliably presented in synchronization with the live broadcast footage.
Toward a more functional Hybridcast, we are researching technologies to synchronize Internet-delivered content with broadcast programs. This exhibit presents an example of how we can offer a multifaceted viewing experience of live sports coverage: you will see how synchronized presentation of tracking data※1 makes a broadcast more attractive by displaying various information along with the program.
Internet Technology for Future Broadcast Services
Synchronizing broadcast programs and Internet content for more exciting live sports coverage
New User Experience by Hybridcast for a Live Sport ProgramB 2
Outline
Features
Future plans
Video distribution technologies adapted to diverse viewing styles
[Figure: An encoder feeds distribution servers A and B and a multicast network. The system acquires each terminal's reception situation in real time and adaptively controls transmission rates, changes the delivery path between real-time and time-shift viewing, switches from distribution server A to B in response to the reception situation, and switches to a terminal-to-terminal communication path during group viewing.]
We will conduct technical verifications using the Internet and promote cooperation between broadcasters and telecommunications carriers to accelerate research on smooth video viewing anytime and anywhere.
※ 1 Time-shift viewing: Viewing a program later than its live broadcast, with fast-reverse and chasing playback functions available.
※ 2 Multicast distribution: A technique to replicate video data on the Internet and distribute it efficiently to multiple terminals.
※ 3 Adaptive streaming technologies: Techniques to provide video of appropriate quality according to the condition of the communication line, such as MPEG-DASH (MPEG Dynamic Adaptive Streaming over HTTP).
● Real-time acquisition of the reception situation
We have developed a mechanism that lets the distributor side immediately grasp the reception situation from the communication line speed and other data measured at individual viewing terminals. This enables adaptive switching of delivery paths and adaptive control of transmission rates so that video plays back smoothly on each terminal.
● Adaptive delivery path control techniques
These techniques reduce network congestion and provide stable distribution by changing the delivery path for each viewing terminal according to the reception situation and by using multicast distribution※2 depending on the number of viewers.
● Adaptive control techniques for video distribution transmission rates
Common adaptive streaming technologies※3 produce multiple video files at fixed transmission rates so that each viewing terminal can select an appropriate rate according to its screen size and communication line speed. In addition, our latest technique allows encoders on the distributor side to adjust transmission rates precisely according to the reception situation, which reduces sudden variations in the reproduced image quality.
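The receiver-side half of common adaptive streaming can be sketched as picking from a fixed rate ladder against measured throughput. The ladder and safety margin below are assumed values; the exhibit's sender-side precise rate control goes beyond this simple scheme.

```python
# Sketch of receiver-driven rate selection in common adaptive streaming.
# The rate ladder and margin are illustrative assumptions.

LADDER = [1.0, 2.5, 5.0, 8.0, 16.0]  # assumed available stream rates (Mbps)

def choose_rate(measured_mbps, margin=0.8):
    """Pick the highest ladder rate that fits within a safety margin of
    the measured throughput; fall back to the lowest rate otherwise."""
    budget = measured_mbps * margin
    fitting = [r for r in LADDER if r <= budget]
    return fitting[-1] if fitting else LADDER[0]

print(choose_rate(7.0))  # 5.0  (budget 5.6 Mbps: 5.0 fits, 8.0 does not)
print(choose_rate(1.0))  # 1.0  (budget 0.8 Mbps: fall back to lowest)
```

Because the ladder steps are coarse, quality can jump abruptly when throughput crosses a boundary; letting the encoder fine-tune rates on the distributor side, as described above, smooths out those jumps.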
We are conducting research into stable delivery techniques for Internet video services. This exhibit presents distribution technologies for smooth video reproduction in diverse viewing styles that include various devices (e.g., TV, PC, smartphone), different places (home vs outside the home), and different viewing timings (real-time viewing vs time-shift viewing※1).
Internet Technology for Future Broadcast Services
Providing smooth video viewing experience anytime and anywhere
Video Distribution Technologies Adapted to Diverse Viewing StylesB 3
B-P1
B-P2
Reducing processing load for data encryption on user terminal
Designing broadcast services to help viewers "discover" new content
Encryption Scheme for Privacy Protection
Content Search Behaviors on Time Shift Zapping System
In integrated broadcast-broadband services, viewers' personal data must be protected. Toward practical use of a sophisticated privacy-preserving system, we have developed an encryption scheme that reduces the processing load of encrypting personal data on a user terminal. We compare our scheme with conventional schemes in terms of processing time and show that it is efficient.
We are working on creating a new broadcast service offering convenient time-shifted viewing. Using our rich archive of programs from the past few years, we have been investigating how people use keywords and time references to flick through programs and find new content to enjoy.
Internet Technology for Future Broadcast Services
Outline
Features
Future plans
Automatic Generation of Metadata using Scene Text Detection
[Figure: Scene text is a string of letters filmed as part of a scene, often slanted or askew and difficult to detect because of spatial disparity, angular shifts, and uneven lighting. Traditionally, producers had to run through whole videos because manually typing in metadata would take too much time; automatically detecting text within a scene, such as building, hospital, and street names, and linking it with geographical information makes it easy to find the desired footage.]
We will improve the speed and accuracy of text detection to develop a system that can be widely used in various stages of program production.
● Detecting complex Japanese characters
Japanese characters are complex and much more difficult to identify than numerals and Roman letters. Once our system detects a string of characters, it takes its size and direction into account and checks that nothing has been left out.
● Distinguishing text from graphics
It is difficult to distinguish text from nearby graphics of a similar color or size. The system identifies irregular lines and components characteristic of Japanese characters and distinguishes them from other graphics.
To make it easier for producers to find the desired footage, we are working on a system that automatically generates metadata for video materials. Our exhibit illustrates how we can detect text from signboards and nameplates that happen to appear in videos.
Smart Production
Making it faster and easier to find the desired footage
Scene Text Detection to Automatically Generate Metadata for VideosC 1
Outline
Features
Future plans
Outline of the Studio Robot for Joint Performance with CG
[Figure: In the studio, the robot displays CG characters to help performers interact naturally and senses performers' motions so the CG characters can react. It also estimates the studio lighting conditions and applies them to the CG, so the compositing device can blend live action and CG naturally even as the studio lighting changes.]
We plan to improve our light-sensing and motion-sensing technologies to fully integrate live action and CG imagery for richer and more creative video production.
● Various sensors add reality to video compositing
Our robot is equipped with omnidirectional light sensors so that the lighting conditions in the studio can be applied to the CG images. It can also move around and "react" to performers by sensing their motions, resulting in more natural live video compositing.
● Bringing live performers and CG characters closer
Instead of acting opposite an invisible counterpart, performers can now look at the CG displayed on the robot-mounted monitor and engage in natural, lively interaction in real time.
Our new studio robot helps improve productions featuring live performers and CG (computer graphics) characters. It has a light sensor and a motion sensor, enabling more integrated video compositing with lively interaction between performers and CG characters.
Smart Production
Enabling natural interaction between performers and computer graphics characters
Studio Robot for Joint Performance with CGC 2
Outline
Features
Future plans
How Sign Language Animation is Generated using Weather Codes
[Figure: Incoming weather codes (the codes and templates shown are explanatory models), such as an XML message listing the area "Tokyo" with a "Storm Warning" and a "High-wave Warning", fill the parameter gaps in a sign language template, "This is weather information for ( ). A ( ) and ( ) have been issued.", producing the generated sign language message and the auto-generated sign language animation.]
We will trial our service online and seek wider feedback to improve the overall quality of our sign language animation.
●This research is partly being conducted with Kogakuin University.
● Automatically generating sign language animation using weather codes
With the help of deaf volunteers and sign language interpreters, we have created templates corresponding to the standardized weather codes distributed by the Japan Meteorological Agency. The system analyzes incoming codes and inserts the relevant words into a template to generate the appropriate sign language animation.
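The template-fill step can be sketched with an explanatory XML model like the one in the exhibit figure (the real JMA feed, code tables, and template wording differ; everything below is illustrative). In the actual system each filled slot would drive a pre-built sign-language motion rather than text.

```python
# Sketch of filling a sign language template from explanatory weather
# codes. XML structure and template are modeled on the exhibit figure,
# not on the real JMA format.
import xml.etree.ElementTree as ET

XML = """<Report>
  <Area><Name>Tokyo</Name><Code>4410</Code></Area>
  <Item>
    <Kind><Name>Storm Warning</Name><Code>05</Code></Kind>
    <Kind><Name>High-wave Warning</Name><Code>07</Code></Kind>
  </Item>
</Report>"""

TEMPLATE = ("This is weather information for ({area}). "
            "A ({w1}) and ({w2}) have been issued.")

root = ET.fromstring(XML)
area = root.findtext("Area/Name")
kinds = [k.findtext("Name") for k in root.findall("Item/Kind")]
message = TEMPLATE.format(area=area, w1=kinds[0], w2=kinds[1])
print(message)
# This is weather information for (Tokyo). A (Storm Warning) and (High-wave Warning) have been issued.
```

Because the codes are standardized, a fixed code-to-sign dictionary plus a small set of templates covers the whole message space without per-message human translation.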
● Compatible with weather warnings
We have increased the speed of data processing so that urgent weather warnings, in addition to regular weather information, can be presented with sign language animation. We can now also present local warnings at the municipal level.
We are developing an automated sign language animation generator that provides weather warnings based on XML codes distributed by the Japan Meteorological Agency. In addition to regular weather forecasts, our system automatically picks up updates of local weather warnings and generates the corresponding sign language.
Smart Production
Providing safety information through sign language services
Automatic Sign Language Animation System to Express Weather WarningsC 3
Outline
Features
Future plans
How information is supplemented to help understand Japanese news texts
[Figure: An original Japanese news sentence, "The atmosphere around southern Kyushu is extremely unstable," is machine-translated in full into easy Japanese ("The air around southern Kyushu is very changeable") and into Korean. Translation knowledge is learned automatically from parallel news texts pairing Japanese news with "easy Japanese" news and with Korean news, and the results are presented as supplementary annotations on the original news text.]
We will continue our work on improving the quality of translation, explore the optimal word and phrase units for adding supplementary information, and expand the system to English and other languages.
● Supplementing information in context
Instead of replacing words mechanically, the system translates the original Japanese text in full into easier Japanese or Korean and then chooses the most appropriate word in each context.
● Providing news-oriented translation
The system uses statistical machine translation and is especially suited to news because it builds on knowledge from previous translations of news into easy Japanese and Korean.
To help non-Japanese viewers and language learners understand our news, we are working on a system that automatically supplements our news texts with simpler Japanese expressions or vocabulary translated into other languages, such as Korean.
Smart Production
Providing supplementary information to help understand Japanese news
News Service with Reading AssistanceC 4
Outline
Features
Future plans
How the Haptic Presentation Device Conveys the Shapes of 3D Objects
[Figure: Three stimulation points on each fingertip signal the location of the surface of a computer-generated target object.]
We will continue our R&D to improve the tactile presentation method and develop a more portable device that will lead to a true tactile television for everyone to enjoy.
●This research is partly being conducted with the University of Tokyo.
● Presenting multiple stimulation points to multiple fingers
The device presents the shape of a three-dimensional object as three stimulation points each on the skin of the thumb and two fingers.
● Recreating the sense of "grasping" an object
The device estimates the location and angle of the hand as it tries to "grasp" an object and controls the stimulation so that the fingertips are always "on" the surface, virtually recreating its shape.
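Keeping a fingertip "on" the surface amounts to projecting its tracked position onto the nearest point of the virtual object. The sketch below does this for a sphere; the geometry and the spherical object are illustrative assumptions, not the device's actual control algorithm.

```python
# Sketch: drive each stimulation point toward the nearest point on the
# virtual object's surface (here, a sphere). Illustrative geometry only.
import math

def nearest_on_sphere(p, center, radius):
    """Project a tracked fingertip position p onto the sphere surface."""
    v = [pi - ci for pi, ci in zip(p, center)]
    d = math.sqrt(sum(x * x for x in v)) or 1e-9  # avoid dividing by zero
    return [ci + radius * x / d for ci, x in zip(center, v)]

center, radius = (0.0, 0.0, 0.0), 1.0
fingertip = (2.0, 0.0, 0.0)  # hand approaching from the +x direction
contact = nearest_on_sphere(fingertip, center, radius)
print(contact)  # [1.0, 0.0, 0.0]: stimulate as if touching the surface here
```

Repeating this for all three stimulation points on each of the three digits lets the hand feel a continuous surface it can "grasp".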
To make "tactile," or touchable, television a reality, we are continuing our research on technology to convey the shapes and firmness of physical objects such as works of art. Our newest development recreates the shapes of objects virtually by letting the user "touch" computer-generated images with their thumb and two fingers.
Smart Production
Making "Tactile TV" a reality
Haptic Presentation Technology for Conveying 3D Shapes of ObjectsC 5
Outline
Features
Future plans
Video Processing Flow Chart of Multi-viewpoint Video with 3DCG
[Figure: Multi-viewpoint cameras supply multi-viewpoint video. Camera calibration yields camera parameters (position, orientation, focal length, etc.), object tracking produces analysis data, and three-dimensional CG compositing combines the multi-viewpoint video and CG into the output video.]
We will improve each of the fundamental technologies and examine how well the system works in real sporting events, with the goal of making it available for the 2020 Olympic and Paralympic Games in Tokyo.
※ 1 Multi-viewpoint video: Video showing an object from multiple directions, obtained by placing a number of cameras around the object.
※ 2 Camera calibration: The process of estimating the geometric and optical parameters of an actual camera, such as its position, orientation, focal length, and lens distortion. This reduces errors when compositing CG over video.
● Calibration technique for multiple pan-tilt cameras
To create high-quality multi-viewpoint video with accurate CG compositing, we propose an accurate calibration technique that analyzes images of multiple calibration patterns captured by multiple cameras. We have also developed a calibration method that makes it easy to obtain the positions and orientations of multiple cameras deployed for shooting in the field.
● High-precision object tracking using multi-viewpoint video
We can track objects robustly under almost any conditions by using the positional relationships of cameras along with 3D motion prediction to calculate the precise position of a ball or a player in motion. We can then superimpose data such as speed or trajectory onto the video by analyzing the object tracking data.
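The 3D motion prediction used in tracking can be illustrated with a minimal sketch: predict the next 3D position from recent motion, then accept only the candidate detection closest to that prediction. The functions, coordinates, and gating radius below are all invented for illustration; the exhibit's actual algorithm is not described in this detail.

```python
# Hypothetical sketch: constant-velocity 3D motion prediction for ball tracking.
# Predict the next 3D position from the two most recent ones, then pick the
# candidate detection closest to the prediction (gating out distant outliers).
import math

def predict_next(p_prev, p_curr):
    """Constant-velocity prediction: p_next = p_curr + (p_curr - p_prev)."""
    return tuple(c + (c - p) for p, c in zip(p_prev, p_curr))

def associate(prediction, candidates, gate=5.0):
    """Return the candidate nearest to the prediction, or None if all lie
    outside the gating radius (treated as misdetections)."""
    best, best_d = None, gate
    for cand in candidates:
        d = math.dist(prediction, cand)
        if d < best_d:
            best, best_d = cand, d
    return best

# A ball moving +1 m per frame along x; the next frame offers two detections,
# one plausible and one far-off outlier.
track = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
pred = predict_next(track[-2], track[-1])          # → (2.0, 0.0, 2.0)
chosen = associate(pred, [(2.1, 0.1, 2.0), (9.0, 4.0, 0.0)])
track.append(chosen)
```

Combining such prediction with 2D image analysis in each camera is what lets the system keep a lock on a fast ball even when single-frame detections are noisy.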
We are working on new technology combining multi-viewpoint video※1 with computer graphics to provide an entirely new sports viewing experience. The exhibit shows camera calibration※2 technology that obtains information such as the positions and orientations of multiple cameras in real time, and object tracking technology that uses both 2D image analysis and 3D motion prediction.
Smart Production
Enhanced multi-viewpoint video effects for more comprehensible sports scene review
Three-dimensional Information Analysis for Live Sports Graphics C 6
Augmented TV T 3
Look at the TV screen through our special tablet with a camera and experience a new video presentation - a CG character jumping out of the TV! Enjoy the 3D world through the tablet.
Experience Zone
C-P1 Expanding sign language services
Advanced CG Sign Language Technology
We are continuing our R&D to improve and expand computer-generated sign language applications. Our achievements include new technology that makes sign language animation flow more naturally from one word to another. We have also created a system to display earthquake alerts in sign language on tablets and Hybridcast-enabled television sets.
Smart Production
Outline
Features
Future plans
[Figure: Left, 3D capturing without a lens array: a camera on a translation stage captures images of objects from various angles and is capable of capturing a wider area. Right, 3D display technology using multiple projectors: multiple projectors, an optical control lens, and a lens array produce integral 3D images with better-than-8K resolution and an improved resolution and viewing area]
● 3D Capturing Technology without a Lens Array
Capturing integral 3D images involves large lens arrays with numerous tiny lenses, but we have prototyped a device that uses a single camera on a translation stage. We are now able to capture a wider area without being limited by the size of the lens array.
● 3D Display Technology using Multiple Projectors
Generating high-quality integral 3D images requires many more pixels than an 8K display device provides. Our new technology integrates multiple high-resolution projectors over a lens array, resulting in higher resolution and a wider viewing area.
● More Natural Depiction of Depth (Poster Exhibition D-P1)
We are studying how we can better express a wide area with an integral 3D display. Our poster explains how viewers evaluate depth-compressed integral 3D images.
We will continue our research on both capturing and displaying technologies and improve the quality of 3D images. Our goal is to create a practical 3D television.
As part of our research on integral 3D technology that will allow viewers to see convincing three-dimensional images without special glasses, we have created an improved 3D display device by combining multiple projectors. We have also developed a new technology to capture 3D images of objects positioned over a wider area.
3D Television
D 1 Improving resolution and viewing area for three-dimensional images
Integral 3D Television
Outline
Features
Future plans
[Figure: Basic principles of the spin-SLM and the optical beam steering device. The spatial light modulator driven by spin-transfer switching controls incident light with sub-micron size magnets (spins), producing interference fringes at a high speed and enabling wide-viewing-zone electro-holography of 3D images. The optical beam steering device steers a light beam through the phase differences of the beam at a high speed without the use of lens arrays, improving the viewing zone and resolution of lensless integral 3D displays]
● Spatial Light Modulator driven by spin-transfer switching (spin-SLM)
We have established a low-current device technology that can be applied to an active-matrix-driven※2, narrow-pixel-pitch (NPP) spin-SLM. The NPP is expected to broaden the viewing zone of holographic 3D images.
● Optical Beam Steering Device (Poster Exhibit D-P2)
Our new optical beam steering device comprises multi-channel optical waveguides. We can control the direction and shape of light beams by altering the voltage applied to the phase shifter in each channel. We hope to apply this technology to integral 3D television without lens arrays.
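Steering a beam through per-channel phase differences follows the general phased-array principle: a linear phase ramp across the channels tilts the emitted wavefront. The sketch below is illustrative only; the wavelength, channel pitch, and channel count are assumptions, not the device's actual parameters.

```python
# Illustrative sketch of the phased-array principle behind optical beam
# steering: applying a linear phase ramp across equally spaced emitters
# concentrates the far-field power at the target angle.
import cmath
import math

def far_field_intensity(phases, d, lam, theta):
    """Sum the complex fields of equally spaced emitters observed at angle theta."""
    k = 2 * math.pi / lam
    field = sum(cmath.exp(1j * (phases[n] + k * n * d * math.sin(theta)))
                for n in range(len(phases)))
    return abs(field) ** 2

lam = 1.55e-6          # wavelength (assumed: 1550 nm)
d = 2.0e-6             # channel pitch (assumed)
n_ch = 16              # number of waveguide channels (assumed)
target = math.radians(5.0)

# Phase ramp chosen to cancel the path differences toward the target angle,
# so all channels add in phase there.
phases = [-2 * math.pi / lam * n * d * math.sin(target) for n in range(n_ch)]

on_target = far_field_intensity(phases, d, lam, target)        # coherent sum
off_target = far_field_intensity(phases, d, lam, 0.0)          # residual
```

At the steered angle the fields add coherently (intensity n_ch²), while other directions receive far less power; changing the per-channel phase shifts redirects the beam without any moving parts or lens arrays.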
We will next focus on refining the micro-process fabrication technology and improving the performance of each device in order to create ultrahigh-density devices with narrow-pixel-pitch, ultrahigh-resolution features.
● The research on spin-SLM has been partly supported by the National Institute of Information and Communications Technology of Japan (NICT) under the project "R&D of Ultra-realistic Communication Technology through Innovative 3D Image Technology," jointly conducted with Nagaoka University of Technology.
● The research on the optical beam steering device is being jointly conducted with NICT.
※ 1 Spatial light modulator driven by spin transfer switching: a device that modulates the spatial distribution of light by using electric current to control the direction of submicron size magnets arranged in a two-dimensional array.
※ 2 Active matrix driving: a method of applying a voltage to selected pixels in an array of pixels. Each pixel has a switching element such as a transistor.
We are working on creating three-dimensional displays that viewers can enjoy without special glasses. Our exhibit introduces two types of devices: the spatial light modulator driven by spin-transfer switching※1 (spin-SLM), which enables a wider viewing zone in holographic displays, and the optical beam steering device for lensless integral displays.
3D Television
Developing holographic and lensless integral 3D displays
Device Technologies for Future 3D Display D 2
Let’s Move and Watch! Integral 3D Quiz T 4
Integral 3D television allows viewers to view natural 3D images without using special glasses. Here's a video quiz making use of a major feature of integral 3D television, the ability to change the 3D image according to the viewer's viewing position. You can't see the answer from the front, but you can see it by changing your position.
Experience Zone
D-P1
D-P2
A more natural depth expression in integral 3D imaging
Developing lensless integral 3D display
Depth-compressed Expression Technology
Optical Beam Steering Device
Integral 3D imaging has technical challenges in displaying depth information with high quality, as the depth expression range is theoretically bounded by the physical properties of display technologies. Here, we are investigating a method to express a large space within a narrower depth range without appearing unnatural, taking advantage of the characteristics of human perception. We show the subjective evaluation results of depth-compressed 3D images.
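As a rough illustration of the idea (the exhibit's actual compression method is not specified here), a depth-compressed expression can be sketched as a smooth monotonic mapping that squeezes unbounded scene depth into the display's limited reproducible range. The function name, depth budget, and curve are all assumptions for illustration.

```python
# Hypothetical sketch of depth compression for an integral 3D display:
# map signed scene depth (relative to the screen plane) into the display's
# limited reproducible depth range with a smooth tanh curve. The mapping
# preserves depth order and keeps the most detail near the screen plane,
# where viewers are most sensitive.
import math

def compress_depth(z_scene, z_max_display=0.05, z_scale=1.0):
    """z_scene: signed depth from the screen plane in meters.
    Output stays within ±z_max_display (assumed 5 cm here)."""
    return z_max_display * math.tanh(z_scene / z_scale)

# A scene spanning 20 m of depth fits into a 10 cm display depth budget.
depths = [-10.0, -1.0, 0.0, 1.0, 10.0]
compressed = [compress_depth(z) for z in depths]
```

Whether such a nonlinear squeeze looks natural is exactly what the subjective evaluations of depth-compressed images measure.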
In order to realize an integral 3D display that does not require lens arrays, we are developing an optical beam steering device that can fully control light beams. Our poster illustrates the operating principles of the optical beam steering device and the basic properties of controlling the direction and shape of light beams.
3D Television
Outline
Features
Future plans
[Figure: Left, back-illuminated small-size image sensor: a back-illuminated pixel structure (incident light reaches the photodiode without passing through the wiring) stacked on an A/D conversion circuit and a signal processing wafer, giving high sensitivity and a high frame rate. Right, pixel-parallel processing three-dimensional integrated imaging device: a signal processing circuit sits beneath the photodetector of every pixel, so the signal from every pixel is processed simultaneously and a high frame rate is possible even with increased pixels]
We will continue to pursue a wide range of research, from fundamental technology to device development, in an attempt to dramatically improve broadcast cameras.
● The research on back-illuminated small-size image sensors is being jointly conducted with Shizuoka University.
● The research on pixel-parallel processing three-dimensional integrated imaging devices is being jointly conducted with the University of Tokyo.
● The research on organic image sensors is being jointly conducted with Kochi University of Technology.
● Back-illuminated small-size image sensor
We have prototyped a novel image sensor for 8K Super Hi-Vision with 33 million pixels of size 1.1 μm. The sensor size is equivalent to 2/3 inch in optical format, and the back-illuminated pixel structure contributes to its sensitivity and speed. Newly developed high-speed A/D conversion circuits are integrated, enabling a high frame rate of 240 Hz.
● Pixel-parallel processing three-dimensional integrated imaging device
The signal processing circuit for each pixel is arranged just beneath the photodetector, allowing every pixel to output simultaneously. This means we can increase the pixel count and still achieve a high frame rate. We have demonstrated its validity with our latest 128 × 96-pixel prototype.
● Solid-state image sensor overlaid with photoelectric conversion layer and organic image sensor (Poster Exhibits E-P1, P2)
Photodetectors convert light into electrical signals and are usually made from silicon. We believe that substituting silicon with selenium, a compound semiconductor, or an organic material will improve the sensitivity or color separation performance of image sensors. The poster exhibits explain our latest achievements.
We are refining our imaging devices to improve current 8K Super Hi-Vision cameras and help develop future three-dimensional cameras. This exhibit introduces our latest achievements in imaging devices.
Future Devices
E 1 Revolutionizing broadcast cameras
Future Image Sensor Technologies
Outline
Features
Future plans
[Figure: Elemental technologies for sheet-type displays. Each pixel combines a solution-processed TFT suitable for larger displays, an inverted OLED enabling a longer lifetime, and a pixel circuit; the driving circuit controls the duration of light emission for better image quality and a longer lifetime]
We will continue to enhance our OLED technology to reduce power consumption while boosting lifetime. We will also work on improving solution-processed oxide TFTs and driving technologies.
●The research on inverted OLEDs is jointly being conducted with Nippon Shokubai Co., Ltd.
※ 1 TFT: Thin-Film Transistor
※ 2 OLED: Organic Light-Emitting Device. Uses electroluminescence, which is the emission of light in response to electric current.
In order to create lightweight, ultrathin 8K displays, we are working on elemental technologies to extend their lifetime, increase the screen size, and improve video image quality.
● Solution-processed oxide TFT※1 technology for larger displays (Poster Exhibit E-P3)
We have developed a novel technology for fabricating solution-processed oxide TFTs at a low temperature, making them suitable for use on film substrates and for creating larger displays. Applying this technology to pixel circuit formation is expected to enable cost-effective fabrication of large-area sheet-type displays.
● Inverted OLED※2 for longer lifetime (Poster Exhibit E-P4)
We have created a new device called an inverted OLED, which has an inverted structure compared with a conventional OLED. Using a material that is less vulnerable to oxygen and moisture is expected to help extend the lifetime of sheet-type displays.
● Controlling light emission to improve image quality and lifetime (Poster Exhibit E-P5)
To solve the problem of motion blur on OLED displays, we have developed a driving technology that controls the light emission period line by line. Longer lifetime as well as an improvement in video quality can be expected as a result of suppressing instantaneous luminance changes.
Future Devices
Toward the realization of large-area and lightweight 8K displays
Elemental Technologies for Sheet-type Displays E 2
Outline
Features
Future plans
[Figure: Basic mechanism of a magnetic nanowire memory. A recording head driven by a pulse current generator forms magnetic domains in the nanowire (recording); pulsed current along the wire supplies a spin-polarized electron flow that pushes the recorded data, a queue of magnetic domains such as "11100000010001111100…", forward (information storage); and a reproducing head detects the recorded domains and outputs the data (reproduction)]
In order to realize a high-speed magnetic nanowire memory, we will investigate the best magnetic material to drive magnetic domains at a high speed, and upgrade the high-speed prototype performance evaluation tool utilizing gigahertz band signal processors.
※ 1 Magnetic nanowires are ferromagnetic materials formed into wires approximately 100 nm wide, 10-40 nm thick, and 10-100 μm long.
※ 2 A magnetic domain is a nanoscopic region in a magnetic material where the magnetic moments are aligned in one direction. It is a unit where data are stored.
● Potential for a high-speed memory device without mechanically moving parts
The basic structure of a magnetic nanowire memory is similar to placing a pair of recording and reproduction heads on every data track in a hard disk medium, but with the nanowire acting as the data track. Magnetic domains※2 in a nanowire move forward at a high speed when a spin-polarized electron flow is supplied along the nanowire. If we can activate the same mechanism in every magnetic nanowire simultaneously, we will be able to create a high-speed memory device without any mechanically moving parts.
● Evaluation of recording, storage, and reproduction performance
We have prototyped a device to evaluate the recording and reproduction performance of magnetic nanowire memories. We carefully attached a pair of recording and reproduction heads from conventional hard disk drives onto a magnetic nanowire and applied pulsed currents along the length of the nanowire. This demonstrated how the recording head forms magnetic domains (recording), which are then pushed forward at a high speed by the electron flow (information storage) and are finally detected at the reproduction head (reproduction).
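Conceptually, the record-shift-reproduce cycle described above behaves like a shift register: each current pulse advances the whole queue of domains by one position. The sketch below models only that data flow, not the device physics; the class and parameter names are invented for illustration.

```python
# Conceptual sketch (not the device physics): a magnetic nanowire memory as
# a shift register. The recording head writes a magnetic domain (one bit) at
# one end, each current pulse pushes all domains one position forward, and
# the reproducing head reads the bit that arrives at the far end.
from collections import deque

class NanowireMemory:
    def __init__(self, length):
        # Each slot is one domain position along the wire (assumed length).
        self.wire = deque([0] * length, maxlen=length)

    def record_and_shift(self, bit):
        """One current pulse: write a new domain and advance the queue."""
        self.wire.appendleft(bit)   # maxlen drops the bit past the far end

    def reproduce(self):
        """Bit currently under the reproducing head (far end of the wire)."""
        return self.wire[-1]

wire = NanowireMemory(length=8)
data_in = [1, 1, 1, 0, 0, 0, 1, 0]
for b in data_in:
    wire.record_and_shift(b)
# After 8 pulses the first recorded bit has reached the reproducing head.
assert wire.reproduce() == data_in[0]
```

Because every nanowire could run this cycle in parallel, the architecture promises high aggregate throughput with no mechanically moving parts.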
We are proposing a magnetic nanowire※1 memory architecture for a future high-speed recording device. On exhibit is our prototype performance evaluation tool. We explain how magnetic nanowires are used to record, store, and reproduce information.
Future Devices
Developing a high-speed recording device with no moving parts
Magnetic Nanowire Memory E 3
E-P1 Solid-state Image Sensor Overlaid with Photoelectric Conversion Layer: Developing high-sensitivity 8K Super Hi-Vision cameras
To improve the sensitivity of 8K cameras, we are working on solid-state image sensors that are overlaid with photoelectric conversion layers (low-voltage carrier multiplication films) that can multiply signal charges. We have made several improvements, including further flattening of the film surface and reducing the dark current.
E-P2 Organic Image Sensors: For compact, high-resolution, single-chip color cameras
We are developing an organic image sensor using transparent readout circuits overlaid with organic films that convert each of the three primary colors of light into electrical signals. The principles of organic image sensors and the latest achievements in our research are illustrated in our poster exhibit.
E-P3 Solution-processed Oxide TFTs (Thin-film Transistors): Simple and easy fabrication technology of TFTs for large-area displays
The aim of our research on solution-processed oxide TFTs is to create large, ultrathin, sheet-type displays using an easier manufacturing process. We have succeeded in fabricating solution-processed oxide TFTs at a low temperature while improving their performance.
E-P4 Inverted OLED (Organic Light-emitting Diode): Creating an OLED that can better withstand oxygen and moisture
Film substrates tend to allow oxygen and moisture to pass through them. We are researching the use of an inverted OLED to extend the lifetime of displays even on a film substrate and create large-scale sheet-type displays. Come and see how our inverted OLED works and how we have improved its properties.
E-P5 Driving Technology for Enhanced Video Image Quality and Longer Lifetime: Reducing motion blur in OLED displays
We have developed a panel-driving technology to improve both the video image quality and the lifetime of sheet-type OLED displays. By controlling the light emission periods line by line according to picture patterns, it reduces motion blur and extends the lifetime.
Future Devices
Outline
Features
For details or a consultation on NHK’s patents and other areas of expertise, contact
8K Super Hi-Vision applications in the medical field
--8K endoscopic camera; its testing on animals--
Tactile presentation system with finger-guiding device
NHK Engineering System, Inc.
1-10-11 Kinuta, Setagaya-ku, Tokyo 157-8540
TEL (03) 5494-2400 FAX (03) 5494-2152
URL: http://www.nes.or.jp/en/
● 8K Super Hi-Vision technologies for wider application
・8K Super Hi-Vision application to revolutionize medical technology
・8K computer system applied for wider application
● R&D on advanced content production and human-friendly broadcast services
・Multifunctional virtual studio system for high-end video compositing in a scaled-down framework
・Compact 4K camera system for deep sea shooting (operable at 1,000 m below sea level)
・A practical proprioception-tactile display system to convey graphical information to the visually challenged
● Licensing and technology transfer from NHK to the world
NHK’s technologies are available for both professional and consumer use. We are willing to offer consultations regarding licensing and technology transfer.
NHK Engineering System, Inc., promotes NHK’s patents and other technical expertise and engages in R&D aimed at sharing the benefits of broadcast technologies with the general public. Our exhibit includes some of NHK’s patented transferable technologies and ongoing research available for wider use.
NHK Engineering System
Serving society by making NHK’s technologies available for wider use
Utilization and Development of NHK’s TechnologiesF 1
Outline
Features
Future plans
[Figure: Smartphone app screens showing the closed-caption system, time shift/data linkage, fast reverse of a program, VOD viewing, TV linkage (the program being watched is displayed on TV), SNS linkage (sharing programs on SNS), and the home screen with program information and sample captions]
● Closed-caption linkage and time-shift playback
Closed captions of the program being watched are displayed chronologically via the Internet so that viewers can check any captions they have missed. Viewers can also fast reverse to a certain scene for replay by selecting the corresponding closed caption. Video on demand (VOD) playback is also available from menus displayed on the same screen.
● SNS linkage
Viewers can share a program scene that they liked and play it as a video clip on SNS such as Twitter.
● TV linkage
Viewers can display on the TV screen a program or VOD that they are watching on a smartphone and play it on TV from where they stopped on the smartphone.
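The caption-based time-shift playback described above amounts to treating each closed caption as a bookmark: every caption carries the presentation time of its scene, so tapping a caption becomes a seek. The sketch below is a hypothetical client-side illustration; the caption data and function names are invented, not the actual app's API.

```python
# Hypothetical client-side sketch of caption-based time-shift playback:
# each closed caption carries the presentation time of its scene, so
# selecting a caption becomes a seek to that timestamp.
import bisect

captions = [                     # (seconds from program start, caption text)
    (0.0,  "Good evening, here is today's news."),
    (12.5, "First, the weather across the country."),
    (47.0, "Heavy rain is expected in the west."),
    (95.0, "Next, this year's broadcast technology expo."),
]
times = [t for t, _ in captions]

def seek_position(selected_index):
    """Playback position for the caption the viewer tapped (fast reverse)."""
    return captions[selected_index][0]

def caption_at(playback_time):
    """Caption on screen at a given playback time (chronological display)."""
    i = bisect.bisect_right(times, playback_time) - 1
    return captions[max(i, 0)][1]
```

For example, tapping the third caption seeks playback to 47.0 s, and the chronological caption list can highlight whichever entry `caption_at` returns for the current position.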
We will continue to develop smartphone services such as closed captioning for future simultaneous online broadcasting※ and interaction functions with TV and SNS.
●"Smartphones fi rst" is jointly being developed with NHK Media Technology, Inc.
※ Simultaneous online broadcasting: A service to deliver a program on the Internet simultaneously with TV broadcasting.
Live streaming service for smartphones
- Smartphones first era -
More and more people are using smartphones as their main device for viewing video. In preparation for new services in the "smartphones first" era, in which viewers first watch programs on their smartphones, we have developed applications that realize program interactions with closed captioning, time-shift playback, SNS, and TV.
Engineering Administration Department
"Smartphones fi rst" era
Live Streaming Service for Smartphones F 2
Outline
Features
[Figure: Receiver diagram of Super Hi-Vision broadcasting (broadcast station → satellite broadcasting → TV system for 8K with receiver) and a roadmap toward widespread use: test satellite broadcasting in 2016, satellite broadcasting to the public in 2018, and widespread use in 2020]
●This exhibit is presented by NHK's Engineering Administration Department.
● For the start of Super Hi-Vision test broadcasting
The test satellite broadcasting of Super Hi-Vision will start on August 1st. Here we use an actual receiver to demonstrate how you can receive 8K Super Hi-Vision at home.
● For the widespread use of Super Hi-Vision broadcasting
We present schedules and NHK's efforts toward broadcasting to the public beginning in 2018 and widespread broadcasting in 2020, the year of the Tokyo Olympic and Paralympic Games.
The test satellite broadcasting of Super Hi-Vision will start on August 1. This exhibit displays the reception equipment for Super Hi-Vision to show you how the service is received at home and to introduce our efforts towards achieving widespread use of the service. We are also open to any questions that you may have regarding digital broadcasting.
Engineering Administration Department
Envisioning the start of test satellite broadcasting and its widespread use
The Time Has Come to Launch Super Hi-Vision Broadcasting F 3
Outline
Features
Home receiver for satellite broadcasting equipped with a compact parabolic antenna
Wide display with 1,125 scanning lines (combining three cathode-ray tubes)
Radio receiver compatible with emergency warning broadcasting
※ IEEE (The Institute of Electrical and Electronics Engineers): The world's largest professional association for the fields of electrical engineering, electronics, computer science and telecommunications, with more than 430,000 members in more than 160 countries.
● World's first direct satellite broadcasting services for homes
NHK began research on the main unit of a broadcasting satellite and a home receiver in 1966 and started the world's first direct satellite broadcasting in 1984. This has enabled TV broadcasts to be received at homes throughout Japan, even in mountainous regions and remote islands, and established a foundation for the satellite broadcasting services currently used by people around the world.
● Hi-Vision, the leader of world broadcasting technologies
We began basic research on a high-quality television system in 1964 and conducted a wide range of R&D, from psychophysical experiments, including investigating the relationship between the angle of view and the sense of reality, to system development. In 1989, we started the world's first Hi-Vision broadcasting trials, which formed the basis of the broadcasting standard of 1,125 scanning lines with a 16:9 aspect ratio. With the system of 1,125 scanning lines adopted as a unified worldwide studio standard in 2000, Hi-Vision has become widespread throughout the world.
● Emergency warning broadcasting system to ensure safety and security
In 1985, we introduced emergency warning broadcasting, which automatically turns on televisions or radios and provides information to the public in the event of a large-scale earthquake or tsunami. Emergency warning broadcasting has been integrated into the technical standards of international satellite and terrestrial broadcasting during digital TV standardization. It is still in use today, supporting disaster broadcasting.
IEEE※ milestones recognize significant international technical achievements that were put into practice at least 25 years ago in the fields of electrical and electronic technology. In addition to the milestone for direct satellite broadcasting services awarded in 2011, this exhibit presents our recent awards for Hi-Vision and the emergency warning broadcasting system.
NHK Museum of Broadcasting
Milestones for significant broadcasting services
IEEE Milestones Awarded for NHK’s Technical Achievements F 4
Expectations of Future Broadcasting Boosted by Video, Media, and Technology
Kiyoharu AIZAWA, Ph.DProfessor, The University of Tokyo
How people choose between television and Internet videoThe use of video services ––now and the future
Maki SHIGEMORIHead of Public Opinion Research Division, NHK Broadcasting Culture Research Institute
The greatest mission of broadcasting is to produce and deliver socially relevant content to its audience. However, in light of
recent developments in media technology and the media environment, the ability of broadcasting to deliver video contents to
the wider public seems to be at a turning point. Broadcast technology has constantly pursued better quality, higher resolution,
and a greater sense of reality in video. 340,000-pixel standard-definition video grew into 2-megapixel Hi-Vision, and now we
are ready to welcome Super Hi-Vision boasting 33 megapixels. Broadcasting is a stable and secure way to deliver video once a
system is up and running, but the technology and media landscape are evolving faster than ever.
Smartphones may well be the single greatest influence on how people gather and consume information. They are now much
more accessible to average users than televisions and laptops. They serve both as displays for streaming and watching digital
content and as cameras for capturing photos and videos.
Social media including Facebook, Twitter, and Instagram started to introduce video to their services recently. Facebook
reported that their number of daily video views reached 8 billion as of November 2015. One can imagine the impact a video clip
can have when it is “liked” and shared on their platform, and now we are seeing a new type of media emerging called “distributed
media” that uses these social platforms to disseminate content.
With media technology advancing at a fast pace, what do we foresee for the future of broadcasting? Various technologies
that had hitherto developed independently are gaining greater presence in core areas of broadcasting technology, such as
imaging, transmission, and displays. Looking ahead, one example is the potential of VR (virtual reality) to become the next
mode of display.
This talk aims to examine how broadcasting may evolve in these changing times and what it needs to focus on as its essential
values.
For young people today, where video contents come from is irrelevant, i.e., whether they are televised or distributed
via the Internet. They find images to their liking skillfully and effortlessly. This behavior has been enabled by advances in
the video viewing environment including the capability of Internet transmission and the capacity of hard disk drives for
recording the contents as well as the spread of the video distribution market.
In particular, the Internet is awash with different genres of videos of various lengths, ranging from movies and TV
programs produced by professional companies to so-called UGCs (user-generated contents) posted by ordinary users.
Regarding paid contents, overseas OTT (over-the-top) service providers such as Netflix and Amazon are entering the
Japanese market one after another, resulting in an increase in the number of original contents. Domestic broadcasters and
other business operators are also launching content delivery services, contributing to the rapid improvement of the viewing
environment for both real-time contents and VOD (video on demand). This is the background of the NHK Broadcasting
Culture Research Institute’s survey, “The Japanese and Television - 2015,” which revealed that not only is people’s TV
viewing time becoming shorter but also their interest in television is waning.
How, then, do Japanese people obtain and enjoy video contents? The Institute has been regularly conducting social
surveys about television and viewers. This talk examines the changes over time in the viewing status of television and online
videos, and in people’s attitudes to television and TV programs, based on survey data. It introduces the video viewing
behavior in detail from the results of a web survey, such as under what circumstances people watch videos. A qualitative
research also clarifies how young people come in contact with web contents, including the difference in the circumstances
under which television and the Internet are chosen. These findings will help predict future trends of video viewing.
10:20 am~ 10:50 am
10:50 am~ 11:20 am
Lecture
SpecialPresentation
Open House 2016 Lectures/Special Presentations Overview
at NHK STRL Auditorium (Admission Free)
5/26(Thu)
※ Japanese language only
NHK STRL is conducting research into integral 3D television as a new form of broadcasting media beyond Super Hi-Vision.
Integral 3D television obtains light rays from a 3D object and reproduces its optical image by using a lens array of
many microlenses for both capture and display. It does not require special glasses because the reproduced optical image
has horizontal and vertical parallax and allows observers to watch the optical image according to their viewing position in any
posture within a viewing zone.
Integral 3D television, which reproduces perspectives of an object from all directions (top, bottom, left and right), requires
a much larger amount of information than 2D images. To address this need, we have been conducting R&D on a camera and
display with a high pixel count as well as a 3D imaging system using multiple cameras and displays. This report describes a
prototype 3D imaging system that we have developed. It also introduces our studies on the technology to generate 3D models
from multi-viewpoint images and the technology to convert 3D models to integral 3D images with the aim of producing diverse
3D video content.
11:30 am~ 11:50 am
11:50 am~ 12:10 pm
12:10 pm~ 12:30 pm
Research Presentation ❶
Research Presentation ❷
Research Presentation ❸
R&D on the Transmission System for Next-Generation Terrestrial Broadcasting
Madoka NAKAMURA (Advanced Transmission Systems Research Division)
Working Towards a New TV Experience Using the InternetChigusa YAMAMURA (Integrated Broadcast-Broadband Systems Research Division)
R&D on Integral 3D TelevisionMasato MIURA (Three-Dimensional Image Research Division)
Preparations are under way for the trial broadcasting of Super Hi-Vision by satellite in Japan in accordance with our
predetermined schedule. Moreover, studies to realize next-generation terrestrial broadcasting are planned. NHK STRL has been
conducting R&D on large-capacity transmission technologies to realize terrestrial Super Hi-Vision broadcasting and has successfully
carried out Super Hi-Vision video transmission experiments using higher-order modulation and dual-polarized MIMO technologies.
This report introduces an overview of research on next-generation terrestrial broadcasting, which incorporates new
technologies into ISDB-T (Integrated Services Digital Broadcasting-Terrestrial), which is the current terrestrial broadcasting
system. It explains the technologies that we are currently developing such as an advanced signal structure that can select the
transmission capacity according to the mode of use, a new forward error correction technology and a single-frequency network
technology to enable stable transmission even with higher-order modulation. It also introduces a new technology to receive Super
Hi-Vision with the same antenna as that used for current terrestrial broadcasting while reducing the required transmission capacity through a combination of next-generation video coding and transmission technologies. Finally, it discusses our research on achieving a
smooth migration from current terrestrial broadcasting to next-generation terrestrial broadcasting.
With the development of the Internet and the widespread use of smartphones, the media environment surrounding us and our
lifestyles are changing drastically. In order for television to continue to live up to people's expectations as a familiar and reliable
medium amid the diversification of information media, we believe it is necessary to redesign what TV ought to be in the Internet
and mobile era. NHK STRL aims to expand the range of TV-linked experiences to include viewing not just in front of the TV but
also outdoors and in various other scenes of our daily lives by combining the use of mobile devices with "Hybridcast," which is
a service platform for the convergence of broadcasting and telecommunications.
This report describes the concept of a "new TV experience," which brings new discoveries and value to everyday life by providing TV programs and related information to mobile users according to their circumstances. We also present two key
technologies to realize the concept. One technology is "media-unifying technology," which automatically selects the appropriate
video viewing method according to the user’s reception environment and the available broadcast/Internet channels for the
intended program. The other technology is "contents-matching technology for daily life," which offers users the topics and
information provided in TV programs in connection with their daily life.
Open House 2016 Research Presentation Overview
at NHK STRL Auditorium (Admission Free)
5/26(Thu)
※ Japanese language only
8K Super Hi-Vision Theater at NHK STRL Auditorium (Restricted to one viewing per entry)
Event dates
Look for rubber stamps of various characters while you enjoy the exhibits. Stamp them on the right answers!
Stamp-athon 10:00 am~4:30 pm
"Sciencer" and "Nosy's Inspiring Atelier" from NHK's educational programs will hold craft workshops. Please join in and enjoy the event with your family and friends.
※Please note each event has limited capacity.
Craft Workshop 10:00 am~4:30 pm
STRL researchers will guide you
through the exhibits.
Guided tours 10:20 am~3:30 pm
Scheduled 32 times a day. Duration: approx. 1 hr.
FIFA TV-NHK 8K Project
5/26(Thu) 1:00 pm~ 5:00 pm
5/27(Fri)~5/29(Sun) 10:00 am~5:00 pm
Evolution of 8K Super Hi-Vision with Sporting Events
ー Screening of Sports Highlights ー
The Rio Olympic Games will take place in August this
year. Since the debut of our 8K Super Hi-Vision camera
on the world’s sporting stage in the 2012 London Olympic
Games, 8K Super Hi-Vision has progressed hand in hand
with many sporting events including the Sochi Olympic
Games, FIFA World Cup and Wimbledon Championships.
Presented here is a digest of 8K sport content that NHK has
captured to date. Enjoy the sense of energy and enthusiasm
of sports through the 33-megapixel ultrahigh-definition
images with 22.2 multichannel sound.
Open House 2015
http://www.nhk.or.jp/strl/index-e.html
Access
Odakyu Line: Seijogakuenmae Station (south exit)
Tokyu Den-en-toshi Line: Yoga Station
By Odakyu / Tokyu bus: please get off at the "NHK Gijutsu Kenkyujo" bus stop.
Routes: Tokyu Bus 渋24 (to Shibuya Sta.), 等12 (to Seijogakuenmae Sta.), 用06 (to Yoga Sta., weekdays only / to Todoroki Soshajo), 都立01 (to Toritsudaigaku Sta. north exit)
※ Please come by train and bus.
NHK (Japan Broadcasting Corporation)Science & Technology Research Laboratories
1-10-1 Kinuta, Setagaya-ku Tokyo, 157-8510, Japan