Standard VENTURI presentation (10 min)
DESCRIPTION
VENTURI is a collaborative European project targeting the shortcomings of current Augmented Reality design, bringing together mobile platform manufacturers, technology providers, content creators, and researchers in the field. VENTURI aims to place engaging, innovative and useful mixed-reality experiences into the hands of ordinary people by co-evolving next-generation AR platforms and algorithms. VENTURI plans to create a seamless and optimal user experience through a thorough analysis and evolution of the AR technology chain, spanning device hardware capabilities to user satisfaction.
TRANSCRIPT
FP7-ICT-2011-1.5 Networked Media and Search Systems
End-to-end Immersive and Interactive Media Technologies
“creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way”
Oct ‘11 – Oct ‘14
Partners FBK - Italy
Fraunhofer HHI - Germany
ST-Microelectronics – Italy
Metaio - Germany
ST-Microelectronics and ST-Ericsson – France
e-Diam Sistemas - Spain
Sony Mobile - Sweden
INRIA - France
[Pie chart: consortium composition — 38% research institutions, 37% / 25% industry and SME]
Coverage: Pure Research → Applied Research → Lab Prototypes → Mobile Prototypes → Pre-product → Products → Market → End Users
Challenges
O AR-focused platform development
O Visual registration chain: user–device–world
O World modelling using consumer-level mobile devices
O Mobile contextual understanding
O Context sensitive content delivery
O User interactions with AR
AR Platform Evolution
O Multi-core CPU & GPU
O Hi-res single or dual cameras, plus a large set of sensors
O Smart power management policies
O Address AR requirements in development of platform SW framework and services (e.g. sensor fusion, video pipe optimized for AR use)
O Optimize the whole processing chain, using server-side resources (i.e. the cloud) when possible
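The "optimize the whole processing chain" point can be made concrete with a small sketch: a hypothetical placement policy that decides whether an AR task runs on-device or on the server side. All thresholds and parameter names here are illustrative assumptions, not VENTURI's actual policy.

```python
# Hypothetical local-vs-cloud placement policy for an AR processing task.
# Thresholds and field names are illustrative assumptions only.

def choose_placement(task_flops, battery_pct, rtt_ms, uplink_mbps):
    """Return 'local' or 'cloud' for a single processing task."""
    # Heavy tasks on a low battery are pushed to the server side.
    if battery_pct < 20 and task_flops > 1e9:
        return "cloud"
    # A slow or high-latency link makes offloading counterproductive for
    # interactive AR (results must come back within a frame budget).
    if rtt_ms > 100 or uplink_mbps < 1.0:
        return "local"
    # Otherwise offload only work that is expensive relative to the link.
    return "cloud" if task_flops > 5e9 else "local"

print(choose_placement(1e10, 80, 30, 20.0))  # heavy task, good link -> cloud
print(choose_placement(1e8, 80, 30, 20.0))   # light task -> local
```

A real smart power-management policy would of course fold in many more signals (thermal state, charging, per-task deadlines); the point is only that the decision sits in the platform layer, per the slide.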
Registration chain
O Match visual features with nearby photos to identify ‘tagged’ landmarks
O Match visual features to synthetic models of the World
O Locate text/logos/signs in the environment, then check against local geo-objects/events
O Use on-board sensors to guide image/audio processing algorithms
O Estimate user (body, face, hands) position with respect to the device
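The feature-matching step above can be sketched in a few lines: brute-force Hamming matching of binary (ORB/BRIEF-style) descriptors against a landmark's descriptors, with a ratio test to discard ambiguous matches. The toy 8-bit descriptors are made up for illustration; a real pipeline would extract 256-bit descriptors on device.

```python
# Minimal sketch of matching query features to a 'tagged' landmark's
# features using binary descriptors and Lowe's ratio test.
# Descriptors are toy 8-bit ints here, purely for illustration.

def hamming(a, b):
    """Bit-level distance between two binary descriptors."""
    return bin(a ^ b).count("1")

def match(query_desc, landmark_desc, ratio=0.75):
    """Return (query_idx, landmark_idx) pairs passing the ratio test."""
    matches = []
    for i, q in enumerate(query_desc):
        dists = sorted((hamming(q, d), j) for j, d in enumerate(landmark_desc))
        best, second = dists[0], dists[1]
        if best[0] < ratio * second[0]:  # clearly better than the runner-up
            matches.append((i, best[1]))
    return matches

# Query feature 0 is a near-duplicate of landmark descriptor 1;
# query feature 1 is ambiguous and should be rejected.
query = [0b10110010, 0b11000011]
landmarks = [0b01001101, 0b10110011, 0b11110000]
print(match(query, landmarks))  # → [(0, 1)]
```

The same distance-plus-ratio-test structure applies whether the reference descriptors come from nearby photos or from synthetic models of the World.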
World modelling
O Photogrammetric 3D reconstruction using mono/stereo cameras (including historical imagery)
O Structure from motion for modelling dynamic objects in the scene
O Planar surface identification for ad-hoc interactive surfaces
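Planar surface identification usually means finding the dominant plane in a reconstructed point cloud. A standard way to do that is a RANSAC loop; the sketch below is a minimal NumPy version with illustrative thresholds, not the project's actual implementation.

```python
# RANSAC plane fit: find the dominant plane n·x + d = 0 in a 3D cloud.
# Thresholds and iteration count are illustrative assumptions.
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.02, seed=0):
    """Return (normal, d, inlier_mask) for the best-supported plane."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(n_iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:              # degenerate (collinear) sample
            continue
        n = n / norm
        d = -n.dot(p0)
        dist = np.abs(points @ n + d)
        inliers = dist < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane[0], best_plane[1], best_inliers

# Toy cloud: 100 points on the z=0 "table top" plus 20 clutter points above it.
rng = np.random.default_rng(1)
table = np.c_[rng.uniform(-1, 1, (100, 2)), np.zeros(100)]
clutter = rng.uniform(0.2, 1.0, (20, 3))
normal, d, mask = fit_plane_ransac(np.vstack([table, clutter]))
print(mask[:100].sum())  # the table points are recovered as inliers
```

The recovered plane can then serve as the ad-hoc interactive surface on which content is anchored.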
Mobile context understanding
O User motion/activity analysis using on-board sensors
O Fusion of cues from:
  O modelling and registration
  O geo-objects
  O geo-social activity
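A minimal sketch of the motion/activity-analysis bullet, assuming a simple variance heuristic on a window of accelerometer magnitudes. The thresholds are made up for the example; a real system would learn them per device and fuse in the other cues listed above.

```python
# Toy activity classifier over a short window of |acceleration| samples
# (in g). Variance thresholds are illustrative assumptions.
import statistics

def classify_activity(accel_magnitudes_g):
    """Label a window of accelerometer magnitudes."""
    var = statistics.pvariance(accel_magnitudes_g)
    if var < 0.01:
        return "still"       # device at rest reads a steady ~1 g
    elif var < 0.5:
        return "walking"     # moderate periodic variation
    return "running"         # large swings in magnitude

print(classify_activity([1.00, 1.01, 0.99, 1.00]))        # resting on a table
print(classify_activity([0.8, 1.3, 0.7, 1.4, 0.9, 1.2]))  # periodic gait
```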
Context sensitive AR delivery
O Inject AR data in a natural manner according to:
  O The environment
  O Occlusions
  O Lighting and shadows
  O User activity
O Exploit user and environment ‘context’ to select the best delivery modality (text, graphics, audio, haptic, etc.), i.e. scalable/simplifiable audio-visual content
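The modality-selection idea can be illustrated with a small rule-based sketch: choose how an annotation is delivered from a few context signals. The rules and signal names are assumptions for illustration, not the project's actual policy.

```python
# Hypothetical context-to-modality rules. Signal names and thresholds
# are illustrative assumptions, not VENTURI's actual policy.

def select_modality(activity, ambient_noise_db, screen_visible):
    """Choose a delivery modality from user/environment context."""
    if activity == "driving":
        # Eyes busy: never draw on screen; fall back to sound or touch.
        return "audio" if ambient_noise_db < 75 else "haptic"
    if not screen_visible:                 # device pocketed or face down
        return "audio" if ambient_noise_db < 75 else "haptic"
    if activity == "walking":
        return "graphics"                  # glanceable overlay, little text
    return "text"                          # a stationary user can read detail

print(select_modality("walking", 60, True))   # graphics
print(select_modality("driving", 85, True))   # haptic
print(select_modality("still", 40, True))     # text
```

This is where the "scalable/simplifiable" content requirement bites: each asset must be deliverable at several levels of detail so the selected modality always has something to render.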
User Interactions
O Explore evolving means of AR delivery and interaction
O In-air interfaces for sensing gestures (motion of device, hands, face, etc.)
O 3D audio
O Micro-projection for multi-user, social AR
O AR visors/glasses
Prototypes
O A consolidated prototype at the end of each year, to be evaluated through Use Cases
O 3 Use Cases:
O Tourism
O Gaming
O Personal assistant
“creating a pervasive Augmented Reality paradigm, where information is presented in a ‘user’ rather than a ‘device’ centric way”