Large Scale On-Demand Image Processing for Disaster Relief
DESCRIPTION
This is a status update (as of Feb 22, 2010) on a new Open Cloud Consortium project that will provide on-demand, large scale image processing to assist with disaster relief efforts.
TRANSCRIPT
Large Scale On-Demand Image Processing for Disaster Relief
Robert Grossman
Open Cloud Consortium
February 22, 2010
www.opencloudconsortium.org
• 501(c)(3) not-for-profit corporation
• Supports the development of standards, interoperability frameworks, and reference implementations.
• Manages testbeds: Open Cloud Testbed and Intercloud Testbed.
• Manages cloud computing infrastructure to support scientific research: Open Science Data Cloud.
• Develops benchmarks.
Focus of OCC Large Data Cloud Working Group
[Diagram: layered large data cloud stack — Cloud Storage Services at the base; Cloud Compute Services (MapReduce, UDF, & other programming frameworks) above them; Table-based Data Services and Relational-like Data Services on top; applications running against each layer.]
• Developing APIs for this framework.
[Diagram: reference architecture — an IaaS layer (Network Transport, Virtual Network Manager, Virtual Machine Manager, Storage Services), a PaaS layer (Compute Services, Data Services), and Applications on top, with IF-MAP (Metadata) Services and an Identity Manager cutting across all layers.]
Bridging the Gaps… A Small Step
Infrastructure as a Service
– Virtual Data Centers (VDC)
– Virtual Networks (VN)
– Virtual Machines (VM)
– Physical Resources
Platform as a Service
– Cloud Compute Services
– Data as a Service
Open Virtualization Format (OVF)
Open Cloud Computing Interface (OCCI)
SNIA Cloud Data Management Interface (CDMI)
Large Data Cloud Interoperability Framework
Metadata service linking IaaS and DaaS
Metadata service naming and linking entities in the IaaS layers
Open Science Data Cloud
• Astronomical data
• Biological data (Bionimbus)
• Networking data
• Image processing for disaster relief
Image Processing on Large Data Clouds
• Data parallel applications
– Parallelism is often required at the file or directory level.
– From a MapReduce perspective, often only Map operations are required.
• Data intensive applications
– The input data size can be 10s or 100s of TB.
– Requires parallel disk I/O; data locality is important.
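The map-only pattern above can be sketched in a few lines. This is a hedged, local stand-in (not Sector/Sphere itself): each file is processed independently with no shuffle or reduce phase; the `process_image` operation and directory layout are hypothetical placeholders for real image processing.

```python
# Map-only data parallelism: every file is an independent work unit,
# so workers never need to exchange data -- a simplified, single-node
# analogue of a Map-only job on a large data cloud.
import os
from multiprocessing import Pool

def process_image(path):
    # Placeholder "Map" operation: just record the file size here;
    # a real UDF would run image processing (e.g. OSSIM) on the file.
    return (path, os.path.getsize(path))

def run(input_dir):
    files = [os.path.join(input_dir, f) for f in sorted(os.listdir(input_dir))]
    with Pool() as pool:  # one worker process per CPU core by default
        return pool.map(process_image, files)
```

Because there is no reduce step, scaling out is just a matter of assigning files to nodes, which is why data locality (keeping each file whole on one node) matters so much for this workload.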
Distributed File Systems
• Sector is broadly similar to the Hadoop Distributed File System (HDFS).
• Main differences:
– Hadoop directly implements a distributed, block-based file system.
– Sector is a layer over the native file system.
• Sector does not split files:
– A single image is never split, so the application processing it does not need to read data from other nodes over the network.
– As an option, a directory can also be kept together on a single node.
Sphere UDF
• Sphere allows a User Defined Function (UDF) to be applied to each file (whether it contains a single image or multiple images).
• Existing applications, such as OSSIM, can be wrapped in a Sphere UDF or invoked via Sector streams.
• In many cases, the Sphere streaming utility accepts a data directory and an application binary as its inputs.
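The wrapping idea can be illustrated with a minimal local sketch, under the assumption that the wrapped application reads one input file and writes its result to stdout. This is a much-simplified analogue of running an application binary under the Sphere streaming utility, not Sphere's actual API; the function name and paths are hypothetical.

```python
# Sketch: apply an existing command-line application to each file in
# an input directory, producing one output file per input file --
# the same one-output-per-input convention described for the
# Sphere streaming utility, but run locally and serially.
import os
import subprocess

def stream_apply(command, input_dir, output_dir):
    os.makedirs(output_dir, exist_ok=True)
    for name in sorted(os.listdir(input_dir)):
        in_path = os.path.join(input_dir, name)
        out_path = os.path.join(output_dir, name + ".out")
        # Run the wrapped application on one input file; its stdout
        # becomes the corresponding output file.
        with open(out_path, "wb") as out:
            subprocess.run(command + [in_path], stdout=out, check=True)
```

For example, `stream_apply(["cat"], "haiti", "results")` would copy each file in `haiti` to a matching `.out` file in `results`; in the real system each invocation would instead be a Sphere UDF dispatched to the node holding that file.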
Sector and OSSIM
• ./sector_stream -i haiti -c ossim_foo -o results
• "-i" specifies the input data directory. In this example, all images are located in the directory "haiti".
• "-c" specifies the command (or application) to run.
• "-o" specifies the output location. This is a directory; the output for each input image is stored in a corresponding file.
Next Steps
• Working group will set up persistent on-demand cloud for image processing to assist disaster relief using OSSIM and related open source software.
• Will be used as a test case for Large Data Cloud and Intercloud Working Groups.
• One rack of dedicated hardware will be available, with required high performance networking in place.
• Initial operating capability by May 15, 2010.
For More Information