
Deliverable D4.4 Report on cloud technology

Date: 30.5.19


HORIZON 2020 - INFRADEV: Implementation and operation of cross-cutting services and solutions for clusters of ESFRI

Grant Agreement number: 654008

Project acronym: EMBRIC

Contract start date: 01/06/2015

Project website address: www.embric.eu

Due date of deliverable: 31/05/2019

Dissemination level: Public


Document properties

Partner responsible: EMBL-EBI (Guy Cochrane)

Author(s)/editor(s): Christophe Blanchet, Olivier Collin, Erwan Corre, Efflam Lemaillet, Jonathan Lorenzo, Claudine Médigue, Arnaud Meng, Eric Pelletier, Chloé Riou, Olivier Sallou, Rob Finn, Miguel Boland, Peter Harrison, Jeena Rajan, Annalisa Milano, Bruno Fosso, Monica Santamaria, Suran Jayathilaka, Vishnukumar Balavenkataraman Kadhirvelu, Guy Cochrane

Version: v1

Abstract: As part of the EMBRIC Configurator task, we have investigated cloud-based models for the analysis of marine molecular biology data and methods to discover data across distinct data resources. Our explorations have included data-to-compute and compute-to-data scenarios; we have explored the porting of existing workflows to the cloud and we have considered performance issues in transitioning analysis to cloud-based systems. Our work has seen the release of rules for dynamically maintaining the globally comprehensive marine sequence data set, the public release of a number of analysis workflows and the launch of a demonstrator web site showing search across multiple data resources. Amongst our conclusions from this work are that multiple models for cloud usage are viable for marine biotechnology applications, that performance trade-offs should be considered in the design phase and that compliance with interface technical standards in data resources is encouraged, to allow integrative access to distinct data sources.


Table of Contents

1 Introduction
2 Cloud explorations
  2.1 The French use case
  2.2 The EMBL-EBI use case
  2.3 The Italian use case
  2.4 Performance and cost considerations
3 Services to integrate across data resources
4 Conclusions
5 Annex: abbreviations and glossary


1 Introduction

This deliverable follows up on the developments required to offer the EMBRIC Configurator [1] (deliverable D4.2 [2]). Our objective has been to explore, demonstrate and assess the use of cloud and federation methods in the management, analysis and provision of marine-related data, with a focus on biomolecular data types, where data volumes are greatest. The work involved investigations into:

1) connectivity between ELIXIR [3] data services and the existing French IFB federated cloud compute facility, Galaxy service and associated data resources, focusing on marine sequence and related metadata: moving data to compute

2) deployment of marine sequence-related computational workflows into the French IFB federated cloud compute facility: moving compute to data

3) deployment of marine sequence-related computational workflows and production-level services developed at the Italian node of ELIXIR in the Embassy Cloud [4] infrastructure hosted at EMBL-EBI: moving compute to data

4) extraction and encapsulation of existing marine metagenomics computational workflows into the Common Workflow Language (CWL) for deployment within Embassy Cloud: decoupling existing compute from services

5) compute requirements for a portable computational workflow and cost implications of adoption of a cloud-based strategy: understanding performance and cost implications

6) a light web interface offering search across distinct data sources: demonstrating user services based on distinct data sources

[1] http://www.embric.eu/configurator
[2] Deliverable D4.2: Configurator service available for the EMBRIC community. http://www.embric.eu/sites/default/files/deliverables/D4.2_Configurator%20service%20for%20EMBRIC.pdf
[3] ELIXIR. https://www.elixir-europe.org
[4] http://www.embassycloud.org/


2 Cloud explorations

2.1 The French use case

The French Institute of Bioinformatics (IFB; CNRS UMS3601) has established a federation of clouds, Biosphere, which relies on the interconnected IT infrastructures of some of IFB's platforms, providing distributed services to analyze life science data. Biosphere provides multi-cloud deployments that help, for example, to combine several CPU resources in order to build a larger resource, to use different data sources, or to guarantee the availability of cloud resources.

Two investigations were carried out under this deliverable, one focusing on data integration and the second on cloud integration.

The first investigation explored the connectivity between the existing IFB Cloud service and data resources and further services from ELIXIR, especially supporting greater connectivity around marine sequence and related metadata. This work is of particular relevance for high-volume data sets, for which computing and data must be co-localised to provide adequate performance. This was assessed with BioMAJ [5], an IFB data transfer technology, which will distribute data of marine relevance from EMBL-EBI. BioMAJ, a tool dedicated to data synchronization and processing, will take partitioned marine datasets from the European Nucleotide Archive (ENA; https://www.ebi.ac.uk/ena). Data can then be downloaded onto the IFB-Biosphere analysis infrastructure and stored in dedicated release directories.

For the investigations on cloud integration, two marine pipelines have been tested as use cases to be deployed in the IFB cloud. These are the marine metagenomic pipeline META-pipe, developed by the project partner Universitetet i Tromsø, and the METdb metatranscriptomic pipeline, developed by CNRS IFB-Roscoff (Centre national de la Recherche Scientifique, Institut Français de Bioinformatique-Roscoff). First prototypes have been integrated and are being tested by the partners.

2.1.1 Integration of ENA data resources into the French IFB Cloud

BioMAJ [5] (BIOlogie Mise A Jour) is an open-source databank management software package whose purpose is to keep any set of data up to date. As it is automated, it performs the different steps of dataset management (update monitoring, download, checking, processing, etc.). The software provides support for the management of many sequence databanks on a site.

To connect to the ENA, one defines a workflow that connects to the ENA data resource and performs a pre-processing step involving queries to get the list of available files. From this list it detects the last modification date to see whether a new release is available. In the case of a new release, it fetches only the modified files, to avoid re-downloading unmodified files. Once the files are downloaded, it creates a new local release and applies a number of post-processing indexing steps. ENA databanks are updated automatically at regular intervals on clouds using BioMAJ, with files made available to users on the shared file system.
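To illustrate the update logic described above, the following minimal Python sketch lists remote entries, compares them with a local manifest and reports which ones need fetching. The ENA Portal API endpoint, the query, the field names and the manifest file are illustrative assumptions; this is not the BioMAJ implementation, which is configured declaratively within BioMAJ itself.

    # Minimal sketch (not BioMAJ) of the incremental update check: list remote
    # files, compare with a local manifest, and report only new or changed entries.
    # Endpoint, query and field names are illustrative assumptions.
    import json
    import os
    from urllib.parse import quote
    from urllib.request import urlopen

    ENA_SEARCH = "https://www.ebi.ac.uk/ena/portal/api/search"
    MANIFEST = "marine_release/manifest.json"  # hypothetical local state file

    def remote_listing(query):
        """Ask ENA for the available entries and their last-update dates."""
        url = (f"{ENA_SEARCH}?result=read_run&query={quote(query)}"
               "&fields=run_accession,last_updated,fastq_ftp&format=json")
        with urlopen(url) as resp:
            return json.load(resp)

    def entries_to_fetch(remote):
        """Return only entries that are new or changed since the last release."""
        local = {}
        if os.path.exists(MANIFEST):
            with open(MANIFEST) as fh:
                local = json.load(fh)
        return [e for e in remote if local.get(e["run_accession"]) != e["last_updated"]]

    if __name__ == "__main__":
        remote = remote_listing('environment_biome="marine"')  # illustrative marine query
        stale = entries_to_fetch(remote)
        print(f"{len(stale)} of {len(remote)} entries need (re-)downloading")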

A proof of concept was realized with a partitioned example dataset retrieved via EMBL-EBI's ENA Discovery API. The result of the query is a list, in JSON format, of the data entries. BioMAJ analyses this list to find the latest modifications and downloads the updated records. Once relevant marine datasets are validated, this automated update process will be applied to replicate them from the ENA to the French IFB cloud.

[5] Filangi, O. et al. BioMAJ: a flexible framework for databanks synchronization and processing. Bioinformatics 24, 1823-1825 (2008).

2.1.2 Integration of META-pipe to the French IFB Cloud

META-pipe [6] is a complete workflow for the analysis of marine metagenomic data developed by UiT (The Arctic University of Norway). It provides assembly of high-throughput sequence data, functional annotation of predicted genes and taxonomic profiling. META-pipe consists of three parts: processing of reads (merging, filtering, 16S rRNA extraction and assembly); taxonomic classification (using reads against the RefSeq and MAR databases, and predicted 16S rRNA); and functional assignment of predicted coding sequences (CDSs). META-pipe is available from the Marine Metagenomics Portal [7].

The META-pipe distributed architecture is based on the distributed framework Apache Spark [8], which has a master/slave model. Spark has been integrated into the IFB-Biosphere cloud infrastructure, allowing the deployment of custom Spark clusters of the desired computing power. The use of Hadoop HDFS provides a storage solution on all clouds. The META-pipe pipeline is then deployed on top of such a Spark cluster. Once completed and validated, this cloud appliance will be made available on the French IFB Cloud.
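As a minimal sketch of how an analysis can target such a Spark cluster, the following Python (PySpark) fragment connects to a standalone Spark master and reads data staged on HDFS. The master URL, paths and the toy processing step are illustrative assumptions and are not part of META-pipe itself.

    # Minimal sketch, not META-pipe: connect to a standalone Spark cluster and
    # read input staged on the cluster-wide HDFS storage. Names are hypothetical.
    from pyspark.sql import SparkSession

    spark = (SparkSession.builder
             .master("spark://spark-master:7077")    # hypothetical IFB-Biosphere master
             .appName("marine-demo")
             .config("spark.executor.memory", "8g")  # sized to the chosen cloud flavour
             .getOrCreate())

    # FASTQ reads on HDFS (hypothetical path); 4 lines per read in FASTQ format.
    reads = spark.sparkContext.textFile("hdfs:///data/marine/sample_reads.fastq")
    print("approximate read count:", reads.count() // 4)

    spark.stop()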

2.1.3 Integration of the CNRS IFB-Roscoff workflow into the French IFB Cloud

To expand the potential of metagenomics for the research community and biotech industry, particularly within the marine domain, metagenomics methodologies must overcome a number of challenges related to standardization, the development of relevant databases and bioinformatics tools.

[6] Espen Mikal Robertsen, Hubert Denise, Alex Mitchell, Robert D. Finn, Lars Ailo Bongo, Nils Peder Willassen (2017). ELIXIR pilot action: Marine metagenomics - towards a domain specific set of sustainable services.
[7] https://mmp.sfb.uit.no
[8] https://spark.apache.org

FIGURE 1. DATA TO COMPUTE, SHOWING THE FRENCH USE CASE ON DATA INTEGRATION WHERE PARTITIONED MARINE DATASETS (ASSEMBLIES AND READ DATA) ARE CO-LOCALIZED VIA BIOMAJ TO ALLOW DATA ACCESS.

In the context of work package six of the European project ELIXIR (https://www.elixir-europe.org/), we are developing a eukaryote transcriptome reference database, METdb, which contains datasets from the Roscoff station and Tara research projects. All datasets were assembled and analyzed using two workflows [9] dedicated to de novo assembly and functional annotation, both developed with the CWL workflow management system to ensure data standardization and reproducibility.

The first pipeline includes four distinct steps: (1) raw data processing with Trimmomatic (Bolger et al. 2014) to filter and trim reads according to their sequence quality, (2) read-set comparison using Simka to detect possible cross-library contamination (Benoit et al. 2016), (3) de novo assembly using Trinity (Grabherr et al. 2011), and (4) assembled transcript quality evaluation using Transrate (Smith-Unna et al. 2016). Downstream analyses of the assembled transcripts, including coding domain prediction and functional annotation, are performed in the second pipeline.

The two-step pipeline is composed of workflows rewritten in CWL during the ELIXIR Workflow Implementation Study [10]. An existing cloud application, BioPipes [11], already contains the required tools to execute CWL workflows (in this case we used 'cwltool'). Both pipelines are publicly available on GitHub [9]: the assembly pipeline (for paired-end reads and for single reads) and the annotation pipeline, which is also available in its CWL version. A new cloud appliance, called "Marine Eukaryotic Transcriptomic (ELIXIR, EMBRIC)", was created with the BioPipes appliance used as a base to integrate the CWL pipelines. Deploying it through IFB-Biosphere provides a comprehensive CWL environment in the French IFB cloud, including the current version of the workflows from the GitHub repository. Once completed and validated, this cloud appliance will be made available on the French IFB Cloud.
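As a minimal sketch of executing one of these CWL workflows outside the appliance, the following Python fragment invokes 'cwltool' on a locally checked-out workflow. The workflow and job file names are hypothetical placeholders for the descriptions held in the GitHub repository.

    # Minimal sketch: run a CWL workflow with cwltool from Python. The workflow
    # and job file names are hypothetical; adapt them to the checked-out repository.
    import subprocess

    workflow = "workflow-is-cwl/workflows/assembly-paired-end.cwl"  # hypothetical path
    job = "assembly-job.yml"  # hypothetical job file listing the input reads

    # cwltool resolves each step of the workflow and runs it, writing results to --outdir.
    subprocess.run(["cwltool", "--outdir", "results", workflow, job], check=True)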

[9] https://github.com/mscheremetjew/workflow-is-cwl/tree/master/workflows
[10] https://github.com/mscheremetjew/workflow-is-cwl/tree/master/workflows
[11] https://biosphere.france-bioinformatique.fr/catalogue/appliance/119

FIGURE 2. COMPUTE TO DATA, SHOWING INTEGRATION OF TWO MARINE PIPELINES (MARINE METAGENOMIC DATA AND EUKARYOTE TRANSCRIPTOME DATA) IN THE IFB CLOUD.


2.2 The EMBL-EBI use case

This investigation used the working version of EMBL-EBI's MGnify [12] assembly binning CWL workflow, which groups contigs into sets that have come from a single species (or population of species) (Figure 3). The workflow takes contigs and raw reads as input and uses metaWRAP to perform the binning process. This uses three different binning algorithms (MetaBAT, MaxBin and CONCOCT) and then processes the combined results to output a set of refined bins, which are assessed with CheckM to determine completeness and contamination.
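The completeness/contamination assessment can be thought of as a simple filter over the CheckM summary table. The sketch below illustrates this, assuming a tab-separated CheckM output with 'Bin Id', 'Completeness' and 'Contamination' columns; the thresholds and file name are illustrative, not the values applied in the MGnify workflow.

    # Minimal sketch of filtering bins on CheckM results; thresholds and the
    # input file name are illustrative, not those used by the MGnify workflow.
    import csv

    MIN_COMPLETENESS = 50.0   # percent, illustrative
    MAX_CONTAMINATION = 5.0   # percent, illustrative

    def passing_bins(checkm_tsv):
        """Return identifiers of bins passing the completeness/contamination filter."""
        keep = []
        with open(checkm_tsv) as fh:
            for row in csv.DictReader(fh, delimiter="\t"):
                if (float(row["Completeness"]) >= MIN_COMPLETENESS
                        and float(row["Contamination"]) <= MAX_CONTAMINATION):
                    keep.append(row["Bin Id"])
        return keep

    print(passing_bins("checkm_summary.tsv"))  # hypothetical CheckM summary file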

FIGURE 3. OVERVIEW OF THE MGNIFY ASSEMBLY BINNING CWL WORKFLOW [13].

While binning can be performed on multiple samples, it is rarely applied across large and diverse datasets, for both technical (memory) and biological (increased risk of fragmentation and/or producing chimeras) reasons. Thus, to remove the redundancy between different binned datasets, and to determine the overlap with reference databases such as MarRef and MarDB, two different dereplication CWL workflows are being developed (Figure 4). The first will simply perform dereplication of the input bins, followed by taxonomic assignment using GTDB. The second dereplication workflow will extend the first, but will first remove redundancy against a reference database (yet to be completed).

FIGURE 4. OVERVIEW OF THE DEREPLICATION CWL WORKFLOW [14].

[12] https://www.ebi.ac.uk/metagenomics/
[13] https://github.com/EBI-Metagenomics/CWL_binning/blob/master/workflows/binning.cwl
[14] https://github.com/EBI-Metagenomics/CWL_binning/blob/master/workflows/binning.cwl


With all workflows, current work involves containerising the tools, a step necessary to enable the workflows to be deployed on the Embassy Cloud. Once final testing has been completed, these CWL workflows and their associated containers will be made publicly available and advertised to the marine community.

2.3 The Italian use case

The Embassy Cloud (http://www.embassycloud.org/) is an EMBL-EBI infrastructure-as-a-service offering, which was configured for CNR to support the provision of production-level services of relevance to marine science in close geographical proximity to large ELIXIR data resources, particularly focusing on sequence analysis tools. This work will bring ELIXIR services important to marine scientists closer to integrated analytical workflows operating viably across multiple ELIXIR resources.

In this framework, the Embassy Cloud provided a private, secure "tenancy" in which the BioMaS [15] pipeline, deployed onto a virtual Embassy Cloud machine by CNR, was employed as a use case for the taxonomic annotation of eukaryotic microbiomes. BioMaS is an automatic workflow enabling the execution of independent tasks for the analysis of metabarcoding amplicon data. Although this analysis can be computationally expensive, improved scalability and service reliability can be achieved through the Embassy Cloud. In the near future, CNR plans to deploy ITSoneDB, recently updated to ENA release 138, as a reference collection in the BioMaS instances within the Embassy Cloud.

In addition, the MetaShot [16] pipeline, a Python-based workflow for deep taxonomic profiling of host-associated microbiomes, is also currently under investigation.

[15] Fosso, B. et al. BioMaS: a modular pipeline for Bioinformatic analysis of Metagenomic AmpliconS. BMC Bioinformatics 16, 1-11 (2015).
[16] Fosso, B. et al. MetaShot: an accurate workflow for taxon classification of host-associated microbiome from shotgun metagenomic data. Bioinformatics 33(11), 1730-1732 (2017).

FIGURE 5. COMPUTE TO DATA, SHOWING THE ITALIAN USE CASE (MARINE METABARCODING AMPLICON DATA) WHERE IMPROVED SCALABILITY AND SERVICE RELIABILITY CAN BE ACHIEVED BY BIOMAS THROUGH EMBASSY CLOUD.


2.4 Performance and cost considerations

The MGnify pipeline for performing data retrieval from ENA, assembly and basic evaluation of the assembly has been implemented in CWL and developed in accordance with current best practices (e.g. the use of containers). We have used this implementation to perform a series of different metagenomic assemblies using a variety of differently sized datasets from a range of marine environments (e.g. coastal, deep ocean, sediment). For each assembly we gathered the following key computation metrics: time, memory and number of cores, as well as the size of the input data.
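As a minimal sketch of how such per-run metrics can be captured, the following Python fragment wraps an assembler invocation and records wall-clock time, peak memory of the child process and input size. The assembler command line and file names are illustrative, and the bookkeeping in the actual MGnify pipeline may differ.

    # Minimal sketch of per-assembly metric capture (Linux): wall time, peak RSS of
    # the child process, input size and available cores. Command line is illustrative.
    import os
    import resource
    import subprocess
    import time

    inputs = ["reads_1.fastq.gz", "reads_2.fastq.gz"]            # hypothetical input files
    cmd = ["megahit", "-1", inputs[0], "-2", inputs[1], "-o", "assembly"]

    input_gb = sum(os.path.getsize(f) for f in inputs) / 1e9
    start = time.time()
    subprocess.run(cmd, check=True)
    hours = (time.time() - start) / 3600
    peak_gb = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss / 1e6  # kB -> GB on Linux

    print(f"input={input_gb:.1f} GB  time={hours:.2f} h  "
          f"peak_mem={peak_gb:.1f} GB  cores={os.cpu_count()}")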

FIGURE 6. EXECUTION TIME AND PEAK MEMORY REQUIREMENT FOR ASSEMBLY OF RUNS FROM A RANGE OF BIOMES VS COMPRESSED INPUT FILE SIZE AND BASE COUNT.


FIGURE 7. PREDICTED COST OF ASSEMBLIES WHEN RUN OPTIMALLY ON AMAZON AWS OR MICROSOFT AZURE CLOUD INFRASTRUCTURE. THE LINES SEEN IN THE DATA CORRESPOND TO MACHINE FLAVOURS, WHICH HAVE A LINEAR COST TO USAGE TIME.

The charts above (Figures 6 and 7) show the results of the assembly analysis and illustrate the difficulty of accurately predicting the execution time, hardware requirements and, therefore, cost of generating assemblies. Nevertheless, while exact cost estimates are hard to achieve, there are clear trends in the data. This information will empower the marine community to make decisions about the cost of analysis within a cloud setting, so that appropriate resources can be requested. Furthermore, as datasets increase in size and complexity, it may be better to use more efficient algorithms, even to the potential detriment of results. For example, our experience has shown that while MetaSPAdes generally produces "better" assemblies, MEGAHIT is more memory efficient. However, by increasing dataset sizes through co-assembly, these differences in quality can often be offset or minimised.
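The cost model behind Figure 7 can be approximated as "cheapest machine flavour that fits the predicted peak memory, charged linearly for the predicted run time". The sketch below illustrates this; the flavour names, memory sizes and hourly prices are invented for illustration and are not actual AWS or Azure prices.

    # Minimal sketch of a flavour-based cost estimate; all flavour data are
    # illustrative placeholders, not real AWS/Azure pricing.
    FLAVOURS = [
        # (name, memory_gb, price_per_hour_usd)
        ("small-mem", 64, 0.90),
        ("medium-mem", 256, 3.60),
        ("large-mem", 1024, 14.40),
    ]

    def estimate_cost(peak_memory_gb, runtime_hours):
        """Pick the cheapest flavour that fits peak memory; cost is linear in time."""
        fitting = [f for f in FLAVOURS if f[1] >= peak_memory_gb]
        if not fitting:
            raise ValueError("no flavour is large enough for this assembly")
        name, _, price = min(fitting, key=lambda f: f[2])
        return name, price * runtime_hours

    # Example: an assembly predicted to peak at 180 GB RAM and run for 30 hours.
    flavour, cost = estimate_cost(180, 30)
    print(f"{flavour}: ~${cost:.2f}")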


3 Services to integrate across data sources

Bioinformatics data sources are provided by a number of ELIXIR Nodes from across the network. As such, it will be necessary, often prior to any move of data to compute, or of compute to data, to assemble data sets from distinct sources in preparation for compute. In order to assemble a specific data set for analysis, users must be able to discover and navigate data across these distinct sources sufficiently. We have therefore investigated the coupling of distinct data sources and the operation of an integrating search across these in a simple web layer.

Beyond Work Package 4, the EMBRIC project has seen communication with various external communities around Access and Benefit Sharing relating to marine biodiversity, especially in the context of the Nagoya Protocol and the Biodiversity Beyond National Jurisdiction initiatives. We therefore chose to create the demonstrator web site for this deliverable such that it would not only serve in our technical explorations, but would also have value as a demonstrator in itself, for EMBRIC to use in its communications. The demonstrator site therefore takes two distinct data resources as its sources: sequence data (ENA) and a legal/regulatory document store. The content available from the system is from the Tara Oceans expedition.

FIGURE 8. SCREENSHOT OF THE TARA EXPEDITIONS ACCESS AND BENEFIT SHARING DEMONSTRATOR, SHOWING THE RESULTS OF A SEARCH BASED ON A SAMPLING EVENT, WITH DIFFERENT COLOURED COLUMNS IN THE RESULTS TABLE INDICATING DISTINCT DATA SOURCES: SAMPLE REGISTRY (ORANGE), MOLECULAR (GREEN) AND LEGAL (BLUE).

The web site is available from https://tara-abs.org/demonstrator/ and is shown in Figure 8. Users are able to search based on sample, event and station identifiers and are given the integrated output table, including links to sequence data and legal documents, in web and downloadable form. The source databases are the Tara sample registry (EMBL-EBI's BioSamples database; https://www.ebi.ac.uk/biosamples/), the European Nucleotide Archive (ENA; https://www.ebi.ac.uk/ena) and a free-standing document store. (While we have built the latter for the purposes of demonstration, it exists as a distinct database with a publicly available API (https://tara-abs.org/demonstrator/api/swagger-ui.html), which we have used for integration in the demonstrator.)
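As a minimal sketch of the kind of cross-source lookup the demonstrator performs, the following Python fragment retrieves a sample record from BioSamples and its linked sequencing runs from ENA, then merges them. The endpoints, query parameters and sample accession are illustrative assumptions, and the demonstrator's own document-store API is not called here.

    # Minimal sketch of integrating two public sources by sample accession.
    # Endpoints, parameters and the accession are illustrative assumptions.
    import json
    from urllib.request import Request, urlopen

    def biosamples_record(accession):
        """Fetch the sample registry record from EMBL-EBI BioSamples."""
        req = Request(f"https://www.ebi.ac.uk/biosamples/samples/{accession}",
                      headers={"Accept": "application/json"})
        with urlopen(req) as resp:
            return json.load(resp)

    def ena_runs(accession):
        """Fetch sequencing runs linked to the sample from the ENA Portal API."""
        url = ("https://www.ebi.ac.uk/ena/portal/api/search?result=read_run"
               f"&query=sample_accession%3D%22{accession}%22"
               "&fields=run_accession,fastq_ftp&format=json")
        with urlopen(url) as resp:
            return json.load(resp)

    accession = "SAMEA0000000"  # hypothetical sample accession
    record = {"sample": biosamples_record(accession), "runs": ena_runs(accession)}
    print(json.dumps(record, indent=2)[:500])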


4 Conclusions

Work for this deliverable has included a number of explorations of cloud usage and data resource access for large marine-biotechnology-related bioinformatics data operations. We have explored models in which large data sets are packaged and moved to cloud compute, involving the large ENA marine data partition being made available across the federated French Cloud using the BioMAJ system. We have explored a number of cases of moving compute to data, including the deployment of the METdb, META-pipe, BioMaS and MGnify assembly binning workflows variously into the French Cloud and EMBL-EBI's Embassy Cloud. We have explored compute performance and cost issues around the deployment of workflows in cloud infrastructure. Finally, we have demonstrated search and browse across distinct data sources in a simple demonstrator web site.

These various explorations have served initially to bring together marine bioinformatics software engineers and bioinformaticians around cloud infrastructure and, over time, have provided exposure to the kinds of practical work required to decouple existing workflows from production systems, to express these in appropriate structures ready for cloud compute and to understand performance considerations when designing cloud strategies. Our work has included the creation of rules that define the "marine" sequence dataset (a combination of data from those species known to be from marine environments and data from anonymously sequenced marine environments, e.g. using metagenomics methods), which have been used directly to configure search systems provided by the ENA for onward use by the broader user community. Further legacy of this deliverable includes a number of publicly available workflows (as documented in this report), marine data availability in the French cloud infrastructure and a web demonstrator showing search and integrated access to data from distinct sources.

We draw a number of conclusions from this work:

1. Data partitioning systems will likely be required that allow the lightest-possible biomolecular data sets to be made available to compute, with impact on both data-to-compute and compute-to-data scenarios, for both network bandwidth (in the former scenario) and compute efficiency (in both scenarios); the marine data partition rules remain available, but further, more fine-grained partitions may become necessary in addition.

2. Marine bioinformatics data can be made available in cloud systems remote from the data stores in practical ways; dedicated data management systems, such as BioMAJ, are important in streamlining this process.

3. Remote cloud compute offers a viable solution for those wishing to deploy analysis and services without sustaining their own infrastructure for these purposes.

4. Work is required to decouple, externalise and make cloud-deployable existing pipelines; in marine bioinformatics at present, it is most likely that existing non-cloud-ready pipelines will continue to need to be adapted for cloud deployment (e.g. using CWL).

5. Consideration should be given to profiling memory and execution times for pipelines and workflows that are candidates for cloud deployment, as these will impact cost; this work will be additionally useful in understanding accuracy/performance trade-offs between possible pipeline and tool choices.

6. Computing across distinct data sources, which will be required for some marine bioinformatics cloud analysis scenarios, requires discoverability across these distinct sources; with appropriate agreement of technical standards for data discovery, lightweight web sites with cross-source search are possible.


5 Annex: abbreviations and glossary

Abbreviations

CNR = Consiglio Nazionale delle Ricerche

CNRS IFB-Roscoff = Centre national de la Recherche Scientifique, Institut Français de Bioinformatique-Roscoff

CWL = Common Workflow Language

ENA = European Nucleotide Archive

EMBL-EBI = European Molecular Biology Laboratory - European Bioinformatics Institute

IFB = Institut Français de Bioinformatique

MAG = Metagenome Assembled Genome

UiT = Universitetet i Tromsø, The Arctic University of Norway

Glossary

Cloud computing = An online pool of shared computing resources where you can access, store and analyse data (Google Drive, Amazon Web Services and Microsoft Office 365 are examples of cloud services). Unlike owning your own hardware, you can quickly adjust cloud services to scale to your needs, pay only for what you use and avoid active management of the resources.

ELIXIR = A distributed research infrastructure operated by 21 European ELIXIR Nodes, where multiple partners (ELIXIR Nodes) use common infrastructure to provide an integrated service to users, each Node focusing on its own areas of interest and expertise.