Technology Brief

Enterprise Hadoop: Architecting Next-Generation Infrastructure

Hadoop Enables Big Data Analytics

Knowledge is power, and, in the customer-to-business relationship, Hadoop acts like a powerful microscope, helping companies zero in on specific attributes and behaviors of their customers for targeted marketing and advertising.

INTRODUCTION
Hadoop is an open source software framework under the Apache Software Foundation. It can take large amounts of unstructured data and transform them into a query result that can be further analyzed with business analytics software. Hadoop enabled Yahoo® and Google® to analyze the tremendous amount of data generated daily by their users. The power of Hadoop is its built-in parallel processing and its ability to scale linearly while processing a query against a large data set and producing a result. This is a rare benefit, because very few IT cluster systems scale linearly. Databases in particular reach a critical mass and then see diminishing returns as additional hardware is added. This is not the case with Hadoop: it is not uncommon, for example, for Yahoo to run 4,000 nodes in a production environment.

Today, Hadoop has become an agenda item within many IT shops because of its ability to transform Big Data for analysis. Big Data is the vast amount of user data generated every day on the Web and within a company's data systems. The interest in Big Data, spurred by the ability to mine significant information from a company's unstructured and structured data, has brought many large and small companies to look at the potential of deploying a Hadoop cluster. Hadoop makes business analytics possible for Big Data in a way that changes how companies view their customers. Knowledge is power, and, in the customer-to-business relationship, Hadoop acts like a powerful microscope, helping companies zero in on specific attributes and behaviors of their customers for targeted marketing and advertising.

THE HADOOP OPERATIONAL MODEL
The primary operational model of Hadoop is to break an extremely large data set into smaller, workable units upon which queries can be processed. A set of similarly resourced compute nodes is used to process the queries in parallel. The results are aggregated at the end of the processing tasks and reported to the user, or piped into a business analytics application for further analysis or dashboard display. To minimize processing time in this parallel architecture, Hadoop "moves jobs to data" rather than using the traditional model of "moving data to jobs." This implies that once data is populated in the distributed system, most jobs will access local data during an actual search, query, or data mining operation. Only the results of each local query are distributed between the nodes during the shuffle and reduce operations.
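To make the operational model concrete, here is a minimal sketch of the map and reduce halves of a job, written as Hadoop Streaming scripts in Python (a word-count-style query over plain-text input). The file names, the two-script layout, and the word-count task itself are illustrative assumptions rather than anything specified in this brief; the point is that the mapper runs on each node against locally stored blocks, and only its compact per-key output crosses the network for the shuffle and reduce.

#!/usr/bin/env python
# mapper.py -- Hadoop runs a copy of this on each data node, feeding it the
# HDFS blocks stored locally ("jobs move to data"). It emits (word, 1) pairs.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print("%s\t%d" % (word.lower(), 1))

#!/usr/bin/env python
# reducer.py -- receives keys grouped and sorted by the shuffle phase and
# aggregates the partial results from every mapper into final counts.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t", 1)
    count = int(count)
    if word == current_word:
        current_count += count
    else:
        if current_word is not None:
            print("%s\t%d" % (current_word, current_count))
        current_word, current_count = word, count
if current_word is not None:
    print("%s\t%d" % (current_word, current_count))

A job like this would typically be submitted with the hadoop-streaming JAR that ships with Hadoop, pointing -mapper and -reducer at the two scripts and -input/-output at HDFS paths; the exact command line depends on the distribution in use.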

THE HADOOP ARCHITECTURAL MODEL: WEB-SCALE
This distributed, parallel processing model has a significant effect on how the data center infrastructure is architected to support a Hadoop system, particularly when the deployment is targeted to support the requirements of the initial developers of the Hadoop technology: Web-scale companies such as Google and Facebook®. The traditional Hadoop architecture model is a cluster of nodes numbering in the thousands. The primary requirement for the individual node hardware is to balance four resource parameters (compute, memory, network, and storage) with cost. At Web-scale, the optimal solution was to use yesterday's hardware at a relatively low cost and deploy enough servers to mitigate any potential failure in the cluster or rack. Sizing to thousands of nodes achieved this objective.

The Web-scale Hadoop model also calls for direct attached storage (DAS) in the server rather than a SAN or NAS. There are two primary reasons for this DAS requirement: at this level of node scaling, the simplicity of configuring and deploying nodes in a cookie-cutter fashion is a primary consideration, along with the cost of the storage system. The arrival of the cost-effective 1TB disk enabled large clusters to store petabytes of information. This solved a very expensive dilemma for the traditional shop with a SAN, where the cost of that much storage would have been a prohibitive entry point to Hadoop and the data store.

THE HADOOP ARCHITECTURAL MODEL: ENTERPRISE
The remainder of this paper focuses on implementing a Hadoop cluster within an enterprise data center environment and the tradeoffs that come with following the traditional Big Data owner's model. Considering that the Big Data model for the initial developers covered millions, if not billions, of Web pages and users, individual machine performance and cost needed to be balanced against performance measured in jobs per cluster. However, most corporations are not associated with the Internet to the same extent as Google, Yahoo, or Facebook, so using them as a reference for the corporation's Hadoop cluster may not be the best path to follow. As an example, in Google's history, 10,000 servers was an early milestone for its infrastructure. For the initial deployment of Hadoop, Web-scale organizations were quick to adopt 1GbE in lieu of other, more costly options. The CPU and memory footprint of the previous-generation hardware, selected because it fit the cost model requirement, balanced quite nicely with the ubiquitous 1GbE interface and data rate in a cost-minimized solution.

Consider that most enterprise data centers, with the exception of those very recently built, have little or no space for thousands of inexpensive white box servers to deploy for the cluster. Fortunately, Hadoop has the flexibility to change the processing paradigm from "a few jobs on a whole lot of servers" to "a lot of jobs on fewer servers," so resources can be scaled to meet specific enterprise requirements. While it is possible to run Hadoop in a virtualized environment, Hadoop already manages its node resources, so the efficiency gained with hypervisor-based resource utilization is already present in the system, and there are vendor-based solutions that automate the process of maximizing per-node resource assignment. Virtualization also adds configuration complexity and cost to a solution that still relies on a cookie-cutter paradigm centered on simplicity and cost minimization. But Moore's Law helps here, as ever more powerful servers keep driving down the cost per unit of processing power. CPUs with 16 or more cores are at the low end of the current server hardware hierarchy. Coupled with faster and larger memory footprints of 32GB and up, companies can build a midrange node (cost-wise) that looks like the high-end node from two years ago. Considering the power efficiency of current hardware as well, there is a natural propensity to move toward a more powerful, balanced node for enterprise-scale deployments.
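As a rough illustration of that shift, the sketch below estimates how many concurrent map/reduce tasks a single node could host from its core count and memory. The per-task memory figure, the reserved resources, and the comparison node specs are assumptions chosen for the example, not Hadoop defaults or recommendations from this brief.

# Rough estimate of concurrent task capacity per node. The 2GB-per-task and
# reserved-resource figures are illustrative assumptions only.
def tasks_per_node(cores, ram_gb, gb_per_task=2.0, reserved_cores=1, reserved_gb=4.0):
    by_cpu = max(cores - reserved_cores, 0)              # leave a core for the OS/DataNode
    by_mem = int(max(ram_gb - reserved_gb, 0) // gb_per_task)
    return min(by_cpu, by_mem)

print(tasks_per_node(16, 32))   # current 16-core/32GB node -> 14 tasks
print(tasks_per_node(8, 16))    # older 8-core/16GB node    -> 6 tasks

Under these assumptions, a handful of current nodes can absorb the task count that previously required a much larger rack of lightweight servers, which is the essence of "a lot of jobs on fewer servers."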

BALANCE OR BOTTLENECK
The goal in any Hadoop data node is to have a balance between the CPU, memory, and network resources. If any one of the three is significantly more powerful than the others, there is the potential for a bottleneck in the processing capability of the system; most published studies of Hadoop performance bear this out. Adding more power on the CPU and memory side unbalances the equation in terms of the network, affecting how efficiently Hadoop cluster nodes are able to shuffle data, reduce the results, and populate HDFS storage within the cluster. Fortunately, just as Moore's Law has been at work on the CPU and memory resources, the same holds true for advancements in Ethernet technology, from 1GbE to 10GbE and beyond. While 10GbE may not be economically attractive for Web-scale deployments, in enterprise environments it may be a viable option for bringing the Hadoop data node resource equation back into balance.

CLUSTER NETWORKING INFRASTRUCTURE
Nodes built on high-core-count architectures, while keeping Moore's Law happily moving forward, tend to be underpowered on the network side if the 1GbE paradigm is maintained. 1GbE server connectivity and a 1GbE switching infrastructure will not allow a current-generation hardware cluster to scale to its full potential. A 10GbE network is the right investment to restore balance in the node and achieve the maximum performance potential of the Hadoop cluster. As with everything regarding Hadoop, the gain depends on the data set, but the jobs-per-cluster number will increase once the network is no longer the bottleneck. Data ingest and moving results off the cluster will also benefit.
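The back-of-envelope arithmetic below shows why. It compares the aggregate sequential throughput of a node's local drives against what its network interface can carry; the roughly 100 MB/s per 7,200 RPM drive figure is an assumed typical value for hardware of this era, not a measurement from this brief.

# Compare per-node disk bandwidth with NIC line rate (all figures in MB/s).
MB_PER_S_1GBE = 125.0      # 1GbE line rate, roughly 125 MB/s
MB_PER_S_10GBE = 1250.0    # 10GbE line rate, roughly 1,250 MB/s

def node_disk_bandwidth(drives, mb_per_s_per_drive=100.0):
    return drives * mb_per_s_per_drive

disk = node_disk_bandwidth(12)               # 12 drives -> about 1,200 MB/s
print(disk / MB_PER_S_1GBE)                  # ~9.6: local disks outrun 1GbE almost 10x
print(disk / MB_PER_S_10GBE)                 # ~0.96: 10GbE roughly keeps pace

With a 12-drive node, a single 1GbE link can carry only about a tenth of what the local storage can stream, so shuffle-heavy and ingest-heavy phases queue behind the network; a 10GbE link brings the two back into rough parity.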

What is less intuitive is the enhanced ability to handle the copying of block data and the positive impact it has on job cycle time. Remember that, by default, most clusters make two copies of each block over the network to other nodes within the cluster. With local SAS drives as the principal storage, many companies will try to use the drives' I/O capabilities to maximize their rates and not have storage throttled by the network. At the moment, SSDs would be considered cost-prohibitive, especially if a company is looking at the underlying goal of using Hadoop to deal with Big Data; in this scenario, traditional hard drives win the storage-per-dollar ratio by a factor of 10.
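The effect of that replication traffic on load times can be sketched with simple arithmetic. The assumptions here (HDFS replication factor of 3, so two extra copies cross the network per block, and NIC line rate as the only limit) are illustrative; real ingest rates also depend on the disks, the switches, and the ingest pipeline itself.

# Rough time to load a data set when replication traffic is bounded by the NICs.
def ingest_hours(dataset_tb, nodes, nic_mb_per_s, extra_copies=2):
    network_mb = dataset_tb * 1e6 * extra_copies     # MB that must cross the wire
    cluster_mb_per_s = nodes * nic_mb_per_s          # aggregate NIC capacity
    return network_mb / cluster_mb_per_s / 3600.0

print(round(ingest_hours(100, 20, 125.0), 1))    # 100TB, 20 nodes, 1GbE  -> ~22.2 hours
print(round(ingest_hours(100, 20, 1250.0), 1))   # 100TB, 20 nodes, 10GbE -> ~2.2 hours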

ENTERPRISE CLUSTER DEFINITION
In the end, a true high-performance Hadoop node could look like the following:

• 16 cores minimum
• 32GB RAM
• 6 to 12 1TB drives (or more if space is available)
• Two 10GbE ports (QLogic® 10GbE Intelligent Ethernet Adapter)

How a company approaches the number of nodes in a cluster is really a function of its data set and the desired performance of its MapReduce jobs. Obviously, enough storage is needed to hold the data set and store the results, with room to grow.
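As a starting point, a capacity-only sizing sketch like the one below can translate a data set size into a node count for the specification listed above. The 3x replication factor, the 25 percent headroom for growth and intermediate output, and the example data set sizes are assumptions for illustration; performance targets for the MapReduce jobs would typically push the final count higher.

# Minimum node count needed just to hold a data set on nodes of the spec above.
def nodes_for_capacity(dataset_tb, drives_per_node=12, drive_tb=1.0,
                       replication=3, headroom=0.25):
    usable_tb_per_node = drives_per_node * drive_tb * (1.0 - headroom)
    raw_tb_needed = dataset_tb * replication
    return int(-(-raw_tb_needed // usable_tb_per_node))   # ceiling division

print(nodes_for_capacity(50))     # 50TB data set  -> 17 nodes
print(nodes_for_capacity(200))    # 200TB data set -> 67 nodes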

Understand that the data set that has been populated will remain and will be appended with new data coming in. Most companies are finding that there is potential to create content from their data warehouse and use Hadoop to refine it rather than turning to traditional tools. This allows Hadoop to become part of the data analytics workflow and to shorten processing time substantially compared with the traditional tools. Because the traditional tools are very expensive and multi-purposed, this is a huge win for the company and the data warehouse team. Other sources of Hadoop data include a company's Web site, transaction database, and product information, as well as raw data gathered by Web crawlers on competitor sites.

THE QLOGIC SOLUTION
A few words about QLogic: the company offers the world's favorite enterprise adapters in Ethernet, Fibre Channel, and converged networking. Our adapters provide a significant management edge in dealing with large Hadoop clusters and help companies get the most out of their data sets. We offer unrivaled flexibility in connection topologies, with a solid technology foundation matured over more than 10 million adapters shipped to our customers.

The QLogic 10Gbps Intelligent Ethernet Adapters are designed for high-performance applications such as grid computing, virtualization, network security appliances, IP content delivery systems, network attached storage (NAS), and backup servers. The QLogic adapters boast industry-leading networking performance, with bidirectional dual-port throughput of approximately 40Gbps. This performance makes the adapter suitable for high-bandwidth applications as well as applications that require network scaling. The QLogic 10GbE Intelligent Ethernet Adapter delivers this extreme performance while consuming only minimal CPU resources, approaching the CPU utilization found with TCP/IP offload engine (TOE) solutions. Consequently, more CPU resources are available, resulting in faster application performance and higher VM density per server.

QLogic products are optimized from end to end, from adapter to fabric and from edge to core. The IOPS and throughput of the QLogic adapters are best-in-class and provide unprecedented levels of performance, superior scalability, and enhanced reliability that exceed the requirements of next-generation data centers. High-performance storage networking products from QLogic ensure the broadest interoperability and design and implementation flexibility, enabling companies and data center administrators to focus on larger issues and thus meet higher service level agreements (SLAs) for greater uptime and business continuity. IT professionals continue to rely on QLogic technology to consolidate infrastructure and reduce total cost of ownership.

Corporate Headquarters: QLogic Corporation, 26650 Aliso Viejo Parkway, Aliso Viejo, CA 92656, 949-389-6000

International Offices: UK | Ireland | Germany | France | India | Japan | China | Hong Kong | Singapore | Taiwan | Israel

www.qlogic.com

© 2014 QLogic Corporation. Specifications are subject to change without notice. All rights reserved worldwide. QLogic and the QLogic logo are registered trademarks of QLogic Corporation. Facebook is a registered trademark of Facebook, Inc. Google is a registered trademark of Google Inc. Yahoo is a registered trademark of Yahoo Inc. All other brand and product names are trademarks or registered trademarks of their respective owners. Information supplied by QLogic Corporation is believed to be accurate and reliable. QLogic Corporation assumes no responsibility for any errors in this document. QLogic Corporation reserves the right, without notice, to make changes in product design or specifications.

Enterprise Hadoop Architecting Next-Generation Infrastructure

Technology Brief

has been populated will remain and be appended with new data coming in Most companies are finding that there is potential to create content from their data warehouse and use Hadoop to refine it versus turning to traditional tools This allows Hadoop to become part of the data analytics workflow and optimize the reduction time substantially from the traditional tools Because the traditional tools are very expensive and multi-purposed this is a huge win for the company and data warehouse team Other sources of Hadoop data include a companyrsquos Web site transaction database and product information and raw data gathered by Web crawlers on competitor sites

THE QLOGIC SOLUTION A few words about QLogic The company offers the worldrsquos favorite enterprise adapters in Ethernet Fibre Channel and converged networking Our adapters provide a significant management edge in dealing with large Hadoop clusters and help companies get the most out of their data set We offer unrivaled flexibility in connect topologies with a solid technology foundation matured with more than 10 million adapters shipped to our customers

The QLogic 10Gbps Intelligent Ethernet Adapters are designed for high-performance applications such as grid computing virtualization network security appliances IP content delivery systems network attached storage (NAS) and backup servers The QLogic adapters boast industry-leading networking performance with bidirectional dual-port throughput equaling approximately 40Gbps This performance makes the adapter suitable for high-bandwidth applications as well as applications that require network scaling The QLogic 10GbE Intelligent Ethernet Adapter delivers this extreme performance while consuming only minimal CPU resources approaching the CPU utilization found with TCPIP offload engine (TOE) solutions Consequently more CPU resources are available resulting in faster application performance and higher VM density per server

QLogic products are optimized from end-to-end from adapter to fabric and from edge to core The IOPS and throughput of the QLogic adapters are best-in-class and provide unprecedented levels of performance superior scalability and enhanced reliability that exceed the requirements for next-generation data centers High-performance storage networking products from QLogic ensure the broadest interoperability and design and implementation flexibility enabling companies and data center administrators to focus on larger issues thus meeting higher service level agreements (SLAs) for greater uptime and business continuity IT professionals continue to rely on QLogic technology to consolidate infrastructure and reduce total cost of ownership