
  • White Paper

Power Struggle: How IT Managers Cope with Data Center Power Demands and the Raptor Solution

Edwin Hoffman, Founder and Chief Solution Architect


SCOPE AND PURPOSE

This white paper addresses the power consumption and cooling demands that IT managers face in data centers. Part of the title of this document is taken from an April 03, 2006 Computerworld article that deals with power consumption and cooling in data centers.

The article references both Trinity Health's and Industrial Light & Magic's (ILM) data centers, and it covers a growing problem: power consumption and the cost of cooling data centers have become extreme and are now a major issue for companies of all sizes.

Some data centers have already run into situations where no more power can be piped into the building, whether because of building codes or a complete lack of available infrastructure.

    To read the entire article, refer to the link:

    http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=110072

    Or, read the article in pdf format by clicking the attached paper clip.

POWER CONSUMPTION PROBLEM OVERVIEW

From the April 03, 2006 Computerworld article: Running a lot of servers in your data center these days? Don't be surprised if you get a knock on the door from your chief financial officer. For IT organizations, reliability and availability are paramount, and data center managers may not be aware that energy costs are spiraling upward, because the bills often go to the facilities group. In fact, the power and cooling infrastructure may be designed and operated by facilities. "The facilities people may not even be aware of what the IT guys need," says Nathan Brookwood, an analyst at Insight64.

While IT worries about hot spots, and facilities worries about power and cooling operations, CFOs are starting to notice those electricity bills, says Peter Gross, CEO of EYP Mission Critical Facilities Inc. in New York. Top management used to overlook the data center in its energy management assessments. "It's never been a significant percentage of overall power consumption in the past. But server farms are becoming so huge and taking up so much energy that that is changing," he says. "People are taking note."

"People are starting to realize it's a problem" and that the two organizations can't continue to work as separate fiefdoms, says Jerry Murphy, an analyst at Robert Frances Group Inc. "These two departments are no longer separate," he says. "They must work hand in hand to meet the future power and cooling needs of the enterprise. You need better integration."

One area where improvement can easily be made and efficiencies realized is network switching. Almost all data centers use large chassis-based switches that consume large amounts of power and produce vast amounts of heat.

An example is the Cisco Catalyst 6509 with Supervisor 720, which requires two 30-amp supplies to be installed to support the power-hungry systems in the chassis. At a maximum of 8,700 watts of possible output, the 6509 produces a prodigious amount of heat (34,800 BTU¹). Two Catalyst 6509s are normally required for full redundancy, so up to 17,400 watts could be required (69,600 BTU).²

1. BTU: British thermal unit, the amount of heat required to raise the temperature of one pound of water by one degree Fahrenheit (about 1,055 joules); equipment heat output is commonly quoted in BTU per hour.
2. All Catalyst 6509 numbers are taken from published Cisco manuals and other documents.
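For reference, the short sketch below converts the switch wattage above into an approximate heat load using the common engineering factor of roughly 3.41 BTU/hr per watt. It is only an illustration of the conversion; the BTU figures quoted in the text come from Cisco's published documentation and may reflect different rating or configuration assumptions.

```python
# Rough heat-load arithmetic for a redundant pair of chassis switches.
# Uses the standard approximation 1 W ~= 3.412 BTU/hr; vendor-published heat
# figures (such as the 34,800 BTU cited above) may be based on different
# configuration or rating assumptions.

WATTS_PER_CHASSIS = 8700      # maximum output cited for a Catalyst 6509 / Sup 720
BTU_PER_WATT_HOUR = 3.412     # common engineering conversion factor

def heat_load_btu_per_hr(watts: float) -> float:
    """Convert electrical power (W) to an approximate heat load in BTU/hr."""
    return watts * BTU_PER_WATT_HOUR

single = heat_load_btu_per_hr(WATTS_PER_CHASSIS)
redundant_pair = heat_load_btu_per_hr(2 * WATTS_PER_CHASSIS)   # 17,400 W total

print(f"One chassis: {WATTS_PER_CHASSIS} W -> {single:,.0f} BTU/hr")
print(f"Two chassis: {2 * WATTS_PER_CHASSIS} W -> {redundant_pair:,.0f} BTU/hr")
```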

The chief operations officer (Marvin Wheeler) at Terremark Worldwide Inc. manages a 600,000-square-foot collocation facility designed to support 100 watts per square foot.

"There are two issues. One is power consumption, and the other is the ability to get all of that heat out. The cooling issues are the ones that generally become the limiting factor," he says.


• Power struggle: How IT managers cope with the data center power demands
Robert Mitchell, April 03, 2006 (Computerworld)

When Tom Roberts oversaw the construction of a 9,000-square-foot data center for Trinity Health, a group of 44 hospitals, he thought the infrastructure would last four or five years. A little more than three years later, he's looking at adding another 3,000 square feet and re-engineering some of the existing space to accommodate rapidly changing power and cooling needs.

As in many organizations, Trinity Health's data center faces pressures from two directions. Growth in the business and a trend toward automating more processes as server prices continue to drop have stoked the demand for more servers. Roberts says that as those servers continue to get smaller and more powerful, he can get up to eight times more units in the same space. But the power density of those servers has exploded.

    "The equipment just keeps chewing up more and more watts per square foot," says Roberts, director of data center services at Novi, Mich.-based Trinity. That has resulted in challenges meeting power-delivery and cooling needs and has forced some retrofitting.

    "It's not just a build-out of space but of the electrical and the HVAC systems that need to cool these very dense pieces of equipment that we can now put in a single rack," Roberts says.

    Power-related issues are already a top concern in the largest data centers, says Jerry Murphy, an analyst at Robert Frances Group Inc. in Westport, Conn. In a study his firm conducted in January, 41% of the 50 Fortune 500 IT executives it surveyed identified power and cooling as problems in their data centers, he says.

    Murphy also recently visited CIOs at six of the nation's largest financial services companies. "Every single one of them said their No. 1 problem was power," he says. While only the largest data centers experienced significant problems in 2005, Murphy expects more data centers to feel the pain this year as administrators continue to replenish older equipment with newer units that have higher power densities.

    In large, multimegawatt data centers, where annual power bills can easily exceed $1 million, more-efficient designs can significantly cut costs. In many data centers, electricity now represents as much as half of operating expenses, says Peter Gross, CEO of EYP Mission Critical Facilities Inc., a New York-based data center designer. Increased efficiency has another benefit: In new designs, more-efficient equipment reduces capital costs by allowing the data center to lower its investment in cooling capacity.

    Pain Points

    Trinity's data center isn't enormous, but Roberts is already feeling the pain. His data center houses an IBM z900 mainframe, 75 Unix and Linux systems, 850 x86-class rack-mounted servers, two blade-server farms with hundreds of processors, and a complement of storage-area networks and network switches. Simply getting enough power where it's needed has been a challenge. The original design included two 300-kilowatt uninterruptible power supplies.

    "We thought that would be plenty," he says, but Trinity had to install two more units in January. "We're running out of duplicative power," he says, noting that newer equipment is dual-corded and that power density in some areas of the data center has surpassed 250 watts per square foot.


  • At Industrial Light & Magic's brand-new 13,500-square-foot data center in San Francisco, senior systems engineer Eric Bermender's problem has been getting enough power to ILM's 28 racks of blade servers. The state-of-the-art data center has two-foot raised floors, 21 air handlers with more than 600 tons of cooling power and the ability to support up to 200 watts per square foot.

    Nonetheless, says Bermender, "it was pretty much outdated as soon as it was built." Each rack of blade servers consumes between 18kw and 19kw when running at full tilt. The room's design specification called for six racks per row, but ILM is currently able to fill only two cabinets in each because it literally ran out of outlets. The two power-distribution rails under the raised floor are designed to support four plugs per cabinet, but the newer blade-server racks require between five and seven. To fully load the racks, Bermender had to borrow capacity from adjacent cabinets.

    The other limiting factor is cooling. At both ILM and Trinity, the equipment with the highest power density is the blade servers. Trinity uses 8-foot-tall racks. "They're like furnaces. They produce 120-degree heat at the very top," Roberts says. Such racks can easily top 20kw today, and densities could exceed 30kw in the next few years.

    What's more, for every watt of power used by IT equipment in data centers today, another watt or more is typically expended to remove waste heat. A 20kw rack requires more than 40kw of power, says Brian Donabedian, an environmental consultant at Hewlett-Packard Co. In systems with dual power supplies, additional power capacity must be provisioned, boosting the power budget even higher. But power-distribution problems are much easier to fix than cooling issues, Donabedian says, and at power densities above 100 watts per square foot, the solutions aren't intuitive.

    For example, a common mistake data center managers make is to place exhaust fans above the racks. But unless the ceiling is very high, those fans can make the racks run hotter by interfering with the operation of the room's air conditioning system. "Having all of those produces an air curtain from the top of the rack to the ceiling that stops the horizontal airflow back to the AC units," Roberts says.

    Trinity addressed the problem by using targeted cooling. "We put in return air ducts for every system, and we can point them to a specific hot aisle in our data center," he says.

ILM spreads the heat load by spacing the blade server racks in each row. That leaves four empty cabinets per row, but Bermender says he has the room to do that right now. He also thinks an alternative way to distribute the load, partially filling each rack, is inefficient. "If I do half a rack, I'm losing power efficiency. The denser the rack, the greater the power savings overall because you have fewer fans," which use a lot of power, he says.

    Bermender would also prefer not to use spot cooling systems like IBM's Cool Blue, because they take up floor space and result in extra cooling systems to maintain. "Unified cooling makes a big difference in power," he says.

    Ironically, many data centers have more cooling than they need but still can't cool their equipment, says Donabedian. He estimates that by improving the effectiveness of air-distribution systems, data centers can save as much as 35% on power costs.

    Before ILM moved, the air conditioning units, which opposed each other in the room, created dead-air zones under the 12-inch raised floor. Seven years of moves and changes had left a subterranean tangle of hot and abandoned power and network cabling that was blocking airflows. At one point, the staff powered down the entire data center over a holiday weekend, moved out the equipment, pulled up the floor and spent three days removing the unused cabling and reorganizing the rest. "Some areas went from 10 [cubic feet per minute] to 100 cfm just by getting rid of the old cable under the floor," Bermender says.

    Even those radical steps provided only temporary relief, because the room was so overloaded with equipment. Had ILM not moved, Bermender says, it would have been forced to move the data center to a collocation facility. Managers of older data centers can expect to run into similar problems, he says.

    That suits Marvin Wheeler just fine. The chief operations officer at Terremark Worldwide Inc. manages a 600,000-square-foot collocation facility designed to support 100 watts per square foot.

    "There are two issues. One is power consumption, and the other is the ability to get all of that heat out. The cooling issues are the ones that generally become the limiting factor," he says.

    With 24-inch floors and 20-foot-high ceilings, Wheeler has plenty of space to manage airflows. Terremark breaks floor space into zones, and airflows are increased or decreased as needed. The company's service-level agreements cover both power and environmental conditions such as temperature and humidity, and it is working to offer customers Web-based access to that information in real time.

    Terremark's data center consumes about 6 megawatts of power, but a good portion of that goes to support dual-corded servers. Thanks to redundant power designs, "we have tied up twice as much power capacity for every server," Wheeler says.

    Terremark hosts some 200 customers, and the equipment is distributed based on load. "We spread out everything. We use power and load as the determining factors," he says.

    But Wheeler is also feeling the heat. Customers are moving to 10- and 12-foot-high racks, in some cases increasing the power density by a factor of three. Right now, Terremark bills based on square footage, but he says collocation companies need a new model to keep up. "Pricing is going to be based more on power consumption than square footage," Wheeler says.

According to EYP's Gross, the average power consumption per server rack has doubled in the past three years. But there's no need to panic yet, says Donabedian.

    "Everyone gets hung up on the dramatic increases in the power requirements for a particular server," he says. But they forget that the overall impact on the data center is much more gradual, because most data centers only replace one-third of their equipment over a two- or three-year period.

    Nonetheless, the long-term trend is toward even higher power densities, says Gross. He points out that 10 years ago, mainframes ran so hot that the systems moved to water cooling before a change from bipolar to more efficient CMOS technology bailed them out.

  • "Now we're going through another ascending growth curve in terms of power," he says. But this time, Gross adds, "there is nothing on the horizon that will drop that power."


Chart: Where Data Center Power Goes (Source: EYP Mission Critical Facilities Inc., New York)

Chart: Big Problem in the Biggest Corporations. Survey question: "Do you have a problem with power and cooling in your IT data center?" (Source: Robert Frances Group Inc., Westport, Conn. Base: 50 Fortune 500 IT executives, January 2006)

How to Spend 450 Watts

Based on a typical dual-processor 450W 2U server, approximately 160W of the 450W (35%) are losses in the power-conversion process.

AC/DC losses: 131W
DC/DC losses: 32W
Fans: 32W
Drives: 72W
PCI cards: 41W
Processors: 86W
Memory: 27W
Chip set: 32W

Source: EYP Mission Critical Facilities Inc.; Intel Corp.
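As a quick check of the breakdown above, the following sketch tallies the per-component figures; the small differences from the quoted 450 W and 35% are rounding in the published numbers.

```python
# Tally of the per-component draw listed above for a typical dual-processor 2U server.
# Figures are the ones quoted in the sidebar; minor differences from the stated
# 450 W total and 35% conversion loss are rounding.

draw_w = {
    "AC/DC losses": 131,
    "DC/DC losses": 32,
    "Fans": 32,
    "Drives": 72,
    "PCI cards": 41,
    "Processors": 86,
    "Memory": 27,
    "Chip set": 32,
}

total = sum(draw_w.values())
conversion_losses = draw_w["AC/DC losses"] + draw_w["DC/DC losses"]

print(f"Total draw:            {total} W")                 # ~450 W
print(f"Power-conversion loss: {conversion_losses} W "
      f"({conversion_losses / total:.0%} of total)")       # ~35%
```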

• Sidebar: The politics of power
Robert Mitchell, April 03, 2006 (Computerworld)

Running a lot of servers in your data center these days? Don't be surprised if you get a knock on the door from your chief financial officer. For IT organizations, reliability and availability are paramount, and data center managers may not be aware that energy costs are spiraling upward, because the bills often go to the facilities group. In fact, the power and cooling infrastructure may be designed and operated by facilities. "The facilities people may not even be aware of what the IT guys need," says Nathan Brookwood, an analyst at Insight64.

    While IT worries about hot spots, and facilities worries about power and cooling operations, CFOs are starting to notice those electricity bills, says Peter Gross, CEO of EYP Mission Critical Facilities Inc. in New York. Top management used to overlook the data center in its energy management assessments. "It's never been a significant percentage of overall power consumption in the past. But server farms are becoming so huge and taking up so much energy that that is changing," he says. "People are taking note."

    "People are starting to realize it's a problem" and that the two organizations can't continue to work as separate fiefdoms, says Jerry Murphy, an analyst at Robert Frances Group Inc. "These two departments are no longer separate," he says. "They must work hand in hand to meet the future power and cooling needs of the enterprise. You need better integration."

• Sidebar: Doing the Math
Robert Mitchell, April 03, 2006 (Computerworld)

In a typical data center, every watt of power consumed by IT equipment requires another watt of power for overhead, including losses in power distribution, cooling and lighting. Depending on efficiency, this "burden factor" typically ranges from 1.8 to 2.5 times the IT load.

Assuming a 1:1 ratio, a 3 MW data center will require 6 MW of power to operate. At 6 cents per kilowatt-hour, that adds up to $3.15 million annually. However, in some areas of the country, average costs are closer to 12 cents per kilowatt-hour, which would double the cost. With those numbers, even a modest 10% improvement in efficiency can yield big savings.

    With average per-rack power consumption tripling over the past three years, skyrocketing power bills are turning the heads of chief financial officers, particularly in companies with large data centers. Such scrutiny is less prevalent at financial institutions, where reliability is still the most important factor. But other industries, such as e-commerce, are much more sensitive to the cost of electricity, says Peter Gross, CEO of EYP Mission Critical Facilities.

How many servers does it take to hit 3 MW? Assuming today's average of 5 kW per rack, you would need 600 cabinets with 15 servers per enclosure, or 9,000 servers total. A new data center designed for 100 watts per square foot would require 30,000 square feet of raised-floor space to accommodate the load.

HERE'S HOW DATA CENTER POWER COSTS CAN ADD UP

Power required by data center equipment: 3 MW
Power-distribution losses, cooling, lighting: 3 MW
Total power requirement: 6 MW
Cost per kilowatt-hour: $0.06
Annual electricity cost for 24/7 operation: $3.15 million
Annual savings from a 10% increase in efficiency: $315,000
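The sidebar's arithmetic can be sketched as follows, using the quoted burden factor, utility rate, and per-rack averages; the helper names are illustrative and not from the article.

```python
# A minimal sketch of the "Doing the Math" sidebar arithmetic. The 1:1 burden
# ratio, $0.06/kWh rate, and per-rack figures are the ones quoted above.

HOURS_PER_YEAR = 24 * 365            # 8,760 hours of 24/7 operation

def annual_cost(total_kw: float, rate_per_kwh: float) -> float:
    return total_kw * HOURS_PER_YEAR * rate_per_kwh

it_load_kw = 3000                    # 3 MW of IT equipment
burden = 1.0                         # one watt of overhead per IT watt
total_kw = it_load_kw * (1 + burden) # 6 MW delivered to the building

cost = annual_cost(total_kw, rate_per_kwh=0.06)
print(f"Annual electricity cost: ${cost:,.0f}")                   # ~$3.15 million
print(f"Savings from a 10% efficiency gain: ${0.10 * cost:,.0f}") # ~$315,000

# Server-count and floor-space arithmetic from the same sidebar
racks = it_load_kw / 5               # 5 kW average per rack -> 600 racks
servers = racks * 15                 # 15 servers per rack   -> 9,000 servers
floor_sqft = it_load_kw * 1000 / 100 # 100 W per square foot -> 30,000 sq ft
print(f"{racks:.0f} racks, {servers:.0f} servers, {floor_sqft:,.0f} sq ft")
```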

• Sidebar: The big drain: where the most power is wasted
Robert Mitchell, April 03, 2006 (Computerworld)

Increasing power densities of networked storage, communications equipment and servers all contribute to the power and cooling problem. However, servers have become the biggest issue because server farms have grown so large relative to everything else in the data center. Within the server, increasing power budgets for processors are one of the biggest problems, says Nathan Brookwood, principal at Insight64. "When the first Pentium chip came out, everyone was horrified because it used 16 watts," substantially more than the 486, he says. Today, the worst offenders are at 150 watts, and 70 to 90 watts is common. Processor inefficiencies reached critical mass when chips moved to 90nm technology. "The leakage current for transistors became a meaningful part of overall power consumption," he says. As heat loads were going up, increasing voltage and frequency was no longer an option.

    "Power was going up at an alarming rate," says Randy Allen, corporate vice president at chip maker Advanced Micro Devices Inc.. New dual-core chips from Intel Corp. and AMD may give IT a bit of breathing room. Intel says its Woodcrest generation of processors, due later this year, will cut thermal design power by 40% over current single-core designs, while AMD claims its dual-core chips offer 60% faster performance within the same "power envelope" as its single-core chips. Both also offer capabilities to step down power levels when utilization levels drop. For example, AMD claims that a 20% drop in performance level will cut power levels on its newest chips by 50%.

    While dual-core and future multicore designs may give data center managers a bit of breathing room, they won't stop the power-density crisis. While performance per watt is going up, total wattage per chip is staying at the same or slightly higher levels and will continue to climb. Allen likens the battle to trying to climb a falling ladder. "The fundamental issue here is that computational requirements are increasing at a faster rate" than efficiency gains are rising, he says.

Beyond the processor, there are areas of inefficiency within the server itself. For example, vendors commonly put inefficient power supplies in high-volume x86-class servers because they don't see a competitive advantage in putting in more-efficient components that add a few dollars to their server costs. Jon Koomey, a consulting professor at Stanford University who has studied data center power issues for the industry, calls this a "perverse incentive that pervades the design and operation of data centers." Commonly used power supplies have a typical efficiency of 65% and are a huge generator of waste heat. Units with efficiencies of 90% or better will pay for themselves in reduced operating costs over the life of the equipment. There's a second benefit as well, says Koomey. Data center managers who are purchasing thousands of servers could also design new data centers with a lower capital investment in cooling capacity by purchasing more-efficient servers. "If you are able to reduce power use in the server, that allows you to reduce the capital cost of the equipment and you can save costs upfront," says Koomey. Power supplies are a simple way to gain efficiency, he adds, because they can be inserted into existing system designs without modification.
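To illustrate why low-efficiency supplies generate so much waste heat, the sketch below compares the quoted 65% and 90% efficiency figures for an assumed DC load; the 290 W load is a hypothetical example value, not a number from the article.

```python
# Illustrative comparison of power-supply efficiency using the 65% and 90%
# figures quoted above. The 290 W DC load is an assumed example value.

def wall_draw_and_waste(dc_load_w, efficiency):
    """Return (AC input draw, waste heat) for a supply at a given efficiency."""
    wall = dc_load_w / efficiency
    return wall, wall - dc_load_w

dc_load = 290.0   # hypothetical DC load delivered to the server's components

for eff in (0.65, 0.90):
    wall, waste = wall_draw_and_waste(dc_load, eff)
    print(f"{eff:.0%} efficient: draws {wall:.0f} W from the wall, "
          f"wastes {waste:.0f} W as heat")
```

At 65% efficiency the supply in this example dissipates roughly five times as much heat as the 90% unit for the same delivered load, which is the gap the reduced cooling investment comes from.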

    "For IT managers building data centers with thousands of servers, the performance per watt will be an absolutely critical buying criteria," Allen says. Unfortunately, there are no benchmarks to help data center managers make such determinations, and power ratings on servers do not reflect real-world power-consumption levels. "If you want to buy [more-efficient] servers, there is no objective power measurement," says Koomey. He is working with Sun Microsystems Inc., which hopes to eventually bring together a consortium of vendors to develop performance-per-watt benchmarks.

Large improvements are also possible in room air-conditioning efficiency, says Hewlett-Packard Co.'s Brian Donabedian. One of the biggest costs is the power required to run all of the fans that keep the room temperature balanced. Variable-speed blower motors are a relatively easy retrofit that can cut fan power consumption -- and related heat that's generated -- by 70%, he says. Poorly located air conditioners can end up fighting one another, creating a low-pressure, dead-air space in the middle of the room where little or no cold air comes out of perforated tiles in the cold-air aisles. "It's like a tornado," he says. On the other hand, putting high-density racks too close to air-conditioning units gives the racks less cool air because the pressure below the vent tiles is lower, he says. Placing them farther away may sound counterintuitive, but actually allows for greater efficiency, he says. Even when air is delivered to the rack, a failure to install blanking plates and plug cable-entry points drops air pressure within the rack, reducing air circulation and cooling efficiency.

    Air-conditioning units need to cooperate better, says Steve Madara, vice president at Emerson Network Power, a division of Liebert Corp. Most units in data centers operate as "isolated islands" today, but they are increasingly being designed to work as a single unit, he says. That avoids common-room balance problems, such as when one unit humidifies one side of the room while the other is dehumidifying it.



RAPTOR SOLUTION FOR MORE EFFICIENT DATA CENTER POWER USE

Network switches now exist that not only switch as much traffic as, or more than, the large chassis switches, but do so with lower power draw and much lower cooling requirements, spreading the power and cooling load across multiple racks or areas of the data center. The Raptor Networks Technology ER-1010 distributed switch fabric system fits this new paradigm and lowers the power and cooling costs of network switching.

An ER-1010 8-Pack with 192 Gigabit ports and 24 10-GbE ports has a maximum output of 1,600 watts (about one-fifth the power of a comparable Cisco solution) and a normal operating output of 1,480 watts, using standard 15-amp circuits rather than specially installed 30-amp circuits. The 8-Pack's cooling requirements are also much lower than those of an equivalent standard chassis-based switching system. The ER-1010's distributed nature permits installation of any switch in any part of the data center while still maintaining a single switch fabric, so switching capacity can be placed wherever the power and cooling budget allows. Efficient use of power and cooling may extend the time a data center has before a move becomes essential, or even prevent a move altogether.

Companies are still building centralized data systems into larger data centers because it is easier than building smaller data centers that can distribute the loads. In this case, when the power needs of a data center exceed the facility's capacity, the only alternative left is moving to a much larger, better-supplied location, which may not be economically desirable, feasible, or even possible at that point in time.

The ER-1010 allows parts of a data center to be moved to other buildings that are less than ideal for a major center but fine for incremental growth, while still allowing the new data center annex to be part of the same switched backbone. Figure 1 shows how a main data center can be extended through a series of annexes to create a more resilient, distributed, disaster-tolerant center that acts like a single data center system. Buildings with smaller data center power requirements can be utilized more cost-effectively, and cooling costs are also reduced because the smaller, more manageable areas and systems do not produce as much heat as a larger chassis-based system.

Figure 1: Distributed Data Center (main data center with 672 ports total, linked to Annex A and Annex B with 384 ports each)


The amount of space available in the data center can present other challenges relating to heat and power. If the data center is at or near full capacity, cooling hotspots can develop and overall temperatures can rise. When electronic systems operate at temperatures near or above their design specification, components degrade more quickly, leading to higher failure rates and lowering the Mean Time Between Failures (MTBF) of existing systems. The resulting downtime and associated expenses raise the cost and complexity of a data center once redundancy and resiliency requirements are put in place.

As illustrated in the Meta Group's data, a failure that results from overheating can lead to significant revenue losses.

Rather than using chassis-based switches that take up large amounts of rack space, a distributed system uses switches that fit into existing unused 1U spaces and allows some of the hottest systems to be moved into an otherwise empty rack. The ER-1010 fits into a 1U space but connects to other ER-1010s to create a virtual chassis (similar in operation to the Catalyst 6500 series), all while offering all the features of that single virtual switch, no matter where each physical switch is placed.

Figure 2 shows 15 ER-1010 switches installed in otherwise wasted empty spaces in existing racks. The empty rack on the right can now be used to spread the power load, cooling load, and processing load around the data center.

    Figure 2: Raptor Switches Filling Empty Rack Space


If these racks are not located in the same facility, or if the facilities are large distances apart, the ER-1010 can link the racks to satisfy any data center interconnection needs. Figure 3 shows how ER-1010 systems can operate as a single switch over DWDM to link multiple data centers as a single switching device, in this case data centers in Irvine, CA and San Jose, CA.

The ER-1010 can even help with racks full of blade server systems by linking all the blade servers into a single switching system. In Figure 4, a complete ER-1010 8-Pack is installed at the top of every rack (8U), which allows up to 192 GbE blade servers, or 24 10-GbE blade servers, to be connected per rack.

This is impossible with a chassis-based switch system because the chassis itself consumes much of the rack space. The power consumed in this design is only 1,480W per rack, whereas a Catalyst 6509 with Sup 720 could consume 8,700W per rack. Based on the configuration in Figure 4, Table 1 shows the annual savings in power and cooling alone for a four-rack switching solution.

Table 1: System Power Cost Comparisons

System                    | Cooling Load (W) | Power Draw (W) | Cost per kWh | Total Annual Cost
Catalyst 6509 w/ Sup 720  | 34,800           | 34,800         | $0.06        | $36,581.76
ER-1010 8-Pack            | 4,920            | 4,920          | $0.06        | $5,150.88
Annual savings            |                  |                |              | $31,430.88
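A minimal sketch of the arithmetic behind Table 1, assuming 24/7 operation (8,760 hours per year) at the stated $0.06 per kWh; the small difference from the table's ER-1010 figure appears to be rounding in the source.

```python
# Sketch of the annual-cost formula implied by Table 1:
# (power + cooling) watts, run 24/7, billed at $0.06 per kWh.

HOURS_PER_YEAR = 8760
RATE = 0.06  # $ per kWh

def annual_cost(power_w: float, cooling_w: float) -> float:
    return (power_w + cooling_w) / 1000 * HOURS_PER_YEAR * RATE

catalyst = annual_cost(34_800, 34_800)   # four-rack Catalyst figures from Table 1
raptor   = annual_cost(4_920, 4_920)     # four-rack ER-1010 figures from Table 1

print(f"Catalyst: ${catalyst:,.2f}")     # $36,581.76
print(f"Raptor:   ${raptor:,.2f}")       # ~$5,171.90 with these inputs (Table 1 lists $5,150.88)
print(f"Savings:  ${catalyst - raptor:,.2f}")
```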

    Figure 3: ER-1010 Using DWDM Over Long Distances


More importantly, more systems can be installed and connected when using Ether-Raptor switches than with standard chassis switches. The power savings of roughly 59 kW frees capacity for many more servers to be added to the facility.

Example 1: Using a representative 1U dual-Opteron server (nServ A121) with a maximum power requirement of 300W, choosing ER-1010 8-Packs over a chassis solution could free up enough power for another 196 servers (59,000W / 300W).

Example 2: A site in Silicon Valley has 29 Catalyst 6509s (all with Sup 720s), each potentially drawing 8,700 watts from two 6,300-watt power supplies (not power-redundant). That is a total of about 252 kW from the switches alone; adding the cooling requirement brings the draw to roughly 504 kW, so these 6509s are potentially costing $264,902 per year in power. The equivalent Raptor solution would draw about 42 kW from the switches, or roughly $44,140 per year including cooling. Again, the difference between 504 kW and 42 kW would allow many more systems to be installed with no overall increase in power or cooling requirements, and might allow the data center to remain at its current facility for another two to three years.

Note: The power differential in Example 2 could supply 1,540 of the servers used in Example 1.
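For a quick check, the sketch below reproduces the arithmetic behind both examples using only the figures quoted above; variable names are illustrative, and the 8,760-hour year and $0.06/kWh rate follow the earlier sidebar.

```python
# Arithmetic behind Examples 1 and 2, using the figures quoted in the text.

HOURS_PER_YEAR = 8760
RATE = 0.06                      # $ per kWh

# Example 1: servers that the ~59 kW saving could power
saving_kw = 59
server_max_kw = 0.3              # 1U dual-Opteron server, 300 W maximum
print(f"Extra servers: {int(saving_kw / server_max_kw)}")         # 196

# Example 2: 29 Catalyst 6509s at 8.7 kW each, doubled for cooling
switch_kw = round(29 * 8.7)      # ~252 kW from the switches alone
total_kw = switch_kw * 2         # ~504 kW including cooling
print(f"Annual cost: ${total_kw * HOURS_PER_YEAR * RATE:,.0f}")   # ~$264,902

# Freed power versus the ~42 kW Raptor switch draw, expressed in servers
freed_kw = total_kw - 42
print(f"Servers the freed power could supply: {freed_kw / server_max_kw:.0f}")  # ~1,540
```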

Figure 4: Raptor 8-Pack Switches Filling Top Rack Space (four 42U racks)

  • Corporate Headquarters: 1241 E. Dyer Road, Suite 150 Santa Ana, CA 92705

    Phone: 949-623-9300 /Fax: 949-623-9400 / Web: www.raptor-networks.com / E-mail: [email protected]

Raptor Networks Technology, Inc. reserves the right to make changes without further notice to any products or data herein to improve reliability, function, or design. Information furnished by Raptor Networks Technology, Inc. is believed to be accurate and reliable. However, Raptor Networks Technology, Inc. does not assume any liability arising out of the application or use of this information, nor the application or use of any product or circuit described herein; neither does it convey any license under its patent rights nor the rights of others.

Raptor Networks Technology, Inc. is a registered trademark and RAST is a trademark of Raptor Networks Technology, Inc. All other trademarks are the property of their respective owners.

    CD-WP1800 05/07/2007


SUMMARY

A major concern for data center operators is the power consumption and cooling of the equipment in the data center. Operators must continuously plan and carry out updates, add switching units and associated devices, and relocate them into other areas served by other cooling sources, all while distributing the operation of the data center and remaining under a single operational Layer 2/3/4 switch.

    The primary goals for data center operators are to:

    Save power costs and cool the systems to prevent heat-induced failures

    Expand data center space without straining the company budget every time an expansion occurs

    Plan a distributed data center that truly scales to the needs of the company

Using the Raptor Networks ER-1010 product line with its distributed switching fabric (Raptor Adaptive Switch Technology) helps the data center operator to meet these primary goals by:

    Delaying costly data center moves, even entirely removing the need to move

Facilitating any move that is needed by making it more modular and much easier to implement

Operating multiple data centers that are connected by essentially the same Layer 2 switching device, thus providing disaster avoidance (not recovery)

Operating blade servers, processor clusters, and storage clusters over a Layer 2 interconnect between sites without exceeding power/cooling budgets in any location

The number of additional devices that can be installed because of a simple change in the network switching equipment can be very large, which certainly allows companies to plan their data center growth more efficiently.