
Architecting Next-Generation Networks

Produced Exclusively for Broadcom by


Table of Contents


Chapter 1: Introduction ....................................................................................................................1

Why Your Network Isn’t Good Enough..........................................................................................1

The Changes Add Up...........................................................................................................2

The Internet..............................................................................................................2

Voice and Video ......................................................................................................4

B2B E-Commerce....................................................................................................6

More and More Users ..............................................................................................7

A Device in Every Pocket........................................................................................9

The Problems Pile On ..........................................................................................................9

Network Efficiency................................................................................................10

Management and Design........................................................................................11

Security ..................................................................................................................11

The Evolving Network...................................................................................................................12

Bigger, Better, Faster, More ..............................................................................................13

Open Standards, Open Protocols .......................................................................................14

Designed for Mobility and Connectivity ...........................................................................14

Embedded Security ............................................................................................................15

Next-Generation Networking.........................................................................................................15

GbE ....................................................................................................................................16

Wireless..............................................................................................................................16

Switches .............................................................................................................................16

Servers................................................................................................................................17

Security ..............................................................................................................................17

Getting Ready ................................................................................................................................17

Education ...........................................................................................................................17

Future-Proofing Your Network..........................................................................................18

Summary ........................................................................................................................................18

Chapter 2: Gigabit Ethernet Migration ..........................................................................................19

GbE Technology Primer ................................................................................................................20

Switches .............................................................................................................................20

Aggregation........................................................................................................................21

Duplex................................................................................................................................21

GbE Products .................................................................................................................................22


GbE Deployment Strategy .............................................................................................................24

GbE Emerging Technologies.........................................................................................................30

TOE....................................................................................................................................30

RDMA................................................................................................................................31

iSCSI ..................................................................................................................................32

iSCSI Extensions for RDMA.............................................................................................34

Looking Ahead...............................................................................................................................35

Summary ........................................................................................................................................35

Chapter 3: Extending Enterprise Networks with Wi-Fi®...............................................................36

A Brief History of Wireless Networking .......................................................................................37

802.11 Legacy....................................................................................................................37

802.11b...............................................................................................................................37

802.11a...............................................................................................................................38

802.11g...............................................................................................................................38

802.11 Everything Else......................................................................................................39

How Wireless Networking Works .................................................................................................40

Basic Operations ................................................................................................................40

802.11 Legacy Specifics ....................................................................................................42

802.11b Specifics...............................................................................................................42

802.11a Specifics ...............................................................................................................43

802.11g Specifics...............................................................................................................43

Broadcom Xpress Frame Bursting Technology.................................................................43

Radios Matter.....................................................................................................................45

Mixed 802.11b and 802.11g Environments.......................................................................46

Building Wireless LANs................................................................................................................47

The Wired Connection.......................................................................................................47

802.11b Architecture..........................................................................................................48

802.11a and 802.11g Architecture .....................................................................................48

Wireless Security Concepts ...........................................................................................................49

WEP ...................................................................................................................................49

802.11i ...............................................................................................................................50

WPA...................................................................................................................................50

AES....................................................................................................................................50


802.1X................................................................................................................................50

Putting It All Together .......................................................................................................51

The Wired Weak Point.......................................................................................................52

Architecting Secure, Next-Generation Wireless LANs .................................................................52

Prerequisites.......................................................................................................................52

Client Software Support.....................................................................................................52

Hardware Support ..............................................................................................................53

Management and Maintenance Concerns ..........................................................................53

Summary ........................................................................................................................................54

Chapter 4: Switching Intelligence in the Enterprise ......................................................................55

What Is a Switch? ..........................................................................................................................56

Intelligent Switching......................................................................................................................57

Key Functionality of Intelligent Switches .........................................................................59

Quality of Service ..................................................................................................59

Security ..................................................................................................................60

Management...........................................................................................................60

Scalability ..............................................................................................................61

VoIP ...................................................................................................................................61

Video..................................................................................................................................62

Wireless LAN Switching ...................................................................................................62

ROI/Convergence ..................................................................................................65

Implementing Intelligent Switching ..............................................................................................65

Summary ........................................................................................................................................70

Chapter 5: Server Migration and Optimization: Maximizing ROI for Existing Assets and Future Growth ...........................................................................................................................................71

Server Technologies.......................................................................................................................71

File and Print Servers.........................................................................................................71

Database Servers................................................................................................................71

Application Servers............................................................................................................72

Email Servers .....................................................................................................................72

Storage Servers ..................................................................................................................72

Web Servers .......................................................................................................................72

Blade Servers .....................................................................................................................74

Defining the “Cutting Edge”..........................................................................................................74


Understanding Performance-Oriented Technologies.........................................................76

Core I/O Components ........................................................................................................76

North Bridge ..........................................................................................................76

South Bridge ..........................................................................................................76

Storage ...............................................................................................................................77

IDE/ATA................................................................................................................77

SATA .....................................................................................................................78

SCSI .......................................................................................................................78

Serial Attached SCSI .............................................................................................78

RAID..................................................................................................................................79

GbE ....................................................................................................................................80

TOE........................................................................................................................80

RDMA....................................................................................................................80

iSCSI ......................................................................................................................81

Technology Integration......................................................................................................81

Technology Convergence ..............................................................................................................82

Converged Network Interface Cards .............................................................................................83

Scalable and Configurable I/O.......................................................................................................84

Interconnects ......................................................................................................................84

HyperTransport ......................................................................................................84

PCI-X .................................................................................................................................85

PCI-Express .......................................................................................................................85

CPU Support ......................................................................................................................85

IA-32 ......................................................................................................................86

AMD Opteron ........................................................................................................86

AMD Athlon 64 and Athlon 64-FX.......................................................................88

EM64T ...............................................................................................................................88

Summary ........................................................................................................................................89

Chapter 6: End-to-End Security: How to Secure Today’s Enterprise Network ............................91

Securing from the Outside In.........................................................................................................92

Software or Hardware Security?....................................................................................................94

Identity Management: Identifying Who and What is on the Network...........................................96

Managing the Proliferation of Client Devices ...................................................................98


Secure Devices...................................................................................................................99

Who You Are vs. Who You Say You Are.......................................................................100

Minimizing Performance Impact .....................................................................................101

Securing VoIP Applications ............................................................................................102

Securing Wireless Networks and Applications................................................................106

Enabling Convergence and the Four-Function Box.........................................................108

Summary ......................................................................................................................................109


Chapter 1: Introduction

It’s a term you’re starting to hear more and more—next-generation networks. Depending on how long you’ve been in the industry, you might have heard it in the past, too: The move from coaxial 10Base5 and 10Base2 networks to modern 10Base-T twisted-pair networks (as well as Token Ring networks) was a major leap forward. As corporate networks began to roll out larger and larger Ethernet LANs, user productivity increased. Users had easier access to files, printers, and other resources; networks were easier to manage and troubleshoot; and connections, based on easier-to-wire CAT3 and CAT5 cabling, were more reliable. Another generation of networks was created when Ethernet switches hit the market, making networks faster and more efficient.

What came before, however, is no match for what’s ahead. Forget about simple speed increases, lower latency, and a new type of infrastructure device. This time, next-generation network means a tenfold or better increase in network throughput. It also means an entirely new range of connectivity options, including wireless “disconnected connectivity.” Next generation means intelligent devices capable of improving network performance and reliability. Finally, it means security built right into the infrastructure, for the first time ever. It’s an exciting time to be a networking professional—provided you’re ready.

Why Your Network Isn’t Good Enough

Simply put, today’s networks are barely sufficient for what companies are asking of them, and they are completely inadequate for the demands of the future. Think about it—many companies are running 100Base-T networks at best, perhaps with 11Mbps wireless connectivity for some users. These are more or less the same networks they’ve been running for half a decade or more, and yet the number of additional demands placed on the network since then is truly staggering.

To appreciate the improvements provided by the next generation of networks, you need a firm grasp of how current networks and technologies developed to the present. In the following sections, we’ll explore the Internet, voice and video technologies, business-to-business (B2B) e-commerce, handheld devices, and more. This background information will provide evidence of the need for the latest technologies and how they will address current networking concerns. We will then build on this foundation in the rest of the book:

• Chapter 2—The important elements of next-generation networks—specifically Gigabit Ethernet (GbE)

• Chapter 3—Strategies for wireless deployment and security

• Chapter 4—The importance of switching intelligence in the infrastructure

• Chapter 5—Server migration and optimization, paying special attention to return on investment (ROI)

• Chapter 6—How to secure your next-generation network


The Changes Add Up

Five years ago, the Internet was still just beginning to take off as a major vehicle for commercial communications and interaction. Nobody had a camera connected to a computer, very few people had anything like a handheld personal digital assistant (PDA), cellular phones did not have digital cameras and built-in General Packet Radio Service (GPRS) transmitters, and many companies still had users who didn’t have email. In 5 short years, everything has changed.

The Internet

Today’s users double the amount of data they work with every 18 months. Much of that data comes from the Internet. To date, companies have dealt primarily with maximizing the efficiency of their relatively low-bandwidth wide area network (WAN) connections. With the average company connecting to the Internet via a T1 line (1.544Mbps), even an old 10Base-T local area network (LAN) offers almost seven times the speed of the WAN connection. Companies have addressed the WAN bottleneck primarily by using both firewalls and proxy servers (see Figure 1.1) and by increasing pipe capacity through T3, OC-3, OC-12, and higher-speed connections.

Figure 1.1: Maximizing Internet efficiency with proxies and firewalls.


Proxy servers increase efficiency by aggregating multiple client connections. The proxy server retrieves content from the Internet, then saves it for future internal requests rather than retrieving the same content over and over. Some proxies and firewalls can improve efficiency further by eliminating wasteful traffic, such as Web surfing to game sites and other non-work-related sites.
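
The caching behavior at the heart of a proxy can be sketched in a few lines of Python. This is a toy illustration of the idea rather than any particular product’s implementation: the first request for a URL crosses the WAN link, and repeat requests are answered from the local cache.

    import urllib.request

    class CachingProxy:
        """Toy illustration of proxy caching: fetch once over the WAN,
        then serve repeat requests from the LAN-side cache."""

        def __init__(self):
            self._cache = {}

        def fetch(self, url):
            if url in self._cache:
                return self._cache[url]                    # no WAN traffic
            body = urllib.request.urlopen(url).read()      # single WAN retrieval
            self._cache[url] = body
            return body

    proxy = CachingProxy()
    first = proxy.fetch("http://example.com/")    # retrieved over the WAN
    second = proxy.fetch("http://example.com/")   # served from the cache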

This focus on the WAN pipe, however, has left the LAN, which is where bottlenecks are starting to appear, largely ignored. Users are working with a lot of data, and much of that data now originates on the LAN in the form of enormous data warehouses, databases, files, and more. Networks are becoming hard-pressed to transport all of that data.

Think about the average size of a Word, Excel, or PowerPoint document that contains pictures and graphics. File sizes continue to increase in most applications from version to version as users take advantage of new features to create richer documents. Even the default image file from a 5-megapixel digital camera is more than 3MB. The network—which isn’t gaining speed as quickly as the files gain size—still has to move all the bits from the file server to client computers and back again.

Productivity is also affecting the network. Despite recent corporate restructuring and downsizing, most companies in the United States increased their overall output. How? Everyone is doing more with less. Thus, each worker has become more productive, and they didn’t get to be more productive by dealing with less data; the corporate network bears the brunt of this increased productivity. Unfortunately, overburdened networks are easy to ignore. Users may complain that things are slow in the mornings, but they gradually begin to accept the status quo and the network remains a hidden efficiency problem.

Even worse for current network bandwidth is the trend toward collaborative computing. Products such as Microsoft SharePoint Portal Server, which are designed to allow real-time collaboration between network users, not only increase the load on the network infrastructure but also highlight any latency problems, which become immediately noticeable and annoying to users.

Network bandwidth hogs can be many and varied, including:

• Storage and backups—larger drives mean more to back up

• Educational training and videoconferencing—video being accessed by a large number of users

• Collaborative applications

• CRM tools

• Help desk software

• Richer content (voice, video, MP3)

Now consider the massive increase in data throughput that companies will see in the next 18 months to 3 years. Networks simply must become faster, more efficient, and much more intelligent in order to keep up. Raw speed is part of the answer, but more efficient and intelligent use of that speed is also an important component. Next-generation networks will provide this speed and intelligence.


Such networks are not a distant phenomenon; they are a current reality. For example, if you are building a network starting with a clean slate, you can use GbE to ensure that your network infrastructure is ready to benefit from emerging technologies. For existing networks, the migration to GbE means reduced wire time, less buffer congestion, and relieved flow control mechanisms—all of which add up to a better user experience and a less harried IT staff.

We’ll explore GbE in detail in Chapter 2.

Voice and Video

As companies seek to reduce travel expenses and improve employee productivity, network-based voice and videoconferencing have become more popular. Many companies are saving tens of thousands of dollars a year by piggybacking voice communications onto their data networks. Voice over IP (VoIP) is a popular suite of technologies that provides high-quality voice transmission over IP networks.

Unfortunately, voice and video, in particular, are harsh on the corporate network. Few networks are engineered to carry a normal share of data as well as decent streaming video. Network engineers and videoconferencing designers have been forced to make a number of concessions and compromises to make videoconferencing feasible. One primary technology is multicasting, enabling a conferencing server to send a single transmission for a video signal. Clients subscribe to the multicast’s IP address and pick up the traffic sent to that address off the network. This technique is much more efficient than unicasting, in which the server must transmit an individual video feed to each client. With multicasting, multiple clients can receive the same transmission, conserving bandwidth. Even multicasting isn’t always enough to make videoconferencing possible, however; some networks are so overburdened that the video traffic must be limited to a portion of the network. Figure 1.2 shows how routers can be programmed with multicast boundaries, creating a multicast domain that contains the videoconference traffic. Outside the multicast domain, users cannot subscribe to the feed.


Figure 1.2: Creating a multicast domain with router multicast boundaries.
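
From the client’s perspective, subscribing to a multicast feed simply means joining the group address so that the network delivers that traffic to the host. The minimal Python receiver below sketches the idea; the group address and port are hypothetical placeholders, not values from any particular conferencing product.

    import socket
    import struct

    GROUP = "239.1.1.1"   # hypothetical multicast group for the video feed
    PORT = 5004           # hypothetical UDP port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Joining the group tells the network (via IGMP) to deliver the
    # multicast stream to this host.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    while True:
        data, sender = sock.recvfrom(65535)
        # hand each datagram to the video decoder here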

In the world of networking, the inability of a network to handle its traffic load—particularly when the applications generating that traffic provide monetary savings and increases in productivity to the organization—is an unforgivable offense. Multicasting is an excellent technology that was designed to increase the efficiency of a network, but networks that can’t even carry a share of multicast traffic across the entire corporation are clearly not engineered to serve the business’ best interests. Next-generation networks need to offer the ability to extend cost-saving, productivity-boosting technologies to every corner of the corporate LAN. They will do so by providing additional raw bandwidth, more efficient routing of traffic, and better management of specialized traffic.


B2B E-Commerce

Networks built 5 years ago carried almost no B2B traffic. Such systems basically didn’t exist; the closest systems to displaying B2B characteristics were the value-added networks (VANs) provided for electronic data interchange (EDI) customers—sort of a private equivalent of the Internet.

Today, there are few companies that don’t run some form of B2B application on their networks, even if it is as simple as ordering office supplies from a Web site. Many companies rely heavily on B2B communications, placing an even greater burden on the corporate network. There is no denying the ways that B2B improves efficiency—inventory systems can place orders with vendors automatically, and entire classes of retailers now exist that don’t even carry an inventory; they simply take orders from customers, pass those orders on to distributors through B2B systems, and process payments on both sides. Many companies utilize e-procurement systems for internal procurement of everything from office supplies to contractors.

The infrastructure required to support these B2B efforts is significant. Figure 1.3 shows a typical B2B infrastructure, including multiple firewalls, application servers, B2B processing platforms, database servers, internal and external clients, Web servers, and more.

Figure 1.3: A typical B2B infrastructure.


What effect does this burden have on the network? Imagine that the population of the United States increased by a factor of ten over 2 or 3 years—how would the postal service cope? The result would resemble a Los Angeles freeway at rush hour—which is pretty much how many corporate networks look these days. In addition, B2B functions aren’t limited to server-to-server or external B2B connections. Internal clients will be using automated ordering, data entry, catalog management, and all sorts of high-bandwidth applications that deliver results to external clients or vendors but generate a great deal of activity between LAN clients and servers.

Next-generation networks need additional speed, intelligence, and security to segment and manage the traffic for these important B2B functions and to provide them with additional bandwidth.

More and More Users

The rapid pace of business growth also has an effect on networks. Obviously, networks must grow to support the business, but rapid change often means unplanned growth that lacks any cohesive, logical design. For example, consider the simple network that Figure 1.4 shows, which looks a lot like most networks that are just starting out. The network was over-engineered for the number of users it needed to support, providing plenty of room for growth—notice the router used to connect two segments, each containing a small number of users—or so it probably seemed at the time.

Figure 1.4: A typical network in the beginning.

As the business grows, users are added until the network can’t support any more. Then the emergency growth patterns begin, with new segments added here and there, segments cascaded from one to the other, and so forth. Before long, the network is out of control—and the business is so busy growing that nobody has the time to redesign it. Figure 1.5 shows how a network’s growth can be like that of a cancer cell—uncontrolled and ultimately detrimental to the host. Routers connect segments in a complex chain rather than through any logical topology. Segments are now more crowded with users and other devices, reflecting the network’s rapid growth. Segments containing servers are at least dedicated to that task, but are haphazardly spread across the architecture rather than being centrally accessible to all segments containing client computers. In short, it is a mess.


Figure 1.5: Uncontrolled growth is common in today’s corporate networks.

Although this type of network “design” might not create huge performance issues, it definitely creates management issues. Problems, when they occur, are more difficult to troubleshoot. Managing change and finding bottlenecks is next to impossible. In short, the network works fine, but it is harder and harder to rein in. Next-generation networks must allow for easily controlled growth, making it so easy to expand the network in any direction that administrators don’t need to think about it. In addition, such networks must ensure that manageability and security remain tight.


A Device in Every Pocket

We’re used to thinking about networks in terms of users: How many users per segment? How many users on the LAN? How many videoconferencing users? But today’s users are acquiring a staggering array of wired and wireless devices, meaning each user can easily represent three or four actual devices, as Figure 1.6 illustrates.

Figure 1.6: Users are beginning to represent multiple devices apiece.

Each device requires bandwidth, has security implications, and consumes network addresses. Multiply the number of users in your environment by even a conservative number like 1.5 devices, and it is no wonder that networks are beginning to show a little strain. Next-generation networks must provide the raw bandwidth for these additional devices. They also need to support open protocols for management and security, allowing this vast range of devices to participate in the network in a secure, controllable fashion.

The Problems Pile On

All of these factors—the Internet, increased data processing, new voice and video services, B2B e-commerce, rapid growth, and a diversity of devices—tend to result in three problem areas: efficiency, management, and security.


Network Efficiency

Network engineers often speak of network utilization in percentages. “Our network runs at 70 percent utilization.” What many don’t realize, and even more don’t discuss, is that networks can’t achieve 100 percent efficiency. Ethernet networks, in particular, become less efficient the more traffic they carry, primarily as a result of the shared-medium, collision-detection nature of Ethernet.

To set up our discussion in later chapters, we’ll quickly review switches to ensure a baseline vocabulary. Switches are the primary methodology used to improve network efficiency. They create an individual physical segment with each switch port while allowing IP addressing to remain the same—in effect creating a virtual subnet that spans many physical segments. As Figure 1.7 illustrates, switches can permit multiple simultaneous conversations because they separate the actual traffic.

Figure 1.7: Switches make networks more efficient.
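
To make the idea concrete, here is a toy Python model of the learning behavior behind Figure 1.7. Real switches do this in silicon, so this is a conceptual sketch only, but it shows how tracking which port owns each MAC address lets a switch deliver a frame to a single port instead of flooding every segment.

    class LearningSwitch:
        """Conceptual model of transparent bridging."""

        def __init__(self, num_ports):
            self.num_ports = num_ports
            self.mac_table = {}                 # MAC address -> port number

        def receive(self, in_port, src_mac, dst_mac):
            self.mac_table[src_mac] = in_port   # learn where the sender lives
            out_port = self.mac_table.get(dst_mac)
            if out_port is None:
                # Unknown destination: flood to every port except the source.
                return [p for p in range(self.num_ports) if p != in_port]
            return [out_port]                   # a private, switched conversation

    switch = LearningSwitch(num_ports=24)
    switch.receive(1, "00:aa", "00:bb")   # destination unknown, so it floods
    switch.receive(2, "00:bb", "00:aa")   # now forwarded only to port 1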

However, even switches have their limits. Switches can become saturated, at which point they simply can’t carry any more traffic. Bargain-basement switches are the most likely to become saturated—even before they are carrying all the traffic that they should be able to handle—creating an instant bottleneck in your network.

Next-generation networks will help solve this problem by providing faster raw bandwidth, which will require a more robust switching fabric. If computers can transmit the information they have and then get off the line, another computer will be able to transmit much sooner. Next-generation switches will operate at higher speeds and will be able to handle the full load of traffic that the network can generate.

We’ll discuss these concepts, including switching fabric, in detail in Chapter 4.


Management and Design

As networks have become more complex to suit business needs, they have also become more difficult to manage. The sheer variety of devices—routers, switches, hubs, gateways, firewalls, proxies, servers, desktop and notebook computers, and other network-attached devices (such as printers)—has, in many cases, become a management nightmare. Pile on the complexity of application-specific management—managing VoIP, videoconferencing protocols and gateways, and so forth—and it is a wonder that administrators don’t simply quit in frustration.

The next generation of networks needs to offer more intelligence and self-management capabilities. Switches must be able to talk to one another more effectively, allowing groups of devices to be managed as a single unit. Devices need to take more responsibility for handling today’s special-purpose traffic, such as VoIP, videoconferencing, and next-generation applications including TCP/IP Offload Engine (TOE), Internet Small Computer System Interface (iSCSI), and Remote Direct Memory Access (RDMA).

We’ll discuss each of these emerging technologies in detail in Chapter 2.

The next generation of networks must also build on the intelligence in today’s networks—particularly in regard to tolerating rapid growth. Networks must readily adapt to changing business conditions without requiring complex redesigns. For next-generation network topologies to succeed, there will need to be even more intelligence and performance built into the switches that control the flow of traffic on the network. The combination of better software and more advanced hardware is the key to making these critical network infrastructure components a success.

Security

A little more than 2 years ago, network security was something a few industry gurus preached, but nobody seriously practiced. Security was an add-on, something you implemented if you had some free time. And who has ever had free time? Today, security is an overriding concern in every field of information technology (IT) and communications. It is no longer sufficient to add security to a network by adding a monitoring tool or antivirus software—security has to be built in starting at the physical network level.

Today’s networks offer only a modicum of built-in security. For example, on wireless networks, the Wired Equivalent Privacy (WEP) standard provides little more than a veneer of security due to the ease with which wireless network traffic can be probed and the availability of tools to crack its simplified encryption scheme. In addition, wired networks are limited to transport-level encryption protocols such as IP Security (IPSec) to provide security. Practically no physical security exists, making it easy for intruders to simply plug in to a spare LAN jack anywhere in an office and begin sniffing traffic from the network. Even when the network hardware supports the ability to forward traffic only for an approved list of MAC addresses (thereby preventing a random LAN jack from allowing access to your entire enterprise network), few network administrators take the steps necessary to implement this degree of security.


Next-generation networks will include security in every aspect of their design. Already, network adapters with built-in IPSec capabilities are enabling all-encrypted networks that are transparent to the client and server operating system (OS). These adapters use high-speed onboard processors to reduce or eliminate additional overhead on the computer’s CPU. Support for the 802.1x protocol is becoming available, requiring network devices to authenticate themselves before they’re even allowed to pass other traffic—effectively stopping the plug-in attacker. In addition, support for new security standards that provide powerful authentication and data encryption functionality—such as Wi-Fi Protected Access™ (WPA) and Advanced Encryption Standard (AES)—is being built into next-generation wireless devices.

The Evolving Network

When properly designed and deployed, the next generation of networks promises to solve most of today’s problems and, perhaps for the first time ever, to look ahead and bypass future networking problems. Network engineers are thinking more about the future applications of networks and designing networks that support open protocols and standards to provide the best possible compatibility with technologies that don’t yet exist. As Figure 1.8 shows, the next generation of networks will provide seamless connectivity for a range of devices, both wired and wireless, with built-in security, a diverse range of connection speeds, and more.

Figure 1.8: Next-generation networks focus on easy connectivity, security, and open standards and protocols.


Bigger, Better, Faster, More

Next-generation networks will provide better mid-range connectivity. Rather than relying on expensive, complex Synchronous Optical Network (SONET) connections—a form of high-speed connectivity generally used for WAN links that provides speeds of roughly 2.5Gbps (OC-48) and beyond—metropolitan area networks (MANs) will be able to rely on massive 10GbE connections. As Figure 1.9 shows, these MANs will allow end-to-end Ethernet connectivity, providing better security (because the traffic won’t have to pass through protocol gateways), design, and bandwidth. In addition, an Ethernet MAN provides the benefit of using only a single technology, Ethernet, rather than multiple technologies and the equipment necessary to bridge between different protocols at every location (that is, you are not doing expensive protocol conversion; you are connecting an Ethernet LAN to an Ethernet LAN rather than converting to Frame Relay or ATM and back to Ethernet).

Figure 1.9: 10GbE provides an exciting new opportunity for MANs.


Internal server I/O has always been measured in megabytes per second, while network I/O has always been measured in megabits per second. Hard disk I/O has continued to grow; GbE, however, is the first major jump in network performance in recent memory. Thus, while GbE is now capable of reducing the performance bottleneck on servers (especially when multiple NICs are aggregated), the introduction of 10GbE will take us to the point at which the bottleneck between server and network begins to disappear. GbE clients will also see significant improvements as they take full advantage of the bandwidth and reduced latency that GbE promises. Hence, the real promise of next-generation networks is to remove the network as a limitation to business.

Read more about 10GbE and its applications at http://searchstorage.techtarget.com/tip/1,289483,sid5_gci870890,00.html, as well as in Chapter 2 and Chapter 4.
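
Because disks are quoted in megabytes per second and networks in megabits per second, it helps to put both on the same scale. The back-of-the-envelope Python below ignores framing and protocol overhead; the 3MB file is simply the digital-camera example from earlier in this chapter.

    def to_megabytes_per_sec(mbps):
        """Convert a raw link rate in megabits/s to megabytes/s."""
        return mbps / 8.0

    for name, rate in [("100Base-T", 100), ("GbE", 1000), ("10GbE", 10000)]:
        print(f"{name:10s} {rate:6d}Mbps  ~ {to_megabytes_per_sec(rate):7.1f}MB/s raw")

    # Time to move a 3MB file, ignoring overhead:
    for name, rate in [("100Base-T", 100), ("GbE", 1000)]:
        print(f"{name}: {(3 * 8) / rate:.2f}s per 3MB file")

Even allowing for real-world overhead, GbE’s roughly 125MB/s of raw capacity is the first Ethernet speed in the same league as local disk throughput, which is exactly the point made above.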

Open Standards, Open Protocols

Past networking technologies have often relied on proprietary or complex protocols, such as SONET, Integrated Services Digital Network (ISDN), and others. As open standards, Ethernet and wireless fidelity (Wi-Fi) offer a broader range of support and, thanks to the competitive marketplace for such devices, lower prices. Because next-generation networks will support open standards for security, traffic management, and device management, you will be able to easily mix and match devices to achieve exactly the type of network your company requires.

Designed for Mobility and Connectivity

The remaining limitation of the network is the wire—and that limitation will be short-lived. Wireless technologies, such as 802.11b Wi-Fi, have already had a major impact, enabling “disconnected connectivity” everywhere from the office conference room to the neighborhood coffee shop. The next generation of that technology—802.11g Wi-Fi—is quickly becoming the new mainstream wireless LAN standard. Even advances in cellular technologies, such as 1X and 3G, promise ubiquitous wireless connectivity.

Read more about 802.11g at http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci783003,00.html, as well as in Chapter 3.

Now, full-size desktop and notebook computers are among the minority in the world of connected devices. Cellular phones, wireless PDAs, tablet PCs, and convergence devices such as the popular BlackBerry™ personal communicator all rely on cellular, GPRS, and Wi-Fi connections to access the Internet, corporate networks, and email—either through direct connection to the corporate net or VPN connections that use the Internet to reach back to the home office.

Even wired networks are seeing an enormous amount of growth in device diversity. Not that long ago, the network contained servers, client computers, and printers. Today, even the mailroom fax machine and copier are connected, allowing users to utilize these devices right from their desktops. Webcams allow parents to peek in on daycare centers while at work. Even the office soda machine may be Internet-connected, allowing 30 workers to check the available selections over the Web and to bill purchases to their company accounts.


Wired networks will also become a medium for storage area networks (SANs). Rather than using expensive, dedicated fiber-based connections, the iSCSI standard allows servers to access storage devices over standard Ethernet connections—GbE and 10GbE in particular—as if the storage were directly attached. Microsoft’s iSCSI implementation for Windows 2000 (Win2K), Windows Server 2003, and Windows XP Professional allows iSCSI use on any form of standard Ethernet, not just GbE and faster technologies. The LAN now provides an infrastructure for building out vast, fault-tolerant SANs at a lower cost than many fiber-based solutions, using reliable, well-understood Ethernet technologies.

Read more about iSCSI at http://whatis.techtarget.com/definition/0,,sid9_gci750136,00.html and in Chapter 2.

Next-generation networks will provide appropriate connectivity options for all of these devices—from slower 100Base-T wired connections to the fastest new Ethernet connections—as well as seamless roaming between wireless LAN and wireless WAN connections.

Embedded Security

Next-generation networks build security into every layer. 802.1x support, which includes embedded Extensible Authentication Protocol (EAP) capabilities, authenticates devices at the LAN port, disallowing unknown devices and locking down the physical network. Run wires—or wireless signals—anywhere you like; only authorized users will be able to attach. These networks will also include embedded IPSec capabilities, transparently encrypting traffic and preventing even authorized connections from eavesdropping on the network’s traffic. Security won’t be something you add to these networks; it will be built in from the very start, providing, for the first time in the history of networking, a truly integrated, secure infrastructure on which to build business applications and services. A highly secure OS will finally become a less important consideration as the underlying infrastructure begins to take responsibility for security.

Read more about 802.1x at http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci787174,00.html as well as in Chapter 6.
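
The port-level gatekeeping that 802.1x provides can be illustrated with a small, purely conceptual sketch. A real switch implements this in firmware and relays EAP to an authentication server such as RADIUS; the point here is only that, until a device authenticates, the port processes nothing but EAP-over-LAN (EAPOL) frames, so a plug-in attacker gets no traffic through.

    EAPOL = "eapol"   # stand-in for the EAP-over-LAN frame type

    class PortAuthenticator:
        """Conceptual 802.1x-style port: closed until authentication succeeds."""

        def __init__(self):
            self.authorized = False

        def handle_frame(self, frame_type, credentials_ok=False):
            if frame_type == EAPOL:
                # A real switch would relay EAP to a RADIUS server here.
                self.authorized = credentials_ok
                return "authenticated" if self.authorized else "rejected"
            if not self.authorized:
                return "dropped"      # unauthorized devices pass no traffic
            return "forwarded"        # normal traffic flows once authorized

    port = PortAuthenticator()
    print(port.handle_frame("http"))                      # dropped
    print(port.handle_frame(EAPOL, credentials_ok=True))  # authenticated
    print(port.handle_frame("http"))                      # forwarded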

Next-Generation Networking

What are the technologies that will deliver on all of these wonderful new promises? Surprisingly, nothing new. Instead, the next generation of networking technologies builds upon the solid, reliable foundation of technologies that you’ve been using for years. Starting with this foundation enables lower upgrade costs, easier architecture design, and easier integration with your existing infrastructure—meaning you can “ease into” the next generation of networks without throwing away all of today’s investment.


GbE

Described in the Institute of Electrical and Electronics Engineers (IEEE) 802.3ab standard, GbE is available now. Also named 1000Base-T, this new networking technology is ten times faster than 100Base-T Ethernet and backward-compatible with 10Base-T and 100Base-T networks. Also available, although still fairly expensive, is 10GbE, as specified in the IEEE 802.3ae standard. Currently designed primarily for trunking applications (between offices, for example), 10GbE switches and other devices are on the market, allowing companies to create end-to-end Ethernet networks at lower acquisition and support costs than previous technologies permitted.

Wireless

Wireless access points (WAPs) now support a variety of protocols. In addition to bridging to 10Base-T, 100Base-T, and 1000Base-T wired networks, WAPs support wireless clients using 11Mbps 802.11b, 54Mbps 802.11g, and even 54Mbps 802.11a. 802.11a is currently being used primarily in areas of high user density, as it provides lower range than 802.11b and 802.11g. 802.11a operates in a higher frequency band (5GHz) than 802.11b/g (which operates at 2.4GHz), making the two sets of standards inherently incompatible.

Read more about the 802.11 family of wireless protocols at http://searchmobilecomputing.techtarget.com/sDefinition/0,,sid40_gci341007,00.html as well as in Chapter 3.

European networks are more likely to feature 5GHz-based HiperLAN technology, which is similar to the 802.11 family of protocols. A newer version, HiperLAN/2, offers operation as fast as 54Mbps in the same frequency band.

Switches

Switches supporting 1000Base-T, 100Base-T, and 10Base-T connections are widely available, and switches offering 10GbE connections for office-to-office connections are also entering the market. Most of these devices are fully compatible with slower network devices, allowing you to deploy the central infrastructure of a faster network and slowly migrate individual devices—such as clients, servers, and other connection devices—as feasible or necessary within your organization. Switches have finally killed the hub—few companies are offering new Ethernet hubs, recognizing that a fully-switched architecture is a more efficient and practical way to build the next generation of networks.


Servers

Servers are already shipping with dual 1000Base-T network adapters integrated into their motherboards, and a variety of server-quality GbE adapter cards are available to upgrade older equipment. As you begin to purchase new computers, look for machines that offer a built-in or bundled GbE network adapter. Many hardware vendors include GbE NICs as standard in their business computing lines; if it isn’t standard, you’ll likely pay as little as $40 for the upgrade with a new computer (but you’ll pay three or four times that amount to upgrade the computer later). Because of its complete backward-compatibility with existing Ethernet standards (such as 100Base-T), trickling GbE into your environment is an affordable, slow-paced way to build the next generation of networks without a complete redesign of what you’ve got and without the need to throw away existing equipment. Upgrading existing server hardware to GbE when you have the box open for another upgrade, such as disk or memory, gives you a very inexpensive way to move your servers to GbE, because the cost is in the downtime, not the upgraded NIC.

Security

802.1x and IPSec are the latest rage in network security, and you’ll find them available in higher-end network adapters and devices, such as wired switches and WAPs. IPSec is available in hardware network adapters for most major OSs, allowing you to completely offload the otherwise considerable burden of encrypting large quantities of network data onto a dedicated hardware processor, making network security completely transparent and easier to manage. Newer versions of the Windows OS include an 802.1x client, allowing those computers to participate in 802.1x-secured networks. WPA and AES are part of the upcoming IEEE wireless LAN security standard 802.11i, which will provide powerful wireless LAN security. Some vendors offer these technologies today: Microsoft offers support for WPA on a select subset of the available wireless hardware in an OS upgrade for Windows XP.

Getting Ready

The building blocks of next-generation networks are available, and prices are falling rapidly. You’ll need to start planning to introduce them into your environment, but before you do, what steps should you take? What can you do today to prepare yourself, your peers and employees, and your equipment for the new network?

Education

Take the time to learn all you can about these new networking protocols—how they differ from past versions and how they’ll affect your environment. Focus on the leading edge: 1GbE and 10GbE, 802.1x security, iSCSI SANs, and 802.11a/g wireless connections. Vendor white papers, magazine articles, and an increasing number of books are available to explain these new technologies and give suggestions for how to approach them in your environment.


Future-Proofing Your Network

How can you get your network ready to be a next-generation network? The following list highlights tips—we’ll explore these topics in more detail throughout the rest of this book:

• Consider the next-generation network in all new hardware purchases. Provision new computers with 1GbE, and buy new switches that support 1GbE and potentially even 10GbE for MAN connections.

• Make all of your new wired NIC purchases 1000Base-T adapters. The backward-compatibility with your existing Ethernet technology makes the eventual transition to GbE completely transparent to users.

• Migrate server backbones to 1000Base-T infrastructure devices (switches, routers, and so on).

• Root out the old voice-quality CAT3 cabling that is hiding in your walls—1000Base-T is designed to run over existing CAT5 and CAT5e or better wiring. Use high-quality CAT5e, CAT6, or better cables, and ensure that cable runs don’t run along electromagnetic sources such as ceiling lights and electrical lines. Check wall jacks to ensure cable terminations meet CAT5, CAT5e, or better standards; improperly terminated wall jacks are the leading cause of electromagnetic noise in high-speed networks. If you’re running new wires, go with CAT6, which will provide the best long-term investment in your physical infrastructure.

• Take a hard look at where your network is going. Many next-generation networking technologies—such as GbE and iSCSI—are relying more on less-expensive copper wiring (CAT5, CAT5e, and CAT6) than on fiber. GbE over CAT5 and iSCSI will give you plenty of performance at a much lower cost than implementing a fiber networking technology.

Summary

The network that you’ve been working with for the past decade is likely showing its age. Fortunately, the next generation of networking technologies is here: GbE, solid wireless networks, smarter switches, and built-in security. These are the building blocks of the next-generation networks that companies will rely on to enhance productivity, lower costs, raise security, and improve connectivity. So how do you get started?

In the next chapter, we’ll explore GbE, including network adapters and switches, introducing you to the key improvements in addition to the speed of this technology. We’ll discuss how to migrate your current network to support GbE. In each subsequent chapter, we’ll cover the additional technologies that form next-generation networks so that you have the resources in place to make decisions that will result in an optimized next-generation network in your environment.

Wi-Fi® and Wi-Fi Protected Access™ are trademarks of the Wi-Fi Alliance. BlackBerry™ is a trademark of Research In Motion Limited. All other trademarks are the property of their respective owners.


Chapter 2: Gigabit Ethernet Migration

The move to Gigabit Ethernet (GbE), or “Gigabit,” has already begun. In fact, across the IT industry, Gigabit adoption has been on the rise since mid-2002 when Broadcom introduced the first single-chip GbE controllers. Today, nearly every type of networking equipment—from routers and switches to servers and desktop computers—is offered with GbE either built-in or as an inexpensive upgrade option.

Administrators of existing networks are deploying GbE in the same way that they deployed 100Base-T—they buy new hardware that comes standard with the GbE hardware. GbE is transparent to users because it is fully backward-compatible with the previous standard, in this case, 100Base-T.

However, the fact that GbE is needed now differentiates this migration from the previous move to 100Base-T. User demand on network resources has grossly exceeded the network’s ability to efficiently support the functions modern businesses perform. Applications that now strain 100Base-T—email (particularly email attachments), system backup, and video conferencing—as well as other enhanced uses of the network that enable a flexible workplace, require the adoption of GbE now.

Is GbE on your corporate radar? It should be. The last chapter described how networks are fighting an escalating battle to keep up with corporate applications such as voice and video conferencing, increased data retrieval, and much more. It also pointed to emerging technologies such as TOE, RDMA, and iSCSI, which are solving business problems while placing an even greater burden on the network. The 100Mb Ethernet networks adopted since 1997 are no longer adequate, particularly for network segments containing servers.

In this chapter, we’ll explore how GbE brings the user one step closer to real-time computing. When deployed on a client, GbE delivers roughly the same relative performance as a PC hard drive. Given this performance, GbE makes the network feel seamless and transparent alongside the other large data subsystem in the PC—the hard drive. Let’s begin by developing a foundation of GbE knowledge through a primer on this technology.

As a whole, corporate networks have gotten a lot of mileage from Fast Ethernet (100Base-T), but not entirely due to its speed. 100Base-T was introduced before switches were very common, and networks were upgraded to 100Base-T and switching over a long period of time, so the benefits arrived in small increments over many months. With GbE, however, corporations have a much stronger reason to upgrade faster. The additional speed is useful, but the additional business processes that GbE can support make it well worth the price—especially considering that the price is nearly invisible: PCs carrying GbE add little to no additional cost.


GbE Technology Primer GbE is based on Ethernet, a tried-and-true networking protocol that has been around for decades. However, using the term “based on Ethernet” isn’t quite accurate because it implies that GbE is Ethernet plus something else. GbE is pure Ethernet, through and through, offering the reliability, low cost, and easy maintenance that Ethernet networks have always enjoyed. To clarify the differences between Fast Ethernet and GbE, see the comparison in Table 2.1.

Feature          100Base-T          GbE
Speed            100Mbps            1000Mbps
Frame format     802.3 Ethernet     802.3 Ethernet
MAC layer        802.3 Ethernet     802.3 Ethernet
Flow control     802.3x Ethernet    802.3x Ethernet
Primary mode     Full duplex        Full duplex

Table 2.1: Comparison of Fast Ethernet and GbE.

Is older better? Ethernet is decades old, and you might be wondering why such an old protocol is the best choice for the future. Ethernet is the most stable networking protocol in the world. The entire global Internet is built primarily from Ethernet connections on local networks. Nearly every imaginable problem with the Ethernet protocol was worked out years and years ago, leaving us with a technology that offers true “dial-tone” reliability—meaning you just turn it on and it works. What better technology could be used to build next-generation networks?

Switches In Ethernet’s early days, all devices were connected directly to one another or to a central hub. When any one device transmitted, all the connected devices saw the signal. A hub’s entire job, in fact, was to receive transmissions and retransmit them to every connected device on the hub. As a result, collisions became an issue, and the more devices you connected, the more collisions occurred. The solution to the Ethernet collision problem was switching. 100Base-T switches are logically similar to hubs, providing a central connection point for all devices on the network. However, when one device transmits and the switch receives the transmission, the switch doesn’t necessarily retransmit that signal to all other attached devices. Instead, the switch looks at the destination MAC address to determine the next action (a simplified sketch of this forwarding logic follows the list):

• If the MAC address is a special broadcast address, the transmission is intended for all devices, and the switch rebroadcasts the signal to all attached devices.

• If the MAC address is not one that the switch has seen before, the signal is broadcast to all attached devices. However, the switch watches for a reply from that MAC address. When it sees a reply, it associates that MAC address with the port on which the reply was seen, making future traffic more efficient.

• If the MAC address is one that the switch has seen before, the signal is only rebroadcast out the port that is associated with the MAC address.
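The decision logic in the preceding list amounts to a forwarding table keyed by MAC address. The following minimal sketch (the class and method names are ours, purely for illustration) models that logic in Python; note that real switches learn the source MAC of every frame they receive, which is what produces the “watch for a reply” behavior described above.

    class LearningSwitch:
        """Toy model of an Ethernet switch's MAC-learning and forwarding decision."""

        BROADCAST = "ff:ff:ff:ff:ff:ff"

        def __init__(self):
            self.mac_table = {}                      # MAC address -> switch port

        def handle_frame(self, in_port, src_mac, dst_mac):
            self.mac_table[src_mac] = in_port        # learn (or refresh) the sender's port
            if dst_mac == self.BROADCAST or dst_mac not in self.mac_table:
                return "flood to all ports except %d" % in_port
            return "forward out port %d" % self.mac_table[dst_mac]

    # The first frame to an unknown MAC is flooded; the reply teaches the switch.
    sw = LearningSwitch()
    print(sw.handle_frame(1, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))   # flooded
    print(sw.handle_frame(2, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))   # forwarded out port 1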


Switches are complicated devices. Ethernet switches must actively watch traffic and take action based on the traffic’s destination. Thus, switches require onboard memory to remember MAC-to-port mappings. Initially, such devices were expensive. However, with the advent of single-chip Ethernet switch controllers, such as those produced by Broadcom, switches have become less expensive. The result is that switches have almost completely replaced less-efficient hubs on most Ethernet networks.

Originally, switches were used as a form of central backbone device. One switch would provide connections to several hubs, and the hubs would, in turn, provide connections to several devices. The hubs each represented a collision domain, meaning every device connected to a hub would be competing for transmission bandwidth with the other devices on the same hub. The switch separated these collision domains, reducing the number of devices that were competing with one another.

By the year 2000, switches became so inexpensive that most companies simply attached devices directly to switch ports and eliminated hubs altogether. Doing so reduced the collision domain to a single computer per switch port. Collisions can still occur on half-duplex links, and broadcast traffic still reaches every device, but switch fabrics have become efficient and inexpensive enough that standardizing on fully switched networks makes sense at virtually any price point.

Aggregation The first 10/100 switches enabled a new type of network efficiency—aggregation. Multiple client segments running at 10Base-T were connected to the switch along with one or two 100Base-T server segments. Each client connection ran at 10Mbps while the server segment could receive 100Mbps, so one client “conversation” occupied only one-tenth of the available server segment bandwidth. The switch, then, could aggregate as many as 10 client connections onto the server segment at once, vastly increasing network efficiency.

Although it might seem that faster clients have caused more network congestion in recent years, that isn’t really the case. Calculations and test reports based on total available bandwidth and the theoretical maximum utilization of network clients are generally red herrings; they draw attention away from the issue that matters most in practice—client response time.

What about when clients catch up again? Of course, clients will eventually catch up and be running GbE. That is where 10GbE switching comes into play, offering the ability to aggregate GbE connections between data centers, buildings, and so forth; retaining a high-speed edge; and maintaining network efficiency. We will explore 10GbE switching later in this chapter.

Duplex Ethernet can operate in one of two duplex modes: half and full. Half-duplex is a lot like CB (Citizens Band) radio communications between truckers: you can talk or listen but you can’t do both at the same time. In fact, the collisions that occur when two truckers try to transmit at once are similar to the way that Ethernet collisions occur. Half-duplex is a pretty inefficient mode, but when network devices are connected to a hub, it is generally the only choice. Half-duplex is necessary to manage collisions.


When connected to a switch, however, full-duplex becomes possible, which means that a device can send and receive data at the same time—more like a telephone conversation in which both parties can talk at once. This mode is obviously more efficient and is another reason that companies replace hubs with switches. GbE, in most instances, operates only in full-duplex mode, maximizing the potential of each client connection.

Baseband vs. Broadband

The 1000Base-T designator for GbE stands for 1000Mbps, Baseband, Twisted pair. This term is an indication of the protocol’s basic speed, its bandwidth utilization (baseband), and the wiring used (twisted pair CAT5 wiring or better).

Other designations have existed in Ethernet’s past. For example, early Ethernet included 10Base-5, which ran 10Mbps baseband over thick coaxial cable, and the later 10Base-2, which used a thinner coax. A term that you don’t often encounter is the baseband designation, which simply means that each transmission utilizes the entire bandwidth that the wiring can carry.

The alternative is broadband communications, through which transmissions are divided into channels, and each channel can carry traffic independent of the others. The most common form of broadband is cable TV, which is capable of carrying hundreds of channels of programming as well as voice and data services over a single wire.

This raises an obvious question: Why doesn’t Ethernet switch to broadband? Something like 1000Broad-T would be incredibly efficient, assigning one dedicated “channel” to each connected device and eliminating collisions altogether.

The problem is that today’s CAT5 wiring can’t carry broadband transmissions. Those signals require high-quality, heavily shielded wiring, such as the coaxial wires connected to your cable TV box at home. Companies would have to spend millions to replace their existing wiring. Another problem is getting traffic from one channel to another. When two computers want to talk to one another, some sort of switching device would have to bridge traffic across their two individual channels. These switching devices would act a lot like a telephone switch, which connects two phone lines for a conversation. However, these network switches would have to operate at incredible speeds and carry incredible amounts of data. It’s uncertain whether they could be built to operate with the required speeds.

We’ve established a base understanding of GbE technology. With this foundation of knowledge, we can begin to explore how GbE products will affect next-generation networks.

GbE Products Not all GbE products are created equal. The actual products you buy—from Hewlett-Packard (HP), IBM, Dell, and so forth—are generally built using Ethernet chipsets from companies such as Broadcom and Intel. These chipsets create and process the Ethernet frames and are the major factor in determining the speed and efficiency of the product’s networking functions.

For example, in a database test conducted by Broadcom, a client PC using Broadcom’s NetXtreme GbE controller achieved roughly three times the throughput of a similarly equipped client PC using a competitive brand of GbE controller—more than 6200 transactions per second versus 2083, in both transmit and receive operations. In a similar test measuring Active Directory (AD) logins, the Broadcom-equipped system processed more than 710 logins per second versus the competitive brand-equipped system’s 275—roughly two and a half times the rate.


Another test, utilizing Microsoft’s Exchange Server software, produced similar results: the Broadcom-equipped system handled 5432 sent messages per second while the competitive brand-equipped machine handled 2777—nearly twice the rate. Again, these client PCs were equipped with comparable hardware—GbE controllers, the same processors, fast SCSI storage, and more—so the choice of GbE controller made all the difference.

GbE Performance

GbE has rapidly become the standard network technology that connects today’s business users and is the next logical step in Ethernet technology. It provides users with an enhanced computing experience that increases network performance and productivity and reduces CPU utilization and network congestion. Tests conducted by the Ziff-Davis Media Company’s eTesting Labs (http://www.veritest.com) show that GbE connections can provide as much as a 341 percent performance and productivity improvement over 10/100 Fast Ethernet connections running everyday business applications on client systems such as email, Web browsing, databases, and disk backup.

[Figure 2.1 chart data: transactions or operations per second (scale 0 to 7000) for Database Tx, Database Rx, Exchange, and AD workloads, comparing the competitive product with the Broadcom NetXtreme controller.]

Figure 2.1: The GbE controller manufacturer of a GbE device makes a big difference in performance numbers.


GbE Network Adapters and Switches that use Broadcom GbE Controllers

Most major manufacturers embed Broadcom’s proven, high-performance GbE controllers into their notebooks, desktops, and servers. These manufacturers also offer add-in NICs featuring Broadcom technologies. When making your next purchase, confirm the product has Broadcom GbE controllers.

In addition, most first-tier infrastructure devices, including switches, utilize Broadcom technology. For a complete list of network adapters and switches that incorporate Broadcom’s high-performance GbE controllers, visit http://www.broadcom.com.

GbE Deployment Strategy So how do you begin implementing GbE in your environment? GbE offers an easy approach to deployment thanks to its built-in backward-compatibility with 10/100Base-T. As a first step in your deployment, start specifying client computers that include GbE NICs. Major computer hardware vendors are already standardizing on GbE for the client computers they sell. Because clients tend to be upgraded more easily than servers, simply specify that anytime a client computer is upgraded, a GbE NIC should be part of the mix. For example, if you are opening the case to upgrade memory in preparation for migrating a computer to Windows XP, take that opportunity to see that a GbE NIC is installed as well.

The same situation applies to servers. Any new server that is purchased should include multiple GbE controllers, either built-in or through NIC cards, to take advantage of bandwidth aggregation, failover capabilities, and the technological improvements of GbE. As existing servers are physically upgraded, they too should have multiple GbE NICs added. When purchasing GbE NICs, be sure to specify a GbE NIC that will integrate seamlessly into your existing servers. NICs based on Broadcom controllers, for example, work seamlessly with other branded NICs in important functions such as teaming, failover, and link aggregation.

As Figure 2.2 shows, clients and servers will automatically fall back to 100Base-T while talking to your legacy network components, such as switches. Note that the orange lines in the figure indicate GbE connectivity. The switches, shown in gray, do not support GbE, so the servers will auto-negotiate to the highest speed the switch supports, likely 100Base-T.

This adoption method is a great technique, and almost all major brands of servers are offered with built-in GbE or have an option for a GbE upgrade from the factory.


Figure 2.2: The first step in deploying GbE is to acquire GbE servers and clients.

When you’re ready to make a major improvement in network performance, upgrade infrastructure components such as routers and switches to GbE, as Figure 2.3 shows. Instantly, your GbE servers and clients will switch to 1000Base-T.


Figure 2.3: The next deployment step is to upgrade infrastructure components.

As Figure 2.4 shows, the GbE adoption process is the same for client computers and workstations that have built-in GbE NICs. In this figure, both the switch and the router have been upgraded to support GbE (shown in orange).

As the orange lines indicate, the client computers have now been upgraded to GbE. Because they are still connected to legacy switches, they will fall back to 100Base-T; however, you’ll have complete compatibility along with a built-in, ready-to-go upgrade path for the future.


Figure 2.4: Upgrade older client computers and acquire new systems that have GbE built-in.

As Figure 2.5 shows, the last switch has now been upgraded. The only client still running at 10Base-T or 100Base-T will negotiate the proper speed with the switch.

When buying a notebook, ask for GbE integrated right on the motherboard. Add-in GbE will use either the PC Card/PCMCIA slot or a USB port, neither of which is really capable of handling 1000Mbps traffic. Unfortunately, notebooks continue to be the most difficult type of computer to upgrade, so your switch to GbE may not be complete until you’ve phased out all your older notebooks.


Figure 2.5: Upgrade the remaining infrastructure devices.

Finally, when you’re ready, you can replace any devices and systems that are still running at slower 10/100Base-T speeds. The infrastructure is already in place, so you can conduct this phase of the upgrade—like the other phases—at your leisure. Figure 2.6 shows the final all-GbE network.


Figure 2.6: The final step is to have GbE on every device.

Unlike other networking technologies, which don’t offer the “it’s all Ethernet” compatibility, GbE allows you to conduct your upgrade as slowly or as quickly as you like. Begin by acquiring new devices with built-in GbE, and gradually replace infrastructure components as required by your business needs. Or conduct an over-the-weekend upgrade of specific types of bandwidth, such as server-switch or router-switch. The decision is yours and GbE offers the flexibility to support whatever plans you might have.


Depending on how your network is designed, it contains several categories of bandwidth to consider in your GbE-deployment plan:

• Client-switch bandwidth, which carries traffic between switches and client computers.

• Router-switch bandwidth, which carries traffic between switches and routers.

• Switch-switch bandwidth, which carries traffic between switches, often between buildings on a corporate campus.

• Server-switch bandwidth, which carries traffic between servers and switches.

If you have created dedicated segments for networked printers, the switch-printer bandwidth category will also be an important consideration. You might also want to independently consider the bandwidth used on perimeter networks (DMZs), extranets, and other specialized subnets.

Getting to the end of your GbE deployment? When you’re down to the last few devices and systems, use a hardware inventory system such as Microsoft Systems Management Server (SMS) or Intel LANDesk to inventory your server and client computers and pick up the last few 100Base-T models. Add-in PCI adapters can be used to upgrade these computers. And don’t forget about other network-attached devices, such as networked printers and copiers. Contact their manufacturers for information about upgraded network connection modules.
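If you prefer a quick script to a full inventory product for that last sweep, a sketch along these lines may help. It assumes a Windows machine with Python and the third-party wmi package installed and queries the standard Win32_NetworkAdapter class; the Speed property isn’t populated by every adapter or driver, so treat the output as a starting point rather than an authoritative inventory.

    import wmi                                   # third-party package that wraps Windows WMI

    c = wmi.WMI()
    for nic in c.Win32_NetworkAdapter():
        if not nic.Speed:                        # Speed is reported in bits per second, when available
            continue
        mbps = int(nic.Speed) // 1_000_000
        if mbps < 1000:                          # flag anything slower than Gigabit
            print(nic.Name, "-", mbps, "Mbps")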

GbE Emerging Technologies Simply migrating your existing network infrastructure to GbE will bring both cost and performance benefits to your network users. In addition, many new technologies will emerge over the next year that build upon the networking platform that GbE brings to your computing environment. These hardware, software, and application technologies combine to bring the next generation of networking performance to your corporate environment.

TOE Most Ethernet networks use the Internet Protocol (IP) to handle the addressing and routing of packets through the network. IP was designed by the U.S. Department of Defense to allow network packets to travel over various routes across a global network; if a particular part of that network becomes unavailable, the protocol automatically steers packets along alternative routes. Although this design is very robust, its drawback is that packets may arrive at their destination out of order and some may get lost along the way. The Transmission Control Protocol (TCP) was developed to account for this possibility: TCP checks each arriving packet to ensure that it is in order and has not been corrupted in transit.

Historically, networks have been slow relative to the processing power of the CPU, so the TCP function has been handled by the CPU. However, network technology and traffic have evolved much faster than CPUs, to the point that processing TCP at Gigabit speeds can overwhelm even the most modern processors. To address this shortcoming, companies are building new devices—TCP/IP offload engines (TOEs).
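To get a feel for the per-packet work a TOE removes from the host CPU, consider just one small piece of it—the ones’-complement checksum that IP and TCP carry. The sketch below is our own illustration (not Broadcom’s or Microsoft’s implementation) of the standard RFC 1071 calculation; at 1000Mbps, a software stack would be doing this, plus segmentation, reassembly, and connection bookkeeping, over roughly 80,000 full-size frames every second.

    import struct

    def internet_checksum(data: bytes) -> int:
        """RFC 1071 ones'-complement sum over 16-bit words, as used by IP, TCP, and UDP."""
        if len(data) % 2:
            data += b"\x00"                               # pad odd-length data
        total = 0
        for (word,) in struct.iter_unpack("!H", data):
            total += word
            total = (total & 0xFFFF) + (total >> 16)      # fold carry bits back into the sum
        return (~total) & 0xFFFF

    # A single 1500-byte frame's worth of data; a busy GbE link carries ~80,000 of these per second.
    print(hex(internet_checksum(b"\x45\x00\x00\x28" * 375)))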

However, an offload engine alone doesn’t make for a networking performance solution. It is simply the hardware upon which the solution is built. To build a reliable, cost-effective solution, there needs to be a standard way to utilize the offload engine. Because the TCP/IP function is tied tightly to a computer’s OS, it is important to develop TOE technology in partnership with OS vendors.


In an effort to encourage a consistent TOE implementation that is compatible with Microsoft OSs, Microsoft has introduced the TCP Chimney offload architecture. This architecture is designed to intelligently segment the TOE technology between the Microsoft OS and hardware. TCP Chimney offload architecture was publicly introduced at the Windows Hardware Engineering Conference (WinHEC) in May 2003. This partial offload technology is designed to provide a standardized TOE access methodology that doesn’t require a vendor-specific parallel transport stack to hook the existing transport stack that ships with the Windows OS (the Chimney technology is scheduled to be released with the next version of the Windows OS, code-named Longhorn). Broadcom has been working closely with Microsoft on this technology from the beginning.

Full offload vs. partial offload

Full offload—All TCP/IP-related functionality—including the TCP/IP data path, TCP/IP packet creation and breakdown, connection management, and state management—is offloaded to the hardware.

Partial offload—High-overhead activities, such as dealing directly with the TCP/IP packets, are offloaded to the add-in hardware, and tasks such as connection and state management are handled by OS drivers. With the partial offload Chimney approach, there is no additional third-party stack to maintain or additional overhead in processing the duplicate stack. Microsoft claims significant benefits with this Chimney approach, not the least of which is that it provides a standardized implementation for TOE in the Windows OS environment.

RDMA GbE and TOE technology will greatly increase the amount of network traffic possible between servers and clients. This added traffic places an additional burden on the memory and CPU bus because the data must be moved from one location in the computer’s memory to another. One way to improve this situation is to direct network traffic straight to the memory location on the computer where it’s needed. Doing so reduces the burden on the CPU/memory subsystem and reduces the time (latency) it takes for information to get from one computer to the next. This solution is called RDMA and has been standardized by the RDMA Consortium (http://www.rdmaconsortium.org).

RDMA is a technology feature that allows one computer to place data in the memory of another computer, thereby reducing the processing overhead and maximizing the efficient use of available network bandwidth. RDMA uses a kernel bypass model in which the application talks directly to the NIC, and the NIC takes the buffer content and transmits it to the target computer using the RDMA write message, which contains both the data and the destination information. The NIC on the target computer then writes the data contained in the RDMA write message directly into the target application’s memory buffer. These actions all take place with minimal involvement of the CPUs on the originator and target computers. With TOE hardware in place, the push for RDMA over TCP/IP, and technologies such as Microsoft Chimney—with its direct hooks into the OS—you will be able to bring the benefits of these combined technologies transparently to users and application development.
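The following is a conceptual model only—plain Python stand-ins, not a real RDMA or verbs API—but it captures the difference in data movement: the traditional path copies the payload from a kernel buffer into the application’s buffer, whereas an RDMA write names the destination, so the adapter can place the payload directly into a buffer the application registered in advance.

    def traditional_receive(wire_payload: bytes, app_buffer: bytearray) -> None:
        kernel_buffer = bytes(wire_payload)                 # copy 1: adapter -> kernel socket buffer
        app_buffer[:len(kernel_buffer)] = kernel_buffer     # copy 2: kernel buffer -> application buffer

    def rdma_write_receive(wire_payload: bytes, registered_buffer: bytearray, offset: int) -> None:
        # The RDMA write message carries the destination buffer and offset, so the
        # payload is placed exactly once, with minimal CPU involvement on either end.
        registered_buffer[offset:offset + len(wire_payload)] = wire_payload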


Many OS and application vendors are working on new products that will take advantage of the emerging RDMA infrastructure. RDMA requires specialized application support, which isn’t very widespread at the moment. Applications and OSs will need to be aware of the capabilities of RDMA and will have to take special steps to utilize it. As RDMA matures and becomes more readily available in network hardware, you’ll see OSs and applications begin to take advantage of it.

iSCSI iSCSI is one of the most exciting technologies to emerge in the world of storage area networks (SANs). The big player in the SAN marketplace is Fibre Channel (FC). Although FC is an effective and enterprise-capable technology, it is a bit complicated and quite expensive. It also makes the SAN a dedicated network, essentially requiring that servers be connected both to the primary Ethernet network for client connectivity and to a dedicated FC network for SAN access. Figure 2.7 illustrates this architecture.

Figure 2.7: Traditional FC SANs use a dedicated networking topology.


iSCSI, however, tunnels normal SCSI commands over TCP/IP packets, making the traffic suitable for a normal Ethernet network. Applications make normal calls to the OS, which, in turn, generates normal SCSI commands through a SCSI device driver. With traditional, directly connected SCSI storage, these SCSI commands are transported to a hardware device driver and eventually to a SCSI controller card that is connected to the SCSI storage devices. In the case of iSCSI, the driver software encapsulates the SCSI commands in TCP/IP packets and places them onto the network. No special NIC hardware is required, and the packets simply travel to the network-connected storage device.
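The sketch below illustrates the layering idea only—it is not the real iSCSI PDU format (iSCSI defines a 48-byte Basic Header Segment plus login and session negotiation that this omits), and the toy header and target address are hypothetical. The point is simply that a standard SCSI command descriptor block (CDB) ends up as payload on an ordinary TCP connection, which is why no special NIC hardware is required.

    import socket
    import struct

    READ_10 = bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0])      # SCSI READ(10) CDB: one block at LBA 0

    def send_scsi_over_tcp(target_ip: str, cdb: bytes, port: int = 3260) -> None:
        header = struct.pack("!BH", 0x01, len(cdb))          # toy header: opcode + CDB length
        with socket.create_connection((target_ip, port)) as conn:
            conn.sendall(header + cdb)                       # the SCSI command rides inside TCP/IP

    # send_scsi_over_tcp("192.168.0.50", READ_10)            # hypothetical target portal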

Because iSCSI runs over existing networks without any special controller hardware in the server, it offers many of the same benefits but at a much lower cost than traditional FC-based SANs. In fact, iSCSI is already beginning to revolutionize the concept of SANs, placing these otherwise expensive architectures well within the reach of small and midsized businesses. The benefits of a SAN—centralized storage management, better fault tolerance, and easier storage reconfiguration—are becoming available to organizations of every size. As Figure 2.8 shows, iSCSI rides on your existing network topology, making the deployment easier than FC-based SANs.

Figure 2.8: iSCSI-based SANs use your existing network.

To further simplify the deployment of iSCSI, Microsoft released iSCSI drivers for Windows 2000 (Win2K), Windows Server 2003, and Windows XP Professional in June 2003. In this first implementation, the driver is built-in to the OS and works with the TCP/IP stack (supporting iSCSI over TCP/IP). Support is provided for all standard Ethernet adapters, meaning that iSCSI solutions can be implemented without a major hardware upgrade. This same transparency of support is what will make the Microsoft Chimney TOE implementation so desirable with future versions of the Microsoft OSs.


You can find current details about Microsoft’s iSCSI support and announced future plans at http://www.microsoft.com/windowsserversystem/storage/iscsi.mspx.

The trick, of course, is that your existing network must be able to support the additional traffic that iSCSI imposes, and only GbE can offer that flexibility.

iSCSI vs. FC

The SAN debate is beginning to heat up and will likely rage for years. The thinking is that FC SANs provide dedicated bandwidth for storage (which you can also achieve with iSCSI, of course) and a more efficient, more stable, and higher-end set of technologies for creating SANs. Other emerging technologies will run FC over IP, meaning you could potentially build FC-based SANs over Ethernet networks.

However, FC is undeniably expensive. The FC adapters required for servers and storage devices are expensive, as is the fiber-optic cabling. iSCSI offers a much less expensive solution. It uses standard Ethernet technologies and requires much less expensive hardware and cabling. Whether iSCSI will be able to edge out FC remains to be seen, but iSCSI will likely represent a majority of the SAN market of the future because iSCSI-based SANs are affordable enough for companies that would have never otherwise considered SANs. It should also be noted that iSCSI and FC are not mutually exclusive implementations.

In fact, iSCSI is being compared in many ways to Microsoft Windows Server in the early days. Although Windows wasn’t, at the time, the most feature-laden or stable server OS, it was cheap, easy to set up, and ran on inexpensive commodity server hardware. These characteristics made it an attractive solution to small companies and departments who couldn’t afford more complex, expensive, and demanding solutions like UNIX, Novell NetWare, and so forth. iSCSI is to FC what Windows was to Novell—an inexpensive, easy-to-maintain, easily deployed technology that will become very popular.

iSCSI Extensions for RDMA iSCSI Extensions for RDMA (iSER) takes the robust iSCSI protocol and adds RDMA capabilities. In short, iSER allows an application to request data through the SCSI layer and allows the RDMA-enabled NIC (RNIC) to retrieve data from an external iSCSI storage device and place that data directly in the server’s memory, where it becomes accessible to the application. Much of this operation bypasses the computer’s processor, neatly offloading a great deal of processing onto the RNIC and relieving the processor bottleneck.

As with iSCSI and RDMA, iSER benefits from the fast, solid platform provided by GbE—giving you yet another reason to ensure that GbE is implemented in your network. In fact, technologies such as TOE, Microsoft Chimney, iSCSI, RDMA, and iSER are poised to offer significant, innovative performance advantages to client/server applications. If the immediate cost and performance benefits that GbE brings to the table aren’t enough to convince you to begin your GbE migration now, these advanced technologies are further evidence of the value a GbE implementation will deliver—making your networking infrastructure a competitive advantage for your business.


The Financial Story

How much are your employees paid to kill time while their computers wait for a busy network to deliver data? Suppose a GbE deployment managed to save your employees just 2 minutes per day. The reality is that GbE will save much more time than that, but even saving 2 minutes a day for a $50,000 per year employee is a $601 yearly savings for your company. For a $100,000 per year employee, your organization would save $1202.

There are additional financial aspects of GbE to consider. For example, what is the actual cost of upgrading to GbE? In many cases, new client PCs and servers come standard with GbE or offer a GbE upgrade for as little as $40.

Looking Ahead A number of emerging technologies—many of which are beginning to be available right now—offer faster performance, easier management, and advanced functionality. Your next-generation network isn’t as far away as you might think.

One of the most exciting new developments is 10GbE switching. As GbE becomes more prevalent at the desktop and server tiers, you will need a way to aggregate Gigabit speeds. Traditionally, companies have used expensive fiber connections between buildings on a corporate campus or on city-spanning MANs. In addition to the expensive fiber cabling, the fiber controller modules can cost thousands of dollars. 10GbE offers a much less costly solution, offering 10Gbps speeds over inexpensive, copper InfiniBand cabling. A single 10GbE connection can aggregate many GbE connections over long distances and improve the ability of servers to maintain multiple connections.

Summary GbE represents the ideal upgrade to today’s overburdened networks—you can deploy GbE as quickly or as slowly as you like, and with the right purchasing decisions on new equipment, GbE will effectively deploy itself invisibly throughout your enterprise. In addition to helping bandwidth-hungry applications, such as voice and video conferencing, and meeting the demands of data-hungry users, GbE offers additional bandwidth for exciting new technologies such as TOE, iSCSI, RDMA, and iSER. These technologies offer more than just new functionality; they offer serious solutions to performance bottlenecks that hamper high-end business applications.

When placed in the context of planning for your future business growth and development, GbE implementation becomes a critical part of the future infrastructure for any business that plans to be competitive, especially in light of the simplicity of deploying GbE in your current networking enterprise.

In the next chapter, we’ll explore wireless networking, considered by many to be the most exciting development in the network arena since Ethernet. From a standing start just a few short years ago, wireless networking—now in its fourth generation of broad-audience technologies—is revolutionizing the way people work and play, and the way networks are designed, secured, and managed.

All product and service names and all trademarks are the property of their respective owners.


Chapter 3: Extending Enterprise Networks with Wi-Fi®

Wireless networking is arguably the most important advance in networking technology since Ethernet. Today, wireless networking is enabling a whole new range of devices and functionality. With wirelessly networked notebook computers and handheld devices, for example, employees can stay connected whether they are attending a meeting across the building or catching a plane across the country. And cell phones with built-in wireless networking are emerging to help companies cut costs and improve productivity by using Voice over IP (VoIP).

The state of wireless networking has evolved rapidly over the past several years. The good news for those who are considering implementing a wireless network is that the technology has reached a state in which there are well-defined standards and a widely accepted seal of interoperability to ensure that competing products work together. IEEE 802.11g has now emerged as the mainstream wireless LAN standard and new advances in the physical layer of wireless networking aren’t expected before 2005. Measures to secure wireless network communications are also now well defined.

In addition, today’s wireless LAN products are smarter and more flexible; thus, they will be able to more easily adapt to emerging standards and features. The easiest decision you can make is to buy portable devices that provide built-in wireless networking. For pocket-sized mobile devices such as handhelds, choose 802.11b, because it is the most common wireless networking standard and it enables lower power adapters than the higher-performance alternatives. Laptops and other portable computers, however, will benefit from the higher raw data rates of 802.11g, 802.11a, or, better yet, a dual-mode adapter that supports both 802.11a and 802.11g. Most manufacturers already offer these technologies built-in to new laptops, and you can upgrade older units by using PC Cards or USB adapters.

Innovations in CMOS technology have helped drive Wi-Fi® performance up while driving costs down. CMOS is the most widely used semiconductor manufacturing technology in the world, and the digital portion of most wireless networking chip sets is built in CMOS. With radios now designed in CMOS as well, chip-set suppliers are able to combine the entire wireless LAN solution onto a single chip. This recent innovation squeezes all the functionality, including the analog radio, onto a single piece of silicon, enabling wireless-capable handheld devices that are smaller, use less power, and cost less.

In this chapter, you’ll learn how wireless networking operates, why the technology is important, and how to tell the difference between a stable, future-proof wireless network and proprietary offerings that likely will not support your enterprise in the years to come.


A Brief History of Wireless Networking Before you can start selecting wireless networking technologies, it is useful to know a little bit about where those technologies came from. Seeing the progression of wireless LAN technologies makes it easier to predict where wireless networking is going in the future and to determine which of today’s technologies will provide the most stable, long-lasting solution for your enterprise.

802.11 Legacy 802.11 is the family of Institute of Electrical and Electronics Engineers (IEEE) specifications that address wireless networking. The first implementations of these technologies were capable of achieving speeds of 1Mbps and 2Mbps. Popular primarily in vertical applications, these original technologies didn’t provide enough bandwidth for enterprise use. However, they did act as a proof-of-concept for the viability and market interest of wireless networking in general, and set the stage for significant advances.

It’s doubtful that you will see much original 802.11 in use these days unless you’re working in an industry that has implemented a vertical solution based on the technology. Although 802.11 saw early popularity in applications such as manufacturing and healthcare, it wasn’t widely implemented in mainstream enterprise environments.

802.11b The IEEE approved two enhancements to the original 802.11 standard in 1999, 802.11a and 802.11b. 802.11b occupies the same 2.4GHz radio frequency as the original 802.11 specification, extending raw data rates to 11Mbps. It was the first major commercial success for wireless networking, primarily because it provided similar maximum data rates to 10Base-T Ethernet, making it viable for corporate use. Many manufacturers quickly released commercial 802.11b products, including 3Com, Apple, Cisco, Dell, Gateway, Hewlett-Packard and others.

The Wi-Fi CERTIFIED™ Designation

Although the IEEE created the 802.11 family of specifications, the organization doesn’t enforce the specification or ensure that manufacturers create products that precisely meet the specification. To ensure that manufacturers produce implementations that are interoperable with other 802.11 devices, the Wi-Fi Alliance provides interoperability testing and a seal of approval.

Currently comprised of more than 200 member companies, the Wi-Fi Alliance’s Wi-Fi CERTIFIED™ designation ensures that products claiming to be 802.11b compatible are, in fact, fully interoperable with other 802.11b devices. The Wi-Fi Alliance conducts rigorous tests of hardware and software to ensure compatibility before issuing the designation, providing consumers with confidence that all Wi-Fi CERTIFIED products will work with one another. Today, Wi-Fi CERTIFIED has been expanded to include 802.11g and 802.11a, and more than 1000 products have been Wi-Fi CERTIFIED to date.

Wi-Fi CERTIFIED has become so popular and widely recognized that it’s harder to find products that don’t carry the designation. Still, don’t bother purchasing products that aren’t certified should you come across any—the benefit of compatibility and specification adherence is worth looking for the Wi-Fi logo. Wi-Fi CERTIFIED is your guarantee of interoperability between devices.


802.11a With 802.11a, the IEEE took the standard up to 5GHz, offering raw data rates up to 54Mbps. As with 802.11b, 802.11a provides for lower data rates to compensate for coverage, offering speed fallbacks to 48Mbps, 36Mbps, 24Mbps, 18Mbps, 12Mbps, 9Mbps and 6Mbps. 802.11a products began appearing in 2001. The higher speed allows greater capacity, but the higher frequency means shorter range. The biggest issue for 802.11a is that its different radio frequency makes it incompatible with 802.11b, which has seen wide deployment throughout the world. These limitations have hindered adoption of 802.11a. As the market continues to evolve, manufacturers are releasing network adapters and wireless access points (APs) that support tri-mode operation—which means they support 802.11a, 802.11b, and 802.11g—or dual-band—which means they cover both 2.4GHz and 5GHz frequencies—allowing client devices to connect with whichever form of wireless networking is best at the time.

The Wi-Fi CERTIFIED program requires manufacturers to indicate whether their certified product operates at 2.4GHz or 5GHz, making it easier for consumers to buy the right equipment for their needs.

802.11g 802.11g is the new mainstream wireless networking technology. Ratified by the IEEE in June 2003, 802.11g works in the same 2.4GHz range as 802.11b. 802.11g provides speeds of 54Mbps, with fallback to speeds of 48Mbps, 36Mbps, 24Mbps, 18Mbps, 12Mbps, 11Mbps, 9Mbps, 6Mbps, 5.5Mbps, 2Mbps and 1Mbps, if necessary. Like 802.11a, 802.11g is nearly five times faster than 802.11b. Its advantage is that it is fully backward compatible with 802.11b, making it the logical successor to that protocol. In fact, to carry the Wi-Fi CERTIFIED designation, 802.11g products must provide full backward support for 802.11b, ensuring a smooth migration to the new protocol.

54g™

54g™ is Broadcom’s implementation of the 802.11g standard, providing maximum performance in speed, reach, and security. 54g™ products are fully 802.11g compatible and provide the fastest speeds allowed by that specification. 54g™-branded products offer extended range thanks to SmartRadio™ and the standards-based Broadcom Xpress™ technology, along with built-in Wi-Fi Protected Access™ (WPA) and Advanced Encryption Standard (AES) security (which we’ll discuss later in this chapter). 54g™ products were the first 802.11g products to achieve Wi-Fi certification, and they were included in the 802.11g Wi-Fi test bed against which all other products are tested for interoperability.

The Wi-Fi Alliance recently announced a new brand, Wi-Fi ZONE™. This brand is used to designate public wireless LAN access that is built using Wi-Fi CERTIFIED hardware. If your client device contains Wi-Fi CERTIFIED hardware, a Wi-Fi ZONE provides a place where you’re ensured interoperability. You can find a list of places offering Wi-Fi ZONE access at http://www.wi-fizone.org.


It is becoming more common to find APs that support a variety of standards, including 802.11a and 802.11g. These APs make it easy to get connected no matter which type of equipment you have in your client device.

Dual-Band 802.11a/b/g

For the enterprise, dual-band is a compelling option when architecting your network. Client devices such as laptops can automatically select 802.11g or 802.11a, depending on traffic and usage patterns. Near the end of this chapter, we’ll explore sample network architectures that leverage these devices to provide the most robust, future-proofed wireless network possible.

802.11 Everything Else The 802.11 specification includes the three physical layer extensions described earlier, 802.11a, b, and g. In addition, each new extension to the standard must first be designed and approved by an IEEE task group chartered with moving the standard forward. The IEEE task groups that are working toward final specification include:

• 802.11d—Used in country-specific domains

• 802.11e—Enhancements to the media access control (MAC) layer, including quality of service (QoS) and packet bursting

• 802.11f—The Inter-Access Point Protocol (IAPP), which establishes communications between access points in a network so that clients can roam between them

• 802.11h—A 5GHz networking enhancement using dynamic channel/frequency selection and transmit power control for European compatibility

• 802.11i—Security enhancements

• 802.11j—Enhancements for use in Japan

• 802.11n—Higher throughput improvements

IEEE specifications typically require years of work and research and, sometimes, the specifications’ goals turn out to be unreachable given current technologies, or those goals evolve enough that a new specification is warranted. In addition, pieces of a specification are sometimes implemented in the marketplace ahead of the full specification ratification. WPA and Broadcom Xpress technology, both of which we’ll cover later in this chapter, are examples of how the IEEE draft specifications can drive product development even before full ratification.

Of these additional specifications, 802.11e and 802.11i provide the most important benefits to wireless networking in general. 802.11i is of particular importance, as it deals with security in wireless networking—a topic that has been a concern since the limitations and vulnerabilities of Wired Equivalent Privacy (WEP) became clear.


How Wireless Networking Works Wireless networking occupies the same layer of the network as Ethernet. Whereas Ethernet (spelled out in the 802.3 standard) specifies the physical characteristics of an electrical transmission over copper wires, 802.11 specifies the physical characteristics of a radio transmission through the air. The basic purpose of wireless networking is to translate digital signals into an analog radio signal, then to receive that signal and convert it back into digital. Like Ethernet, wireless networking doesn’t care about upper-layer protocols carried over the network and can transmit TCP/IP and IPX/SPX.

Basic Operations There are two types of networks specified in the standard—ad-hoc and infrastructure networks. Most wireless LAN adapters in client devices are capable of establishing an ad-hoc network—a point-to-point connection between two clients; however, most networks are set up in infrastructure mode. In an infrastructure wireless LAN, which Figure 3.1 shows, clients transmit information to an AP. The AP acts much like a hub in a wired network, connecting several wireless clients to one another. APs also connect the wireless clients to a wired network, providing access to servers, printers, the Internet, and so forth.

Figure 3.1: Simple WLAN configuration.

Engineering a wireless network requires careful placement of these APs to provide complete coverage. APs can—and should—have overlapping signal areas; clients will automatically select one AP, then select a new AP when moving out of range of the first. As Figure 3.2 shows, you might need to provide significant overlap for high-density areas, increasing the total bandwidth available between the wireless clients and the wired network.


Figure 3.2: Overlapping APs provide more bandwidth for a larger number of clients.

Think of it this way: each 802.11g AP provides up to 54Mbps connectivity between wireless clients and the wired network. However, each 802.11g AP must share its available bandwidth with all the clients on the network. By adding a second AP in the same transmission area, some clients will be able to utilize that AP’s connection to the wired network rather than the first AP’s connection. A simple analogy is a highway: adding lanes won’t increase the speed limit, but it will allow more cars to travel at that top speed.

Shared Bandwidth

An AP can only provide its maximum throughput to a single wireless client at a time. If there are two wireless clients within range, they will share that bandwidth, just as they would on a wired Ethernet segment. In fact, APs provide a function logically similar to Ethernet hubs, connecting wireless clients and allowing them to share the available bandwidth.
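A rough capacity-planning sketch follows; the function and its default numbers are our own simplification, assuming clients spread evenly across the APs covering an area and that protocol overhead consumes about half the raw data rate. Real results depend on signal quality, the mix of 802.11b and 802.11g clients, and actual traffic patterns.

    import math

    def per_client_mbps(clients: int, access_points: int,
                        raw_rate_mbps: float = 54.0, efficiency: float = 0.5) -> float:
        usable = raw_rate_mbps * efficiency               # e.g., ~27Mbps of usable throughput per 802.11g AP
        clients_per_ap = math.ceil(clients / access_points)
        return usable / clients_per_ap

    print(per_client_mbps(20, 1))    # ~1.35Mbps each with a single AP
    print(per_client_mbps(20, 2))    # ~2.7Mbps each when a second AP covers the same area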

By contrast, Figure 3.3 shows what happens when APs don’t provide sufficient coverage. Mobile clients may travel out of range of one AP before reaching another AP, resulting in a loss of connectivity. It’s important to understand the transmission characteristics of your clients and APs and to thoroughly test AP placement when deploying a full-coverage wireless network.

Figure 3.3: Insufficient coverage can cause a loss of connectivity.


Hardware can make a big impact when it comes to coverage. Antenna design can be especially important, and add-on third-party antennas can be used to increase the range of a wireless network.

802.11 Legacy Specifics The original 802.11 standard specified products in the 2.4GHz frequency band and allowed both frequency-hopping spread spectrum (FHSS) and direct-sequence spread spectrum (DSSS) technologies. Products operate in the unlicensed Industrial, Scientific and Medical (ISM) band, which means that no license is required for operation, but they must accept interference from other ISM-band devices. DSSS provides a means of structuring the signal to be transmitted: the transmitter combines the data signal with a pseudo-random noise (chipping) sequence, and the receiver—which knows the sequence—uses it to recover the original signal.

The basic DSSS standard includes 1Mbps and 2Mbps modes of operation. To double the raw data rate to 2Mbps, a technique called differential quadrature phase shift keying (DQPSK) is specified. With DQPSK, each phase shift represents a two-bit combination instead of a single bit, thus doubling the raw data rate. The single-bit technique is called differential binary phase shift keying (DBPSK).
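The arithmetic behind those two rates is simple: the original DSSS physical layer keeps a constant symbol rate of one million symbols per second, and the modulation scheme determines how many bits each symbol carries. The short calculation below is our own worked example of that relationship.

    SYMBOL_RATE = 1_000_000                      # symbols per second for original 802.11 DSSS
    BITS_PER_SYMBOL = {"DBPSK": 1, "DQPSK": 2}

    for scheme, bits in BITS_PER_SYMBOL.items():
        print(scheme, SYMBOL_RATE * bits / 1e6, "Mbps")
    # DBPSK 1.0 Mbps, DQPSK 2.0 Mbps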

The 2.4GHz radio frequency of 802.11 allows for a nominal range of about 350 feet. The band allows for as many as 14 channels, depending on geography. With each channel roughly 22MHz wide and channel centers spaced only 5MHz apart, there is room for only three non-overlapping channels in the 2.4GHz band.

802.11b Specifics The 802.11b specification sticks with DSSS and a refinement called HR-DSSS. To reach the higher data rates, a scheme called complementary code keying (CCK) is used; CCK is a set of coding algorithms that enables each symbol to carry a greater number of bits. At its maximum, the raw data rate for 802.11b reaches 11Mbps, with fallback to 5.5Mbps and to the 2Mbps and 1Mbps rates of the original 802.11 specification.

Coverage is a significant consideration for deploying wireless LANs. Radio signals are affected by wallboard, metal, and other everyday materials. If a network can’t hold a connection reliably at 11Mbps, it will fall back to 5.5Mbps, 2Mbps and 1Mbps. As a result, it is important to properly distribute wireless APs throughout a location to provide the best signal coverage.

All range measurements for wireless networking are theoretical ranges; actual operating range depends on a number of factors, including antenna type, antenna location and orientation, interference and environmental factors.


802.11a Specifics 802.11a also uses a different transmission structure than 802.11b—orthogonal frequency-division multiplexing (OFDM), which is sometimes called discrete multitone modulation (DMT). The technique has seen widespread use in other high-speed networking applications, notably asymmetric digital subscriber line (ADSL). OFDM is highly resistant to noise and jamming and can be combined with other techniques to resist signal dispersion, burst noise, fading, and other transmission problems. Because 802.11a uses 5GHz radio frequencies, it has a shorter operating range than 802.11b. However, 802.11a is well suited to high-traffic locations because it can support as many as 12 non-overlapping channels, so more channels are available to support client devices.

802.11g Specifics 802.11g uses the same DSSS, HR-DSSS as 802.11b and adds the same OFDM modulation method as 802.11a. Like the original 802.11 and 802.11b, 802.11g supports a range of about 350 feet and three non-overlapping channels because it resides on the same 2.4GHz radio frequency.

Broadcom Xpress Frame Bursting Technology There is growing demand for more bandwidth, yet a wireless LAN standard for data rates beyond 54Mbps is at least a year away. In the meantime, there are technologies available to improve efficiency, thereby increasing the effective bandwidth of today’s data rates.

One such technique is called frame bursting. Frame bursting, an extension of a feature in an original version of the 802.11 specification, is included in drafts of the upcoming 802.11e QoS standard. Frame bursting improves wireless LAN performance by eliminating some overhead traffic. As a result, more of the limited bandwidth is available to send and receive data. Broadcom is one of the first wireless LAN chip set suppliers to offer frame bursting, and markets the feature as Broadcom Xpress technology.

Wireless networking provides a shared medium; all wireless clients within range of an AP share that AP’s bandwidth, and the more clients you place on the AP, the less bandwidth each individual client will receive. More devices are going wireless—in fact, according to TechKnowledge Strategies, by 2007, 75 percent of the wireless networking chip sets produced will go into something other than notebook computers. Wireless VoIP phones, PDAs, notebooks, MP3 players, digital cameras and other applications will all compete for wireless bandwidth.

In addition, wireless clients never achieve the full speed of their network (wired networks don’t either, though wireless networking overhead is more substantial). For example, in an 11Mbps 802.11b network, clients can’t usually exceed 6Mbps actual speed due to networking overhead (there is also a difference between the data rate and the throughput, which we’ll explore later in this chapter). Every packet transmitted incurs a small amount of overhead. Unfortunately, to maintain compatibility with older standards, overhead doesn’t change much even as data transmission speeds increase. For example, an 802.11g network takes less time to transmit a data packet than an 802.11b network requires, but both networks incur about the same overhead in doing so.


Frame bursting is designed to help address this problem. The original 802.11 standard requires wireless LAN devices to pause after each transmitted frame, which is basically a packet prepared for wireless transmission. These pauses allow other devices a chance to signal their intention to transmit, keeping the network working smoothly. With frame bursting, the client that is sending data is allowed to send several frames in a row without pausing—thus decreasing the total overhead while transmitting a data packet. Figure 3.4 illustrates this process.

Figure 3.4: Unbursted network traffic vs. network traffic with frame bursting.

Note that transferring the data frames in 802.11g requires less time even though they contain the same amount of data; this benefit is one of the major features of 802.11g that allows it to achieve higher throughput.

Imagine a conversation in which you’re required to pause for one second after every word to see whether anyone else wants to talk. If you wanted to say “nice weather we’re having,” it might only take half a second per word, but the entire phrase would require five seconds due to the pauses. In frame bursting, you would be allowed to get out as long as 1.5 seconds of words before pausing, meaning your phrase would only require 3 seconds—a savings of 2 seconds (a 40 percent savings).

The original 802.11 specification included a feature called fragment bursting that essentially provided this savings for single packets that were divided into sub-packets. Frame bursting is a standards-based technology that extends and implements this feature for multiple data packets. Frame bursting is also included in the draft 802.11e specification (in which it is called continuation transmit opportunity, or CTXOP), which focuses on QoS issues such as prioritizing frames of time-sensitive traffic (such as streaming media). Industry leaders such as Broadcom and Microsoft are creating the Wi-Fi Multimedia Enhancements (WME), a subset of 802.11e that should be brought to market sooner than the full 802.11e specification. WME also includes frame bursting technologies.
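
Extending the sketch shown earlier, the example below compares sending frames one at a time with bursting several frames per contention cycle, so the contention and spacing cost is paid once per burst rather than once per frame. The burst sizes and timing values are assumptions for illustration only; they are not Broadcom Xpress parameters.

```python
# Illustrative sketch of frame bursting: several frames share one contention
# cycle, so part of the fixed overhead is paid once per burst rather than
# once per frame. Values are rough assumptions, not Broadcom Xpress figures.

PAYLOAD_US = 1091     # airtime for a 1500-byte payload at 11Mbps
PER_FRAME_US = 254    # overhead still paid for every frame (spacing, ACK)
PER_BURST_US = 550    # overhead paid once per burst (contention, backoff)

def throughput(frames_per_burst: int, n_frames: int = 120) -> float:
    """Mbps delivered for a stream of n_frames full-sized frames."""
    bursts = -(-n_frames // frames_per_burst)           # ceiling division
    airtime_us = (n_frames * (PAYLOAD_US + PER_FRAME_US)
                  + bursts * PER_BURST_US)
    return n_frames * 1500 * 8 / airtime_us             # bits per us equals Mbps

for burst in (1, 2, 4, 8):
    print(f"{burst:2d} frame(s) per burst -> {throughput(burst):.1f} Mbps")
```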

As you can see in Figure 3.4, the performance improvement offered by Broadcom Xpress frame bursting is significant. Broadcom Xpress technology includes specific features to deal with mixed-mode environments (networks with both 802.11b and 802.11g clients). For example, in an environment with only one 802.11g client, Broadcom Xpress technology can result in an aggregate of as much as 23 percent performance improvement. With two clients, Broadcom Xpress technology shows as much as a 27 percent improvement, reflecting the savings of both clients eliminating some of their transmission overhead. With one 802.11g and one 802.11b client, as much as 61 percent performance improvement is possible—assuming only the 802.11b client is using Broadcom Xpress technology. In a mixed environment in which an 802.11g and 802.11b client both use Broadcom Xpress technology, the performance improvement is close to 75 percent simply by eliminating wasted transmission time.

Broadcom has introduced Broadcom Xpress technology through its OneDriver™ software, which makes frame bursting available for Broadcom’s entire family of AirForce™ wireless networking products. These solutions are used in many of the major network and notebook brands.

One advantage of Broadcom-based solutions is that the entire AirForce family (found in wireless LAN products from Apple, Dell, Hewlett-Packard, Linksys/Cisco, and others) utilizes a single software driver. This makes it easier for enterprises to maintain a single OS software image as product updates are deployed.

Radios Matter Remember that GI Joe walkie-talkie you had as a kid? If your friend ran halfway down the block, you couldn’t talk anymore, and you just couldn’t imagine how the real military got by with such shoddy equipment. Obviously, the real military had better equipment, so the message is simple: all radios are not created equal. For that matter, not all digital signal processor (DSP) algorithms and antennas are created equal, and they all play an important role in the performance of wireless network hardware. One reason some notebooks seem to perform so well is that their wireless antenna is embedded in the notebook’s housing and extends around the circumference of the display—providing a large antenna that tends to rise above desktop-level signal blockages.

CMOS radios are also an important technology. Because CMOS manufacturing techniques are designed for precision and reliability, CMOS radios lend themselves to consistently better performance than other chip-manufacturing technologies. First introduced in 2002, CMOS radios are the most common type of radio found in 54Mbps products. CMOS has a host of other advantages, including lower power and a smaller form factor, which helps to increase portable devices’ battery life and make the technology easier to implement in a wider range of devices. Competing, more exotic technologies such as silicon germanium (SiGe) and gallium arsenide (GaAs) provide less sensitivity and higher power consumption and are typically more expensive to produce—increasing the price of the wireless LAN product you buy.

Experts predict that, eventually, all wireless LAN radios will be CMOS. The cost savings, reliability, and ease of manufacturing of the CMOS process are simply too significant. In the meantime, you can save yourself money and increase reliability by choosing wireless networking products that already incorporate CMOS radios, such as those from Broadcom.

When selecting radios, you should also look for features such as self-calibration, which enables the radio to adapt more readily to deal with walls, extended ranges and other conditions, providing consistently higher data rates without forcing the network adapter to fall back to a slower rate. Bluetooth®, a short-range wireless technology, uses the same 2.4GHz band as 802.11b and 802.11g, providing potential for interference, particularly when both technologies exist in the same device, as is becoming more common. Selecting solutions that are designed to work together, and ideally, integrated, allows them to cooperate rather than compete.

Mixed 802.11b and 802.11g Environments If your goal is to build the fastest wireless network possible, you should be aware of a performance limitation imposed on an 802.11g network when 802.11b clients are present. 802.11g can operate in its fastest mode only when there is no need to support 802.11b devices; even a single 802.11b device will force the network into a slightly slower mode. 802.11g devices will continue to function at much higher data rates than 802.11b, but they won't reach their full throughput potential. The protocol that provides backward compatibility in mixed-mode environments is called the protection mechanism, and it is part of the 802.11g standard.

Consider the network that Figure 3.5 shows, which includes two APs running on a single channel and four wireless clients. Three of the clients are 802.11g, and one is 802.11b. Because the two APs are on the same channel, they must both activate the protection mechanism to accommodate the 802.11b client, thereby providing support for both the 802.11b and 802.11g clients.

Figure 3.5: A single 802.11b client activates the protection mechanism, which slows the network.

Many industry observers expect that wireless networks will need to be prepared to deal with 802.11b traffic for years to come, as handheld devices that don’t require the bandwidth of 802.11g can instead take advantage of inexpensive, lower-power 802.11b technology. However, if you want to provide maximum speed to your 802.11g clients, you’ll need to build overlapping wireless networks on different channels, with one dedicated to serving only 802.11g clients. Figure 3.6 shows this setup.

Figure 3.6: Different channels help maximize performance in mixed environments.

In this example, clients using the channel-one and channel-six APs will be able to run at full, native 802.11g speeds. Clients using the channel-eleven APs will run either at 11Mbps (802.11b clients) or at the slightly slower mixed-mode speeds (802.11g clients).

Don’t expect 802.11b to go away just because 802.11g is available. Many devices, including PDAs and cell phones, simply have lower bandwidth needs and can do just fine with 802.11b. 802.11b is also less expensive to add to these devices, is available in single-chip implementations from manufacturers such as Broadcom, and has a long life ahead of it. Make sure your wireless network plans include 802.11b support.

Building Wireless LANs Building a wireless network isn’t totally different from building a wired one. Obviously, the key players in the wireless network are the APs, which provide a connection between your wired network and your wireless clients. Specific architecture strategies for the different modes and frequencies are a bit different, though, depending on your needs. In the next few sections, we’ll discuss the major design factors and decision points for building a wireless network.

The Wired Connection APs function as bridges, providing connectivity between disparate physical networks, specifically wireless 802.11a/b/g networks and wired Ethernet networks. Some APs can also be configured to act as repeaters, simply picking up wireless signals and relaying them back to a wired AP, increasing the range of the wireless network.

Ad-hoc vs. Infrastructure

Wireless network adapters support a built-in ad-hoc mode designed to connect two adapters directly to one another. The wired equivalent of ad-hoc mode is an Ethernet crossover cable, and the mode is useful when you simply need to transfer a few files from place to place. Ad-hoc mode doesn’t utilize APs.

Infrastructure mode comes into play with APs, allowing multiple wireless clients to connect to the AP, thereby connecting to the wired corporate LAN. Infrastructure mode is more like using an Ethernet hub.

Windows XP™ built-in wireless networking software will automatically detect APs that are advertising themselves, and generally requires users to take extra steps to establish an ad-hoc connection. The idea is that most users, most of the time, will want to use infrastructure mode to access the resources of a wired network (such as the Internet).

802.11b Architecture Many corporations have already rolled out 802.11b wireless connectivity within their offices, and a large number of public “hot spots” are available that provide free or inexpensive wireless access. The networks providing this connectivity are generally simple. In most corporate environments, APs are placed near major areas of wireless LAN need: conference rooms, lobbies, cafeterias, employee lounges, and other areas in which mobile client devices are typically used. APs are wired back to the nearest Ethernet switch, providing connectivity to the wired network. Figure 3.6 is a simplified illustration of a typical 802.11b deployment. Note that a single 802.11b AP provides a maximum of 11Mbps data rate (not throughput), shared between all wireless clients within range.

To allow for a higher density of users, and because 802.11b allows for three distinct non-overlapping channels, a configuration could be set up in which each AP handles one channel apiece. This setup provides an aggregate 33Mbps data rate, shared in 11Mbps chunks among the 802.11b clients on each channel. This type of configuration is appropriate for large conference rooms in which additional high-capacity users may be online at the same time. Many companies place APs so that their coverage areas overlap significantly around high-density areas such as large conference rooms, engineering labs, and so forth.

802.11a and 802.11g Architecture 802.11a and 802.11g each provide a maximum of 54Mbps shared bandwidth per AP. Like 802.11b, 802.11g provides for three non-overlapping channels, meaning a coverage area served equally by three APs can provide up to 162Mbps aggregate capacity. When architecting your network, however, be aware that any 802.11b client within a channel will cause that channel to enable the protection mechanism, which will result in lower bandwidth for the 802.11g clients (although they will still get better-than-802.11b bandwidth).

802.11a, however, provides as many as 12 non-overlapping channels in a shorter, 180-foot range. This feature makes 802.11a ideal for especially high-density areas; as many as 12 APs can service a single coverage area, providing an aggregate raw data rate of 648Mbps. Although it is unlikely that many organizations will need that much bandwidth in such a small area, there are certainly applications—such as videoconferencing and other streaming media applications—that might make the additional dedicated bandwidth worthwhile.
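
The aggregate-capacity arithmetic in the last few paragraphs is simple multiplication; the following sketch spells it out for the three standards discussed here, using the raw per-AP data rates rather than the lower effective throughput covered earlier.

```python
# Aggregate raw data rate available in one coverage area when every
# non-overlapping channel is served by its own AP. These are raw data rates,
# not effective throughput.

standards = {
    # name: (per-AP data rate in Mbps, non-overlapping channels)
    "802.11b": (11, 3),
    "802.11g": (54, 3),
    "802.11a": (54, 12),
}

for name, (rate, channels) in standards.items():
    print(f"{name}: {channels} APs x {rate} Mbps = {rate * channels} Mbps aggregate")
```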

One way to structure wireless networks is to deploy 802.11g (which also provides 802.11b support) to areas of normal usage, such as office spaces, smaller conference rooms, employee lounges, and so forth. You can then deploy multiple dual-band 802.11a/b/g APs to higher-density areas, such as cafeterias, larger conference rooms, or anyplace in which higher density may be required in the future. To begin, you can simply deploy one AP to each of these areas. As the need for additional aggregate bandwidth becomes evident, you can add more APs to the coverage area. If you adopt this strategy, make sure you’re investing in tri-mode 802.11a/b/g clients, as well, so that your clients will be able to connect to the networks within range. You can then switch your high-density areas to provide primarily 802.11a coverage, because your clients will be able to roam between networks fairly easily.

Where Do You Need Coverage?

Your first big decision, of course, is to decide where you need wireless networking. Conference rooms, lobbies, and other meeting areas are obvious choices. Employee cafeterias and lounges may be other choices. Some companies go so far as to provide wireless LAN coverage in nearby public areas, such as an outdoor courtyard or picnic area. Large companies may also provide access at a nearby shopping mall’s food court so that employees can check email while at lunch. This access may be in the form of a sponsored public “hot spot” or an extension of the company’s own, authentication-required wireless network.

You’ll also need to decide how much wireless coverage you need in your regular office spaces. Some companies figure that their desktop and other office computers are all wired, so there’s no need to invest in additional APs. However, employees coming back from a conference may not plug their laptops into a dock or other network connection right away; providing at least minimal AP coverage in the office areas will ensure that these employees can continue to work without interruption.

Wireless Security Concepts The idea of broadcasting your data into the air can be a little scary. After all, even though wired networks are far from immune to attack, they at least have the advantage of being physically difficult for outsiders to access. Wireless networks send your data outside your walls, where any passerby could easily eavesdrop. Another new problem brought by wireless networks is war driving, through which less-than-scrupulous individuals look for unsecured wireless networks and hijack bandwidth, wasting your corporate resources for their own uses. Fortunately, wireless LAN security has come a long way and is able to address these problems.

WEP The original 802.11 specification included WEP. The intent of WEP was to make the wireless LAN connection secure through an encryption scheme. WEP required an encryption key to participate in the network. As it turned out, there were two critical flaws with WEP. First, the encryption key was static and shared by the entire network, so it proved to be easy for a computer to crack. Second, WEP provided no means of authenticating users who were approved for network access.

It was clearly time to improve upon WEP, and the IEEE created the 802.11i project to address the two shortcomings. IEEE specifications can take a long time to come to completion, so the Wi-Fi Alliance stepped in. Together with the IEEE, the Wi-Fi Alliance created Wi-Fi Protected Access (WPA), which addresses both of WEP’s shortcomings and is available today.

802.11i Currently in draft status with the IEEE, 802.11i is designed to shore up wireless LAN security with a comprehensive specification. 802.11i is being built around 802.1X port-based authentication, which we’ll explore later in this chapter. 802.11i is nearing completion and should be ratified in mid-2004 according to the current pace of work. Two critical components of 802.11i are AES, a new cryptographic standard created by the United States government, and WPA’s authentication scheme.

Prior to 802.11i’s ratification, however, the Wi-Fi Alliance announced WPA, a new part of the Wi-Fi CERTIFIED program. The alliance requires WPA support for new products to earn the Wi-Fi CERTIFIED designation.

WPA WPA uses the Temporal Key Integrity Protocol (TKIP), which is a bundle of data encryption features. Keys are derived differently than with WEP and are rotated frequently to prevent any one key from becoming overused and potentially compromised. WPA also adds message integrity checks to prevent forged packets.

AES AES is a new cryptographic standard adopted by the National Institute of Standards and Technology (NIST). It supports key sizes of 128, 192, and 256 bits, and serves as a replacement for the aging Data Encryption Standard (DES), which supports 56-bit keys. AES is a faster encryption algorithm than the now-common Triple-DES, a DES enhancement that basically encrypts data three times for better security. NIST describes AES as “…a symmetric block cipher that can encrypt and decrypt information.” The estimated time required for modern computing equipment to crack an AES-encrypted block is 149 trillion years, compared with 4.6 billion years for Triple-DES.

AES is a significant component of 802.11i, and encrypting and deciphering every packet that comes in or out of a wireless client device or AP is a significant task. Fortunately, AES can be implemented in hardware, where it is extremely fast and places virtually no overhead on the client OS. Broadcom networking products carrying the 54g™ logo include complete on-hardware AES for full compatibility with future standards and high performance.
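
As a rough software illustration of the AES encryption that 802.11i specifies (it uses AES in CCM mode), the sketch below protects one payload using the third-party Python cryptography package. The nonce and header here are placeholders rather than a real CCMP construction, and, as noted above, production Wi-Fi devices perform this per-packet work in hardware.

```python
# Minimal software illustration of AES-CCM, the AES mode used by 802.11i/CCMP.
# Requires the third-party "cryptography" package. The nonce and header are
# placeholders, not a real CCMP construction; Wi-Fi hardware does this in silicon.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESCCM

key = AESCCM.generate_key(bit_length=128)   # AES also supports 192- and 256-bit keys
aesccm = AESCCM(key, tag_length=8)          # CCMP uses an 8-byte integrity tag
nonce = os.urandom(13)                      # CCMP nonces are 13 bytes
header = b"hypothetical-802.11-header"      # authenticated but not encrypted
payload = b"user data carried in the frame body"

ciphertext = aesccm.encrypt(nonce, payload, header)
plaintext = aesccm.decrypt(nonce, ciphertext, header)
assert plaintext == payload
print(f"{len(payload)}-byte payload -> {len(ciphertext)}-byte protected payload")
```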

802.1X An IEEE standard based on Extensible Authentication Protocol (EAP), 802.1X provides port-level authentication for networks, especially wireless networks. The idea is to only allow authenticated users on the network, both to ensure privacy and to protect corporate network resources from being wasted by outsiders.

802.1X is designed to leverage corporations’ existing centralized authentication resources, primarily through the use of the Remote Authentication Dial-In User Service (RADIUS). 802.1X takes EAP and ties it to the physical medium—Ethernet or wireless LAN. EAP messages are encapsulated in 802.1X messages and referred to as EAP over LAN (EAPOL).

For wireless networks, 802.1X has three primary components:

• The supplicant, which is the client software trying to be authenticated

• The authenticator, which is the AP (or, on an Ethernet network, a hub or switch)

• The authentication server, which is usually a RADIUS server, although 802.1X doesn’t specifically require RADIUS

The supplicant attempts to connect to the AP, which detects the client and enables its port for communications. The port is placed into an unauthorized state so that only 802.1X-related traffic is accepted and forwarded to the wired network. The supplicant is then required to send an EAP-start message.

The AP responds with an EAP-request identity message, asking for the client's identity. The supplicant then sends an EAP-response message containing the client's identity, which is forwarded to the authentication server. The authentication server uses whatever means it wants to authenticate the client. For example, in an all-Microsoft environment, the authentication server might be a RADIUS front end to Active Directory (AD); Microsoft provides such a front end, called the Internet Authentication Service, with Windows® 2000 and Windows Server™ 2003. The result is that the authentication server sends an accept or reject packet back to the AP.

A reject packet will cause the supplicant’s port to be shut down. An accept packet will cause the port to be placed into an authorized state in which all traffic is accepted and placed onto the wired network to which the AP is connected. The last bit of 802.1X comes at logoff, when the client sends an EAP-logoff message to shift the port back to an unauthorized state.
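
The following sketch condenses that exchange into a toy state machine: the AP keeps the port unauthorized until the authentication server accepts the identity, and returns it to the unauthorized state at logoff. It is a simplified model of the flow described above, not a real EAP implementation; the user store and credentials are invented for illustration.

```python
# Toy model of the 802.1X port-authentication flow described above:
# supplicant -> authenticator (AP) -> authentication server (e.g., RADIUS).
# The credential store and checks are invented for illustration only.

APPROVED_USERS = {"alice": "correct-horse", "bob": "battery-staple"}  # hypothetical

class AuthenticationServer:
    def check(self, identity: str, credential: str) -> bool:
        return APPROVED_USERS.get(identity) == credential

class AccessPoint:
    """The authenticator: controls the port state for one supplicant."""
    def __init__(self, server: AuthenticationServer):
        self.server = server
        self.port_state = "unauthorized"        # only 802.1X traffic accepted

    def eap_start(self, identity: str, credential: str) -> str:
        # EAP-request identity / EAP-response identity, forwarded to the server
        if self.server.check(identity, credential):
            self.port_state = "authorized"      # accept: all traffic forwarded
        else:
            self.port_state = "unauthorized"    # reject: port stays closed
        return self.port_state

    def eap_logoff(self) -> None:
        self.port_state = "unauthorized"

ap = AccessPoint(AuthenticationServer())
print(ap.eap_start("alice", "correct-horse"))   # authorized
print(ap.eap_start("eve", "guess"))             # unauthorized
```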

Putting It All Together So where does it all fit together? The acronyms alone can be hard to keep up with; the following list provides a summary:

• WEP is the original, outdated, and less-than-secure data encryption technique featured in the original 802.11 standard. WEP does not address user authentication.

• 802.11i is the IEEE draft specification addressing wireless LAN security from both a data encryption and user authentication standpoint.

• 802.1X is a port-level authentication scheme used to authenticate clients to a wireless network. 802.1X provides the foundation for 802.11i.

• AES is the new encryption standard created by the United States government, replacing the older DES. AES is also referenced in the 802.11i standard.

• WPA is a subset of the 802.11i draft standard that the IEEE and the Wi-Fi Alliance introduced to provide an immediate replacement for WEP while the standards-setting body hammers out the final 802.11i standard. WPA includes most of the major pieces of 802.11i, including 802.1X, TKIP encryption, and the improved message integrity check (MIC).

• RADIUS is an authentication protocol often used in conjunction with 802.1X. RADIUS can be built as a front-end to other existing authentication services, such as AD.

• EAP is a generic authentication protocol. 802.1X builds on EAP to create a port-level authentication protocol. Several specific authentication protocols, built on EAP, already exist; more are forthcoming.

The Wired Weak Point Keep in mind that all of these features only protect communication between wireless clients and their APs; as soon as the data hits the wired network, it’s completely unprotected by wireless LAN security measures. If you’re concerned about the security of your wired network—a valid concern especially for traffic transmitted over the Internet—you will need to continue to employ higher-level encryption mechanisms, such as IPSec, virtual private networks (VPNs) and Secure Sockets Layer (SSL).

Architecting Secure, Next-Generation Wireless LANs The next-generation wireless network is at your fingertips. All the technology pieces are in place and the products are available now; you just need to deploy them to start taking advantage of faster speeds, higher client densities, better security, and more privacy.

Prerequisites You’ll need to have a few extra pieces available on your network in order to build tomorrow’s wireless network. The first prerequisite is planning today for future wireless LAN implementations. Many new notebooks come with built-in wireless LAN capabilities. Be sure that all new notebook purchases include built-in 802.11g, an inexpensive option that is five times faster than 802.11b. Doing so will ensure connectivity in any 802.11b or 802.11g environment you implement. If budget permits, specify a/g clients. Adding wireless LAN connectivity later can be time consuming and expensive. The following list highlights prerequisites for an efficient and successful wireless network implementation:

• A good plan is the first thing you’ll need. Know your business requirements, where APs are needed, and what type of wireless LAN devices you will be supporting. Understand your users’ wireless LAN bandwidth needs and make plans to meet them. Also make plans to grow the wireless network as utilization increases.

• RADIUS is almost a must in a larger business environment. Fortunately, RADIUS implementations are available for almost any environment and can leverage your existing enterprise directory, if you have one.

• Central provisioning capabilities are useful. You will want to be able to centrally configure all your wireless LAN hardware from a single desktop, if possible.

Client Software Support More and more client devices are being built to include 802.11b, 802.11a, and 802.11g hardware; make sure your OS can handle such hardware. Windows 2000 Professional and Windows XP™ include support for wireless networks, as do Linux®, UNIX® variants, Mac OS, and others. Wireless is becoming more popular in portable devices too, such as Microsoft® Pocket PCs and Palms. The next wave of portable devices to include wireless LAN will be digital cameras, MP3 players, and VoIP phones.

Hardware Support Although it has been mentioned several times, it is worth repeating: the quality of the wireless LAN hardware you select can be critical to your wireless LAN implementation—if not today, then tomorrow. Here are some tips:

• Look for AES that is integrated into the wireless networking hardware. Simply supporting AES isn’t enough; implementing AES in a software driver will place additional unnecessary processing overhead on your client computers and APs, resulting in significantly degraded performance.

• Standardize on equipment that uses lower-powered, inexpensive and reliable CMOS radios. They will be the only choice a few years from now, so there is no reason not to make the smart choice today.

• Select equipment that rigorously complies with IEEE standards. Look for the 54g™ logo for maximum performance 802.11g, and Wi-Fi CERTIFIED logos to ensure the broadest possible range of compatibility, reliability, quality, and future-proofing.

Where Can You Get the Right Wireless Hardware?

Broadcom’s pioneering wireless networking products provide a completely standards-based, forward-looking approach to wireless networking. In addition, Broadcom is the power behind many of the leading brands of wireless LAN products, including Apple, Belkin, Buffalo, Compaq, Dell, eMachines, Fujitsu, Gateway, Hewlett-Packard, Linksys/Cisco, Microsoft and Motorola.

Broadcom's hardware and associated software offer everything you need for a secure, stable wireless LAN solution: single-chip 802.11b components for small-device and low-power scenarios, integrated CMOS radios, AES embedded in the hardware, a universal software driver across a product family, superior radio technologies, and much more.

Management and Maintenance Concerns Wireless networks can bring a new level of management and maintenance concerns if you’re not careful. The following list highlights tips for making your deployment easier to manage now and in the long term:

• Use centralized provisioning whenever possible. Some tools can provision compatible clients with wireless encryption keys, network settings and more, making it easy to configure clients without a trip to each one.

• Use your existing central directory for 802.1X. Most directories provide RADIUS compatibility.

• Select network hardware that can utilize a single software driver for an entire family of products, such as Broadcom’s AirForce family of products. You will be able to maintain fewer OS images and lower your support costs by reducing environment variables.

Summary While you’re building your wireless network of today, take the time to build the wireless network of tomorrow as well. Future-proofing is possible, particularly when you select wireless LAN equipment that is designed to be forward-looking. The following list highlights specific considerations for wireless networking equipment:

• Standards-based—Look for Wi-Fi CERTIFIED equipment as well as equipment carrying the 54g™ logo. Wi-Fi CERTIFIED equipment meets the stringent specifications created by the IEEE and provides the best interoperability between varying brands of equipment. 54g™ equipment supports WPA.

• Dual-band—Look for 802.11a/b/g equipment—both APs and clients—that provides the most flexibility for a variety of networking situations. You will be able to continue to leverage any existing 802.11b investment while taking advantage of the unique strengths of both 802.11a and 802.11g.

• 802.11e implementation—Look for equipment that implements early drafts of the 802.11e specification, including frame bursting. This equipment is designed with the future in mind, so future software updates can provide complete 802.11e compatibility.

• AES encryption in hardware—Look for equipment that includes AES capabilities built-in to the hardware—such as devices carrying the 54g™ brand—because hardware AES support provides better performance with less overhead on client computers.

Today, wireless networking is one of the most exciting areas of networking. Wireless networks are becoming more secure than wired networks, as few wired networks today offer 802.1X port-level authentication and continuous encryption. Architecting wireless networks isn’t difficult, and you can build a future-proofed network by choosing equipment that is built to today’s standards while looking forward to tomorrow’s developments.

Broadcom®, the pulse logo, Connecting everything®, 54g™, the 54g logo, AirForce™, Broadcom Xpress™, OneDriver™ and SmartRadio™ are trademarks of Broadcom Corporation and/or its affiliates in the United States and certain other countries. Wi-Fi®, Wi-Fi CERTIFIED™, Wi-Fi Protected Access™ and Wi-Fi ZONE™ are trademarks of the Wi-Fi Alliance. Bluetooth® is a trademark of Bluetooth SIG. Windows®, Windows XP™, Windows 2000™ and Windows Server™ are trademarks of Microsoft Corporation. Linux® is a trademark of Linus Torvalds. UNIX® is a trademark of Unix System Laboratories, Inc. All other trademarks or trade names are the property of their respective owners.

Chapter 4: Switching Intelligence in the Enterprise

Switched networks are a ubiquitous part of the corporate networking environment. Previously used primarily on network backbones, switches are now commonly found at the workgroup level, having replaced the simple network hub at only a slightly higher price but with significantly greater return on investment (ROI).

Networks do not remain static. As business needs change and evolve, so must the corporate network. However, for business to grow unconstrained by technology roadblocks, the corporate network infrastructure must stay two steps ahead of the current corporate networking demands. Fortunately, the current generation of intelligent switching technologies, coupled with the cutting edge advances in network performance, enables smart network administrators to spend their dollars on intelligent Gigabit Ethernet (GbE) hardware (both switches and network interface cards—NICs) to provide the technology growth path that the business enterprise needs.

These days, technology is rarely the primary driving force behind network upgrades. In a tighter economy, it is critical that infrastructure updates result in an evident positive impact on the bottom line. Although the phrase “do more for less” sends shudders down the spine of most IT managers, the reality is that dollars need to be more carefully spent. The challenge with IT spending is to find products that either improve the overall performance of the enterprise in a quantifiable fashion or reduce overall costs.

Surprisingly, a migration to a fully switched GbE infrastructure can do both. Given that newly purchased client and server hardware will include GbE as the standard network interface, it makes sense to take advantage of the significant performance advantages that GbE will bring to the network. To do so, network administrators must plan to implement an end-to-end GbE topology.

Intelligent switching makes it possible to improve network performance and reduce network complexity while making the network easier to manage. Switching offers the advantages of network consolidation and more direct control over network resources, and it provides the additional benefit of a more secure computing environment. Switching solutions are available for every application, from desktop workgroup switches to Internet/core routers, and provide recognizable benefits at each level of the network. Even a single intelligent switch in the right place within the network architecture can bring immediate improvement to the behavior of the network environment.

Environments that are running applications that can show a direct benefit from improved network performance and security—such as e-commerce, remote call center support, and multimedia broadcasting (interactive video and video streaming) applications—are ideally suited to intelligent switching in the GbE environment. Adding these intelligent switches to the network will enable you to take advantage of the latest technologies that require bandwidth and network control such as Voice over IP (VoIP). Implementing intelligent switching is the path to the future for the network computing environment.

What Is a Switch? At the most basic level, a switch is a device that controls signals going from one side of the device to the other. In networking terms, a switch handles the signals coming in across either a copper or fiber connection, and directs the signals to the other side of the switch. Multi-port switches are able to route inbound traffic across any supported media and send the traffic outbound on any other port. Intelligent switches provide additional capabilities. In a switched environment, each port on the switch has a dedicated full-bandwidth connection available to it; in a more traditional hub, each connection shares the available bandwidth and must attend to the various network contention issues inherent in the Ethernet design (see Figure 4.1).

Figure 4.1: The basic switch architecture has not changed, but the performance of the network has increased tenfold.

Today's typical switch will offer one or two high-speed ports (usually GbE) for connecting back to a network backbone or directly to a server, along with 24 to 48 lower-speed (10/100) ports for connecting client devices. Although most network backbones are being upgraded to GbE, only a small percentage of those upgraded backbones are attached to upgraded switches for the client connections; an overwhelming majority of those client switches are still based on 100Mb Ethernet. The current generation of GbE switches offers 10/100/1000 connectivity to each client. Clearly, a migration to GbE for all the client connections is the next step in corporate network evolution, especially given the minimal incremental costs involved to achieve this configuration.

Adding intelligent switching to a network is a simple process. Even if the current networking infrastructure consists of an antique backbone and hub configuration, the migration to intelligent switches will be transparent to end users. Regardless of the level of sophistication of the current network configuration, the benefits that intelligent switching presents to the network administrator, as well as the potential competitive business advantages enabled by this technology, far outweigh the incremental cost of the migration.

Intelligent Switching An intelligent switch is one that, at the very least, knows something about the traffic it is passing through the network. Because it can examine each packet as it passes through, the switch can make decisions about each packet. The most common decisions relate to the routing of each packet. These routing and forwarding decisions are applied to each packet based on the type of traffic contained within the packet and the priority that has been assigned to that type of data.

Intelligent switches are usually referred to by their position in the Open Systems Interconnection (OSI) network model. For example, a Layer 2 switch refers to Layer 2 in the OSI model (see Figure 4.2), which is the Data Link Layer. As such, the Layer 2 switch is always aware of a data packet's media access control (MAC) address, which is unique to each device on your network. The unique identification available through MAC addresses enabled the transition from shared-media networking, in which information was broadcast to every network node, to switched networking, in which information is transmitted only to the target node.

Figure 4.2: The OSI model is used as a standard definition of a network.

All switches must be able to forward packets to the appropriate client; basic Layer 2 switches include the ability to understand and utilize the Layer 2 priority settings and virtual LAN connections. This enhanced feature is important because the virtual LAN (VLAN) capability allows network designers to create virtual networks with the existing wiring infrastructure, without rewiring. As a result, physical proximity is no longer a requirement for clients attached to a specific network. The network administrator must simply ensure that the switch port is assigned to the configured VLAN.

A VLAN is created by logically grouping two or more network nodes. These nodes need not reside on the same network segment or even be attached to the same switch. All the nodes in a VLAN share the same IP network address. For VLAN standards information, refer to the IEEE 802.1Q resources.
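
A minimal sketch of that idea follows, with made-up port and VLAN assignments: frames are delivered only to ports that share the sender's VLAN, regardless of where those ports physically sit.

```python
# Minimal illustration of VLAN-based forwarding: a frame entering on one port
# is delivered only to other ports assigned to the same VLAN. Port numbers
# and VLAN IDs are made up for the example.

port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}   # switch port -> VLAN ID

def egress_ports(ingress_port: int) -> list[int]:
    """Ports eligible to receive a broadcast frame from ingress_port."""
    vlan = port_vlan[ingress_port]
    return [p for p, v in port_vlan.items() if v == vlan and p != ingress_port]

print(egress_ports(1))   # [2, 5] -- same VLAN, possibly different wiring closets
print(egress_ports(3))   # [4]
```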

The more advanced intelligent switches can also make use of the IP address information in Layer 3 and port information in Layer 4 to prioritize applications, giving higher priority to critical traffic or applications or simply guaranteeing that the network port to which the CEO is connected always has a high priority. Some classes of intelligent switches can also make use of the packet data that relates to Layers 5, 6, and 7 to perform tasks such as content filtering and spam detection.

With the utilization of intelligent switches, network administrators have detailed control over the traffic within their network. Thus, the fact that switches can improve network performance isn’t their primary selling point; their detailed control and management capabilities make these switches a valuable addition to a network infrastructure. These multi-layer switches (those that can deal with traffic on more than just the Data Link Layer) can not only read the addressing information in each packet to determine which type of data the packet contains, but also can, when properly configured, apply business-derived policies to the network traffic such as rate limiting and load balancing.

The OSI model uses seven layers to identify a network; the TCP/IP model uses four layers to define the IP structure. These TCP/IP layers map as follows:

TCP/IP Application Layer maps to OSI Layers 5 through 7 (Session, Presentation, and Application)

TCP/IP Transport Layer maps to OSI Layer 4 (Transport)

TCP/IP Internet Layer maps to OSI Layer 3 (Network)

TCP/IP Network Interface Layer maps to OSI Layers 1 and 2 (Physical and Data Link)

For example, the Data Link Layer of the OSI model (Layer 2) has the MAC address information needed to deliver a packet to the correct destination. The IP address header information is contained in the OSI Network Layer (Layer 3), while the TCP/UDP header information and the data packet are contained in the OSI Transport Layer (Layer 4).

Layer 4 switches can identify the transmitting and receiving application by the TCP/IP port through which the application's traffic is being switched, while a Layer 7 switch can read the application layer information in the packet to determine the actual application. Although the information found at Layer 7 allows for a more accurate determination of the application that is transmitting and receiving the packet, the identification performed by a Layer 4 switch is sufficient for most purposes (because most applications use well-known TCP/IP ports and can be identified by them). Layer 7 switches are therefore used in more specialized applications for which reading the packet data at the application level is necessary.
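
To show where those decisions come from, the sketch below pulls the Layer 2, 3, and 4 fields out of a raw Ethernet/IPv4/TCP frame. The sample bytes are fabricated, and options and error handling are ignored; the point is simply which header a switch reads at each layer.

```python
# Where each switching layer looks inside a frame: MAC addresses at Layer 2,
# IP addresses at Layer 3, TCP/UDP ports at Layer 4. The sample frame is
# fabricated and the parser skips options and error handling for brevity.
import struct

def parse(frame: bytes) -> dict:
    dst_mac = frame[0:6]                     # Layer 2: Ethernet destination MAC
    ip = frame[14:]                          # skip 14-byte Ethernet header (assume IPv4, no VLAN tag)
    ihl = (ip[0] & 0x0F) * 4                 # IP header length in bytes
    dst_ip = ip[16:20]                       # Layer 3: IPv4 destination address
    src_port, dst_port = struct.unpack("!HH", ip[ihl:ihl + 4])  # Layer 4: TCP ports
    return {
        "L2 dst MAC": dst_mac.hex(":"),
        "L3 dst IP": ".".join(str(b) for b in dst_ip),
        "L4 dst port": dst_port,             # 80 would suggest HTTP to a Layer 4 switch
    }

# Fabricated sample frame: Ethernet header + minimal IPv4 header + TCP ports.
frame = (
    bytes.fromhex("aabbccddeeff") + bytes.fromhex("112233445566") + b"\x08\x00"
    + bytes.fromhex("4500003c00004000400600000a000001c0a80001")
    + struct.pack("!HH", 49152, 80) + bytes(16)
)
print(parse(frame))
```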

Key Functionality of Intelligent Switches The following sections cover some of the key features that set intelligent switches apart from simple Layer 2 switches. These sections provide basic reasons why intelligent switching in the enterprise is the appropriate method to begin “future proofing” your network investments.

Quality of Service Quality of service (QoS) is the ability to make the best use of available bandwidth by prioritizing and coordinating network traffic. Certain types of applications—such as VoIP, video, and applications that require real-time speed—need to have their priority set higher than normal data traffic. These applications are delay sensitive, and if the application can’t get the bandwidth and network availability it needs, the application will fail.

QoS does not mean giving time-sensitive applications priority over all other traffic, as the goal is for the network to continue functioning for all required purposes (even if bandwidth-intensive applications are running). The most critical traffic is given the highest priority, while less-critical applications still get the bandwidth they need to function.

Prioritization means that critical applications don't get shortchanged on bandwidth, even when there are sudden increases in network utilization, such as the 9AM burst when all of the network users log on for the day. Without QoS controls, it would be difficult, if not impossible, to maintain the service level guarantees that IT departments make to their business units. Conversely, QoS controls allow the IT department to monitor network behavior relative to bandwidth needs on a very granular basis.

Most QoS solutions operate on what is called a "best effort" basis. That is, the attempt is made to provide the level of service that is requested. Combining QoS with prioritization services gives traffic management the best possible chance of achieving the desired service level. Bandwidth and traffic management tools are still going to be a necessary part of determining whether your network bandwidth is being used most efficiently and whether there is sufficient bandwidth available for your networking needs.

Proper implementation of QoS requires an end-to-end implementation of intelligent switching, which, in turn, will improve the efficiencies of existing networks beyond that which could be achieved by simply adding bandwidth. A fat pipe is well and good; a well-managed fat pipe with QoS control is even better.
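
As a rough sketch of the prioritization behavior described above, the example below sorts packets into priority queues and serves the higher-priority classes more often without starving the rest. The traffic classes and weights are invented for illustration; they do not correspond to any particular switch's QoS configuration.

```python
# Toy weighted scheduler illustrating QoS prioritization: voice is served more
# often than bulk data, but lower-priority queues are never starved. The
# traffic classes and weights are invented for illustration.
from collections import deque

queues = {
    "voice": deque(f"voice-{i}" for i in range(4)),
    "video": deque(f"video-{i}" for i in range(4)),
    "data":  deque(f"data-{i}"  for i in range(8)),
}
weights = {"voice": 4, "video": 2, "data": 1}   # packets served per round

def schedule() -> list[str]:
    sent = []
    while any(queues.values()):
        for cls, weight in weights.items():
            for _ in range(weight):
                if queues[cls]:
                    sent.append(queues[cls].popleft())
    return sent

print(schedule())   # voice drains quickly, but data is still serviced each round
```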

Security Intelligent switches are able to control access on a port-by-port basis. Issues of authentication (is the user who they claim to be) and authorization (is the user allowed to do this) are the bread and butter of switch security. ACLs applied on a per-port basis can quickly limit the access of any intruder that manages to penetrate your perimeter security. Multi-layer switches have the ability to analyze the contents of network traffic more closely; thus, the signature pattern of a hacker attack can be recognized, network viruses can be detected, denial of service (DoS) attacks can be caught early, and the switch network can be configured to address these threats after a security breach has been discovered. Certain types of network applications that require enhanced security models, such as e-commerce, gain an instant level of additional security when run across intelligent switching infrastructures by identifying and classifying traffic and/or monitoring ports and addresses.
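
A minimal sketch of per-port access control of the kind described here: each switch port carries an ordered rule list, the first matching rule wins, and anything that matches no rule is dropped. The ports, subnets, and rules are hypothetical.

```python
# Minimal per-port ACL evaluation: first matching rule wins; traffic that
# matches no rule is implicitly denied. Ports, subnets, and rules are hypothetical.
from ipaddress import ip_address, ip_network

# Rules per switch port: (action, allowed source network, destination TCP port or None)
ACLS = {
    7: [("permit", ip_network("10.20.0.0/16"), 443),   # finance clients, HTTPS only
        ("deny",   ip_network("0.0.0.0/0"),    None)],
}

def allowed(switch_port: int, src_ip: str, dst_port: int) -> bool:
    for action, network, port in ACLS.get(switch_port, []):
        if ip_address(src_ip) in network and (port is None or port == dst_port):
            return action == "permit"
    return False   # implicit deny

print(allowed(7, "10.20.3.5", 443))   # True
print(allowed(7, "192.0.2.9", 443))   # False
```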

All of the perimeter security in the world doesn’t prevent security issues created within the network domain. Studies have shown that the single largest threat to network security is computer viruses, followed closely by employee abuses of network resources. Intelligent switches are critical to limiting the unauthorized use of network resources by otherwise authorized employees. As a result, the end-to-end intelligent switching infrastructure provides improved security for critical corporate data and yields a higher level of confidence relating to the safety and security of the corporate network. Entire departments that should be off-limits to most users—such as finance and human resources—can still be connected to the same networking infrastructure with complete security.

With VLANs, geographic proximity is not a requirement for departmental users. Different business units can have their own finance departments that are all connected to the regular corporate network as well as to any shared financial resources. Because security is built into the basic architecture of the intelligent switch, there are cost savings in both the short and long term associated with the deployment of intelligent switching technologies.

Management The intelligence built into the current generation of intelligent switches makes them remarkably easy for network administrators to use. Even with the detailed control over the network environment that an end-to-end intelligent switch infrastructure provides, the ability to manage that infrastructure from a single interface makes network administrators' tasks significantly simpler. For example, providers of semiconductor solutions such as Broadcom offer customers a single API set that works across their family of switch products so that OEMs can build custom applications that provide end-to-end management across the networking infrastructure. As new products are added to the family, they can be easily integrated into the network to support the custom applications already created.

OEMs will offer dedicated switch-management software tools along with industry-standard SNMP MIBs that allow information to be provided to enterprise-management consoles. Depending upon the vendor, direct add-in modules for those enterprise-class consoles may also be available.

Scalability The ability to stack intelligent switches and provide a high-bandwidth connection between the switches enables administrators to build an easily scalable network infrastructure that can be expanded by simply adding switches. The improved manageability of these intelligent switches means that adding a switch to the network requires minimal effort, making network expansion an almost effortless task.

Additionally, the high-performance interconnects between the switches allow for a fair level of resiliency. For example, a switch's ability to control another switch provides a level of high availability and prevents a failed piece of silicon from bringing down the connections to that switch. Depending upon the type of failure and the configuration of the switches, a failover that is not detectable to end users could occur, preventing a data center switch failure from bringing down the entire network.

Many intelligent switches currently on the market give network administrators deployment options of 10Mb Ethernet, 100Mb Ethernet, 1GbE, and 10GbE, allowing the switches to be installed at both the core and departmental infrastructure levels without needing to migrate clients simultaneously.

VoIP One of the driving forces behind the acceptance and use of high-performance intelligent switching is VoIP: not simplistic PC-to-PC voice communication, but telephony-quality voice connections running over the same networking infrastructure that carries data. The major stumbling block in the widespread deployment of VoIP has been the lack of reliable end-to-end QoS in the corporate enterprise. For VoIP to work, end-to-end control of the connection is required to ensure that a usable voice connection is maintained. Although not as susceptible to latency issues as video connections, VoIP still has stringent requirements for latency and dropped packets. As a result, reliable VoIP depends on two factors: latency and bandwidth. From the bandwidth perspective, voice is very compressible, so a single conversation doesn't require much bandwidth; however, the bandwidth needs to be available throughout the conversation (high-volume uncompressed voice communication can be very bandwidth-intensive).

A congested network would then have both latency and bandwidth issues that would prevent the deployment of VoIP. If the switches that reside within the network are replaced with intelligent switches that are traffic aware and support QoS, the implementation of VoIP becomes much simpler.

Go to http://www.iptelephony.org to keep up with what is happening in the VoIP marketplace. Deploying a VoIP solution within your corporate enterprise requires far more than just adding a switching architecture to your network. A significant investment in IP telephony client hardware is necessary, as is ensuring that you have the bandwidth available to support the telephony functionality. The end-to-end QoS solution provided by intelligent switching is an enabling technology for VoIP, not a solution to all VoIP issues.

Video The same technology that is critical for VoIP is needed to stream real-time video across your network. We’ve all seen what network-based video looks like—jerky movements, dropouts, out-of-sync voices, and so on. The reason for this level of performance is the lack of a QoS mechanism to guarantee that the video stream arrives and the packets that make up the stream are in the correct order.

When switches in an end-to-end solution can examine the packets being transmitted and determine that a video feed is contained within them, the switches can negotiate for the necessary bandwidth to ensure a good video experience for users. Such technology takes the multimedia network experience far beyond what users are accustomed to (current technologies such as Microsoft® NetMeeting® over a non-switched network).

Wireless LAN Switching Wireless networking has become a fact of life and the flexibility that wireless network access provides has made it a key part of the networking model for many corporate businesses. The ability to provide security on wireless networks has proven to be somewhat more difficult than implementing the wireless infrastructure.

Network administrators have tended to take a brute-force approach to providing network security and still allowing wireless access. In many cases, they simply treat the wireless user like a remote user, configuring the wireless connection to use a VPN and IP tunneling with RADIUS authentication for access to corporate resources. However, technology designed for use by low-speed dial-up users doesn’t always transition well to a high-speed network. Thus, networks that support many wireless users have suddenly found their remote networking resources severely taxed—and they still aren’t completely addressing the security needs of their network.

In a switched network, network administrators can aggregate the network switch ports to which wireless APs are attached into a VLAN. This VLAN is then configured to be outside of the normal corporate network, requiring the aforementioned remote authentication mechanisms for users to access corporate networking resources. These solutions have their own problems and don’t address the issues that arise from users who move from docked (wired) to undocked (wireless) work locations within the network.

WLAN switches don't eliminate the need for standard network authentication methods such as RADIUS servers; they simply make client configuration simpler and can allow features, such as a tunneled IP connection, that enable users to move across APs without losing the connection. Strict security practices must still be followed, even in the inherently more secure intelligent switch environment.

The concept behind wireless LAN switching is to move the intelligence found in the wireless APs back to a wired multi-layered switch that is optimized for handling 802.11 clients (see Figure 4.3). The wireless APs become extensions of the switched network ports to which they are connected. Thus, problems such as rogue APs are no longer an issue as the AP alone no longer provides access to the network. An unauthorized AP attached to a switch port would, therefore, not be authenticated to the wired network.

Figure 4.3: Wireless LAN switches reduce the security exposure of wireless APs and simplify the associated management tasks by moving the higher-level functions associated with wireless access back to an intelligent switch.

As a result, the security functionality is moved to the switch, with authentication and ACLs adapted to follow wireless users as they move around within the wireless network. Multi-layer switches allow wireless users to roam between APs, subnets, and VLANs, and provide the wireless user with smooth access into the wired network infrastructure.

Because it is impossible to physically secure the transmission medium over which a wireless network operates, a very robust security model must be in place. In this scenario, administrators need to be able to prevent three things from occurring: unauthorized access, rogue APs (unauthorized APs attached to the network), and unauthorized APs that might overlap with the network's wireless infrastructure. With a wireless LAN switch, administrators can take several steps at the switch level to combat these security problems. Unauthorized users can be locked out of the network, rogue APs can be quarantined or blocked completely, and unauthorized APs can be prevented from any user associations. These actions are performed at the switch level using the same switch-management tools used to run the wired network.

Wireless LAN intelligent switches also integrate wireless network management with standard switch-management tools. The integration of wireless and wired switches within the same network, or even in the same chassis, allows IT departments to provide the same level of service to their wired and wireless users.

Management issues also become much simpler as APs become antennas that give wireless users access to a secure networking environment with the wired network’s entire security model in place (plus any additional security that the administrators choose to apply to wireless connections). Wireless encryption can then be performed at the wireless LAN switch (rather than at the APs). Thus, as the encryption algorithms are changed, improved, and updated, it is only necessary to update the wireless LAN switches without worrying about upgrading or replacing possibly hundreds of wireless APs.

In a wireless LAN environment, the APs serve as antennas and air monitoring devices that provide connectivity and network environment information back to the wireless LAN switch.

By moving the intelligence back to the switch, this architecture makes APs far simpler to manage and deploy and creates a much more secure environment. Adding APs to the network is no longer a manageability concern; whether you add 10 or 1000 APs, configuration isn't an issue because you can easily apply the appropriate policies that configure the wireless APs at the switch level.

An interesting feature that wireless LAN switches support is the ability to assign different wireless connection rates to different users. Users who do basic office automation might be limited to 11Mbps connection speeds, while a user attending an online demo might be given 54Mbps. Multi-layer intelligent switches can make decisions based on applications, so these connection speeds can be determined and assigned automatically. Obviously, the maximum connection speed depends upon the actual wireless networking standards implemented (802.11a, 802.11b, 802.11g). This functionality helps with the "lowest common denominator" issue that plagues wireless networks, in which the entire wireless network ends up slowing to the speed of the slowest device connected via the AP.
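
A sketch of that policy idea follows, with invented application classes and rates: the switch maps what it has identified about the traffic to a wireless rate cap and falls back to a conservative default. None of these values come from a real wireless LAN switch configuration.

```python
# Hypothetical policy lookup of the kind a wireless LAN switch might apply:
# the connection rate assigned to a client depends on the application class
# identified by the multi-layer switch. Classes and rates are invented.

RATE_POLICY_MBPS = {
    "office-automation": 11,    # e-mail, intranet pages
    "online-demo": 54,          # streaming screen share
    "voip": 11,                 # low bandwidth, but latency-sensitive
}

def assigned_rate(application_class: str, default: int = 11) -> int:
    return RATE_POLICY_MBPS.get(application_class, default)

print(assigned_rate("online-demo"))        # 54
print(assigned_rate("office-automation"))  # 11
```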

All of these features will make wireless networking a more practical and acceptable alternative. For business units that can make do with the overall performance limitations that wireless networks currently have, users who do not need high-bandwidth connections can be set up with wireless network access. Entire departments that perform low-bandwidth tasks, such as data entry, will no longer need an expensive cable plant to provide secure connectivity. Deploying shared devices, such as network printers, will lose any physical connectivity requirements; the printer can be placed anywhere in range of an AP that has an electrical outlet. Less complexity in managing, deploying, and maintaining your wireless networks means less strain on IT resources and a strong ROI for wireless LAN switches.

Wireless LAN switches are appropriate for deployment in both wiring closet and data center network architecture models.

With secure wireless LAN switching in place, the potential for wireless VoIP becomes apparent. Currently, wireless VoIP is used primarily in niche vertical markets, such as the healthcare industry, in which wireless handheld devices can be extended by adding VoIP capabilities. The ability of a multi-layered switch to examine the traffic that is contained within each packet makes it easier to address wireless VoIP concerns. The future for wireless VoIP, however, isn’t limited to handheld devices. Mobile telephony providers are looking towards wireless VoIP as a method to allow seamless mobility for cellular users between their wide-area cellular networks and the user’s corporate VoIP network.

Intelligent switching is critical in making this transition because the security and authentication mechanisms native to the intelligent switch architecture are a requirement for the successful implementation of wireless VoIP. Security and authentication can be maintained for every wireless user and device, giving different capabilities to users based on device, location, and other parameters, anywhere within the user’s network enterprise.

ROI/Convergence The utilization of intelligent switches makes the existing network infrastructure significantly more efficient. Thus, the incremental costs associated with a migration to GbE intelligent switches can be recovered quickly, even before the client infrastructure has been moved to GbE and before the cost savings of a full GbE infrastructure can be realized.

Intelligent switches provide detailed information to network administrators about the traffic passing through their networks and allow decisions to be made about routing, prioritizing, access control, bandwidth allocation, QoS, high availability, security, VoIP, and more. In addition, none of these decisions need be static. For example, quality-management tools provide choices that can be conditional on such factors as time of day, network load, load balancing, and destination addresses—whichever factors the administrator feels are appropriate for the networking environment. The result is an integrated transmission model in which voice, data, and video are optimized to deliver real-time business communications.

Intelligent switching makes it possible to support all of the previously mentioned technologies on a single, unified, networking infrastructure. The result is a converged networking infrastructure with fully implemented support for voice, video, wireless, and data networking within that single integrated infrastructure. This converged network can result in not only an improved ROI on dollars spent on the networking infrastructure but also a better bottom line due to a more efficient IT infrastructure.

Implementing Intelligent Switching As was pointed out earlier in this chapter, to get the full benefits of an intelligent switched network architecture, an end-to-end solution is required. Even so, a gradual migration to a fully switched infrastructure will provide significant benefits to the corporate network, provided the deployment plan is comprehensive and management expectations are realistic. It is also possible to use intelligent switching to support specific applications.

Starting from a traditional backbone router and hub configuration, data center servers are connected to a networking backbone that is available throughout the corporate enterprise. In most cases, the physical network layout either aggregates everything back to the data center or uses a local wiring closet model in which the corporate backbone is extended to a location in closer physical proximity to its clients.

Regardless of the topology of your networking architecture, consider using multiple NICs in any server that is in active use within your enterprise. You can do so in many ways. The simplest way is to have more than a single NIC installed in the server. Many servers now come standard with two NICs on the motherboard, the latest of which are 10/100/1000-capable devices. That second NIC, connected to another port on the hub or switch, effectively doubles the network bandwidth available to the server.

There are also dedicated aggregate NIC interfaces, some of which include as many as four network interfaces on a single PCI card. This approach was practical with 10/100 Ethernet, but with GbE, it is more efficient to add multiple individual NICs to the server box. Even with order-of-magnitude increases in network bandwidth, the network is still the most significant bottleneck in a well-designed server system.
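One common way multiple NICs are put to work is to spread traffic across them per flow, much as link aggregation (IEEE 802.3ad) does with a hash of each flow's addresses and ports. The following is a simplified, hypothetical illustration of that idea, not a driver or switch implementation.

```python
# Simplified illustration of per-flow distribution across multiple server NICs,
# in the spirit of link-aggregation hashing. Field names are assumptions.

from dataclasses import dataclass
import zlib

@dataclass(frozen=True)
class Flow:
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int

def pick_nic(flow: Flow, nic_count: int) -> int:
    """Hash the flow identifiers so every packet of a flow uses the same NIC,
    while different flows spread across all available NICs."""
    key = f"{flow.src_ip}-{flow.dst_ip}-{flow.src_port}-{flow.dst_port}".encode()
    return zlib.crc32(key) % nic_count

if __name__ == "__main__":
    flows = [Flow("10.0.0.5", "10.0.1.9", 40000 + i, 80) for i in range(6)]
    for f in flows:
        print(f"flow from port {f.src_port} -> NIC {pick_nic(f, nic_count=2)}")
```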

In either case, the equipment providing the ports that connect to client computers and servers will need to be replaced, because those ports are probably on older-technology hubs, routers, and switches. After the intelligent switches are implemented, it is not a requirement that any other network components be upgraded. Replacing the connection equipment in the data center and/or wiring closets with GbE intelligent switches will be completely transparent to users.

With no other changes to the network, significant capabilities to traffic management and bandwidth control functions will be added. In addition, the following security benefits will result from the new switching architecture:

• Access control—Network administrators will now be able, on a per-port basis, to authenticate the user that is accessing the network via the associated port. ACLs let these administrators create groups of users with permissions. Authentication for network access will be handled at the switch level, regardless of the network OS. Secure tunneling and VPN connections will now be easier to support and maintain.

• DoS protection—Well-known DoS attacks will be trapped at the switch level and not allowed to disrupt users on the network.

• Virus attack protection—Many viruses can be identified and blocked.

All of these features make use of functionality that is built-in to the switches. Existing security protections are further enhanced by making use of the security functions of the switched infrastructure.

Traffic-management functionality within the switches provides more efficient use of the existing network bandwidth with features such as bandwidth management and rate limiting. For example, users who are surfing the Web will have less bandwidth than users who are performing internal database queries. Management tools enable administrators to configure such features to have dynamic behavior, changing as the network conditions change.
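Rate limiting of the kind described above is commonly implemented with a token bucket: tokens accumulate at the permitted rate, and traffic is forwarded only while tokens remain. The sketch below is a generic illustration with assumed rates, not the firmware of any particular switch.

```python
# Generic token-bucket rate limiter, the mechanism typically behind
# per-user or per-port bandwidth limits. All rates here are assumed values.

import time

class TokenBucket:
    def __init__(self, rate_bytes_per_s: float, burst_bytes: float):
        self.rate = rate_bytes_per_s   # refill rate (the enforced bandwidth)
        self.capacity = burst_bytes    # maximum burst allowed
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes: int) -> bool:
        """Refill tokens for the elapsed time, then forward the packet if enough
        tokens remain; otherwise drop (or queue) it."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False

if __name__ == "__main__":
    # Limit Web-surfing traffic to roughly 1 MBps with a 64 KB burst (assumed policy).
    web_limit = TokenBucket(rate_bytes_per_s=1_000_000, burst_bytes=64_000)
    forwarded = sum(web_limit.allow(1500) for _ in range(100))
    print(forwarded, "of 100 packets forwarded within the burst allowance")
```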

Traffic-redirection capabilities enable specific application traffic to be routed directly to the servers running the applications. In most cases, these capabilities are used to share Web servers that are high-traffic sites so that the traffic generated by the Web site doesn’t slow the performance of the local network. Depending upon the capabilities of the multi-layer switch, administrators could also add firewall, VPN, network load-balancing, application load-balancing, server load-balancing, and a wealth of other features.

As mentioned earlier, stacking the switches achieves resiliency and adds a measure of high-availability to the network infrastructure. Don’t confuse resiliency with redundancy. In a redundant environment, there is backup hardware that duplicates the primary hardware and is available in case of a catastrophic failure. The goal of a resilient network infrastructure is to have a network that maximizes uptime without requiring that every piece of critical equipment be duplicated.

Once Layer 4 switching is implemented, switch intelligence has the ability to load-balance applications and traffic across multiple servers and monitor and health-check applications. The switching architecture is now working in a way that improves network and application availability without the need to invest in specific high-availability solutions. Once again, if you already have an investment in high-availability devices, the switched architecture will enhance the reliability and availability of those devices.
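The Layer 4 behavior described above (spreading connections across a pool of servers while skipping any server that fails a health check) can be sketched as follows. The server addresses, port, and timeout are assumptions chosen for the example.

```python
# Illustrative Layer 4-style load balancing with a simple TCP health check.
# The server pool and port are assumptions for the example only.

import itertools
import socket

SERVERS = [("192.0.2.10", 80), ("192.0.2.11", 80), ("192.0.2.12", 80)]
_rotation = itertools.cycle(SERVERS)   # persistent round-robin state

def is_healthy(addr, timeout=0.5) -> bool:
    """A server passes the health check if it accepts a TCP connection."""
    try:
        with socket.create_connection(addr, timeout=timeout):
            return True
    except OSError:
        return False

def pick_server():
    """Round-robin over the pool, skipping servers that fail the health check."""
    for _ in range(len(SERVERS)):
        candidate = next(_rotation)
        if is_healthy(candidate):
            return candidate
    return None  # no healthy servers available

if __name__ == "__main__":
    print("Next connection goes to:", pick_server())
```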

Resilient hardware can be made up of redundant components. A complete set of duplicate intelligent switches would provide complete redundancy. When you are focused on resiliency, you would want certain functions of your core switches to be redundant, such as chassis with dual hot-swappable power supplies and support for hot-swappable switch blades. This minimal redundancy improves the resiliency of the network infrastructure without requiring the expense of duplicate hardware, yet it still provides the additional reliability that characterizes a high-availability working environment.

Stacked switches can also be considered hot-swappable in the sense that additional ports (or replacement ports) can be added without bringing down the network. In the event of a switch failure, a newly added switch can obtain the configuration information of the switch it needs to replace from the server, then be placed in the existing stack where the logical architecture of the stack will be rebuilt automatically without affecting the stack’s normal operation.

Managing the stack is also simplified by the fact that there is a single IP address for the entire stack (in-band management) and all of the switches in the stack are treated as if they are a single switch. Management instructions are passed to the top of the stack and redistributed through the stacked switches without direct action by the systems administrator.

In Figure 4.4, each switch has twelve 10/100/1000Mb Ethernet ports that support copper or fiber connections. These connections can be routed to individual clients or servers. Each switch in this diagram has high-speed expansion ports that allow the switches to be daisy-chained together (that is, stacked). The top switch is then connected via the high-performance uplink interface to the bottom switch, completing a connection loop that allows redundancy and reliability improvements.

Figure 4.4: Three stackable multi-layer switches.

Upgrading your backbone to 10GbE will provide significantly more bandwidth to play with on the network; however, to fully realize the benefits of this increased bandwidth, the clients need to be upgraded. As discussed in Chapter 2, adding GbE client NICs is a small incremental cost over 10/100 NICs and most desktop computer vendors are moving towards making GbE connectivity standard in their line of computers targeted directly at business consumers. To maximize the benefits of intelligent switching, an end-to-end implementation is required.

While researching intelligent multi-layer switches, you will often run into the term switch fabric. The switch fabric is the combination of hardware and software that moves traffic entering a switch node to the correct outgoing port. A switch fabric contains multiple switch units (the actual integrated circuitry that handles the data manipulation) plus the software that controls the switching paths.

An example of a switch fabric is the Broadcom® StrataXGS® BCM5670. The BCM5670 is a non-blocking, 160Gbps switch fabric with eight high-speed ports. This fabric would be combined with other hardware and software to build an actual intelligent switch. The switch would then be part of a chassis or a standalone switching product with or without stacking capability.

Fortunately, the switches will support legacy Ethernet architectures, not just GbE. While the performance benefits of GbE won’t be realized by legacy clients, the other benefits of the switch are available. Because the switch is intelligent, there is an awareness of whether a port is running with sufficient bandwidth to service the requests that the attached computer is making for network resources.

Figure 4.5 shows a good example of a stackable switch. The Nortel BayStack® 5510 family of switches offers 24 or 48 10/100/1000Mbps ports for desktop switching and provides high-density wiring closet connectivity to GbE desktops. The user can stack as many as eight discrete switches and a maximum of 384 ports in a single stack.

Figure 4.5: The Nortel BayStack® 5510 stackable Layer 3 switch uses the Broadcom StrataXGS switch fabric.

Using technology based on the Broadcom StrataXGS BCM5670 switch fabric, the BayStack 5510 not only offers a wealth of GbE client ports but also a 40Gbps full-duplex stacking architecture on each switch. This means that each switch can be communicating with other adjacent switches in the stack at 40Gbps, simultaneously transmitting and receiving data at that speed, for a total of 80Gbps stacking bandwidth per switch or 640Gbps total bandwidth for a fully configured eight-switch stack. These are meaningful numbers considering that the Layer 3 routing for that switch is performed at wire speed.

The block diagram in Figure 4.4 and the example in Figure 4.5 demonstrate discrete switching hardware products. You are just as likely to encounter chassis-mounted products, especially given the port densities allowed by the Broadcom switching fabric products. Chassis blades with 48 ports are possible (see Figure 4.6), giving very high densities with multiple switch blades in a single chassis. Data center switch installations can benefit from these high-density switches.

Figure 4.6: IBM eServer™ BladeCenter technology.

The consolidation made possible by these rack-mounted chassis—combining blade servers and multi-layer switches in a single package—is quite impressive (Figure 4.6). Communication among all of the devices in the chassis, by means of the ultra-high-performance switch backplane, gives the same sort of resiliency provided by the external connections on the stacked switches.

The Broadcom switch fabric chipsets that are prevalent among the top switch vendors offer an important capability—they support both copper and fiber connections. Thus, the vendors building these switches are able to offer both connectivity types in the same product, increasing the flexibility of these switches in the data center or workgroup role.

The IBM eServer™ pictured in Figure 4.6 combines both blade servers and blade switches to provide a compact and powerful networking host for your data center. These chassis offer both resiliency and the necessary redundant components to keep networks up and running with availability.

The blade model also offers easier scalability than the stacking model. With hot-swappable blades, it is only necessary to add another blade to the chassis to expand network resources. There is no need to connect a separate stacking connection and there is no need to find the space to put another stackable switch or another power outlet. Simply insert the new blade in the chassis and configure it as required. There is also greater flexibility with the chassis model if a Layer 3 intelligent switch or a full-blown Layer 7 switch need be added. The functionality can be added while obtaining the redundancy and resiliency that the chassis provides.

Many corporate networking environments are already built around rack-mounted servers in the data center. Consolidating server and multi-layer switches into the same chassis will, according to an IDC estimate, show a reduction in the cost of ownership of 48 percent over 3 years. IBM has gone a step further, including a Layer 2 through 7 switch with all of the performance and configuration potential that the added switch intelligence offers. The IDC report claims that such an environment will realize a total reduction in the cost of ownership of 65 percent over 3 years.

Summary Intelligent switching is the core technology of next-generation networking. It is the enabling technology that will allow dynamic networks to support fully implemented and reliable wireless networking, security, VoIP, video-on-demand, Web services, and many of the other next-generation technologies that will provide a competitive business advantage for the network users.

It is important to consider that one of the major intangible benefits of a well-designed intelligent switching architecture is an improved user experience. Better response time for network applications, fewer network slowdowns for general use, and the availability of networking resources for users who need those resources when they need them all contribute to this improvement.

Decisions made about the design of your networking infrastructure will have long-term effects that can disrupt or enhance the future growth of your business. Dollars spent on IT infrastructure investments today should be as future-proof as possible, and intelligent switches are the place to start.

Intelligent switching is one of the few technologies applicable to future networking improvements that brings immediate tangible benefits to an existing network infrastructure and is the key starting point for networking professionals looking to improve their corporate network environments. It is the logical place to start to build a high-reliability, high-availability networking infrastructure that will allow your business to grow as necessary and offer the functionality to allow IT administrators to add the latest state-of-the-art technologies to enhance their line-of-business applications.

Given the cost of defending your networks from external and internal threats—such as DoS attacks, widespread virus propagation, and unauthorized internal user access—the incremental costs of moving to intelligent switching could be justified on that one feature set alone.

High-performance intelligent switches such as those from Broadcom are capable of maintaining wire-speed performance regardless of the additional tasks that the switch is accomplishing. Thus, adding critical features such as content filtering and spam blocking can be done without impacting the end-user experience.

Stacked switches enable administrators to add capacity as necessary without additional management headaches because the unified switch-management application is as simple to use with one switch as it is with a full stack. Thus, adding capacity doesn’t mean that you need to add staff to support the growing network infrastructure; you can expand your network and simplify your overall management structure without spending money on staffing. Broadcom is dedicated to building high-performance, high-availability, scalable, robust networking infrastructures that are cost effective and provide the IT professional with the best tools for the job.

Broadcom®, the pulse logo, Connecting everything®, the Connecting everything logo, and StrataXGS® are trademarks of Broadcom Corporation and/or its affiliates in the United States and certain other countries. Windows® and NetMeeting® are trademarks of Microsoft Corporation. BayStack® is a trademark of Nortel Networks. eServer™ is a trademark of International Business Machines Corporation. All other trademarks or trade names are the property of their respective owners.

Chapter 5: Server Migration and Optimization: Maximizing ROI for Existing Assets and Future Growth

It is difficult to overstate the importance of servers in your network. If the networking infrastructure is the circulatory system, the servers are the organs: there are critical ones, such as the brain and heart, and even a few in every network that are removable, just like the appendix. In any case, the network would have little reason to exist without the servers that populate it, offering up all sorts of services to the network users. Servers are also the point at which Gigabit Ethernet (GbE) and state-of-the-art storage technologies meet. The convergence between server, networking and storage technologies has the potential to change the way that servers and storage are treated in the corporate networking environment.

In this chapter, we are talking about complete, full-featured server technologies, not the small dedicated hardware devices that provide network services, such as print servers.

Server Technologies Servers have evolved over the past two decades from those that offer the simple file and print services available in the early generations of PC networking to the complex multiprocessor boxes that are so common today. But it’s not just the hardware that has changed; the evolution of servers has spawned specific server technologies designed to fill specific environmental niches. Let’s take a look at the server landscape of today.

File and Print Servers Much like the early days of network file serving, today’s file and print servers fill almost exactly the same role. However, the multipurpose file and print server these days tends to be relegated to either very small organizations or to small departments in larger organizations. In either case, the basic file and print server is being supplanted by network devices that fill those same roles but don’t require a full-blown dedicated server. Print services can easily be handled by dedicated print server devices, which work as Plug-and-Play (PnP) network devices to which any printer may be attached, and rarely cost more than $200. Storage can be handled by dedicated network attached storage, which supports access controls that the client PCs understand and attaches to a network in a PnP fashion. Dedicated computers acting as file and print servers are relics of the early days of computer networking, and each organization needs to evaluate whether it still needs them at all.

Database Servers If there was ever a business need that drove server technologies forward it was the need for a database server. Dedicated boxes that run everything from small local database applications to large corporate enterprise applications that were once the domain of big iron mainframes, database servers are usually the point at which the cutting edge of server hardware technology meets the realities of the business process. In this space, large multiprocessor boxes with gigabytes of memory and direct channel access to fast storage are the ‘bread and butter’ of the corporate computing world.

Application Servers Plenty of business applications are best served by dedicated hardware. These application servers have a broad range of requirements that depend on their specific role and can range from enterprise resource planning (ERP) to customer relationship management (CRM) to custom-developed in-house applications. These critical, line of business (LOB) applications are sufficiently important to have their own hardware dedicated to them, and the expense of the dedicated hardware is no longer a barrier to the adoption of these application software technologies. Some application servers, such as fax servers, combine both server and client software plus a dedicated hardware component installed in the server itself.

Email Servers Fitting in the space right between database servers and application servers, email servers share the attributes of both. Running a dedicated email application such as Microsoft® Exchange Server in a large corporate environment requires the processing power usually associated with database servers, yet email servers also fit the application server model, as email is probably the most common business application to get its own dedicated servers.

Storage Servers One of the newest dedicated server technologies, the storage server, such as Microsoft’s Windows® Storage Server 2003, provides the manageability of the network operating system (OS) as well as the ability to handle the concatenation of multiple external storage devices by using technologies such as iSCSI. This advance brings the type of storage environment formerly found only in dedicated fiber channel SAN environments to any IP-based network. This development gives businesses the ability to add the advantages of dedicated storage networking without the expense and aggravation of needing to add a second, parallel, dedicated storage network. GbE and its support for technologies such as TCP/IP Offload Engine (TOE) and Remote Direct Memory Access (RDMA—both TOE and RDMA were described in detail in earlier chapters), will make IP-based storage networking a common application found in corporate networks.

Web Servers Web servers, while once thought of as just server applications, have evolved enough to even have dedicated OSs. Although many Web farms run on versions of UNIX® or Linux® OSs, even Microsoft sees the needs of the Web server OS as being different from the company’s general-purpose OSs, shipping a product called Windows Server 2003, Web Edition. A dedicated Web server will usually be running a stripped-down version of the selected OS, with hardware focused on supplying data or serving Web requests as quickly as possible. Usually, if the task runs beyond serving Web pages, the Web servers, as Figure 5.1 shows, will be sitting in front of a bank of dedicated servers that offer the actual back-end processing necessary for the business purposes of the front-end Web farm.

Figure 5.1: In a Web farm scenario, the traffic from the Internet is routed to one or more Web servers, which themselves connect to multiple back-end servers that offer the appropriate response to the Web request.

Blade Servers The other server types we’ve discussed so far are differentiated primarily by software, but blade servers represent a specific hardware technology. These servers pack multiple individual servers on blades within a single chassis. They offer advantages to any server room that requires many servers, a certain amount of interchangeability with the hardware, and simplified management and architectural considerations. Server virtualization, partitioning, and other cutting-edge technologies are well served by the blade server model. However, the hardware technologies that can be applied to blade servers have specific criteria that might not need to be addressed in more traditional server hardware. As a result, the selection of blade servers requires an even greater amount of attention than that paid to the purchasing decisions used for normal server boxes.

The average network will include multiple types of servers. Even a small business environment is likely to have file and print servers and email servers. Large corporate environments will include many or all of the types of servers described; multiples of each server type; and infrastructure, such as clustering and grid computing, that make use of special types of server technologies.

Servers are where most of the newest computing technology undergoes its baptism of fire. Some dedicated client PC applications really tax the system hardware (for example, video subsystems on desktop computers), but the toughest test of any technology will usually be in the server environment.

A look at the processes on a desktop computer will show you that, even on a heavily used system, the CPU is sitting idle most of the time. In a server, that just isn’t the case; there is always some task that needs attention, from serving direct requests to system housekeeping demands. In a perfect world, all software is well written and works cooperatively to maximize the performance of the computer on which it is running; in the real world, that just isn’t the case.

Software designers, even for server applications, often write their programs as if their software is the only thing running on the computer. As this is rarely the case, you end up with a situation in which the hardware needs to make up for the software’s deficiencies. Thus, the server hardware needs to be up to the task; it needs to be able to run efficiently when pushed to the limit of the hardware’s capabilities.

Defining the “Cutting Edge” It is very important for IT managers to be aware of the technologies that drive the server marketplace. Determining which products to use and getting the greatest return on their server investment while applying as much “future-proofing” as possible to the server environment can only be achieved by having a clear understanding of the technologies that make up a server and the impact of those technologies on overall server performance.

Although understanding the capabilities of the individual components that make up a server system is important in the server-selection process, don’t underestimate the expertise required to build servers and combine a selection of components from a wide variety of vendors to offer customers the best performance value at any given price point.

By “overall” performance, we are referring not just to the speed of the hardware components but also to the entire list of critical evaluation points required by IT professionals: performance, reliability, availability, serviceability, scalability, and security. Let’s define each of these terms for the purposes of this publication:

• Performance—Performance in this instance describes the traditional measure of hardware performance: how fast are the individual hardware components that make up the complete server? These basic components include the CPU, motherboard chipset, embedded I/O devices, network interface cards (NICs), and hard drives.

• Reliability—Reliability describes not only the reliance that can be placed on the server but also the expectation that the hardware is suitable to task; that it can be counted on to run the necessary business applications.

• Availability—Availability refers to how much downtime can be expected with any given configuration. The target of many high-end systems is often referred to as five 9s of availability, or 99.999% uptime. Every extra 9 past the first, however, increases the cost of the server; the short calculation after this list shows how much downtime each additional 9 actually buys. IT managers need to understand the technologies well enough to get the greatest number of 9s without spending their dollars in the wrong places.

• Serviceability—Serviceability refers not only to the measure of how difficult it is to service and replace components of the server system, such as drives, cards, and memory, but also to the remote management and alerting technologies offered in the server platform. How much information is available about the health of the system without opening the case? Are there dedicated software applications that keep an eye on the component hardware?

• Scalability—Scalability is the capability of the server to grow to meet the expanding needs of the business environment as well as the ability to be “right-sized” or configured appropriately for the defined server task in the first place. The inability to scale server capabilities (memory, storage, and networking) can relegate an expensive piece of hardware to the junk pile if the application software needs of the organization expand.

• Security—These days, it might be best to put security as the first priority for any technology decision. With servers, we are talking not just about the security options offered by the OS but also the security offered by the hardware, which ranges from the password-protected BIOS to chip-level antivirus scanning and protection.
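To make the “number of 9s” concrete, the short calculation below shows how much downtime each availability level allows per year. It is generic arithmetic, not a vendor figure.

```python
# Allowed downtime per year for each "number of 9s" of availability.

MINUTES_PER_YEAR = 365 * 24 * 60

for nines in range(1, 6):
    availability = 1 - 10 ** -nines              # 0.9, 0.99, ..., 0.99999
    downtime_min = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.5%} available -> {downtime_min:9.1f} minutes of downtime/year")

# Five 9s (99.999%) works out to roughly 5.3 minutes of downtime per year.
```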

It is also important to note that these six measures of suitability to task for server computing are not separate and discrete components. Every component of the server system, from the basic core logic to the application software running on the selected OS, should be subject to an evaluation based on these six criteria.

The explicit combination of software and hardware features chosen by the IT manager will have a direct impact on which of these criteria is most important. The IT manager should weigh these criteria based on the business needs the server will be addressing, and understand the tradeoffs that need to be made in terms of cost versus capability. Based on this evaluation, the IT manager can then spend his or her money where it will do the most good.

Understanding Performance-Oriented Technologies There are many technologies that combine to offer the performance and reliability that users demand in their server products. From the core I/O logic of the computer to the network connections, each component within the server system plays its own role in providing the user experience demanded by IT departments.

Remember that desktop computers and servers are designed with different duty cycles in mind. Although desktop computers have often been used in the server role with success, a desktop computer is designed with the expectation that it will not be stressed in a 24 × 7 role. Servers are designed to operate with the stresses and duty cycles expected in a high-usage role and still provide high availability.

Core I/O Components Users tend to think about server performance in terms of CPU speeds, hard drive access times, and availability of memory, but it is the system I/O core logic that ties all of these components together and makes high-performance servers possible. The core logic chipset determines the capabilities of the server, controlling the amount and speed of available memory and CPU, determining the bandwidth capabilities of the system bus and CPU support, and defining the aggregate bandwidth available to the system from both external and internal buses. This core logic of a server is generally defined by two sets of components, referred to as the North Bridge and the South Bridge.

North Bridge The North Bridge is the chipset that controls the connection of the system’s CPUs and the access to the system’s memory. Products such as Broadcom’s ServerWorks® Grand Champion™ Enterprise Quad Processor SystemI/O™ Platform take full advantage of the Intel® architecture and manage the flow of data with support for as many as four Intel Xeon® processors, 64GB of main memory, and 6.4Gbps memory bandwidth. The chipset also offers three I/O channels, each of which supports as much as 1.6Gbps of bandwidth. Broadcom offers “extensive RAS features,” with the term RAS standing for “reliability, availability, serviceability.” These RAS features include 128-bit ECC support, memory mirroring, memory hot swap, and spare memory capabilities. This ServerWorks core I/O chipset even supports the Chipkill™ technology, which allows a server to recover from the failure of an entire bank of memory (see Figure 5.2).

South Bridge The South Bridge links the North Bridge to the I/O components in the system. To do so, the South Bridge needs to connect to the North Bridge as well, and ideally can aggregate bandwidth over the available North Bridge connections (in the case of the example ServerWorks chipset, the three 1.6Gbps channels described in the North Bridge section). There are multiple types of I/O connections in every server, which can include the AGP bus, ATA/IDE, Serial ATA (SATA), SCSI, USB, IEEE 1394, InfiniBand™, and PCI Express™ (see Figure 5.2).

Figure 5.2: The relationship of the North and South Bridges to CPU, memory, and I/O subsystems.

Storage The internal storage subsystems of the computer will always play an important role in overall server performance. Despite the movement of network storage to external devices and storage networking, a server loads its OS from local storage and is required to use that same storage for the operation of the OS: swapping data from memory to disk, memory virtualization, page files, and so on. There are a variety of common storage technologies found in server hardware: IDE/ATA, SATA, SCSI, and serial attached SCSI.

IDE/ATA IDE, or integrated drive electronics, makes use of the advanced technology attachment (ATA) implementation that places the drive controller electronics on the drive itself. ATA has gone through no fewer than seven revisions over its life cycle, with ATA/133 being the current high-performance version of the specification. Despite the high speed of the data transfers, ATA drives are suitable only for low-end servers because the technology limits cable lengths, the number of drives, and overall throughput. This technology is now often referred to as Parallel ATA to avoid confusion with the newer and faster SATA technology.

SATA SATA offers faster data transfer rates (150MBps in its initial implementation, 300MBps for the second generation), lower voltage requirements, and thinner cables requiring fewer connections (just two data channels, allowing a thin cable as long as 1 meter rather than the flat 40/80-wire, 18-inch ribbon cables used by Parallel ATA). Rather than the master/slave configuration of two drives on each Parallel ATA cable, SATA uses point-to-point connections, with each drive on its own port and cable. Although Parallel ATA evolved for desktop computers, SATA was designed to be used in entry-level servers, offering a low-cost, high-performance storage solution for entry-level and non–mission-critical servers.

SCSI SCSI, or small computer systems interface, is a parallel interface standard that has gone through almost a dozen iterations and currently supports speeds as fast as 320MBps in its fastest incarnation. It is the standard drive interface for high-performance computing, in both workstations and servers. SCSI does not share the per-channel device limits of ATA and SATA, supporting as many as 15 devices per channel, and the maximum cable length is measured in meters, not inches. SCSI devices are able to process instructions in parallel, meaning that greater throughput is possible than with ATA devices, which must process a single instruction at a time. The peak performance of SCSI devices is critical for applications such as heavily used databases, streaming audio and video, and any other bandwidth-intensive application that draws directly from disk storage. The instruction-processing capability of SCSI means that OSs that offer full SCSI support will run faster on SCSI drives than on ATA drives rated for equivalent performance. The only downside is that SCSI drives command a premium price over ATA drives and, at this time, ATA drives offer higher capacities (which results in a lower cost per megabyte).

Serial Attached SCSI Serial attached SCSI is a full, dual-ported implementation that supports a maximum of 4032 devices at speeds as fast as 3Gbps. Rather than the shared-bus technology used by standard SCSI, serial attached SCSI uses dedicated point-to-point connections for each device. Serial attached SCSI was designed to support three distinct protocols: the Serial Management Protocol, used to manage the point-to-point connections; the Serial SCSI Protocol, used to leverage existing SCSI devices; and, most important, the Serial ATA Tunneling Protocol, which allows a seamless interface between SATA and serial attached SCSI technologies. Both technologies use the same connector configuration and, with the correct electronic support (in the core I/O logic), offer an upgrade path to users: SATA drives can be attached to serial attached SCSI interfaces, providing a lower-cost storage alternative. The reverse is not true, because the connector on a serial attached SCSI drive has an extra hump that prevents its attachment to a SATA connector.

RAID One common server technology that applies to all forms of storage is the Redundant Array of Inexpensive Disks. RAID storage is a de facto standard in server implementations. RAID currently has no less than nine standard levels of implementation:

• RAID 0—RAID 0 is the striping of data across multiple drives, which provides the highest possible performance but offers no fault tolerance. If one drive in the stripe set fails, all data is lost.

• RAID 1—RAID 1 is the mirroring of data across pairs of disks. Data is written to both drives simultaneously. The primary downside is that RAID 1 requires a 100 percent duplication of disk drives.

• RAID 2—RAID 2 is very rare and involves an ECC striping of data to the drives at the bit level to improve fault tolerance related to data corruption.

• RAID 3—RAID 3 uses byte-level striping with a dedicated parity disk. Unfortunately, this configuration is unable to service simultaneous requests, and as such, is rarely used.

• RAID 4—RAID 4 is basically RAID 0 plus a dedicated parity disk. There are performance advantages to the multiple-disk stripe set, but they can be offset by write bottlenecks on the single parity drive.

• RAID 5—RAID 5 stripes data at the block level and distributes the parity information across all the drives in the stripe set, which results in a combination of good performance with excellent data protection (see the parity sketch following this list). RAID 5 is the most popular fault-tolerant RAID implementation.

• RAID 6—RAID 6 extends RAID 5 by striping a second, independent set of parity information across all the disks in the stripe set, allowing the array to survive two simultaneous drive failures.

• RAID 0+1—RAID 0+1 creates a RAID 0 stripe set, then uses a RAID 1 mirror of the initial stripe set to provide fault tolerance.

• RAID 10—RAID 10 creates a RAID 1 mirror set, then creates a RAID 0 stripe set over the mirrors.
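The fault tolerance of the parity-based RAID levels comes down to XOR arithmetic: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt from the survivors. A minimal, self-contained sketch using example data only:

```python
# Minimal illustration of RAID 5-style XOR parity: with parity computed over
# the data blocks in a stripe, any single failed block can be reconstructed.

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three data blocks in one stripe (contents are arbitrary example data).
data = [b"AAAAAAAA", b"BBBBBBBB", b"CCCCCCCC"]
parity = xor_blocks(data)

# Simulate losing the second drive's block and rebuilding it from the rest.
surviving = [data[0], data[2], parity]
rebuilt = xor_blocks(surviving)

assert rebuilt == data[1]
print("Rebuilt block:", rebuilt)
```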

RAID sets can be created in both hardware and software. For example, Windows Server (all versions) can perform RAID 0 and RAID 1 functions using the facilities built into the OS. However, doing so adds to the OS overhead, as the OS is now responsible for managing the disk hardware in a way that is much more CPU intensive than just accessing the file system. For this reason, hardware RAID controllers are the standard for servers (and workstations). In this case, the RAID configuration is handled by the firmware and processing of a dedicated RAID controller, with the OS seeing just a standard, high-performance hard drive. In this way, no additional overhead is imposed on the server OS.

GbE GbE is the current standard version of the Ethernet technology that has existed since the 1970s. 1000Base-T is the standard for GbE over copper wiring (Category 5 unshielded twisted pair). Chapter 2 provided you with an overview of the current state of the industry for GbE. For the purposes of this chapter, it is important to consider GbE an enabling technology. The high performance of GbE (and eventually 10GbE) makes related technologies—such as TOE, RDMA, and iSCSI—practical business choices for the IT professional.

TOE A TOE allows the NIC to handle the processing of the network transport protocol instead of relying on the OS and the server CPU to perform this work. Transactions with the host processor are handled at the session layer, which leverages an application’s use of large files to reduce the number of interactions the host CPU needs to have with the data. TOEs can offer either full or partial offload. Full offload completely removes responsibility for the TCP/IP protocol stack from the host CPU. Partial offload handles data transmission and reception and relies on the host IP stack to handle connection setup, termination, and error handling.

There are many server applications that are communication-centric, rather than compute-centric, such as email and Web servers. The tight integration of TOE with GbE and the core chipset technology allows for significant performance improvements. Especially with GbE, these communication-focused applications can consume far more CPU cycles than you would generally think, bottlenecking the server at the CPU; the use of TOE, especially if it is tightly integrated with the core logic, will prevent the communication aspects of the computer’s application from causing the CPU to be the bottleneck. This doesn’t mean that the CPU will never run out of resources, but with TOE, the cause won’t be the creation of the traffic that comes and goes over the NIC.

As Chapter 2 discussed, standards-based TOE implementations, such as the Microsoft TCP Chimney initiative, mean that TOE can replace non-standard, proprietary network co-processor technologies. The inclusion of TOE support in core logic chipsets can only make this standard implementation simpler.

RDMA RDMA is a technique that allows the data in the memory of one computer to be transmitted to the memory of another computer without involving the host CPU or host OS on either computer. RDMA over GbE networks can deliver performance with latency low enough to suit cluster applications that formerly required dedicated interconnects.

RDMA is a more important technology than you might think; the push for grid/utility computing and for high-performance computing that uses resources spread over multiple computers means that the features offered by RDMA are critical to the successful implementation of these technologies. Using RDMA allows applications to exchange data while bypassing the CPU and the OS, which, in turn, results in a drastic reduction in latency. Zero-copy receive and transmit operations write directly into the application’s buffers. Doing so relieves the strain on the server’s memory subsystem because the extra data copies that are maintained in more traditional networking stacks are no longer there.

iSCSI iSCSI is a standard for using SCSI commands over IP-based networks. By supporting GbE at the physical layer, iSCSI can be used to build storage networks over standard Ethernet networking. The OS interaction with iSCSI is basically the same as interacting with local SCSI devices with the exception that the SCSI commands are transmitted over the GbE connection to the target device (see Figure 5.3). Although not as fast as a dedicated fiber channel storage network, iSCSI simplifies implementation and adoption by lowering the complexity and cost of storage networking.

The inclusion of iSCSI support means that users will be able to create storage networks as necessary, without the need for dedicated networking hardware. iSCSI will allow users to place storage resources wherever they are needed in a GbE network. The iSCSI support will result in lower costs for storage networking and greater utilization of existing Ethernet networking.

Figure 5.3: iSCSI storage networking runs on the same network infrastructure as the standard network, functioning as remote SCSI drives available to any OS that can provide an iSCSI initiator. Applications running on servers with iSCSI initiators send SCSI commands to the storage servers.

Technology Integration The future (and present) of servers is the integration of these technologies that improve performance along with reliability, availability, and serviceability. For example, the integration of GbE into the server means that the applications that can take advantage of it will naturally evolve and become more common. A look at the Windows Server 2003 (WS2K3) market in mid-to-late 2004 will show a huge number of iSCSI-based products being released. The adoption of GbE is rapidly making this technology the standard for Ethernet networking.

As this technology becomes more widely accepted, the next generation of server purchases will reflect that, meaning that technologies such as GbE and the infrastructure to support high-performance computing will become the standard for networks in the immediate future. Chip vendors will be providing greater integration in their board-level products and vendors such as Broadcom, with its large variety of chip and board-level products, will make high-performance servers cost effective for even the small-to-medium–sized business market, as Table 5.1 shows.

Technology—Benefit

• Integrated core I/O technologies—More efficient server operations
• GbE—Faster networking with backward compatibility
• TOE—Reduced CPU utilization; improved application performance
• iSCSI—Storage networking without a dedicated network
• RDMA—Clustering support

Table 5.1: Technology benefits to the server and networking environment.

Which components are found on a motherboard or server blade? Common components found on the system boards of both servers and desktops include:

• CPU and CPU sockets (from one to four on a single motherboard; 8-way systems are a special case)
• I/O ports—external and internal
• Storage connectors—ATA/SATA/SCSI for both hard drives and CD/DVD drives
• Memory sockets
• Add-in card slots (PCI, PCI-X, PCI-Express, AGP) or backplane
• North Bridge
• South Bridge
• Battery, BIOS, power supply connector

Technology Convergence As the marketplace moves forward, many of the technologies that make up a good server won’t just be integrated into the server product; there will be a convergence on Ethernet and integration with the system core logic that will reduce costs while increasing performance and tightening the integration of the various subsystems. An example of this technology is the Broadcom CIOB-E Grand Champion Dual Gigabit Ethernet (Copper)/PCI-X™ SystemI/O Bridge.

The Champion™ I/O Bridge-E (CIOB-E) is an example of the way that Broadcom is integrating GbE into the core I/O subsystems of a server. Containing dual GbE MAC controllers, dual physical-layer controllers, and a 64-bit PCI-X interface, the CIOB-E ties networking and computing technologies together and eliminates the need for a discrete GbE interface, reducing the necessary board real estate and facilitating the use of GbE in space-constrained server systems such as the current state-of-the-art blade server systems. The CIOB-E was the first integrated core logic GbE technology to hit the market in late 2002.

The integration of GbE into the core logic also reduces the end-user cost of the servers and aids in making GbE the networking standard in the corporate environment. An additional benefit is that the CIOB-E still uses the same software drivers as standalone Broadcom GbE controllers. This software commonality makes it possible to implement Broadcom GbE technology across the enterprise, using products such as the add-in boards equipped with the Broadcom Converged Network Interface Controller (C-NIC) in computers that lack embedded GbE.

GbE is fully backward compatible with the previous 100Base-T Ethernet standard. Thus, there is no technical issue in adding servers with embedded GbE support to your existing Ethernet networks. No special accommodations are needed, and when you upgrade your switch infrastructure to GbE, your servers will be poised for immediate performance gains.

Converged Network Interface Cards In May 2004, Broadcom opened the door for Ethernet convergence products with the NetXtreme® II C-NIC, a product that combines four separate technologies (TOE, RDMA, iSCSI, and embedded in-band management pass-through) in a single chip that allows for remote control of the server over a single network connection. This combination of features means quite a bit in the server environment in terms of improved efficiencies in server utilization, usability, and manageability.

The dedicated controller is able to offload network operations from the host CPU and supports technologies that go beyond basic Ethernet networking. The TOE, RDMA, and iSCSI support means that complex management and storage networks can be configured without the addition of special hardware. Servers that make use of the C-NIC can be dropped into a network that uses iSCSI for remote storage and be ready to run. RDMA means that the server is cluster ready for OSs that support the technology.

This new class of C-NIC products ushers in an era: it is time to start thinking about converging your storage traffic, cluster traffic, and networking traffic over one IP network. Consider new server technologies that enable IP convergence, such as:

• OSs that can accommodate TCP/IP offload
• iSCSI for block-level storage networking
• Clustering x86 servers for complex computing applications

Benefits of the converged technology presented by the C-NIC include:

• Reduction in network latency
• Significantly lower CPU utilization—tests using the NTTTCP benchmark showed that the C-NIC, running Microsoft’s TCP Chimney software, improved CPU utilization by as much as five times over a standard GbE controller
• Performance gains across all mainstream server applications
• Virtual “always on” remote management, eliminating the need for dedicated network management

In the blade server environment, this single-chip solution does more than just save space on the PCB; it also minimizes the complexity of the backplane, because only a single set of signal paths is needed, which makes for simpler routing and a reduction in crosstalk. The single switch fabric for these networking technologies makes development simpler, and the common management capabilities are supported by all widely used management tools.

Scalable and Configurable I/O This chapter has already discussed the system I/O core logic. If you’ve asked yourself why this is important, it’s as simple as this: overall, the I/O subsystem has a greater impact on total system performance than the CPU. Low-end I/O chipsets don’t allow the CPU to perform at its fullest potential. The core logic chipsets interface directly with the CPU front-side bus, the memory subsystem, and external components via various interconnect technologies, some of which are described earlier in this chapter. The ultimate purpose of these chipsets is to facilitate the transfer of data without any bottlenecks so that the processors, memory, and the various peripheral components can do their jobs without interruption.

Core logic must be designed to take advantage of the different types of interconnects available as well as to provide support for the CPUs commonly in use in the business environment. The right core logic components are critical to cost-effective server design and utilization.

Properly designed and implemented core logic results in the following benefits:

• More impact on performance and functionality than other system components
• More impact on reliability, availability, and scalability than other parts of the system
• More flexibility to integrate additional functionality in the core logic

Interconnects Current implementations of core logic have to make choices about the interconnect technologies they support. There are currently three interconnect technologies commonly found in servers: HyperTransport®, Peripheral Component Interconnect Extended (PCI-X), and PCI-Express.

HyperTransport Originally proposed by AMD and turned over to the HyperTransport Consortium, HyperTransport is a direct, high-speed, high-performance, point-to-point link for integrated circuits. It supports a dual bus with unidirectional point-to-point links operating at a data throughput speed of as fast as 22.4GBps. This is an aggregate bandwidth and there currently can be a maximum of three HyperTransport links per system. The link width can be 2, 4, 8, 16, or 32 bits and the bandwidth, in each direction, can range from 100MBps to 11.2GBps. It provides multiprocessor support and supports both the coherent and non-coherent memory models.
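The bandwidth figures quoted above follow directly from the link width and clock: HyperTransport transfers data on both clock edges, so per-direction bandwidth is (width in bytes) × clock × 2. The check below assumes the 1.4GHz and 200MHz clock extremes of the specification; it is a worked arithmetic sketch, not vendor data.

```python
# Worked check of the HyperTransport bandwidth figures: per-direction bandwidth
# = (link width in bytes) x (clock) x 2 transfers per clock (double data rate).
# The 1.4 GHz and 200 MHz clocks are assumptions matching the quoted extremes.

def ht_bandwidth_bytes_per_s(width_bits: int, clock_hz: float) -> float:
    return (width_bits / 8) * clock_hz * 2

# Widest/fastest case: 32-bit link at 1.4 GHz
print(ht_bandwidth_bytes_per_s(32, 1.4e9) / 1e9, "GBps per direction")    # 11.2
print(2 * ht_bandwidth_bytes_per_s(32, 1.4e9) / 1e9, "GBps aggregate")    # 22.4

# Narrowest/slowest case: 2-bit link at 200 MHz
print(ht_bandwidth_bytes_per_s(2, 200e6) / 1e6, "MBps per direction")     # 100.0
```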

The HyperTransport Release 2.0 specification includes the ability to map PCI, PCI-X, and, new to this version, PCI-Express, providing broad technology support. Release 2.0 is backward compatible with HyperTransport Specification 1.x.

PCI-X Developed jointly by IBM, Hewlett-Packard, and Compaq, PCI-X doubled the data rate of the PCI bus. The current architecture supports one 64-bit PCI-X slot running at 133MHz, with the rest running at 66MHz, allowing for a total aggregate bandwidth of 1.06GBps, or exactly double the 532MBps of the standard PCI bus. The PCI-X bus is backward compatible with the original PCI bus, but if PCI cards are used, the entire bus slows down to PCI speeds, negating the advantages of PCI-X. PCI-X does offer fault-tolerance features not found in PCI, allowing the bus to reinitialize a card or to shut a card down before it fails completely. At this point, PCI-X can definitely be considered old technology, having been first introduced in 1998.

PCI-Express PCI-Express is the latest in a series of I/O interconnect standards. The standard more than doubles the data transfer rate of the original PCI bus. Unlike the single shared parallel data bus of the original PCI specification, which was designed for desktop computers, the PCI-Express standard uses serial, point-to-point data lanes, each consisting of a transmit pair and a receive pair. In addition, PCI-Express was designed to support all types of computing environments, from embedded devices to high-end servers.

Early motherboard designs with PCI used the PCI connection to link the North Bridge and the South Bridge; current implementations use dedicated high-speed interconnects between the two bridges, which results in much faster communication between the core logic and the peripheral chipsets. For example, first-generation PCI-Express implementations support speeds as fast as 250MBps per lane in each direction (standard PCI offers 133MBps for the entire bus). Because PCI-Express is a point-to-point connection, there is no bus sharing (as found in PCI); each device gets a dedicated connection. This setup significantly reduces the chance of contention that causes overall performance degradation in the computer.
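
To put the parallel buses and the serial lanes on the same footing, the sketch below computes peak theoretical bandwidth for each. The PCI-Express figure assumes the commonly published first-generation signaling rate of 2.5GT/s per lane with 8b/10b encoding; treat the numbers as rough comparisons rather than measured throughput.

    # Peak theoretical bandwidth: shared parallel buses vs. PCI-Express lanes.

    def parallel_bus_mbps(width_bits, clock_mhz):
        """Peak bandwidth of a shared parallel bus that all devices contend for."""
        return width_bits / 8 * clock_mhz               # MB per second

    def pcie_gen1_mbps(lanes):
        """First-generation PCI-Express: 2.5GT/s per lane, 8b/10b encoding
        (80 percent efficiency), per direction, on a dedicated link."""
        return lanes * 2.5e9 * 0.8 / 8 / 1e6            # MB per second per direction

    print(parallel_bus_mbps(32, 33.33))    # classic PCI:   ~133 MBps, shared
    print(parallel_bus_mbps(64, 133.33))   # PCI-X 133:     ~1066 MBps (1.06GBps), shared
    print(pcie_gen1_mbps(1))               # PCIe x1 lane:  250 MBps each direction
    print(pcie_gen1_mbps(16))              # PCIe x16 slot: 4000 MBps each direction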

PCI-Express also includes support for such high-end features as hot swapping/hot plugging, isochronous data transfer, link-level error handling, and quality of service policy management. Multiple virtual channels per physical link are also supported. Because PCI-Express provides software compatibility with PCI, existing OS drivers will still function. Hardware compatibility is provided by extending the PCI bus slot with a connector that sits behind the PCI slot on the motherboard; additional hardware is therefore needed for full support, but legacy devices with drivers for the current OS will function as if the PCI-Express interface were not there.

CPU Support In the x86 universe, there are a number of CPU types that need to be supported. These include the Intel IA-32 and Extended Memory 64-bit Technology (EM64T) architectures, and the AMD 64™ architecture as represented by the Athlon® and Opteron® processors.

IA-32 The term IA-32 is basically interchangeable with the current generation of x86 processors from both Intel and AMD that have been on the market since the release of Intel’s first 32-bit processor. It defines the 32-bit instruction set used by these families of processors.

The current server-class IA-32 CPU from Intel is the Xeon processor. The Xeon processor differs from the desktop Intel P4 processor primarily in the size of the L1 and L2 caches and in its support for multiprocessor systems. Entry-level single-processor servers may use P4 processors rather than Xeon server-class CPUs.

AMD Opteron The Opteron processor is AMD's eighth generation of x86 CPU and its first generation to support the AMD 64-bit instruction set, allowing the CPU to access more than 4GB of system memory. The processor can run both 32-bit and 64-bit applications and suffers no performance penalty when running 32-bit applications.

Unlike Intel processors, the Opteron contains an integrated DDR SDRAM memory controller, as Figure 5.4 shows, which negates the need for a North Bridge and significantly reduces the latency experienced when the CPU accesses memory. This on-chip controller can be disabled to allow the use of different memory technologies (with a traditional North Bridge), but then the advantages of the built-in memory controller are lost. Future memory technologies will require their own specific Opteron releases. In multiprocessor motherboard configurations (as many as eight processors), inter-processor communication occurs via HyperTransport links.

Figure 5.4: Compared with the architecture that Figure 5.2 shows, the AMD architecture uses the on-CPU memory controller to bypass the North Bridge and improve the speed of memory access and overall system performance.

HyperThreading is a multithreading technology from Intel that allows OSs that support multiprocessing to treat a single Xeon or P4 CPU as if it were a dual-processor computer. This functionality is found only in the more recent iterations of Intel Xeon and P4 processors and is available in both Intel's client and server CPU products. AMD is just beginning to release dual-core versions of its Opteron server CPUs, which also appear to the OS as two independent CPUs; currently, AMD offers this dual-core capability only in its Opteron line.

AMD Athlon 64 and Athlon 64-FX Also built with eighth-generation AMD processor technology, the Athlon processors share many features with the Opteron, such as the AMD 64 instruction set, but do not support multiprocessor computing. The 64-FX shares more of the Opteron's features, such as support for dual-channel DDR RAM, and offers higher clock speeds than the standard Athlon 64.

Unlike Intel, AMD does not use the actual clock speed of its processors in the processor names. The AMD model numbers represent performance relative to a pre-established reference system.

EM64T EM64T is Intel’s extended architecture 64-bit implementation of the AMD 64 architecture (it could be said that Intel CPUs that support this technology are “AMD compatible”). Although the initial release is not identical in function to the AMD 64, future versions are expected to be 100 percent compatible. Microsoft Windows XP® for 64-bit computing will run on either the Intel or AMD 64-bit extension technologies to the x86 architecture.

Complete details about the Microsoft Windows XP 64-bit Edition can be found at http://www.microsoft.com/windowsxp/64bit/default.mspx.

What about IA-64 and the Itanium® processor? IA-64 is Intel’s 64-bit CPU architecture that was introduced with the Itanium processor. IA-64 CPUs do not directly execute x86 code; instead they virtualize the x86 instruction set, resulting in a significant performance penalty when compared with execution of native IA-64 code. The chipset technologies discussed herein apply only to x86 architecture CPUs. The future development of the IA-64 processor family will determine the availability of non-Intel core logic components for system board design.

Checklist for Buying Next-Generation Servers for Your Networks

Use the following general checklist when looking to purchase next-generation server technologies for your organization’s networks:

• Determine the role of the server within the network. Different roles will require you to apply different weights to the server selection criteria (a simple weighting approach is sketched after this list).

• Determine the minimal requirements for the server in the needed role. Calculate processor, memory, and storage needs based on the role and/or the applications that the server will be running.

• Determine the networking requirements of the server. Will it need a single NIC or multiple NICs? At this point in time, select only servers with support for GbE. There is little cost difference, and it will work in existing 100Base-T networks.

• Single or multiprocessing CPUs—if SMP is required, is Intel HyperThreading or AMD dual-core Opteron technology sufficient or will multiple physical CPUs be required?

• Determine where the server bottlenecks are likely to occur. Will the server likely be I/O bound? Network bound? CPU bound? Size the server accordingly and select components that will minimize the chances of server bottlenecks.

• Will storage be local or remote? If local, what type of hard disk support is appropriate for the server? If remote, will iSCSI be required or be a future-growth path?

• Does the server role require specialized server technologies such as blade servers or utility computing? If so, determine what the requirements are for the device application.

• Don’t neglect the manageability aspects of the server choice. Does the server need to integrate with an existing network management tool? If so, make sure adequate support is available on your selected hardware.

• Are there specific availability requirements for the server? Consider the advantages of products that combine functions in terms of performance and reliability.

• Does your selected hardware have a sufficient degree of "future-proofing"? If you have already maxed out your selected server platform, you run the risk of having it become obsolete immediately upon installation. Make sure that your choices have overhead to support your planned (or unplanned) growth.
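
One informal way to apply role-specific weights to these criteria is a simple scoring matrix. The Python sketch below is purely illustrative; the roles, criteria, weights, and ratings are invented placeholders rather than recommended values.

    # Illustrative weighted-scoring sketch for comparing candidate servers.
    # Roles, criteria, weights, and ratings are hypothetical examples only.

    ROLE_WEIGHTS = {
        "file_server":     {"cpu": 1, "memory": 2, "storage": 5, "network": 4},
        "database_server": {"cpu": 4, "memory": 5, "storage": 4, "network": 3},
    }

    def weighted_score(role, ratings):
        """ratings: criterion -> evaluation score (say, 1-10) for one candidate."""
        weights = ROLE_WEIGHTS[role]
        return sum(weights[c] * ratings[c] for c in weights)

    server_a = {"cpu": 8, "memory": 7, "storage": 6, "network": 9}
    server_b = {"cpu": 6, "memory": 9, "storage": 8, "network": 7}
    print(weighted_score("database_server", server_a))   # 118
    print(weighted_score("database_server", server_b))   # 122

However the scores are weighted, the point of the exercise is the same as the checklist's: make the selection criteria, and their relative importance for the role, explicit before comparing vendors.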

Summary All of the technologies discussed in this chapter are being integrated into current and future-generation network server and high-performance networking products. It is no longer just a case of searching out the fastest CPU and expecting that to overcome any other deficiencies in the products; server core logic is a much more critical component of next-generation server design.

Selecting your next server platform is not a trivial task. The expense of setting up new servers and allowing for network growth that can accommodate business growth can seem to require a bit of magic at times. Understanding the technologies that are being deployed and advanced with the next generation of products is an important part of the evaluation process that your server purchases need to go through.

It's no longer enough to commit to a vendor because the vendor offers you the best price on its current technology; finding yourself in "rip and replace" mode is something that no IT professional desires, and a detailed understanding of the technology you are purchasing—especially if it is something that might have already been pushed into the mundane category, such as servers—can only help you make better choices for your next selection. The technology is constantly changing, and you don't want to be caught off guard.

Broadcom has taken the industry lead in designing and creating next-generation products and in engineering the cutting-edge technologies that leverage standards and provide optimal performance. Its products are CPU-agnostic, with support for AMD and Intel server CPUs along with support for the three major interconnect technologies currently available. Broadcom is also the leader in the convergence space, building more functionality into the critical components that are the building blocks of your next server purchase. These investments in convergence technologies bring the user increased reliability, improved efficiency, and a higher return on investment for products that make use of these technologies. Because of its broad product portfolio and exposure to a wide variety of markets, Broadcom is driving the convergence of voice, video, and data services over both wired and wireless networks. The company is integrating its broad range of networking and communications innovations into next-generation products that bring breakthrough technologies to a much broader audience. The company continues to create solutions that make next-generation networking affordable for even midsized businesses, making the latest in technology available to a larger marketplace.

Broadcom®, the pulse logo, Connecting everything®, the Connecting everything logo, ServerWorks®, NetXtreme®, Champion™, Grand Champion™, and SystemI/O™ are trademarks of Broadcom Corporation and/or its affiliates in the United States and certain other countries. Intel®, Intel Xeon®, and Itanium® are trademarks of Intel Corporation. UNIX® is a trademark of Unix System Laboratories, Inc. Linux® is a trademark of Linus Torvalds. Chipkill™ is a trademark of International Business Machines, Inc. InfiniBand™ is a trademark of InfiniBand Trade Association Corporation. PCI Express™ and PCI-X™ are trademarks of PCI-SIG. HyperTransport®, AMD 64™, Athlon®, and Opteron® are trademarks of Advanced Micro Devices, Inc. Microsoft®, Windows®, and Windows XP® are trademarks of Microsoft Corporation. Any other trademarks or trade names mentioned are the property of their respective owners.

Chapter 6: End-to-End Security: How to Secure Today’s Enterprise Network

Security is a concept and responsibility that in some way touches on almost every decision made about corporate network environments. Whether it is a primary concern or simply a check box item on a list, the security consequences of every action that affects your networks must, at some point, be considered and, in most cases, acted on. A comprehensive end-to-end security model is not just a good idea; it is an absolute necessity.

As network security has become more critical, it has also, unfortunately for the network administrator, become increasingly more complex. It is no longer enough to simply lock down the perimeter of your network (though that is still a critical task). Let’s consider some of the additional issues that affect the security of the network:

• User/client issues—Consider how all of the client computers access the network. It's no longer a simple matter of wired network clients; there will be wireless access, remote access (via VPN or the Internet), and the occasional visitor to the business who needs access to network resources. Additionally, consider that the networks on the other side of a remote access connection may not themselves be secure.

• VoIP and wireless capabilities and their associated applications—Although the issues regarding securing wireless networks are well documented (and will be addressed later in this chapter), adding VoIP capabilities to your networking infrastructure brings its own set of security complications. Additionally, applications that make use of wireless networking and VoIP may require specific configurations for your security model (control over specific IP ports, and so on) in order to operate correctly.

• Management issues—As additional clients and devices are added to the network, security management becomes more complex. Complexity increases the cost of providing security and demonstrates the need for centralized management (of devices and security) to reduce expenses.

• Cryptographic issues—Strong cryptographic protection can be an integral part of securing your environment. If this route is chosen, however, it is important to select a managed solution that has as little impact on overall network performance as possible, while still offering the level of cryptographic protection deemed advisable for your environment. The cost of such protection must also be factored into the decision.

• Return on investment (ROI) issues—The general expenses of securing the network and its devices must be weighed against the return they provide. You must examine the costs associated with securing client devices and ensuring that they can be authenticated to the network and have some level of tamper resistance.

• Compliance issues—It is basically impossible these days to build any sort of network and not have some concern about government regulations. Such compliance could range from a fully Federal Information Processing Standards (FIPS)-compliant security model to minimal protection that incorporates government-approved encryption standards such as AES or 3DES. There might also be other regulatory standards that affect the security of information on the network, such as the Health Insurance Portability and Accountability Act (HIPAA) of 1996, the Sarbanes-Oxley Act of 2002, SEC Rule 17a-4, or any industry-specific requirements that will need to be addressed.

Securing from the Outside In As Figure 6.1 shows, the traditional security model starts with securing the perimeter of the network.

Figure 6.1: A simplified view of the edge security model.

Edge security devices include hardware appliances that are specifically designed to perform tasks such as firewalling, content filtering, and spam and virus suppression—any operation that should be performed on traffic before it enters the network perimeter. Software is also available to provide these same services, ranging from comprehensive products—such as Microsoft's ISA Server, which combines different tasks on the same hardware—to single-solution products, such as Check Point's firewall software. Although combination devices (those that combine different types of security products) are often used, there are still many solutions best served by dedicated devices, such as the VPN server that Figure 6.2 shows. By not using combination devices to provide perimeter security, the network doesn't get locked into a particular technology offering.

On the macro level, for example, it might be tempting to roll all of your edge security into a single device that combines firewall, content filtering, email scanning, IP management, and so on. The reality, however, is that these technologies advance at their own pace, and there is no reason to be locked into a trailing-edge solution in which changing a single aspect of your perimeter security design means ripping it out completely. Dedicated, application-specific appliances (at the server level) tend to offer the best way to future-proof your network security, allowing you to upgrade each technology as necessary without impacting the others.

Figure 6.2: In the traditional remote access scenario, remote users connect to a VPN server that authenticates by using a RADIUS server.

What we find ourselves with today is a mix of software and hardware components rather than a single, coherent network security model. Although this situation isn't likely to change any time soon, there are noticeable advantages to hardware-based security over a pure software approach.

Software or Hardware Security? In the software-only security model, all of the authentication, confirmation, and processing of security information is handled by software running on top of the client device operating system (OS). If the software is doing a relatively simple task, such as client authentication, there isn't much overhead on the computer; if the task is somewhat more demanding—even something as routine as scanning email for spam or running an antivirus application—the end user tends to notice a decrease in performance while it executes. And if the task is a complex one at the packet level, involving the IP stack and the handling of individual packets, it can keep the computer from doing much other processing while it runs. Server-based security software can be written to thread as much of its activity as possible and minimize the impact on foreground, or primary server, applications, but by the time the data is being processed sufficiently quickly, you probably find yourself dedicating the server to the security process.

The current security trend is to add security within the network as well as on the perimeter. This trend usually entails distributed firewalls (if not a firewall on each client), identity management for every network device, strong authentication for every user, and comprehensive security management tools to coordinate all of these independent security solutions. Key and certificate management solutions offer the network security administrator the ability to maintain an overview of all of the certificate key pairs issued in their networking environment.

At this point in time, for many people, security means nothing more than having a good password. But simple password-based authentication systems may soon be found only in the most basic of networks. Strong authentication methodologies such as smart cards, hardware tokens, USB tokens, and biometrics are all major areas of growth in the security marketplace. Administrators want a greater level of confidence that users and devices attaching to their networks and network resources are who they actually claim to be. For this reason, server-class OSs provide direct support for additional authentication methods such as smart cards or biometrics.

A hardware-based security option, in which the security information is embedded into the client device, has quite a few advantages over a software security system. It is far more difficult to hack into a hardware-based cryptography system. Software, by its very nature, is accessible; there will always be a need to upgrade or modify software solutions. But the flexibility that software offers is offset by the need to protect access to it, which makes it inherently less secure.

As Figure 6.3 illustrates, a security system is only as secure as its root layer. With a hardware-based encryption system, it is much more difficult for the root of your security system to be compromised by hacking or any form of unauthorized access. The creation of trusted client devices that can be added to your network and incorporated directly into your security model requires the implementation of embedded hardware for cryptography in each of your client devices.
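
A minimal way to see why the root layer matters is to model each layer's key as being derived from the layer beneath it. The standard-library Python sketch below is a toy illustration of that idea (not any vendor's actual scheme): every derived key, and everything protected with it, is only as secret as the root key it ultimately came from.

    # Toy key-hierarchy sketch (standard library only): each layer's key is
    # derived from the one beneath it, so compromising the root compromises all.
    import hashlib, hmac, os

    root_key = os.urandom(32)   # in a hardware design, this value never leaves the chip

    def derive(parent_key, label):
        """Derive a child key from its parent using HMAC-SHA256."""
        return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

    platform_key = derive(root_key, "platform")         # e.g., device identity layer
    session_key = derive(platform_key, "session-42")    # e.g., per-connection layer

    # Anyone holding root_key can recreate every key beneath it:
    assert derive(derive(root_key, "platform"), "session-42") == session_key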

It should be noted that we are discussing embedded hardware security, not simply a security appliance. With an appliance, you basically have a dedicated server device, running an OS, with an application running on top of the OS that provides the security functionality of the appliance. With the embedded hardware security model, the secure devices include chip-level security that sits at a level below any OS or application. Thus, the security model can't be compromised by problems or bugs in an application or OS, nor is it susceptible to the malicious attacks focused on the OS that an appliance would face.

Figure 6.3: In the key-based cryptography scenario, all security functions depend on the root key remaining secure. Because each layer is enclosed within the one beneath it, only a failure at the root can easily compromise the entire security model.

As Figure 6.4 shows, embedded hardware encryption technology enables you to use a single security methodology to authenticate all sorts of network-connected devices, ranging from wireless access points to VoIP phones to desktop computers. All of these devices are common, everyday components found in networks; embedded hardware security means that the key-based cryptographic security model can guarantee that the devices are what they claim to be, adding security to the network infrastructure. With a strong embedded security standard in place, it should be possible to mix and match devices even when the chip-level security is provided by multiple vendors (that is, multiple vendors offering products that implement this technology using different OEM chip providers).

Figure 6.4: The key management server handles the authentication of the network devices equipped with embedded hardware security devices.

Identity Management: Identifying Who and What is on the Network In the current context of network security management, identity management is a software technology designed to simplify access to network resources, thereby improving efficiency of network-based tasks. The biggest push right now in identity management is a technique known as federated identity management (FIM).

In FIM, software is used to authenticate users across networks with the goal of allowing unrelated networks to share resources with users who need access that may, for example, cross company boundaries in an environment in which more than a single vendor is working on a project (see Figure 6.5). The goal is to make the necessary resources available from each party’s network without needing to create user accounts for every user involved in the project on all of the disparate networks that might be involved.

The concept can also be considered a “single sign-on” methodology for multi-network projects. At this point in time, FIM software is available from several vendors who offer complete solutions ranging from FIM-enabled management to software developer kits for adding FIM capabilities.

Identity management is not the same as a meta directory. In the case of FIM, there is no all-encompassing directory service; rather, there is a credential authentication mechanism that allows foreign users to obtain credentials on a network without the need to create a new account for that device or user on the host network.
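
To make the idea concrete, the toy sketch below shows a "home" network issuing a signed assertion about a user that a partner network can verify without creating a local account. Real FIM products use standards such as SAML with PKI rather than this simplified shared-key HMAC scheme, and all names and values here are invented.

    # Simplified federated-assertion sketch (illustrative only; production FIM
    # uses standards such as SAML with PKI, not a single shared HMAC key).
    import hashlib, hmac, json, time

    FEDERATION_KEY = b"shared-secret-between-partner-networks"   # hypothetical

    def issue_assertion(user, home_domain):
        """The home network vouches for the user; the partner creates no account."""
        claim = json.dumps({"user": user, "home": home_domain,
                            "issued": int(time.time())}).encode()
        tag = hmac.new(FEDERATION_KEY, claim, hashlib.sha256).hexdigest()
        return claim, tag

    def verify_assertion(claim, tag):
        expected = hmac.new(FEDERATION_KEY, claim, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    claim, tag = issue_assertion("jsmith", "vendor-a.example.com")
    print(verify_assertion(claim, tag))   # True -> access granted, no new account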

Figure 6.5: FIM software gives the user access across corporate network boundaries without the need for creating an account for the same user on each network.

Software identity management, however, doesn’t assure the network administrator that the device being connected is supposed to be allowed access. Simply being able to authenticate the user of a device doesn’t mean that the device is one that should be allowed to access network resources. Hardware embedded identity provides advantages that are difficult for software identity to match. For example, it’s much more difficult to spoof hardware identity than software identity. Also, as hardware identity is included in the production of the network device, it is, in the long term, cheaper than any software technique.

There are many things at the hardware level that can specifically identify a computer to the network. The most commonly used item is the media access control (MAC) address of the network interface card (NIC) in the computer. Because most NICs are embedded these days, the MAC address is a fairly reliable ID.
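
As a quick illustration of how readily that identifier is available to software, Python's standard library will report the local machine's MAC address; note that, per the library documentation, the call can fall back to a random value if no hardware address can be determined.

    # Read the local MAC address (a 48-bit integer) and print it in the familiar
    # colon-separated form. uuid.getnode() may return a random fallback value.
    import uuid

    mac = uuid.getnode()
    print(":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8)))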

The idea of embedded hardware security takes the concept a bit further; not only is there an identity to the device that is unique, specific to the device, and stored in hardware, but the same chip also includes part of the security engine that authenticates devices.

This hardening of the actual devices also makes for simplified identity management, allowing a more mechanized approach to managing the workload of handling domain access, passwords, and all the security-related actions relative to users and devices.

Managing the Proliferation of Client Devices One of the major advantages of hardware-embedded identity technology is that, in protecting network clients against attacks, it offers network administrators a less-expensive and more secure client management system. When implementing this security model, you really need to have an end-to-end solution available for the security infrastructure—this is not something that you want to piece together from multiple vendors.

An example of a complete security-based client management system technology is Broadcom’s BroadSAFE™ system. BroadSAFE, a certificate and key management solution, consists of three components:

• A key management server capable of handling millions of managed devices

• Key management software to direct and manage clients

• An inexpensive hardware security module that can be incorporated by OEMs directly into hardware client devices.

Though the clients require an embedded hardware identity module (which could presumably be added to a PC via an add-in card, making it possible to include legacy computers in this security model, but which would need to be designed into most devices), the key management server is a standard network server with a hardware security module (as Figure 6.6 shows) installed in it.

Although the thought of an add-in card to provide embedded security might seem contradictory, it actually isn't, for a number of reasons. First, the card is primarily there to provide authentication to network resources; if the card is removed, there is no authentication. Second, if the card is providing local authentication, applications at the OS level can be written so that if the card isn't present, the applications or OS simply will not run. Third, if you don't have physical security for your desktop computers, you have problems beyond those that a simple hardware security option can solve.

The security concerns over an add-in card to provide legacy support are a client-side issue. The hardware security module that would be installed on the key server is a requirement to allow the authentication system to run and gives you the ability to use any type of server you have available as the head-end key server. As mentioned earlier, if you can’t provide physical security for your critical servers, you have more problems than can be solved by computer hardware or software solutions.

Figure 6.6: A hardware security module PCI card includes the embedded silicon functions necessary to support key management in the head-end server.

Secure Devices The concept of securing network-attached devices is a simple one; any client on the network is both a target for attack and a potential vector for infection. Embedded hardware authentication means that the client device cannot attach to the network without passing through the authentication process.

In some situations, the value of this type of authentication is quite clear. Consider all of the well-known issues with controlling wireless access and limiting that access only to approved client devices. In terms of current security, the administrator might be required to enter a MAC address for any client device that is approved for access. With hundreds of wireless devices, this task is not simple. An issue also arises if a device needs to have limits placed on its access to the network or if its access privileges need to be revoked. In addition, and especially in the case of wireless networking, the MAC address may not actually identify the computing device attached to the network at all—it may belong only to a removable adapter, such as a wireless PC Card or USB wireless adapter, that can be moved to any supported device.
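
In practice, this kind of MAC-based control amounts to hand-maintaining and consulting an allow-list, as in the small sketch below (the addresses and access levels are placeholders). The limitations just described—the list must be maintained manually, and the address can be spoofed or may belong only to a removable adapter—are exactly what hardware-embedded identity is meant to address.

    # Sketch of MAC-address access control for wireless clients; the addresses
    # and access levels below are invented placeholders.
    APPROVED = {
        "00:10:18:aa:bb:cc": "full",    # hypothetical approved notebook
        "00:10:18:dd:ee:ff": "guest",   # hypothetical limited-access device
    }

    def access_level(mac):
        return APPROVED.get(mac.lower(), "denied")

    print(access_level("00:10:18:AA:BB:CC"))   # full
    print(access_level("00:10:18:12:34:56"))   # denied (unknown device)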

Trusted client identification means that with embedded hardware identification in that notebook, you know not only that the connection is allowed but also that the device on the other end of the connection is trusted. The embedded authentication hardware offers not only a higher degree of trust (when compared with a software solution) but also a significantly greater degree of tamper resistance. It is far more difficult to tamper with silicon embedded on a circuit board than to crack a software authentication scheme.

The BroadSAFE technology is not limited to use with Broadcom technologies; it uses security functionality that is compatible with industry standards such as those created by the Trusted Computing Group and Microsoft’s Next Generation Secure Computing Base (a PC specification that is compatible with current PC implementations but offers enhanced security and privacy features).

Who You Are vs. Who You Say You Are The point of the BroadSAFE system is to provide a strong cryptographic authentication system that lets your security model determine not just "who you say you are" but "who you really are" for every device on your network with embedded hardware security. This distinction is one that is often overlooked in today's security models.

In the BroadSAFE model, none of the security management messaging is sent in the clear (see Figure 6.7). All communications between client devices with hardware security and the head-end server with its hardware security module (HSM) are carried over encrypted links. Nothing ever leaves the tamper-resistant hardware in clear text.
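
A rough software analogy for such an encrypted, mutually authenticated management link is a TLS connection in which both ends present certificates. The sketch below uses Python's standard ssl module; the host name and file names are hypothetical placeholders, and BroadSAFE itself performs the equivalent operations in silicon rather than in application code.

    # Illustrative mutually authenticated, encrypted management channel over TLS.
    # Host and file names are placeholders; this is an analogy, not BroadSAFE itself.
    import socket, ssl

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.load_verify_locations("key-server-ca.pem")             # trust the head-end CA
    ctx.load_cert_chain("device-cert.pem", "device-key.pem")   # present device identity

    with socket.create_connection(("keyserver.example.com", 8443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="keyserver.example.com") as tls:
            tls.sendall(b"ENROLL")    # management messages never travel in the clear
            reply = tls.recv(4096)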

Figure 6.7: The Key Management Server maintains an encrypted link to devices with hardware security via the HSM in the server.

BroadSAFE includes support for Automatic Device Enrollment. This functionality reduces the need for IT staff involvement in the deployment and management of trusted clients such as VoIP phones, network switches, and desktop computers. This feature saves administrators the time involved in identifying new devices to the network; it does not prevent them from controlling the authentication of those devices. The key functionality enabled in BroadSAFE devices is also available for cryptographic acceleration, such as public key and symmetric key acceleration.

The BroadSAFE technology can be used in several applications:

• Key management—A secure key management environment

• Identity management—Manage multiple client identities and protect the integrity of digital certificates

• Authentication—Foolproof authentication that isn't subject to protocol spoofing

• Link security—Secure key distribution coupled with strong hardware encryption

• Cryptographic compatibility—Can be used as a foundation for all standard cryptographic protocols

Minimizing Performance Impact One of the serious concerns that administrators have about adding security features to their network is the impact on the end-user experience. Adding procedures that impact the way that end users work is rarely an acceptable action; even with the best of intentions and good explanations and training for end users, they tend to feel put upon if they must actively participate in network security processes.

Even when security that doesn't require the direct interaction of end users is implemented, there are still issues to be considered. Implementing a software security scheme that uses encryption has the potential to slow network traffic to the point that end users notice. It's possible to do encryption at speeds that won't impact end-user performance, but the more secure the encryption, the more processing overhead it introduces. As the processing workload increases, processor cycles devoted to handling encryption can noticeably slow other applications. This is particularly the case if the security model includes biometrics or other software-based personal identification methods.
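
The processing cost is easy to observe directly. The standard-library sketch below times software hashing of a buffer as a rough stand-in for the per-packet cryptographic work that would otherwise be offloaded to dedicated hardware; the figure it prints will of course vary by CPU.

    # Rough measurement of software cryptographic overhead: time SHA-256 over a
    # buffer as a stand-in for per-packet security processing on the host CPU.
    import hashlib, os, time

    payload = os.urandom(1024 * 1024)    # 1MB of traffic to process
    start = time.perf_counter()
    for _ in range(100):                 # ~100MB in total
        hashlib.sha256(payload).digest()
    elapsed = time.perf_counter() - start
    print(f"{100 / elapsed:.1f} MB per second sustained by software hashing alone")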

Finding the proper tradeoff between user impact and security is a delicate balancing act for the network administrator. It would be nice to say that nothing takes priority over securing the network; however, a network that is so secure that it has a negative impact on the workflow process ends up costing the business money in terms of lost productivity and the opportunity to grow.

This is where embedded hardware security really can shine. With integrated hardware handling the secure authentication of the client device, the bulk of the work of securing the device and its connection is offloaded from the client CPU to the embedded hardware (as Figure 6.8 shows), which truly helps to minimize the impact on end users.

Figure 6.8: In the software-based security model, the client system CPU must handle not only the operation of the computer and its applications but also the encryption/decryption of all network traffic. A hardware solution offloads the CPU-intensive security encryption and leaves the CPU free to do other tasks, reducing the impact of security on the end user.

The integration of an embedded hardware system has additional performance advantages that should be considered. For example, your traditional server-centric network model is unaffected; there is no requirement for changes to your networking infrastructure beyond adding the key management capabilities to an existing server (or adding a dedicated key management server head-end). The hardware security component will not need to have security patches distributed to it to counter new threats, nor will its management require the installation of additional software.

Securing VoIP Applications As technology has progressed, it has become an accepted fact that networks are converging. The most obvious point of convergence is voice/data networking with the inclusion of VoIP capabilities in most high-end network switches. But simply providing a VoIP architecture is only part of the story; supporting and securing VoIP brings its own set of problems to the network administrator.

It's not that adding VoIP to your networking environment brings additional security problems beyond those of any new application that needs to talk to the outside world; it is just that VoIP has its own particular areas of vulnerability. These vulnerabilities break down into four areas:

• Access control—You want to be sure that only authorized VoIP devices are connected to your network. As mixed-mode cell phones hit the market (cell phones that operate as normal cellular phones unless they recognize a wireless network, in which case they switch to VoIP), it becomes much more important to allow only authorized VoIP devices to access your network.

• Data control—Your VoIP infrastructure must share the network peacefully with your data network. QoS controls need to reserve enough bandwidth for telephony to deliver acceptable voice quality without impinging on users' data communications needs.

• Disruption—Any disruption to your networking infrastructure or loss of connectivity to the outside world can bring down your VoIP infrastructure.

• Eavesdropping—Eavesdropping is a far more realistic threat for voice than for data communications; voice communications are inherently less secure, and this area might need to be addressed.

Users have an expectation of privacy for voice calls: they expect calls made through a VoIP infrastructure to be as secure as calls made on a plain old telephone service (POTS) line.

What the administrator must keep in mind is that all servers, media gateways, gatekeepers, and IP voice terminals are susceptible to attack. There are a variety of common security threats that the VoIP network must face; let's take a look at the most common and ways to avoid them.

Figure 6.9 shows a basic network model configured for use with VoIP. The addition of PBX equipment into the networking mix introduces another point of potential attack on your network and needs to be considered in your planning for secure networking.

The security problem that users tend to worry most about is packet sniffing or call interception. The easiest way to resolve this issue is to make sure that all of your telephony devices are on a secure switched LAN infrastructure to limit the potential for sniffing or interception problems. In this way, VoIP traffic is always limited to a specific link and not broadcast over the entire network.

Figure 6.9: Adding VoIP to your existing network infrastructures combines both the advantages and problems of the Telco network with the data network. Site-to-site telephony need never enter the public telephone network infrastructure, which provides additional security for intra-corporate phone calls. It does require a secure link to the Internet or a dedicated mesh topology network connection between sites.

Virus or Trojan Horse applications that are designed to capture or redirect voice traffic are a potential problem that should be dealt with by your existing antivirus solution. If you are heavily investing in VoIP, it makes sense to use a gateway appliance (or application) that filters traffic and watches specifically for virus, Trojan Horse, and malware attacks on your network.

The potential for unauthorized access to your voice network can be greatly limited by using in-line intrusion prevention systems and application access controls. If a user doesn’t have the right to make use of the VoIP infrastructure, you should be able to stop the user by using standard controls for preventing unauthorized access to network resources.

Application-layer attacks on your infrastructure can usually be prevented by keeping your OSs updated with the latest security fixes. These attacks are generally in the form of exploits against security flaws in the client OS or common applications that access the Internet. Keeping your software patched and updated will prevent most of these attacks.

It is extremely important that you keep all applications and OSs patched and updated—this idea cannot be stressed enough!

The potential for falsifying caller identity, also known as identity spoofing, can be limited by using software utilities that notify the administrator of unknown devices attaching to the network, and by using personal authentication mechanisms built on embedded hardware security. Ideally, you want to develop an infrastructure in which unknown devices attaching to the network receive no services until the device and/or its user are authenticated. And, of course, you will want to do so with as little IT staff interaction as possible.

One of the most basic attacks found on telephone networks is toll fraud, with many stories of the tricks that are used by non-employees to acquire access to lines to make illicit long distance phone calls. VoIP, although not immune to those attacks, does have the advantage of being able to use a software gatekeeper that can prevent unauthorized toll calls from being placed.

Network Denial of Service (DoS) attacks are especially nasty if your voice communications are carried over your data networks, giving the potential for all communications capabilities to be disrupted by a network DoS attack. As Figure 6.10 illustrates, one of the ways to minimize a DoS attack is to segregate voice and data traffic on their own network segments.

Although congestion at the network switch level would still be a potential issue, network traffic management applications are also useful in making sure that network attacks don't cause problems with your VoIP connectivity. QoS and other technologies go a long way toward making VoIP a practical option in mixed data/voice networking.
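
One common building block of that kind of traffic management is marking voice packets so that switches can prioritize them. The sketch below sets the IP TOS/DSCP field to Expedited Forwarding (DSCP 46) on a UDP socket; it works on typical Linux hosts, and in practice VoIP endpoints and switches normally apply and honor this marking themselves. The destination address and payload are placeholders.

    # Mark outbound voice packets for priority treatment by setting the DSCP
    # field to Expedited Forwarding (46). Placeholder destination and payload.
    import socket

    DSCP_EF = 46
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, DSCP_EF << 2)   # TOS byte 0xB8
    sock.sendto(b"\x00" * 160, ("192.0.2.10", 5004))   # RTP-sized dummy payload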

Figure 6.10: Using the VLAN capabilities of your network switches to separate the voice and data traffic onto their own network segments minimizes the chances of both voice and data communication being affected simultaneously by a DoS attack on your network.

Although not strictly a technical issue, repudiation of a call (denying that it was made) is a problem that can be eliminated by making sure that you authenticate users before they access a telephony device, thereby establishing that a call was made and the identity of the caller. This capability can make tracking the business process simpler and give the business manager ideas on how to improve the process workflow. It doesn't necessarily mean that you will need to log on to your phone before each call; it could be something as simple as entering a passcode into your phone system at the beginning of the business day (or using any common user authentication mechanism—PIN, password, biometrics, and so on). With future integration, your VoIP phone network could be linked to your data network authentication mechanisms. Common Telco attacks that rely on fooling the human element of the equation (known as trust exploits) can be minimized by using a restrictive trust model that links calls to users (and makes users aware of that link) and by using private VLANs to limit trust-based attacks.

In addition to the techniques previously outlined, integration of media encryption into IP telephones and media gateways is also highly recommended to prevent sniffing/eavesdropping of voice and signaling packets. Several encryption algorithms such as DES, 3DES, AES, RC4, and RC5 are commonly used in these devices. Wherever possible, the use of endpoints with hardware acceleration for these functions is recommended over software implementations.

Defining Data Encryption Standards

DES (Data Encryption Standard)—An ANSI standard that uses a 56-bit key and the block cipher method, encrypting data in 64-bit blocks.

3DES (3x Data Encryption Standard, also known as Triple DES)—A mode of the DES algorithm that uses three 64-bit keys (56 key bits plus parity each) for an overall key length of 192 bits. The data is encrypted with the first key, decrypted with the second key, and encrypted again with the third key.

AES (Advanced Encryption Standard)—The successor to DES, this symmetric standard encrypts data in 128-bit blocks with key lengths of 128, 192, or 256 bits (a brief usage sketch follows these definitions).

RC4/RC5—Symmetric encryption algorithms originally developed by RSA Security; RC4 is a stream cipher commonly used for encrypting streaming data, and RC5 is a block cipher.
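
For readers who want to see one of these algorithms in use, the short sketch below encrypts and decrypts a voice-sized payload with AES in GCM mode (which also authenticates the ciphertext). It assumes the third-party Python cryptography package is installed and is purely illustrative; production VoIP endpoints implement this in their media stacks or in hardware.

    # AES encryption/decryption example using AES-GCM, which both encrypts and
    # authenticates. Requires the third-party "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)   # 128-bit AES key
    nonce = os.urandom(12)                      # must be unique per message
    voice_frame = b"\x00" * 160                 # placeholder 20ms audio frame

    ciphertext = AESGCM(key).encrypt(nonce, voice_frame, None)
    assert AESGCM(key).decrypt(nonce, ciphertext, None) == voice_frame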

Securing Wireless Networks and Applications Although wireless security has its own peculiar requirements and restrictions, it should not be treated as separate from your overall network security model. Your fundamental security structure should make allowances for wireless connectivity and be configured to add the specific requirements of wireless security into the mix.

For example, in Chapter 4, we discussed implementing wireless VLANs. This technology has the advantage of providing security controls for wireless access points as they are added to your network. In a standard networking environment, a wireless access point could conceivably be added anywhere that there is a network connection, creating the potential for rogue access points. By using wireless VLAN technology, access points must be recognized and authenticated by the wireless VLAN before they can be used to allow clients to connect to the network.

This is fundamentally a very secure architecture, but it makes no allowance for user authentication, and it requires a specific piece of hardware (the wireless VLAN) dedicated to this purpose. Authentication is too important a piece of the security model to be pushed down the pipe, as it were. Devices should be forced to authenticate before access to any network resource is allowed; if a device cannot be authenticated, even its basic connectivity to the network should be prevented, which is where technologies such as the VLAN excel.

As Figure 6.11 shows, using the embedded hardware security model we have discussed earlier in this chapter allows you to add wireless access points and wireless clients and authenticate them to your network by using the same security model for any device that has the embedded hardware security. Thus, unlike the VLAN methodology, additional user or device authentication beyond that provided by the embedded hardware may not be necessary. But the option to provide it should also be considered.

Wireless VLAN technology provides several advantages; the concept of VLAN technology makes as much sense in a wireless environment as it does in the wired environment. Even the ability to put the various wireless technologies on their own wireless VLAN segments might be justification enough in a large wireless environment. Such is especially true given the performance issues that are encountered when mixing legacy 802.11b networking with the faster wireless technologies currently being offered. Wireless VLANs also bring management benefits to your wireless networking environment in the level of detailed control they offer over wireless clients' access to the network; they provide the ability to limit access to specific segments and resources.

Moving a device with the embedded hardware authentication technology from location to location within your network does not add to the administrative workload. There would be no need, for example, to reconfigure a VLAN when moving a wireless access point; authentication of the WAP would not require any user interaction because it would be automated between the key management server and the endpoint device. Note, however, that unless other steps were taken, that WAP would be an authenticated device regardless of where it was attached to the network—a convenience that can also present a serious security problem the network administrator must plan for.

Figure 6.11: The embedded hardware solution to the wireless security problem makes it easier to integrate wireless networking into your secure computing environment without additional wireless-specific hardware.

Remote Users

One of the biggest concerns IT has is with remote users. This problem has been exacerbated by the use of wireless networking in the home. The problem is that home users rarely set up security correctly on their home networks. As a result, when a remote user at home logs on to the corporate network via a VPN, the user is potentially opening the corporate network to anyone who can find that home user's wireless network.

Broadcom, which produces more than 70 percent of the chipsets used for wireless networking adapters, has come up with a solution called SecureEasySetup™. SecureEasySetup turns on wireless security by default and automates the process to make sure that the end user actually implements it. The setup wizard asks only two questions of the user, and the answers are used to generate the Wi-Fi Protected Access™ (WPA) key. Thus, if the user needs to add devices or reinstall a device, the user only needs to answer the same two questions with the same two answers rather than remembering a randomly generated key that the user most likely would have had to write down somewhere (and perhaps lost).

The setup wizard configures the Service Set Identifier (SSID) and WPA security for both the client and the home network access point. The wizard uses WPA rather than the less secure WEP security model; WPA is an interim implementation of the upcoming IEEE 802.11i wireless security standard. Forcing the user to configure the wireless network with security enabled can only help prevent unauthorized access to networks, and the method that Broadcom has chosen has little impact on the end user—a key to any successful security tool.
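
Whatever a given setup wizard does internally, a WPA-Personal pre-shared key is conventionally derived from a passphrase and the SSID using PBKDF2-HMAC-SHA1 with the SSID as salt, 4096 iterations, and a 256-bit output. The standard-library sketch below shows that derivation with placeholder values; it is not a description of SecureEasySetup's internals.

    # Derive a WPA pre-shared key from a passphrase and SSID (WPA-Personal style):
    # PBKDF2-HMAC-SHA1, SSID as salt, 4096 iterations, 256-bit key. Placeholder values.
    import hashlib

    def wpa_psk(passphrase, ssid):
        return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                                   4096, dklen=32)

    print(wpa_psk("answers-to-the-two-questions", "HomeNetwork").hex())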

Enabling Convergence and the Four-Function Box There are four common functions that client devices use to protect themselves and their networks from malicious attacks: VPNs, firewalls, intrusion prevention systems, and antivirus solutions. At this time, vendors are building appliances that combine all of these functions for both the client and network sides of the equation. Although the issues brought up earlier regarding multifunction devices are still applicable, primarily in the server environment, a simplified hardware-based client security model holds a lot of attraction. The basic problem this model faces is providing these services at performance close or equal to wire speed; anything less creates a more complex issue—how to deal with the bottleneck that the device can introduce into the network.

By building these devices using embedded hardware security, the devices are able to offload a big part of the workload to silicon specifically designed to handle the cryptographic processing that can significantly slow a general purpose CPU. Hardware acceleration, using dedicated silicon, can increase the potential workload of the security device without adding additional bottlenecks to performance. In smaller multifunction security devices, designed for the remote office or home user, it will be possible to build hardware accelerated devices that have no visible performance impact to the end user.

Secure hardware-based cryptography may well be the most efficient and strongest way to provide security to your networks and devices. The nature of strong cryptography, however, dictates that considerable processing power be available to make it work. Dedicated, embedded cryptography processing engines are the only way to make this technology successful in broad application use because the key to acceptance is ease of implementation and minimal impact on the user environment.

Summary With the increasing rate of targeted attacks on corporate networks, it's important for IT professionals to keep their network infrastructure secure—protecting valuable intellectual property and sales data while keeping network uptime to a maximum and minimizing any potential disruption to ongoing operations. Network security is needed at many layers and is complicated by the addition of mobile networking with remote access and WLANs, the needs of employees, external contractors, and guest visitors, and the activities of malicious attackers.

Knowing that IT managers need a flexible method of providing individual security policies to adapt to the ever-changing requirements of enterprise networks, Broadcom is committed to offering top-notch security without the high price tag normally associated with hardware-based security solutions. Because of its broad product portfolio, which includes processors, controllers, switches, storage, VoIP and wireless, the company is one of the few companies that can fulfill this kind of commitment. Broadcom’s market dominance in many of these areas also makes it possible for the company to drive the acceptance of standards that benefit the secure hardware user.

As a result of BroadSAFE-enabled client silicon, OEM customers now have the option of including a factory-installed identity that is stored securely within the device. Once a product has an identity, a security system can be built around it through a management server with an HSM installed in it. Because the product has an identity stored in hardware, the product itself can now be managed remotely.

BroadSAFE is an extremely cost-effective certificate and key management solution that can be extended to other client devices within the network such as IP phones, NICs, cable modems, switches and wireless devices. The keys that are stored in hardware, as described in this chapter, are essentially the private component of an identity certificate. Hence, the advantage is in the embedded key management module that utilizes these private keys. The result is better security at a lower cost, with centralized key management services in the HSM.

Even current software-based security technologies, such as biometrics, will benefit from the addition of embedded hardware security. Hardware security adds another layer of protection and, when implemented properly, will not affect end-user performance; in fact, this additional security is transparent to end users.

Security systems such as BroadSAFE will become an important addition for next-generation security devices; hardware security offers advantages in attaching a broad variety of devices into secure environments with fewer problems than developing compatible software solutions for many different technologies, and hardware-based security is significantly more difficult to hack into than software-based security.

Gigabit Ethernet technologies, improved wireless designs, converged NICs, LAN-on-motherboard designs, and secure network devices are all examples of the next-generation networking components that users can expect to have in their environments. These are some of the leading-edge technologies that corporate IT buyers can invest in right now to help future-proof their purchases.

The next generation of advanced networking devices is likely to be built on Broadcom components, software, and system designs as manufacturers look for new and innovative means of communication and move more deeply into the convergence of voice, video, and data services. Broadcom's broad product portfolio enables the company to drive these convergence technologies over both wired and wireless networks while bringing value to the OEMs and consumers of these devices.

The hardware security modules in Broadcom's new Gigabit IP phone chipsets perform voice encryption and authentication and elevate the phone's system security using a unique identifier embedded in each chip that is virtually impossible to decode, hack, or steal, thereby providing assurance that the identity of each phone in the network is genuine. These advanced security features allow IT managers to ensure the integrity of corporate voice communications.

Broadcom has responded to the need for resilient secure networking by integrating a wealth of security-based intelligence into its next-generation Ethernet switch chips, raising the bar and setting the standard for security in switching silicon. Security starts with Broadcom's exclusive wire-speed Layer 2 to Layer 7 application-aware security processor. Multiple engines enable users to enforce secure policies based on a wide variety and combination of programmable rules. Intelligent packet parsing and metering are combined to yield rich contextual information about the traffic flows through the switch. The IT professional is empowered through management software to control all flows through the switch, and dynamically reconfigure policy and take action as needed.

In addition to the application-aware security processor, Broadcom has integrated a hardware-based DoS attack prevention engine into its switches. The DoS engine is critical to providing uninterrupted service to users of a converged network. Harmful DoS attacks are blocked by the switches, allowing voice and data traffic to continue flowing during an attack. This level of predictability is the key to building highly reliable, resilient networks.

Price/performance is just one part of the advantage Broadcom brings with its technology breakthroughs and fast time-to-market efforts; board-level products, chipsets, RAID-on-a-chip, and converged NIC technologies move quickly from vendor to customer installations. These convergence technologies will improve the performance and reliability of devices and drive down the costs of next-generation networks, resulting in faster ROI.

Broadcom®, the pulse logo, Connecting everything®, the Connecting everything logo, BroadSAFE™ and SystemI/O™ are trademarks of Broadcom Corporation and/or its affiliates in the United States, EU and/or certain other countries. Wi-Fi Protected Access is a trademark of Wi-Fi Alliance Corporation. Any other trademarks or trade names mentioned are the property of their respective owners.