Cisco Express Forwarding

Cisco Press
800 East 96th Street
Indianapolis, IN 46240 USA

Cisco Express Forwarding
Nakia Stringfield, CCIE No. 13451
Russ White, CCIE No. 2635
Stacia McKee

Cisco Express Forwarding
Nakia Stringfield, Russ White, Stacia McKee
Copyright 2007 Cisco Systems, Inc.
Published by: Cisco Press, 800 East 96th Street, Indianapolis, IN 46240 USA

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without written permission from the publisher, except for the inclusion of brief quotations in a review.

Printed in the United States of America 1 2 3 4 5 6 7 8 9 0
First Printing: March 2007
ISBN-10: 1-58705-236-9
ISBN-13: 978-1-58705-236-1
Library of Congress Cataloging-in-Publication Number: 2004117877

Warning and Disclaimer
This book is designed to provide information about Cisco Express Forwarding (CEF). Every effort has been made to make this book as complete and as accurate as possible, but no warranty or fitness is implied.
The information is provided on an "as is" basis. The authors, Cisco Press, and Cisco Systems, Inc., shall have neither liability nor responsibility to any person or entity with respect to any loss or damages arising from the information contained in this book or from the use of the discs or programs that may accompany it.
The opinions expressed in this book belong to the authors and are not necessarily those of Cisco Systems, Inc.

Trademark Acknowledgments
All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Cisco Press or Cisco Systems, Inc., cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.

Feedback Information
At Cisco Press, our goal is to create in-depth technical books of the highest quality and value. Each book is crafted with care and precision, undergoing rigorous development that involves the unique expertise of members from the professional technical community.
Readers' feedback is a natural continuation of this process. If you have any comments regarding how we could improve the quality of this book, or otherwise alter it to better suit your needs, you can contact us through email at [email protected]. Please make sure to include the book title and ISBN in your message.
We greatly appreciate your assistance.

Corporate and Government Sales
Cisco Press offers excellent discounts on this book when ordered in quantity for bulk purchases or special sales. For more information, please contact: U.S. Corporate and Government Sales, [email protected]. For sales outside the U.S., please contact: International Sales, [email protected].

Publisher: Paul Boger
Cisco Representative: Anthony Wolfenden
Cisco Press Program Manager: Jeff Brady
Associate Publisher: David Dusthimer
Executive Editor: Brett Bartow
Managing Editor: Patrick Kanouse
Development Editor: Dayna Isley
Senior Project Editor: San Dee Phillips
Copy Editor: Written Elegance, Inc.
Technical Editors: Neil Jarvis, LJ Wobker
Team Coordinator: Vanessa Evans
Book and Cover Designer: Louis Adair
Composition: Mark Shirar
Indexer: Tim Wright
Proofreader: Molly Proue

About the Authors
Nakia Stringfield, CCIE No. 13451, is a network consulting engineer for Advanced Services at Cisco in Research Triangle Park, North Carolina, supporting top financial customers with network design and applying best practices. She was formerly a senior customer support engineer for the Routing Protocols Technical Assistance Center (TAC) team, troubleshooting issues related to CEF and routing protocols. Nakia has been with Cisco for more than six years, previously serving as a technical leader for the Architecture TAC team. She has given training courses on CEF operation and troubleshooting for internal employees. Nakia also worked for a year with IBM Global Services LAN Support in Research Triangle Park, North Carolina. Nakia attended North Carolina State University and completed her bachelor of science degree in electrical engineering in 1996. She also earned a master of science in computer networking and computer engineering from North Carolina State University in 2000.

Russ White, CCIE No. 2635, is a member of the Routing Protocol Design and Architecture Team at Cisco, Research Triangle Park, North Carolina. He is a member of the Internet Engineering Task Force (IETF) Routing Area Directorate, a cochair of the Routing Protocols Security Working Group in the IETF, a regular speaker at Networkers, a member of the Cisco Certified Internetwork Expert (CCIE) Content Advisory Group, a member of the core team developing the new Cisco Design certification, a regular contributor to the Internet Protocol Journal, and the coauthor of six other books about routing and routing protocols, including Optimal Routing Design, from Cisco Press. Russ primarily works in the development of new features and design architectures for routing protocols.

Stacia McKee is a customer support engineer and technical leader of the Routing Protocols (RP) Technical Assistance Center (TAC) team at Cisco in Research Triangle Park, North Carolina. This team focuses on providing postsales support of IP routing protocols, Multiprotocol Label Switching (MPLS), quality of service (QoS), IP multicast, and many other Layer 3 technologies. Stacia has been with Cisco for more than six years, previously serving as a technical leader of the Architecture TAC team and member of the WAN/Access TAC team. She has created and presented training on packet switching, router architecture, and troubleshooting for internal employees. Stacia has also been a technical editor and reviewer of Cisco.com technical documentation, mainly in router and IOS architecture and IP routing protocols technologies. She works closely with the IP Routing and IP Services groups within the Cisco Network Software and Systems Technology Group (NSSTG) on customer problems and early field trials. In 2000, Stacia completed her bachelor of science degree in computer information systems, bachelor of science degree in business administration, and bachelor of arts degree in computer science at the College of Charleston in Charleston, South Carolina.

About the Technical Reviewers
Neil Jarvis has been a software engineer in the networking industry since 1990. He is currently employed by Cisco Systems as a distinguished engineer, responsible for the architecture and development of switching control and data plane software, including Cisco Express Forwarding (CEF). He was a technical contributor and editor of a number of IEEE 802 standards, including 802.1 (bridging) and 802.5 (token ring). He was IEEE 802.1 vice-chair for a number of years. Neil graduated with a master's degree in microelectronic systems engineering from UMIST (Manchester, England) in 1989 and now lives with his wife in Edinburgh, Scotland.

LJ Wobker, CCIE No. 5020, holds a bachelor of science degree in computer science from North Carolina State University in Raleigh, North Carolina. He started his networking career running cables as a college intern in the Cisco Research Triangle Park TAC lab and has worked in TAC, Advanced Services, and software development. For the last five years, LJ has been a technical marketing engineer, supporting the Cisco 12000 and CRS-1 series routers.

Dedications
Nakia Stringfield: I would like to dedicate this book to my wonderful, supportive husband, Kwame Stringfield, and to our beautiful daughter, Kyra. Most of all, thanks go to God for favor and challenging opportunities. Thanks to my parents, Robert and Annette; my family; my pastors; Dr. Frank and JoeNell Summerfield; and my friends for their many prayers and for believing in me.

Russ White: I would like to dedicate this book to my two daughters, Bekah and Hannah, as well as to my beautiful wife, Lori. I would like to thank God for the opportunities and skills to work on routers, routing, and books.

Stacia McKee: I would like to dedicate this book in memory of my former colleague and dearest friend, Parag Avinash Kamat (July 19, 1977 - August 19, 2004). May his memory live on forever. I would like to thank my wonderful husband, Michael McKee, and my parents, Richard and Sidney Froom, for their love, patience, and support while completing this project. I also thank God for all His blessings in my life.

Acknowledgments
This book would not have been possible without the help of many people whose various comments and suggestions helped to formulate this project. First, we would like to give special recognition to Richard Froom for providing crucial direction and valuable feedback for this book. We also want to thank the technical reviewers for this book, Neil Jarvis and LJ Wobker. Finally, we want to thank Brett Bartow, Chris Cleveland, and Dayna Isley, as well as the other people at Cisco Press, for working with us, keeping us on track, and getting this book published.

This Book Is Safari Enabled
The Safari Enabled icon on the cover of your favorite technology book means the book is available through Safari Bookshelf. When you buy this book, you get free access to the online edition for 45 days. Safari Bookshelf is an electronic reference library that lets you easily search thousands of technical books, find code samples, download chapters, and access technical information whenever and wherever you need it.
To gain 45-day Safari Enabled access to this book: go to http://www.ciscopress.com/safarienabled, complete the brief registration form, and enter the coupon code R7CH-25PD-7T4V-4VDV-RYMJ. If you have difficulty registering on Safari Bookshelf or accessing the online edition, please e-mail [email protected].
Contents at a Glance

Introduction
Part I    Understanding, Configuring, and Troubleshooting CEF
Chapter 1 Introduction to Packet-Switching Architectures
Chapter 2 Understanding Cisco Express Forwarding
Chapter 3 CEF Enhanced Scalability
Chapter 4 Basic IP Connectivity and CEF Troubleshooting
Part II   CEF Case Studies
Chapter 5 Understanding Packet Switching on the Cisco Catalyst 6500 Supervisor 720
Chapter 6 Load Sharing with CEF
Chapter 7 Understanding CEF in an MPLS VPN Environment
Part III  Appendix
Appendix A Scalability
Index

Contents

Introduction

Part I Understanding, Configuring, and Troubleshooting CEF

Chapter 1 Introduction to Packet-Switching Architectures
  Routing and Switching
  Understanding Broadcast and Collision Domains
  Broadcast and Collision Domains
  Broadcast and Collision Domains in Routing
  Layer 3 Switching
  Understanding Router Pieces and Parts
  Interface Processors
  Central Processing Unit
  Memory
  Backplanes and Switching Fabrics
  Shared Memory
  Crossbar Switching Fabric
  Bus Backplanes
  Cisco IOS Software: The Brains
  Memory Management
  Memory Pools
  Memory Regions
  Packet Buffers
  Interaction with Interface Processors
  Processes and Scheduling
  Process Memory
  Process Scheduling
  Understanding the Scheduler
  Process Life Cycle
  Process Priorities
  Scheduling Processes
  Process Watchdog
  Special Processes
  Putting the Pieces Together: Switching a Packet
  Getting the Packet off the Network Media
  Inbound Packets on Shared Media Platforms
  Inbound Packets on Centralized Switching Routers with Line Cards
  Inbound Packet Handling on Distributed Switching Platforms
  Switching the Packet
  Switching the Packet Quickly During the Receive Interrupt
  Process-Switching the Packet
  Transmitting the Packet
  Hardware and Software show Commands
  Summary

Chapter 2 Understanding Cisco Express Forwarding
  Evolving Packet-Switching Methods
  Process Switching
  Fast Switching
  What Is CEF?
  CEF Tables
  Forwarding Information Base (FIB)
  The Adjacency Table
  Relating the CEF Tables
  CEF Table Entries
  FIB Entries
  Attached FIB Entry
  Connected FIB Entry
  Receive FIB Entry
  Recursive FIB Entry
  Default Route Handler FIB Entry
  ADJFIB FIB Entry
  Learned from IGPs
  Generic FIB Entries
  Interface-Specific FIB Entries
  FIB Entries Built for a Multiaccess Network Interface
  FIB Entries Built on a Point-to-Point Network Interface
  FIB Entries Built on a 31-Bit Prefix Network Interface
  Special Adjacencies
  Auto Adjacencies
  Punt Adjacency
  Glean Adjacency
  Drop Adjacency
  Discard Adjacency
  Null Adjacency
  No Route Adjacencies
  Cached and Uncached Adjacencies
  Unresolved Adjacency
  Switching a Packet with CEF
  The CEF Epoch
  Configuring CEF/dCEF
  Summary
  References

Chapter 3 CEF Enhanced Scalability
  Fundamental Changes to CEF for CSSR
  Data Structures
  Switching Path Changes
  Changes to show Commands
  show ip cef
  show ip cef interface
  show ip cef summary
  show cef state capabilities
  New show ip cef Commands
  show ip cef tree
  show ip cef internal
  show ip cef switching statistics
  New show cef Commands
  CEF Event Logger
  CEF Consistency Checker
  Passive Checkers
  Active Checkers
  Consistency-Checking Process
  New CEF Processes
  FIB Manager
  Adjacency Manager
  Update Manager
  Summary

Chapter 4 Basic IP Connectivity and CEF Troubleshooting
  Troubleshooting IP Connectivity
  Accurately Describe the Problem
  Scoping the Network Topology
  Reviewing the OSI Model for Troubleshooting
  Troubleshooting Physical Connectivity
  Troubleshooting Layer 2 Issues
  Verifying the ARP Table
  Verifying the Routing Table
  Using IOS Ping with the Record Option to Rule Out CEF
  Troubleshooting the CEF FIB Table
  Verifying the CEF Configuration
  Confirming the IP CEF Switching Path
  Using CEF Accounting Counters to Confirm the Switching Path
  Verifying the CEF Switching Details
  Verifying the Adjacency Table
  Hardware-Specific Troubleshooting
  Troubleshooting Punt Adjacencies
  Understanding CEF Error Messages
  Troubleshooting Commands Reference
  Summary
  References

Part II CEF Case Studies

Chapter 5 Understanding Packet Switching on the Cisco Catalyst 6500 Supervisor 720
  CEF Switching Architecture on the Cisco Catalyst 6500
  Understanding Software-Based CEF and Hardware-Based CEF
  Centralized and Distributed Switching
  Troubleshooting CEF on the Catalyst 6500 SUP720 Platforms
  Simple Checking of Connectivity
  Systematic Checking of Connectivity
  Troubleshooting Load Sharing
  Summary
  References

Chapter 6 Load Sharing with CEF
  Benefits of Load Sharing
  Load Sharing with Process Switching and Fast Switching
  Comparing CEF Per-Packet and CEF Per-Destination Load Sharing
  Understanding Per-Destination Load Sharing
  Understanding Per-Packet Load Sharing
  Minimizing Out-of-Order Packets
  Configuring CEF Per-Packet Load Sharing
  CEF Architecture and Load Sharing
  CEF Load Sharing Across Parallel Paths
  CEF Per-Destination Example
  CEF Per-Packet Example
  Per-Packet Load Sharing on Hardware-Based Platforms
  CEF Per-Packet Load Sharing on the Cisco GSR Platform
  CEF Load-Sharing Troubleshooting Examples
  CEF Per-Destination Load Sharing Overloading One Link
  CEF Per-Packet Load Sharing Causing Performance Issues
  Troubleshooting a Single-Path Failure with CEF Load Sharing
  CEF Traffic-Share Allocation
  CEF Polarization and Load-Sharing Algorithms
  Original Algorithm
  Universal Algorithm
  Tunnel Algorithm
  Hardware Platform Implementations
  Summary
  References

Chapter 7 Understanding CEF in an MPLS VPN Environment
  An Internet Service Provider's Simple MPLS VPN Design
  Understanding the CEF and MPLS VPN Relationship
  Case 1: Label Disposition
  Case 2: Label Imposition
  Case 3: Label Swapping
  Troubleshooting an MPLS VPN
  CEF Considerations When Troubleshooting MPLS VPN Across Various Platforms
  Cisco 7200 Router with an NPE-G2
  Cisco 7500 Router
  Cisco Catalyst 6500 with a Supervisor 2
  Catalyst 6500 with a Supervisor 720 3BXL
  Cisco 12000 Series Router
  Cisco 10000 Series Router
  CEF and MPLS VPN Load-Sharing Considerations
  PE-CE Load Sharing: CE Multihomed to Same PE
  PE-CE Load Sharing: Site Multihomed to Different PEs
  Load Sharing Between P and P Devices
  CEF and MPLS VPN Load-Sharing Platform Dependencies
  Summary
  References

Part III Appendix
Appendix A Scalability
Index

Icons Used in This Book
(The icon key shows the symbols used in this book for a PC, terminal, Catalyst switch, multilayer switch, network cloud, Ethernet line, serial line, file server, and router.)

Command Syntax Conventions
The conventions used to present command syntax in this book are the same conventions used in the IOS Command Reference. The Command Reference describes these conventions as follows:
- Boldface indicates commands and keywords that are entered literally as shown. In actual configuration examples and output (not general command syntax), boldface indicates commands that are manually input by the user (such as a show command).
- Italics indicate arguments for which you supply actual values.
- Vertical bars (|) separate alternative, mutually exclusive elements.
- Square brackets [ ] indicate optional elements.
- Braces { } indicate a required choice.
- Braces within brackets [{ }] indicate a required choice within an optional element.

Introduction
How does a router switch a packet? What is the difference between routing a packet and switching a packet? What is this CEF feature that is referred to in Cisco documentation and commonly found in Cisco IOS commands? This book answers these questions through comprehensive discussions of Cisco Express Forwarding (CEF).
CEF is a term used to describe one of the mechanisms used by Cisco IOS routers and Cisco Catalyst switches to forward packets. Other packet-switching mechanisms include process switching and fast switching. CEF is found in almost all Cisco IOS routers and Catalyst switches. However, documentation of the topic is scarce. From a technical support perspective, CEF is a widely misunderstood topic whose implementation varies significantly on multiple Cisco platforms. Cisco engineers, Cisco partners, and customers need material on CEF to properly deploy, maintain, and troubleshoot their networks.
CEF offers the following benefits:
- Improved performance: CEF is less CPU-intensive than fast-switching route caching. More CPU processing power can be dedicated to Layer 3 services such as quality of service (QoS) and encryption.
- Scalability: CEF offers full switching capacity at each line card when distributed CEF (dCEF) mode is active.
- Resilience: CEF offers unprecedented levels of switching consistency and stability in large dynamic networks. CEF can switch traffic more efficiently than typical demand-caching schemes.

Goals and Methods
This book addresses common misconceptions about CEF and packet switching across various platforms.
The goal is to help end users understand CEF and know how to troubleshoot, regardless of whether a CEF or another problem is occurring in the network. Little information collectively addresses these concerns because CEF is proprietary. This book helps you understand CEF better by using the following methods:
- Explaining CEF basics
- Supplying troubleshooting scenarios that enhance your ability to recognize common mistakes
- Providing best practices for configuration

Who Should Read This Book
The focus audience of this book is networking professionals who require an understanding of Cisco packet-forwarding architecture and who are tasked with troubleshooting routing and switching issues in a Cisco network environment. This book is an invaluable guide for those seeking to gain an understanding of how CEF works and how to troubleshoot CEF issues on various hardware platforms.

How This Book Is Organized
Although this book could be read from cover to cover, it is designed to be flexible and allows you to easily move between chapters and sections of chapters to cover just the material that you need to troubleshoot an immediate problem or to understand a concept.
Cisco Express Forwarding is divided into two parts. The first part of the book provides an overview of packet-switching architectures and CEF operation and advanced features. It also covers the enhanced CEF structure and general troubleshooting. The second part of the book focuses on particular case studies. Because CEF is a widely misunderstood technology, the case studies focus on a list of the common topics that have been problematic for customers and those supporting Cisco networks. The case studies review and expand on material from the previous parts of the book and provide more in-depth analysis of real networking topologies and troubleshooting steps.
Part I, "Understanding, Configuring, and Troubleshooting CEF," includes the following chapters:
- Chapter 1, "Introduction to Packet-Switching Architectures": This chapter explains packet-switching architecture and terminology. It also explains utilization of memory and buffers.
- Chapter 2, "Understanding Cisco Express Forwarding": This chapter deals with the basics of CEF architecture and operation. It defines CEF terminology and history.
- Chapter 3, "CEF Enhanced Scalability": This chapter discusses the enhanced CEF structure and its purpose.
- Chapter 4, "Basic IP Connectivity and CEF Troubleshooting": This chapter deals with general troubleshooting in a software-switching environment. Software switching has typically been used on routers.
Part II, "CEF Case Studies," deals with special CEF case studies covering the following common scenarios:
- Chapter 5, "Understanding Packet Switching on the Cisco Catalyst 6500 Supervisor 720": This chapter helps you understand the impact of CEF and learn how packet switching works on a Cisco Catalyst 6500 SUP720.
- Chapter 6, "Load Sharing with CEF": This chapter discusses load sharing with CEF. It covers the purpose, configuration, and troubleshooting of common problems.
- Chapter 7, "Understanding CEF in an MPLS VPN Environment": This chapter explains the impact of CEF in an MPLS VPN environment.
The book concludes with Appendix A, "Scalability," which discusses CEF design considerations that could impact network scalability.

The Future of CEF and Packet Switching
Although this book provides solid information for software handling and hardware handling, it does not provide a detailed description of implementation on all Cisco platforms and related technologies. Hardware design changes rapidly, and packet handling on one platform could easily consume the entire book.
This book does not address Parallel Express Forwarding (PXF), which is used on devices such as Cisco 10000 series routers, Cisco 7600 series Optical Service Modules (OSMs), and Cisco 7300 series routers. PXF applies a combination of parallel processing and pipelining techniques to the CEF algorithms for faster throughput and optimal flexibility through ASIC technology. Because PXF is highly dependent on the platform and specific ASIC technology, it is not covered in this book.
Hardware switching will continue to be optimized for performance advantages. Introduction of distributed CEF (dCEF) on Cisco 7500 series routers was a start down this path years ago to offload packet switching from the central processor to the Versatile Interface Processor (VIP) line card. Then progression occurred to hardware-based localized switching on Cisco 6500s with Distributed Forwarding Cards (DFCs), FlexWans, and OSMs.
Cisco recently introduced IOS Software Modularity, which provides subsystem In-Service Software Upgrades and Process Fault Containment to the Cisco Catalyst 6500 series switches.
As you continue to learn more about Cisco Express Forwarding, you may find the following resources helpful:
- Bollapragada, V., R. White, and C. Murphy. Inside Cisco IOS Software Architecture. Indianapolis, Indiana: Cisco Press; 2000. Provides a detailed treatment of Cisco 7500 routers and Cisco 7200 routers.
- Cisco, "Parallel Express Forwarding on the Cisco 10000 Series," www.cisco.com/en/US/products/hw/routers/ps133/products_white_paper09186a008008902a.shtml
- Cisco, "Cisco 7600 Series Router Q & A," www.cisco.com/en/US/products/hw/routers/ps368/products_qanda_item09186a008017a32b.shtml
- Cisco, "PXF Information for Cisco 7304 Routers," www.cisco.com/en/US/products/hw/routers/ps352/prod_maintenance_guide09186a008057410a.html
- Cisco, "Cisco Catalyst 6500 Series Switches with IOS Software Modularity Make IT Managers More Productive and Further Improve Network Reliability," http://newsroom.cisco.com/dlls/2005/prod_082905.html
- Cisco, "Cisco Catalyst 6500 with Cisco IOS Software Modularity," www.cisco.com/en/US/products/hw/switches/ps708/products_promotion0900aecd80312844.html

Part I: Understanding, Configuring, and Troubleshooting CEF
Chapter 1 Introduction to Packet-Switching Architectures
Chapter 2 Cisco Express Forwarding
Chapter 3 CEF Enhanced Scalability
Chapter 4 Basic IP Connectivity and CEF Troubleshooting

Chapter 1: Introduction to Packet-Switching Architectures

This chapter covers the following topics:
- Routing and switching
- Understanding router pieces and parts
- Cisco IOS Software: the brains
- Processes and scheduling
- Putting the pieces together: switching a packet
- Hardware and software show commands

What, exactly, does a router do? What's the difference between a switch and a router? If you were building a router, what would the parts of the router be? How would they fit together? These are the sorts of questions that router designers within Cisco face every day, both from the hardware and software points of view.
This chapter begins with a discussion of the terms routing and switching and provides you with the background needed to understand the differences between the two. The chapter then covers the physical pieces and parts of a router and discusses the brains, Cisco IOS Software.
You then learn how the pieces work together to switch a packet.

Routing and Switching
The networking industry uses many terms and concepts to describe switching and routing; because a good number of them have overlapping meanings, deciphering the terminology can be confusing. Does a router route or switch? What's the difference between Layer 3 switching and routing? What's Layer 7 switching, and who cares? Let's examine what happens to a packet as it passes through a network to try and discover some of the answers to these questions.

Understanding Broadcast and Collision Domains
The two primary concepts you need to understand when discussing the various concepts of switching are broadcast domain and collision domain. The simple network in Figure 1-1 illustrates these two concepts.
Figure 1-1 Broadcast and Collision Domains
The collision domain is defined as the set of hosts that are attached to the network which might not transmit at the same time without their transmissions colliding. For example, if Host A and Host B are connected through a straight wire, they cannot transmit at the same time. If, however, some physical device between them allows them to transmit at the same time, they are in separate collision domains.
The broadcast domain is the set of hosts that can communicate simply by sending Layer 2 (or link-layer) broadcasts. If Host A transmits a broadcast packet to all the hosts that are locally attached, and Host B receives it, these two hosts are in the same broadcast domain.

Broadcast and Collision Domains
Bridging breaks up the collision domain, but not the broadcast domain. In fact, traditional switching and bridging are the same thing technically. The primary difference is that in most switched environments, each device connected to the network is in a separate collision domain.
Looking at the format of a typical data packet, what is changed when the packet crosses a switch? Not a single thing, as Figure 1-2 illustrates.
Figure 1-2 Packets Passing Through a Switch (the same MAC header, IP header, and data pass unchanged from Host A at 192.168.1.1 to Host B at 192.168.1.2)
In effect, devices on either side of a switch cannot tell that there is a switch between them, nor do they know the destination of their packets; switches are transparent to devices connected to the network.
If Host A wants to send a packet to 192.168.1.2 (Host B), it can send a broadcast to all the hosts connected to the same segment asking for the MAC address of the host with the IP address 192.168.1.2 (this is called an Address Resolution Protocol (ARP) request). Because Host B is in the same broadcast domain as Host A, Host A can be certain that Host B will receive this broadcast and answer with the correct MAC address to send packets to.

Broadcast and Collision Domains in Routing
Routers not only break the collision domain, but they also break the broadcast domain, as Figure 1-3 illustrates.
Figure 1-3 Routing (Host A at 192.168.1.1/24 and Host B at 192.168.2.1/24, separated by a router)
Now, if Host A wants to reach 192.168.2.1, how will it do this? It cannot broadcast an address resolution packet to discover Host B's address, so it has to use some other method to figure out how to reach this destination. How does Host A know this? Note that after each IP address in Figure 1-3, there is also a /24; this number indicates the prefix length, or the number of bits that are set in the subnet mask. Host A can use this information to determine that Host B is not in the same broadcast domain (not on the same segment), and Host A must use an intervening router to reach the destination, as Figure 1-4 shows.
Figure 1-4 Determining That a Destination Is Not in the Same Broadcast Domain (the figure masks 192.168.1.1 and 192.168.2.1 with 255.255.255.0, 24 bits of prefix and 8 bits of host, and compares the network portions; because they are not the same, the two addresses are in different broadcast domains)
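The comparison that Figure 1-4 describes, masking both addresses with the subnet mask and comparing the network portions, takes only a few lines of code. The following C sketch is purely illustrative; the function name, the sample addresses, and the printed text are inventions for this example, not anything from the book.

#include <stdio.h>
#include <stdint.h>

/* Return 1 if two IPv4 addresses (host byte order) share the same
   network bits for the given prefix length, 0 otherwise. */
static int same_subnet(uint32_t a, uint32_t b, int prefix_len)
{
    uint32_t mask = (prefix_len == 0) ? 0 : 0xFFFFFFFFu << (32 - prefix_len);
    return (a & mask) == (b & mask);
}

int main(void)
{
    uint32_t host_a = (192u << 24) | (168u << 16) | (1u << 8) | 1u; /* 192.168.1.1 */
    uint32_t host_b = (192u << 24) | (168u << 16) | (2u << 8) | 1u; /* 192.168.2.1 */

    /* With a /24, the network portions differ, so Host A must hand the
       packet to its router instead of ARPing for Host B directly. */
    printf("same broadcast domain: %s\n",
           same_subnet(host_a, host_b, 24) ? "yes" : "no");
    return 0;
}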
Now that Host A knows Host B isn't in the same broadcast domain, it also knows it can't send a broadcast out to resolve Host B's address. How, then, can Host A reach Host B? By directing its packets toward the intervening router. Host A places Host B's IP address in the packet header, but it places the intervening router's MAC address in the packet, as Figure 1-5 shows.
Figure 1-5 Packet Flow Through a Router
Host A puts the router's MAC address on the packet, so the router accepts the packet off the network. The router examines the destination IP address and determines what the next closer hop should be by consulting a routing table (in this case, it is Host B itself), and replaces the MAC address with the correct MAC address for the next hop. The router then transmits the packet back onto a different segment, which is in a different broadcast domain.

Layer 3 Switching
Layer 3 switching looks a lot like routing, as Figure 1-6 illustrates (note that it is the same as Figure 1-5). That's because Layer 3 switching is routing, as far as hosts connected to the network are concerned; there is no functional difference between Layer 3 switching and routing.
NOTE The rest of this book refers to the process of forwarding a packet based on Layer 3 information from a cache or table as "switching," without referencing Layer 3. The rest of the book provides little or no discussion of Layer 2, or traditional, switching; in those places where Layer 2 switching is involved, it will be explicitly stated.
Figure 1-6 Layer 3 Switching

Understanding Router Pieces and Parts
Consider the following components required to switch a packet:
- Some device detects packets that the router needs to receive (pick up off the wire) and process.
- The packet needs to be stored (copied someplace in the router's memory).
- The packet's header must be examined (by the local packet psychologist to see whether the packet is crazy), various options and fields within the packet (such as the Time to Live field) must be processed, and the Layer 2 header needs to be replaced with the correct Layer 2 header for the next hop of the packet's journey.
- The packet must then be given to some device on the outbound interface that will transmit it onto the correct network segment.
Each of these four steps must be performed by some piece of hardware or software within the router.
Most routers use the following fundamental pieces of hardware while switching packets:
- Interface processors
- Central processing unit (CPU)
- Memory
- Backplane and switching fabric
The following sections discuss each of these pieces.

Interface Processors
Interface processors are responsible for the following actions:
- Decoding the electrical or optical signals as they arrive from the physical media
- Translating these signals into 1s and 0s
- Transferring the 1s and 0s into a memory location
- Deciding where the end of a packet is and signaling other devices in the router that there is a new packet to process
Typically, interface processors are commercially available chips designed specifically to decode a particular type of signal and translate it into packets. For example, the older Lance Ethernet chipset is a well-known decoder for Ethernet networks that translates the Ethernet electrical encoding into packets.
Interface processors transfer packets into memory using direct memory access; they will directly copy the packet into a memory location specified by the controlling device (in this case, a Cisco IOS Software device driver, which we cover in the section "Cisco IOS Software: The Brains," later in this chapter). The set of addresses they copy the packets into is stored as a ring buffer, which is illustrated in Figure 1-7.
Figure 1-7 Interface Processor Ring Buffers (eight ring entries, A through H, each pointing to a separate memory location)
Each entry in the buffer points to a different memory location; the first packet copied in off the wire will be placed in the memory location indicated by (pointed to by) A, while the second will be placed in the memory location pointed to by B, the third will be placed in the memory location pointed to by C, and so on. When the interface processor copies a packet into location H, it will loop around the ring and start copying the next packet into location A again. This looping effect is why the transmit and receive buffers are generally called transmit and receive rings.
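The looping behavior just described is easy to model. The short C program below is an illustration only; the slot count, buffer size, and structure layout are assumptions made for this example and are not Cisco data structures. It fills a fixed set of slots in order and wraps the write index from the last slot back to the first, which is exactly why these structures are called rings.

#include <stdio.h>
#include <string.h>

#define RING_SLOTS 8        /* locations A through H in Figure 1-7 */
#define BUF_SIZE   1524     /* enough room for a full Ethernet frame */

struct rx_ring {
    unsigned char buffers[RING_SLOTS][BUF_SIZE]; /* memory each slot points to */
    unsigned int  write;                         /* next slot the hardware fills */
};

/* Copy a received frame into the next slot and advance the write index,
   wrapping around after the last slot. */
static void ring_receive(struct rx_ring *ring, const unsigned char *frame, size_t len)
{
    if (len > BUF_SIZE)
        len = BUF_SIZE;     /* a real driver would count this as an error */
    memcpy(ring->buffers[ring->write], frame, len);
    ring->write = (ring->write + 1) % RING_SLOTS;
}

int main(void)
{
    static struct rx_ring ring;
    unsigned char frame[64] = { 0 };

    for (int i = 0; i < 10; i++)   /* ten frames arrive on an eight-slot ring */
        ring_receive(&ring, frame, sizeof(frame));

    /* After wrapping, the next frame would land in slot 2 (10 mod 8). */
    printf("next write slot: %u\n", ring.write);
    return 0;
}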
Central Processing Unit
The central processing unit (CPU) provides horsepower for any general task the software needs to perform. On some platforms, the CPU does the work required to switch packets, whereas on others, the CPU primarily focuses on control-plane management while hardware specifically tailored to switching packets does the packet switching.

Memory
Cisco routers use memory to store the following:
- Packets while they are being switched
- Packets while they are being processed
- Routing and switching tables
- General data structures, executing code, and so on
Some Cisco platforms have only one type of memory, dynamic random-access memory (DRAM) or synchronous dynamic random-access memory (SDRAM), whereas others have a wide variety of memory available for different purposes. You learn how memory is divided up and used in the section "Memory Management," later in this chapter.

Backplanes and Switching Fabrics
When a packet is switched by a router, it has to be copied from the inbound port to the outbound port in some way; the ports in the router are generally interconnected using some sort of a switching fabric to allow this inbound-to-outbound copying. Cisco routers use three types of interconnections:
- Shared memory
- Crossbar switching fabric
- Bus backplanes
The following sections describe each type.

Shared Memory
In shared memory architectures, packets are copied into a memory location that is accessible by both the input and output interface processors. Figure 1-8 illustrates that as a packet is copied into memory on the inbound side, it is copied into a packet buffer that all interfaces can access. To transmit a packet, the outbound interface copies the packet out of the shared memory location onto the physical wire (encoding the 0s and 1s as needed).
Figure 1-8 Copying Packets Through Shared Memory
The primary speed bottleneck on shared memory architectures tends to be the speed of the memory in which the packets are stored. If the memory cannot be accessed for some amount of time after a packet is written to it, or if the access speed is slow, the interface processors won't be able to copy packets to and from the wire quickly. Most routers that use this architecture use very-high-speed memory systems to accommodate high-speed packet switching.
In some routers, the interface processor is physically separated from the shared memory used to switch packets; for example, in the Cisco 7200 series of routers, the interface processors are physically located on the Port Adapters (PAs), while the shared memory is physically located on the Network Processing Engine (NPE). In these cases, a bus separates the shared memory from the interface processor, as Figure 1-9 illustrates.
Figure 1-9 Copying Packets Through Shared Memory over a Bus (interface processors with local memory buffers on the PA or line card, connected across a bus to the shared memory)
If there is a bus between the interface processor and the shared memory, packets are copied into local memory on the line card and then transferred across the bus to the shared memory, where other line cards can access the packet to copy it back for transmitting. Because each packet must be copied across the bus between the interface processor and the shared memory twice, the bus's bandwidth has a major impact on the performance of the router, as well as on the speed at which packets can be written into and read from the shared memory.
In virtually all shared memory systems, whether there is a bus between the interface processor and the shared memory or not, the work of switching the packet is done by the CPU or some other centralized processor that has access to the shared memory.

Crossbar Switching Fabric
If a switching decision can be made on individual line cards, such as when the line cards have specialized hardware or a separate processor, copying the packet into shared memory of any type between interfaces isn't necessary. Instead, the packet can be transferred directly from the local memory on one line card to the local memory on another line card.
Figure 1-10 illustrates an example of a crossbar switching fabric.
Figure 1-10 Crossbar Switching Fabric (six line cards on the transmit side and six on the receive side, any pair of which can be connected through the fabric)
In a crossbar switching fabric, each line card has two connections to a fabric. At each cycle (or point in time), any given line card's output can be connected to any other line card's input. So, if Line Card 2 receives a packet that it determines needs to be transmitted out a port on Line Card 3, it can ask the switch fabric controller to connect its output to the input of Line Card 3 and transfer the packet directly.
For multicast, the inbound line card can request a connection to multiple line card inputs. For example, in Figure 1-10, Line Card 5's output is connected to both Line Card 6's and Line Card 4's input, so any packets Line Card 5 transmits will be received by both of these line cards at the same time.
The primary advantage of a crossbar switching fabric is that it can be scaled along with the number of line cards installed in a system; each new line card installed represents a new set of input and output connections that can be used in parallel with the other existing connections. As long as each individual connection has enough bandwidth to carry line-rate traffic between any pair of line cards, the entire router can carry line-rate traffic between multiple pairs of line cards.
The bandwidth of the individual lines in the crossbar switching mesh will not help in one instance: if two line cards want to transmit packets to a third line card at the same time. For example, in Figure 1-10, if both Line Cards 1 and 2 want to transmit a packet to Line Card 3, they can't; Line Card 3 has only one input. This problem has several solutions, any and all of which are used in Cisco routers. The first is to schedule the connections in a way that no line card is expected to receive from two other line cards at once. Cisco routers provide various scheduling algorithms and systems to ensure that no line card is starved for transmission, so packets don't bottleneck on the inbound line card.
Another possibility, used on some high-speed line cards on some platforms, is to provide two connections into a given line card so that it can receive packets from two different line cards at the same time. This doesn't completely resolve the problem (scheduling is still required), but it does relieve some of the pressure on line cards that are often used as uplinks to higher-speed networks.

Bus Backplanes
The last type of connection used between line cards is a bus backplane. This is different in a few ways from a shared memory architecture with a bus. Figure 1-11 illustrates such a backplane.
Figure 1-11 Bus Backplane (six line cards and a switching engine attached to a common bus)
Packets that are received by a line card are transmitted onto the backplane as they are received. When the switching engine has received and processed enough of the packet to decide which interface the packet needs to be transmitted out of, the line card with that interface connected is instructed to continue copying the packet off the backplane and is given information about which port the packet should be transmitted out. The remainder of the line cards is instructed to flush this packet from their buffers and prepare to receive the next packet, or do other work as needed.
Figure 1-12 illustrates how this works.
Figure 1-12 Packet Receipt and Transmission Across a Bus Backplane
The following list explains the process illustrated in Figure 1-12:
1. Line Card 1 begins receiving the packet and copies it onto the backplane; all the other line cards begin to store the packet in local buffers.
2. When Line Card 1 has put the packet headers, up through the IP header, onto the backplane, the switching engine consults its local tables to determine the correct output port.
3. The switching engine now signals Line Card 4 that it should continue accepting and storing this packet, and it informs the remaining line cards that there is no reason for them to continue receiving or storing this packet.
4. After Line Card 1 has finished transmitting the packet onto the backplane, Line Card 4 transmits the packet onto the locally connected physical media.
Switching multicast packets over a backplane bus is relatively simple; the switching engine signals several line cards to accept and transmit the packet, rather than just one. Scheduling algorithms and local buffers are used to prevent several line cards from sending packets onto the backplane at the same time.
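The four steps above amount to a flood-then-filter sequence, which a toy simulation can make concrete. The C sketch below is an illustration only; the number of cards, the lookup function, and every name in it are invented for this example and do not represent any Cisco implementation.

#include <stdio.h>

#define NUM_CARDS 6

struct line_card {
    int buffering;   /* 1 while the card is storing the frame seen on the bus */
};

/* Stand-in for the switching engine's table lookup: pick an output card
   for a destination value. */
static int lookup_output_card(int dest)
{
    return dest % NUM_CARDS;
}

int main(void)
{
    struct line_card cards[NUM_CARDS];
    int dest = 9;    /* arbitrary destination used to drive the lookup */

    /* Step 1: the frame appears on the bus; every card starts buffering it. */
    for (int i = 0; i < NUM_CARDS; i++)
        cards[i].buffering = 1;

    /* Step 2: once the headers are on the bus, the engine consults its table. */
    int out = lookup_output_card(dest);

    /* Step 3: only the chosen card keeps the frame; the others flush it. */
    for (int i = 0; i < NUM_CARDS; i++)
        if (i != out)
            cards[i].buffering = 0;

    /* Step 4: the chosen card transmits the frame onto its local media. */
    printf("line card %d keeps and transmits the frame; the rest flushed it\n", out);
    return 0;
}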
Cisco IOS Software: The Brains
Routers aren't just brawn without brains; software is required to provide general control-plane functionality, build tables, and (sometimes) switch packets. Cisco routers run a specialized operating system called Cisco IOS, generally referred to as Cisco IOS Software.
Cisco IOS was originally designed as a small operating system that could be embedded on a single ROM chip, and it has progressed to a full-fledged operating system with memory management, hardware abstraction, process scheduling, and other such services. In the following sections, you look at several aspects of Cisco IOS Software, including how it manages memory, interacts with interface processors, and schedules processes.

Memory Management
In the section "Understanding Router Pieces and Parts," earlier in this chapter, you learned that several types of memory are used, some fast and some slower, depending on what the memory is used for. How does Cisco IOS Software manage this memory? If a process wants to allocate memory in a fast (or shared) area of memory for processing a packet, how does it do so? When the router boots up, Cisco IOS Software divides memory into pools and regions, based on the type of memory used.

Memory Pools
A large variety of memory is available in Cisco routers, depending on the platform; some have just DRAM, others have SDRAM, others have both DRAM and SDRAM, and some have other combinations of memory. Figure 1-13 illustrates some of these memory configurations and shows how they relate to what memory is used for in the router.
Figure 1-13 Memory Types and Their Uses (the figure maps storage, main memory, and packet memory to Flash, RAM, SRAM, and PCI memory on the Cisco 2500, Cisco 4700, Cisco 7200, and Cisco 7200VXR)
With all of these different types of memory available, how does IOS divide it up so that processes can simply indicate what they will use the memory for, and get a piece of memory out of the right memory type? Through hardware abstraction.
Hardware abstraction is the process of creating a layer on top of the hardware that looks the same no matter what hardware it's running on. In this case, you want the same types of logical memory (storage, running software, data structures, and packet memory) to be available to processes running under Cisco IOS Software no matter what physical memory is available. The hardware abstraction used here is memory pools. Figure 1-14 illustrates memory pools.
Figure 1-14 Abstracting Memory Types Using Pools (physical Flash and RAM are presented as the Processor and I/O memory pools, which in turn serve storage, running software, main memory, and packet memory)
When IOS boots on a given router, it determines what sorts of memory are available and builds memory pools as needed. You can examine the layout of the pools on a router by executing the show memory command, as shown in Example 1-1.

Example 1-1 Output of the show memory Command
7206-router#show memory
                Head       Total(b)    Used(b)     Free(b)
Processor       61E2EEE0   94179616    12054120    82125496
I/O             20000000   33554432    224984      33329448
I/O-2           7800000    8388616     3077528     5311088
 --more--

The column on the left lists the memory pools available on this router. This particular router is a Cisco 7206, which has two I/O memory pools, because there are two physical sets of fast memory used for storing and processing packets as they are switched through the router.
NOTE Detailed information about the show commands referenced throughout this chapter is included in the section "Hardware and Software show Commands," later in this chapter.
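To make the pool idea concrete, the following C sketch models only the bookkeeping: a "Processor" pool and an "I/O" pool are carved out of ordinary heap memory by a simple bump allocator, and each reports total, used, and free byte counts in the spirit of the show memory output above. The structure, the names, and the allocation policy are assumptions for this illustration; the real IOS pool manager is far more involved.

#include <stdio.h>
#include <stdlib.h>

/* A toy memory pool: one contiguous block handed out front to back. */
struct mem_pool {
    const char    *name;
    unsigned char *base;
    size_t         total;
    size_t         used;
};

static int pool_init(struct mem_pool *p, const char *name, size_t total)
{
    p->base = malloc(total);
    if (p->base == NULL)
        return -1;
    p->name  = name;
    p->total = total;
    p->used  = 0;
    return 0;
}

/* Hand out the next free bytes from the pool, or NULL if it is exhausted. */
static void *pool_alloc(struct mem_pool *p, size_t bytes)
{
    if (p->used + bytes > p->total)
        return NULL;
    void *block = p->base + p->used;
    p->used += bytes;
    return block;
}

static void pool_show(const struct mem_pool *p)
{
    printf("%-10s total %8zu  used %8zu  free %8zu\n",
           p->name, p->total, p->used, p->total - p->used);
}

int main(void)
{
    struct mem_pool processor, io;

    if (pool_init(&processor, "Processor", 1 << 20) != 0 ||
        pool_init(&io, "I/O", 1 << 18) != 0)
        return 1;

    pool_alloc(&processor, 4096);   /* for example, a routing data structure */
    pool_alloc(&io, 1524);          /* for example, a packet buffer */

    pool_show(&processor);
    pool_show(&io);

    free(processor.base);
    free(io.base);
    return 0;
}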
Memory Regions
Within memory pools, the memory is subdivided into regions. Each region represents a type of memory within the pool. For example, within the processor pool, some memory is used for general data storage (the heap) and other memory is used for executing code (text). Figure 1-15 illustrates these regions within the context of memory pools.
Figure 1-15 Memory Regions (physical Flash, RAM, and SRAM are presented as the Processor and I/O memory pools, which are divided into regions such as main:text, main:data, and main:bss)
You can see what region each memory pool has been divided into by using the show region command, as shown in Example 1-2.

Example 1-2 Output of the show region Command
7206-router#show region
Region Manager:
      Start         End      Size(b)  Class  Media  Name
 0x07800000  0x07FFFFFF      8388608  Iomem  R/W    iomem2
 0x20000000  0x21FFFFFF     33554432  Iomem  R/W    iomem
 0x57800000  0x57FFFFFF      8388608  Iomem  R/W    iomem2:(iomem2_cwt)
 0x60000000  0x677FFFFF    125829120  Local  R/W    main
 0x60008960  0x611682CB     18217324  IText  R/O    main:text
 0x6116A000  0x61B4E17F     10371456  IData  R/W    main:data
 0x61B4E180  0x61E2EEDF      3018080  IBss   R/W    main:bss
 0x61E2EEE0  0x677FFFFF     94179616  Local  R/W    main:heap
 0x70000000  0x71FFFFFF     33554432  Iomem  R/W    iomem:(iomem_cwt)
 0x80000000  0x877FFFFF    125829120  Local  R/W    main:(main_k0)
 0xA0000000  0xA77FFFFF    125829120  Local  R/W    main:(main_k1)

Each region can also have subregions, which are indicated by a colon followed by the subregion name. For example, main has four subregions: main:text, main:data, main:bss, and main:heap. Each of these subregions is used for different types of data. Main:text is marked as read only (R/O in the Media column), because that is where the executing code is stored, and the executing code should not be overwritten. Main:data, however, has been marked as read/write (R/W in the Media column), because this is where general initialized data is kept.
Each region can also have several aliases, which are used to access the memory in different ways. Aliases are indicated by a region name followed by a colon, and then the alias name in parentheses, such as main:(main_k0). This can represent a cached versus an uncached view of the same memory, different access methods, and so on.

Packet Buffers
Packet buffers are where packets are stored while they are being switched (on some platforms) or processed in some way. Packet buffers are created out of the iomem memory region and are managed by a buffer management process.

Types of Buffer Pools
Packet buffers are kept in pools of buffers; there are three ways in which a packet buffer pool can be classified:
- Private or public: Public packet buffer pools contain buffers that can be taken by any process on the router, while only certain processes (generally an interface device driver) can take buffers from a private buffer pool.
- Dynamic or static: Dynamic buffer pools can change size as they are used; frequently used dynamic pools can increase in size, whereas infrequently used pools can shrink. Static buffer pools are created at a fixed size and remain that size throughout their life.
- Size: Buffers available from the pool occur in six sizes: small (up to 104 bytes), middle (up to 600 bytes), big (up to 1524 bytes), very big (up to 4520 bytes), large (up to 5024 bytes), and huge (up to 18,024 bytes).

Packet Buffer Sizes
You might be wondering why packet buffers are such odd sizes. Each packet buffer size is designed to handle packets of a common size. For example, small buffers have enough space in them for 64-byte packets plus the overhead of any Layer 2 (MAC) headers. Others are set up for the maximum transmission unit size of a given link type. Big buffers fit Ethernet packets plus any Layer 2 data. The huge buffers are much larger than the largest maximum transmission unit (16,384 bytes); they are used for terminating tunnels and other sorts of virtual interfaces, where the packet must be reassembled in the packet buffer before it can be switched.

Private buffer pools are normally attached to an interface; in the Cisco IOS Software command show buffers, they are normally called interface buffers. Example 1-3 shows output from the show buffers command.

Example 1-3 Interface Buffer Pools in the Output of the show buffers Command
2651-router#show buffers
....
Interface buffer pools:
CD2430 I/O buffers, 1524 bytes (total 0, permanent 0):
     0 in free list (0 min, 0 max allowed)
     0 hits, 0 fallbacks
....

Example 1-4 shows public buffers in the show buffers command.

Managing Buffer Pools
To explain buffer pool management, this section examines a series of operations on a buffer pool as an example. You'll examine the output of the show buffers command for the small public buffer pool through a series of buffers being consumed for packets received, packet buffer creation, and so on.
Figure 1-16 shows the beginning state of a packet buffer.Figure 1-16 Buffer Pool Management, Step 1Example 1-5 shows that the buffer pool starts with ve buffers in the pool, which is the number of permanent buffers. All the other counters start at 0.Example 1-4 Public Buffer Pools in the Output of the show buffers Command2651-router#show buffers....Public buffer pools:Small buffers, 104 bytes (total 5, permanent 5): 2 in free list (2 min, 5 max allowed) 3 hits, 0 misses, 0 trims, 0 created 0 failures (0 no memory) Example 1-5 Public Buffer Pool from the show buffers Command in Initial Staterouter#show buffers....Public buffer pools:Small buffers, 104 bytes (total 5, permanent 5): 5 in free list (2 min, 5 max allowed) 0 hits, 0 misses, 0 trims, 0 created 0 failures (0 no memory)BufferBufferBufferBufferBufferBuffers in the poolBuffers used byanother processStep 122 Chapter 1:Introduction to Packet-Switching ArchitecturesA process pulls three buffers from the pool, as illustrated in Figure 1-17.Figure 1-17 Buffer Pool Management, Step 2Example 1-6 shows that the number of buffers now in the free list is two, and there are three hits, because three buffers were used from this pool.The same process now uses two more buffers from the same pool; the result is illustrated in Figure 1-18. Example 1-7 now shows ve hits, because ve buffers have been pulled from this pool successfully, and none are in the free list. Each buffer pulled from the pool in this step has dropped the number of free buffers in the pool below the minimum free allowed, so each one is counted as a miss (even though the buffer was supplied to the other process successfully).Example 1-6 Public Buffer Pool After Three Hitsrouter#show buffers....Public buffer pools:Small buffers, 104 bytes (total 5, permanent 5): 2 in free list (2 min, 5 max allowed) 3 hits, 0 misses, 0 trims, 0 created 0 failures (0 no memory)BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffers in the poolBuffers used byanother processStep 1 Step 2Cisco IOS Software: The Brains 23Figure 1-18 Buffer Pool Management, Step 3The pool manager now runs and notes that this buffer pool has fewer free buffers than it is allowed. So it creates two more buffers so that the pool has at least the minimum number of free buffers allowed. This is illustrated in Figure 1-19.Figure 1-19 Buffer Pool Management, Step 4Example 1-7 Public Buffer Pool After Five Hitsrouter#show buffers....Public buffer pools:Small buffers, 104 bytes (total 5, permanent 5): 0 in free list (2 min, 5 max allowed) 5 hits, 2 misses, 0 trims, 0 created 0 failures (0 no memory)BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffers in the pool Buffers used by another process Step 1 Step 2 Step 3BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffers in the pool Buffers used by another process Step 1 Step 2 Step 3 Step 424 Chapter 1:Introduction to Packet-Switching ArchitecturesExample 1-8 now shows two buffers in the free list and two created buffers, because the pool manager had to create two buffers to reach the minimum free allowed.The other process now requests three more buffers; the results are shown in Figure 1-20.Figure 1-20 Buffer Pool Management, Step 5Example 1-9 shows that the pool is now back down to 0 in the free list. Because two more buffers were successfully pulled from the pool, the output shows seven hits. 
Because the two buffers used from the pool each dropped the number of buffers in the free list below the Example 1-8 Public Buffer Pool After Two Are CreatedPublic buffer pools:Small buffers, 104 bytes (total 7, permanent 5): 2 in free list (2 min, 5 max allowed) 5 hits, 2 misses, 0 trims, 2 created 0 failures (0 no memory)BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffers in the poolBuffers used byanother processStep 1 Step 2 Step 3 Step 4 Step 5Cisco IOS Software: The Brains 25allowed minimum number of free buffers, the output shows ve misses. The third miss added is because of the buffer that the pool failed to supply; this is also counted as a miss.The process that is using these seven small buffers returns all but one of them to the pool. The result is illustrated in Figure 1-21.Figure 1-21 Buffer Pool Management, Step 6Example 1-9 Public Buffer Pool After Five Missesrouter#show buffers....Public buffer pools:Small buffers, 104 bytes (total 7, permanent 5): 0 in free list (2 min, 5 max allowed) 7 hits, 5 misses, 0 trims, 2 created 1 failures (0 no memory)BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffer BufferBufferBufferBufferBuffers in the poolBuffers used byanother processStep 3 Step 4 Step 5 Step 626 Chapter 1:Introduction to Packet-Switching ArchitecturesExample 1-10 shows six buffers in the free list, but only ve are allowed to be in the free list. The next time the pool manager runs, it will trim one of these buffers, which essentially means that it will release the memory associated with it. The result is shown in Figure 1-22. Figure 1-22 Buffer Pool Management, Step 7Example 1-11 shows that because the pool manager has now trimmed one buffer from this pool, the free list is down to ve and one trim is recorded.Example 1-10 Public Buffer Pool After a Trimrouter#show buffers....Public buffer pools:Small buffers, 104 bytes (total 7, permanent 5): 6 in free list (2 min, 5 max allowed) 7 hits, 5 misses, 0 trims, 2 created 1 failures (0 no memory)Example 1-11 Public Buffer Pool After Final Trim router#show buffers....Public buffer pools:Small buffers, 104 bytes (total 7, permanent 5):BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffer BufferBufferBufferBufferBufferBufferBufferBufferBufferBufferBuffers in the poolBuffers used byanother processStep 4 Step 5 Step 6 Step 7Cisco IOS Software: The Brains 27Finally, the other process returns the last buffer, which will again bring the free list up to six buffers. The pool manager will again trim this to ve buffers, because the maximum free is ve buffers, and the number of permanent buffers in the pool should be ve. The result is illustrated in Figure 1-23.Figure 1-23 Buffer Pool Management, Step 8The output of the show buffers command in Example 1-12 shows this public buffer pool in its nal state. 
Finally, the other process returns the last buffer, which will again bring the free list up to six buffers. The pool manager will again trim this to five buffers, because the maximum free is five buffers, and the number of permanent buffers in the pool should be five. The result is illustrated in Figure 1-23.

Figure 1-23 Buffer Pool Management, Step 8

The output of the show buffers command in Example 1-12 shows this public buffer pool in its final state.

Example 1-12 Public Buffer Pool in Final State

router#show buffers
....
Public buffer pools:
Small buffers, 104 bytes (total 7, permanent 5):
  4 in free list (2 min, 5 max allowed)
  7 hits, 5 misses, 2 trims, 2 created
  1 failures (0 no memory)

Interaction with Interface Processors

In this chapter, you've learned how interface processors copy incoming packets from the network media to locations in the router's memory (either shared memory or memory that is local to a line card). You learn how the packet is switched after it is copied into the router's memory in Chapter 2, "Cisco Express Forwarding."

But after the packet is in memory, device drivers notify the switching engine that there is a packet to be switched. Device drivers are the critical pieces of code that connect the interface processors to the switching path in the router. Device drivers generally do several things:

- Classify packets as they are received so that the right switching code is called to process the packet (IP, IPX, AppleTalk, and so on)
- Make certain that the receive and transmit rings used by the interface processors to copy packets don't run out of space (if possible)
- Program the interface controller with any needed information, such as which Layer 2 (MAC) multicast addresses the interface processor should be listening to

Processes and Scheduling

Cisco IOS Software, like all operating systems, uses processes: individual "applications" that perform certain operations within the context of the operating system. Some of the processes that run within Cisco IOS Software include:

- The pool manager, which manages the memory pools described in the section "Memory Management," earlier in this chapter.
- IP input, which processes Internet Protocol (IP) packets that are not switched in some other way. More information on this is contained in the section "Putting the Pieces Together: Switching a Packet," later in this chapter.
- Each of the routing protocols, such as Open Shortest Path First (OSPF), Border Gateway Protocol (BGP), Enhanced Interior Gateway Routing Protocol (EIGRP), and Intermediate System-to-Intermediate System (IS-IS), has one or more processes.

You'll learn about some of the specific processes that run within Cisco IOS Software in later sections.

Process Memory

In some operating systems, each process is given a block of memory that other processes in the system cannot access. Each process is protected from every other process, because no process can write in another process's memory space. Cisco IOS Software, however, uses one flat memory space for all processes, although it does protect some memory space from being written into (by marking it read only through the pool manager, as noted in the section "Memory Management," earlier in this chapter).

Both models have advantages and disadvantages. For example, in operating systems where processes are protected from writing into each other's space, one process cannot corrupt another process's memory, so the system might not experience as many failures because of memory corruption. On the other hand, operating systems that use a flat memory model allow processes to share information easily and quickly.
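In a flat memory model, sharing a packet or a table between processes can be as cheap as handing over a pointer, because every process sees the same address space. The short sketch below is illustrative only; none of these names come from Cisco IOS, and it simply shows two "processes" working on the same buffer without a copy, which is the kind of speed the flat model buys at the cost of any process being able to overwrite that memory.

#include <stdio.h>

/* Sketch only: pointer sharing in a flat memory model. */
struct packet_buffer {
    unsigned char data[104];   /* a "small buffer" sized payload */
    int length;
};

/* Both routines see the same address space, so passing work between
 * them is just passing a pointer. */
static void input_process(struct packet_buffer *pkt)
{
    pkt->length = 64;          /* pretend 64 bytes were received */
}

static void switching_process(const struct packet_buffer *pkt)
{
    printf("switching %d-byte packet without copying it\n", pkt->length);
}

int main(void)
{
    struct packet_buffer buf;  /* one buffer, visible to both */
    input_process(&buf);
    switching_process(&buf);
    return 0;
}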
Because almost all Cisco IOS Software was originally written with simplicity and speed as the top priorities, a flat memory model was chosen. The show processes memory command, which is explained in the section "Hardware and Software Show Commands," later in this chapter, provides information about the memory being used by each process.

Process Scheduling

If there are multiple processes running, and only one can run on the processor at any time, the scheduler determines which process runs at any given moment. The following sections look at how the scheduler works by examining the scheduler itself, the life cycle of a process, process priorities, and how the scheduler decides which process should be run at any given time. These sections also examine the process watchdog.

Understanding the Scheduler

To understand the scheduler, you need to begin by noting that Cisco IOS Software is a nonpreemptive multitasking system, which means that when a process begins running, the process itself decides when to release the processor (within limits, as discussed in the section "Process Watchdog," later in this chapter).

The scheduler itself, then, should be lightweight, primarily responsible for ordering process priorities, determining which processes should be moved into which states periodically, and determining the next process to be run after a process releases the processor.

Although Cisco IOS includes a scheduler, it does not include a scheduler process; in other words, the scheduler doesn't exist as a stand-alone process that gains access to the CPU from time to time. Instead, the scheduler simply runs each time a process releases the processor. It first checks to see whether any processes should change state and then determines which process should run next.

Process Life Cycle

Figure 1-24 illustrates the life cycle of a process within Cisco IOS Software.

Figure 1-24 Life Cycle of a Process in Cisco IOS Software

A process within Cisco IOS Software can be in one of five states:

- New: The process has just been created and has no resources. It has never been scheduled to run.
- Ready: The process is ready to run and should be scheduled on the processor the next time it is eligible.
- Run: The process is currently running.
- Idle: The process is waiting for some event to become ready to run.
- Dead: The process is dead; it is being cleaned up.

The details of the life cycle of processes within Cisco IOS Software are as follows:

1. Processes are created in the new state; at this point, the process doesn't have any resources, nor is there any way for the process to be scheduled to run. When you first enter a routing protocol subconfiguration mode, for example, the protocol's routing process is created in the new state.
2. After a process is created, it is modified and placed in the ready state. Modification primarily entails assigning resources to the process, such as initialized and uninitialized data segments, access to the console, and so on. Processes in the ready state are eligible to be scheduled for execution on the processor.
3. From the ready state, the process is scheduled and moved to the run state.
There is no way for a process that is in the ready state to move directly to the idle state; it must run to become idle.

4. A process that is running can complete, which means that it reaches the end of its task and finishes. The next time this process needs to be run, it will need to be re-created, modified, placed in the ready state, scheduled, and finally run. This occurs when a routing protocol is removed from the router, for example. Some show commands issued from the enable prompt create processes that run only until the show output is generated; they then complete and move into the dead state.
5. A process that is running can suspend, which means that it has more work to do, but is at a point at which it can quit processing for some time and allow other processes that might be waiting to be scheduled and placed in the run state. Processes that suspend are placed in the ready state, which means that they are eligible to be scheduled and run the next time the scheduler runs.
6. A process that is running can also wait, which means it finds that it cannot process any more information until some piece of information is available, or until some other process runs and does some work on which it is dependent. One example of this is when a routing protocol process finishes processing all the packets that the lower-layer protocols have passed to it, and the routing protocol process has no further work to do. The routing protocol process would then wait and enter the idle state.
7. A process in the idle state can be killed, for example, when the user removes the protocol or prematurely ends the process by interrupting some output. Processes in the idle state that are killed are moved into the dead state.
8. If a process in the idle state is waiting on data that becomes available, or if an event occurs that indicates the process has new work to do, the process is moved to the ready state, where it is eligible to be scheduled and then run.

The current state of a process is indicated in the output of the show processes command (issued from the enable prompt), as shown in Example 1-13.

Example 1-13 Sample Output from the show processes Command

7206#show processes
CPU utilization for five seconds: 0%/0%; one minute: 0%; five minutes: 0%
 PID QTy       PC Runtime (ms)  Invoked  uSecs     Stacks TTY Process
   2 M*         0            8     8693        9888/12000   0 Exec
   3 Lst 60655C58       345736  1297332    664   5740/6000   0 Check heaps
....

The Ty column in this output, highlighted in Example 1-13, indicates the process state, as follows:

- *: Process is running on the CPU (you'll almost always see exec as the current process on the processor, because it is almost always the place you run this command from).
- E: Process is waiting on an event.
- S: Process is suspended.
- rd: Process is ready to run.
- we: Process is idle and waiting on an event.
- sa: Process is idle until a specific (absolute) time occurs.
- si: Process is idle until a specific amount of time passes.
- sp: Process is idle until a specific amount of time passes; this state is used for processes that run periodically.
- st: Process is idle until a timer expires.
- hg: Process is hung.
- xx: Process is dead.
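The life cycle just described is a small state machine, and the rule that a ready process cannot move straight to idle (it must run first) is easy to capture in code. The sketch below is purely illustrative; it is not how Cisco IOS represents process state internally, and all of the names are made up.

#include <stdio.h>

/* Toy model of the five process states and the legal transitions
 * described in the life cycle above.  Not Cisco IOS source. */
enum proc_state { PS_NEW, PS_READY, PS_RUN, PS_IDLE, PS_DEAD };

/* Returns 1 if the transition is allowed by the life cycle above. */
static int transition_allowed(enum proc_state from, enum proc_state to)
{
    switch (from) {
    case PS_NEW:   return to == PS_READY;                  /* modify        */
    case PS_READY: return to == PS_RUN;                    /* schedule only */
    case PS_RUN:   return to == PS_READY ||                /* suspend       */
                          to == PS_IDLE  ||                /* wait          */
                          to == PS_DEAD;                   /* complete      */
    case PS_IDLE:  return to == PS_READY ||                /* event occurs  */
                          to == PS_DEAD;                   /* killed        */
    case PS_DEAD:  return 0;                               /* cleaned up    */
    }
    return 0;
}

int main(void)
{
    printf("ready -> idle allowed? %d\n", transition_allowed(PS_READY, PS_IDLE)); /* 0 */
    printf("run   -> idle allowed? %d\n", transition_allowed(PS_RUN, PS_IDLE));   /* 1 */
    return 0;
}

A scheduler that honors this table can only ever take a ready process onto the processor; everything else has to happen while the process is running or idle, which matches the eight steps above.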
Process Priorities

Some processes are more important than others; processes such as those that allocate and manage the buffers in which packets are stored should be given a higher priority than others. In fact, every process within Cisco IOS Software has a process priority, and the scheduler uses these priorities when scheduling processes.

There are four priorities that can be assigned to a Cisco IOS Software process:

- Critical: Processes that allocate resources for other processes and must be run periodically for the router to operate properly. For example, the pool manager process is critical.
- High: Processes that require very quick response times; Net Input, which accepts packets off of interfaces, is high priority.
- Medium: The default process priority; almost all processes run at this priority.
- Low: Periodic background processes that aren't required for router operation; Logger, which records syslog messages, is a low-priority process.

Scheduling Processes

The scheduler moves processes from one of the five possible states to another based on events occurring between scheduler runs and the priority of each process in the system. Figure 1-25 illustrates the scheduling of a process, beginning and ending with the process in the idle state.

Figure 1-25 A Process Moving from Idle to Run

As Figure 1-25 illustrates, there are actually four queues of ready-to-run processes (or four ready queues), rather than one: one for each process priority.

The way in which processes are scheduled and run, as well as moved between the various queues, is explained in the following list (a short code sketch of this selection logic follows the list):

1. Process A begins in the idle state, waiting on some system event before it needs to run again. When the scheduler runs, it finds that the event that Process A was waiting on has occurred, so it moves the process from the idle queue to the high-priority ready queue.
2. Now that Process A has been moved from the idle queue to the high-priority ready queue, the scheduler needs to decide which process should be run. It begins with the critical ready queue and finds that Process B is ready to run, so it schedules Process B.
3. While Process B is running, it finds that it needs to wait until some system event occurs to complete further processing, and Process B is placed on the idle queue.
4. The scheduler now examines the critical queue again and finds that Process D is ready to run. Process D is scheduled.
5. Process D uses the processor for some time and finds that it needs to wait for a system event as well before continuing to process, so the scheduler places Process D in the idle queue.
6. The scheduler now examines the critical queue and finds it is empty, so it examines the high-priority ready queue to see whether any high-priority processes are ready to run. It finds Process A ready to run in the high-priority ready queue, so it schedules it.
7. Process A runs for some time and finds it needs to wait on some other event or piece of information, so the scheduler moves it back to the idle queue.
8. The scheduler now examines the high-priority ready queue and finds no processes ready to run, but it does find Process C on the medium-priority ready queue; it schedules this process to run next.
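The following sketch shows one way the selection logic in this walkthrough could be written. It is not Cisco IOS code; the queue type and function names are hypothetical, and it assumes simple FIFO ready queues, one per priority.

#include <stddef.h>
#include <stdio.h>

/* Sketch only: four ready queues and the priority scan described
 * above.  All names are invented; this is not Cisco IOS source. */
enum priority { PRIO_CRITICAL, PRIO_HIGH, PRIO_MEDIUM, PRIO_LOW, PRIO_COUNT };

struct process { const char *name; struct process *next; };
struct queue   { struct process *head, *tail; };

static void enqueue(struct queue *q, struct process *p)
{
    p->next = NULL;
    if (q->tail) q->tail->next = p; else q->head = p;
    q->tail = p;
}

static struct process *dequeue(struct queue *q)
{
    struct process *p = q->head;
    if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
    return p;
}

/* Runs each time the current process releases the processor.  In the
 * real system the scheduler would first move idle processes whose
 * events have occurred onto the ready queue that matches their
 * priority, then pick from the highest-priority nonempty ready queue. */
static struct process *schedule_next(struct queue ready[PRIO_COUNT])
{
    for (int prio = PRIO_CRITICAL; prio < PRIO_COUNT; prio++)
        if (ready[prio].head)
            return dequeue(&ready[prio]);
    return NULL;                     /* nothing is ready to run */
}

int main(void)
{
    struct queue ready[PRIO_COUNT] = { { NULL, NULL } };
    struct process a = { "A (high)" }, b = { "B (critical)" }, c = { "C (medium)" };

    enqueue(&ready[PRIO_HIGH], &a);
    enqueue(&ready[PRIO_CRITICAL], &b);
    enqueue(&ready[PRIO_MEDIUM], &c);

    for (struct process *p; (p = schedule_next(ready)) != NULL; )
        printf("running %s\n", p->name);   /* B, then A, then C */
    return 0;
}

The rules listed next are exactly what this loop encodes: lower-priority queues are examined only when every higher-priority queue is empty.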
Using this example, the following rules summarize the scheduler's operation:

- The scheduler runs as each process releases the processor.
- Each process in the idle and waiting states is examined; if an event has occurred that a process was waiting on, the process is moved to the appropriate ready queue (based on its priority).
- If there are any processes in the critical-priority ready queue, one of them is scheduled, and the scheduler releases the processor.
- If no critical processes are ready to run, the high-priority ready queue is examined. If high-priority processes are ready to run, one is scheduled, and the scheduler releases the processor.
- If no critical- or high-priority processes are ready to run, the medium-priority ready queue is checked. If medium-priority processes are ready to run, one of them is scheduled to run, and the scheduler releases the processor.
- If no critical-, high-, or medium-priority processes are ready to run, the low-priority ready queue is checked.

Process Watchdog

If a process runs until it is ready to release the processor, can't it run forever, keeping any other processes from running? No, because each process is carefully watched by the watchdog timer to make certain that it doesn't run for too long. Every 4 milliseconds, the watchdog timer interrupts the processor, adjusts the system clock, and does other background tasks.

While doing these things, the watchdog also checks to see which process is running on the processor. If it finds a single process running on the processor for more than 2 seconds, it flags the process as a processor hog (also called a CPU hog). The next time the scheduler runs, it notes that the process releasing the processor has been flagged as a CPU hog, and it prints a diagnostic message that the Cisco Technical Assistance Center (TAC) can use to determine which process ran too long.

If a process runs for 4 seconds, it will be terminated.

Special Processes

There are three special processes shown in the output of the show processes memory command, but not in the show processes command. Example 1-14 shows these processes.

Example 1-14 Special Processes in the Output of the show processes memory Command

router#show processes memory
Total: 35629228, Used: 5710380, Free: 29918848
 PID TTY  Allocated      Freed    Holding  Getbufs  Retbufs Process
   0   0      74032       1808    3358588        0        0 *Init*
   0   0       1116     275416       1116        0        0 *Sched*
   0   0 1290760868 1283157048    1223504   164928        0 *Dead*

What do the Init, Sched, and Dead processes do, and why don't they show up in show processes? They don't show up in show processes because they aren't processes in the proper sense. They are simply markers to indicate where certain pieces of memory are being used, as follows:

- Init: Accounts for the memory allocated before the scheduler, memory pool manager, and other such processes come up.
- Sched: Accounts for the memory used by the process scheduler; the scheduler isn't a separate process.
- Dead: Accounts for memory that is allocated to processes that have entered the dead state.

Putting the Pieces Together: Switching a Packet

Now that you've examined all the individual pieces used in a Cisco router to switch packets, the following sections put them all together so that you can see how they interact.

Getting the Packet off the Network Media

Beginning with the packet being handled by the interface processor, there are a bewildering number of ways in which a packet can be copied off the network media. The method used depends on the hardware platform involved.
Three different general types of platforms are discussed in the following sections:

- Shared memory
- Line cards with a centralized processor
- Line cards with distributed switching

Inbound Packets on Shared Memory Platforms

The inbound packet path on a shared memory platform, such as the Cisco 2600, is illustrated in Figure 1-26.

Figure 1-26 Inbound Packet Handling on Shared Memory Platforms

The following list explains the process of handling inbound packets on shared memory platforms:

1. The interface processor detects the optical or electrical signals that indicate a new packet is arriving on the interface. The Layer 2 address is read and checked to make certain that the router should receive and process this packet, and the packet is copied into the packet buffer pointed to by the next receive (RX) ring entry in the interface processor.
2. After the packet has been copied into the memory location pointed at by the RX ring, the interface processor interrupts the processor in the router to indicate that a packet has been received. This is called the receive interrupt.
3. The device driver now runs and immediately reprograms the RX ring so that it points at a packet buffer from some other memory pool. The packet buffer now used by the RX ring is removed from the pool of packet buffers. The device driver also classifies the packet and determines which process should switch it.

Inbound Packets on Centralized Switching Routers with Line Cards

Routers that have line cards but perform switching on a centralized processor, such as the Cisco 7200 series of routers, need to pass packets received on their interfaces through one more step before being able to switch them. Packets must be moved across the bus that connects the line cards to the centralized processor, as illustrated in Figure 1-27.

Figure 1-27 Inbound Packets on Centralized Switching Routers with Line Cards

The following list explains the process:

1. The interface processor detects the optical or electrical signals that indicate a new packet is arriving on the interface. The Layer 2 address is read and checked to make certain that the router should receive and process this packet, and the packet is copied into the packet buffer pointed to by the next receive (RX) ring entry in the interface processor.
2. After the packet has been copied into the memory location pointed at by the RX ring, the interface processor interrupts the processor in the router to indicate that a packet has been received. This is called the receive interrupt.
3. The device driver now runs and immediately reprograms the RX ring so that it points at a packet buffer from some other memory pool. The packet buffer now used by the RX ring is removed from the pool of packet buffers.
4. The device driver copies the packet across the bus so that it is in memory on the route processor. The device driver also classifies the packet and determines how it should be switched.
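Steps 2 and 3 in both lists are where the device driver does its most time-critical work: inside the receive interrupt, it must hand the interface processor a fresh buffer before more packets arrive. The sketch below shows the general shape of that work. It is not a real Cisco IOS driver; the structures, fields, and function names are all hypothetical.

#include <stdio.h>

/* Sketch of a receive-interrupt handler; purely illustrative, with
 * invented names.  The interface processor is assumed to have already
 * copied the arriving packet into the buffer the RX ring entry points at. */
struct packet_buffer { unsigned char data[1524]; int length; };

struct rx_ring_entry { struct packet_buffer *buffer; };

static struct packet_buffer pool[8];    /* stand-in for a buffer pool */
static int next_free;

static struct packet_buffer *pool_getbuffer(void)
{
    return &pool[next_free++ % 8];      /* real code would track hits and misses */
}

static void hand_to_switching(struct packet_buffer *pkt)
{
    printf("classify and switch %d-byte packet\n", pkt->length);
}

static void receive_interrupt(struct rx_ring_entry *slot)
{
    struct packet_buffer *pkt = slot->buffer;   /* the buffer the hardware filled */

    /* Replace the ring entry immediately, so the interface processor
     * never writes into a buffer the router is still working on. */
    slot->buffer = pool_getbuffer();

    /* Classification and switching happen while still inside the
     * receive interrupt. */
    hand_to_switching(pkt);
}

int main(void)
{
    struct rx_ring_entry slot = { pool_getbuffer() };
    slot.buffer->length = 64;           /* pretend the hardware wrote 64 bytes */
    receive_interrupt(&slot);
    return 0;
}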
Inbound Packet Handling on Distributed Switching Platforms

The inbound packet handling path on platforms that switch packets on the line cards themselves, such as the Cisco 12000 and 7500 series of routers, is illustrated in Figure 1-28.

Figure 1-28 Inbound Packet Handling on Distributed Switching Platforms

The following list explains the process of handling packets on distributed switching platforms:

1. The interface processor detects the electrical or optical signals that indicate a packet is being received. The interface processor checks the Layer 2 address (MAC address) of this packet to see whether the router needs to receive or process it. As the packet is received, it is copied into a packet buffer on the line card based on the pointer contained in the receive ring.
2. The interface processor now interrupts the processor on the line card to notify it that a packet has been received. This is the receive interrupt.
3. The device driver runs during this interrupt; it immediately replaces the packet buffer on the receive ring (by reprogramming the interface processor) and then classifies the packet to determine how it should be switched.

As you can see from the illustrations, the process on all three types of platforms is similar; the primary difference is which processor is interrupted to notify the router of a new packet that needs to be processed, and where the packet sits in memory when the actual switching of the packet begins.

Switching the Packet

Now that the packet has been copied off the network media into a packet buffer on the router, it can be switched. A Cisco router can use two basic methods to switch packets:

- Hardware-based switching
- Software-based switching

Hardware-based switching exists in many different forms on Cisco routers, from the Cisco 12000 series and Cisco 6500 series, which use custom-designed ASICs, to the Cisco 10000, which uses a programmable ASIC called the Toaster to perform Parallel Express Forwarding (PXF). The following sections focus on software packet switching.

Switching the Packet Quickly During the Receive Interrupt

Recall from the preceding section that when a packet is received and copied into memory, the processor is interrupted and the device driver code runs. After the device driver has run, the switching code runs while the processor is still in the context of this receive interrupt.

Figure 1-29 illustrates the steps taken to software-switch the packet while it is still within the receive interrupt.

Figure 1-29 Software-Switching a Packet in the Receive Interrupt
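Before walking through Figure 1-29 step by step, the overall shape of this fast path can be summarized: everything happens inside the receive interrupt, and only packets that cannot be switched immediately are handed off to a process. The fragment below is illustrative only; the names are invented and it is not Cisco IOS source.

#include <stdio.h>

/* Sketch of the decision made by the switching code while still in
 * the receive interrupt.  Invented names; not Cisco IOS source. */
struct packet_info {
    int in_switching_table;   /* forwarding information already known?  */
    int for_this_router;      /* addressed to the router itself?        */
};

static void switch_in_receive_interrupt(const struct packet_info *p)
{
    if (!p->in_switching_table || p->for_this_router) {
        /* Punt: queue a pointer to the buffer on the input queue of
         * the appropriate process and deal with it later. */
        puts("queued to a process input queue");
        return;
    }
    /* Otherwise rewrite the header and hand the packet toward the
     * outbound interface before the interrupt is released. */
    puts("rewritten and handed toward the outbound interface");
}

int main(void)
{
    struct packet_info transit = { 1, 0 };   /* known destination       */
    struct packet_info control = { 1, 1 };   /* e.g., a routing update  */
    switch_in_receive_interrupt(&transit);
    switch_in_receive_interrupt(&control);
    return 0;
}

The numbered list that follows walks through these same decisions in detail, including what "handed toward the outbound interface" actually involves.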
The following list explains the process:

1. The switching code consults the switching table to determine whether the packet's destination is known, the next hop's MAC header is known, and the outbound interface is known.
2. If the information needed to switch the packet is available in the switching table, and the packet is not destined for the router itself, the packet header is modified. If the information needed to switch the packet is not available in the switching table, or if the packet is destined for the router itself, the buffer the packet is in is placed on the input queue of the appropriate process to be dealt with later. Note that the packet itself is not moved; instead, a pointer to the packet buffer that the packet was originally copied into is placed on the input queue of the appropriate process. The next section in this chapter explains what happens to the packet after it is placed on the input queue of a process.
3. After the MAC header is rewritten, the output queue of the outbound interface is checked. If any packets are in the output queue of the outbound interface, a pointer to the buffer that the packet is in is copied into this output queue.
4. If no other packets are on the output queue of the outbound interface, the transmit ring of the outbound interface is then checked. If there is space on the transmit ring of the outbound interface, a pointer to the buffer containing the packet is placed on the transmit ring.
5. If the transmit ring is full, a pointer to the buffer containing the packet is placed on the output queue of the outbound interface.

After the packet has been moved into the appropriate place for further processing (the input queue of some process, the output queue of the outbound interface, or the transmit ring of the outbound interface), the receive interrupt is released, and the processor continues running the process that was interrupted by the received packet.

Out-of-Order Packets and Fancy Queuing

These last three steps might seem odd when you first read them: Why would the router check the output queue for packets before deciding whether to transfer the packet to the transmit ring? To prevent out-of-order packets and to make fancy queuing mechanisms, such as weighted fair queuing (WFQ), do their job.

Let's start with out-of-order packet prevention. Assume that a single packet is received and is placed in the input queue of some process for later processing. As this packet is processed, it builds the information needed to switch future packets of this type and adds it to the switching table. The packet is finally placed on the output queue for the outbound interface.

Another packet of this same type now arrives, and during the receive interrupt, switching information for this packet is found in the switching table. If the router now uses this information to switch the packet, and places it directly on the transmit ring of the outbound interface, the second packet will be transmitted before the first. To prevent this sort of error, the output queue of the outbound interface is checked before packets are placed directly on the transmit ring of the outbound interface.

Fancy queuing of any type would also be ineffective if all the packets being switched during the receive interrupt were placed directly on the transmit ring of the outbound interface: there wouldn't be any way to make certain that packets which should be transmitted first really will be, because all fancy queuing takes place between the output queue and the transmit ring of the interface.
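The ordering rule in steps 3 through 5, and the reasoning above, condense into a single decision. The sketch below is illustrative only (the names are invented, and a real driver deals with ring descriptors rather than simple counters), but it captures the rule: the transmit ring is used directly only when nothing is already waiting in the output queue.

#include <stdio.h>

/* Sketch of the egress decision made at the end of the fast path.
 * Invented names; not Cisco IOS source. */
struct interface {
    int output_queue_depth;   /* packets waiting in the output queue */
    int tx_ring_free_slots;   /* free entries on the transmit ring   */
};

/* Returns a short description of where the packet pointer goes. */
static const char *place_outbound_packet(struct interface *ifc)
{
    /* Anything already waiting must go first, so join the back of the
     * output queue; this is what prevents out-of-order delivery and
     * lets fancy queuing reorder traffic before it reaches the ring. */
    if (ifc->output_queue_depth > 0 || ifc->tx_ring_free_slots == 0) {
        ifc->output_queue_depth++;
        return "output queue";
    }
    ifc->tx_ring_free_slots--;
    return "transmit ring";
}

int main(void)
{
    struct interface fa0 = { 0, 4 };     /* empty queue, room on the ring */
    printf("first packet:  %s\n", place_outbound_packet(&fa0));  /* ring  */

    struct interface fa1 = { 3, 4 };     /* packets already queued        */
    printf("second packet: %s\n", place_outbound_packet(&fa1));  /* queue */
    return 0;
}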
Process-Switching the Packet

Figure 1-30 illustrates what happens when a packet is placed on the input queue of some process to be switched or otherwise processed.

Figure 1-30 Process-Switching a Packet

The following list explains process-switching a packet:

1. The processor finishes the currently running process and turns control over to the scheduler. The scheduler notes that there is a packet on the input queue of one of the switching processes and moves that process into the ready queue. The process is scheduled and runs.
2. The process examines the packet header and looks in the routing table to determine the outbound interface and next hop.
3. The switching process looks up the next hop in the Layer 2 tables to determine what Layer 2 (MAC) header to use when forwarding the packet.
4. The switching process rewrites the Layer 2 (MAC) header.
5. The packet is removed from the input queue of the switching process and placed on the output queue of the interface. (The packet isn't actually moved; the pointer to the buffer the packet is in is moved.)

Input and Output Queues

Two queues are mentioned here that aren't explained elsewhere: the input queue and the output queue. There are multiple input and output queues within Cisco IOS Software. Figure 1-31 illustrates these queues.

Figure 1-31 Input and Output Queues

Each process that either switches or otherwise handles packets has an independent input queue, and each interface that can transmit packets has an independent output queue. As packets are received that either must be switched by one of the switching processes or must be processed by the router, they are placed in the input queue for the appropriate process, as shown with the line marked 1 in Figure 1-31. After they have been switched by the correct process, they are placed in the output queue of the output interface, as shown by line 2.

Packets that are destined for the router, such as routing protocol control packets and echo requests, are passed into a socket queue by the switching processes. Packets that are generated by processes on the router are queued directly to the correct interface's output queue.

The output of the show interface command in Cisco IOS Software shows both queues, which is confusing until you understand what the output is indicating.

router#show interface fastethernet 0/0
FastEthernet0/0 is up, line protocol is up
  Hardware is DEC21140A, address is 0030.7b1d.2c00 (bia 0030.7b1d.2c00)
....
  Queueing strategy: fifo
  Output queue 0/40, 0 drops; input queue 0/75, 0 drops

Figure 1-32 illustrates the input and output queues in a different way that will help explain the output of the show interface command.

Figure 1-32 Input and Output Queues in Relation to the show interface Command

Each process that switches or creates packets feeds the single output queue for each interface; this much is straightforward. If each process in Figure 1-32 has queued one packet on the interface shown, show interface would report four packets in the output queue.

Each interface, however, feeds several input queues. The input queue count shown in the output of show interface indicates the number of packets the interface has queued to processes that haven't yet been processed. Thus, the interface in Figure 1-32 has queued one packet toward process 1, three packets toward process 2, two packets toward process 3, and five packets toward process 4. Eleven packets have been queued by the interface and are waiting to be processed. The output of show interface would show the input queue for this interface with 11 packets.
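A short sketch makes the two counters easy to keep apart: the output queue count is per interface, while the input queue count is the sum of what that interface has handed to every process input queue. The numbers below are the ones from Figure 1-32; the code itself is illustrative, not Cisco IOS source.

#include <stdio.h>

/* Illustrative only: how the two counters in show interface relate to
 * the queues in Figure 1-32.  Invented names, not IOS source. */
#define NUM_PROCESSES 4

int main(void)
{
    /* Packets this interface has queued toward each process input
     * queue but that have not yet been processed. */
    int queued_to_process[NUM_PROCESSES] = { 1, 3, 2, 5 };

    /* Packets each process has placed on this interface's single
     * output queue. */
    int queued_by_process[NUM_PROCESSES] = { 1, 1, 1, 1 };

    int input_queue = 0, output_queue = 0;
    for (int i = 0; i < NUM_PROCESSES; i++) {
        input_queue  += queued_to_process[i];
        output_queue += queued_by_process[i];
    }

    /* Prints: input queue 11, output queue 4 */
    printf("input queue %d, output queue %d\n", input_queue, output_queue);
    return 0;
}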
Transmitting the Packet

The Layer 2 header, or MAC header, has now been rewritten, and the packet is shifted to the outbound interface. Figure 1-33 illustrates the transmission of the packet.

Figure 1-33 Transmitting the Packet

The following