Upload: adhurschowdary
Post on 30-May-2018

  • 8/14/2019 Abstract in Recent Years, The Exponential Growth


Abstract

In recent years, the exponential growth of Internet users with increased bandwidth requirements has led to the emergence of the next generation of IP routers. Distributed architecture is one of the promising trends providing petabit routers with a large switching capacity and high-speed interfaces. Distributed routers are designed with an optical switch fabric interconnecting line and control cards. Computing and memory resources are available on both control and line cards to perform routing and forwarding tasks. This new hardware architecture is not efficiently utilized by the traditional software models, where a single control card is responsible for all routing and management operations. The routing table manager (RTM) plays an extremely critical role by managing routing information and, in particular, a forwarding information table (FIT). This article presents a distributed architecture set up around a distributed and scalable routing table manager. This architecture also provides improvements in robustness and resiliency. The proposed architecture is based on a sharing mechanism between control and line cards and is able to meet the scalability requirements for route computations, notifications, and advertisements. A comparative scalability evaluation is made between distributed and centralized architectures in terms of required memory and computing resources.


INTRODUCTION

As routing information is exchanged between routers, the size of the routing table managed by the RTM module tends to increase rapidly. This requires routers to have more CPU cycles, more powerful accompanying hardware resources, and an increased memory size to contain all available routing information. Until recently, the only valid solution to support the increasing Internet traffic was to periodically upgrade the router control card on which the RTM module was running, or to replace the whole router with a new one having more powerful hardware resources (e.g., CPU and increased memory size), demanding some service interruptions. An alternate solution is to implement distributed and scalable routers [2].

In this article, we describe the benefits and limitations of a distributed router design and propose a distributed architecture for the RTM. We first review the hardware architecture of next-generation routers and provide an overview of the functionality of the RTM. The critical issues for a centralized RTM architecture are then discussed, leading to a proposal of a completely distributed architecture for the RTM. We then present a comparative scalability evaluation of the proposed distributed architecture with a centralized one, in terms of required memory and computing resources.

Next-Generation Routers and the Routing Table Manager

The first and second generations of IP routers were basically made of a single central processor running all routing protocol modules, with multiple line cards interconnected through a shared bus. Their performance depends on the throughput of the shared bus and on the speed and capabilities of the central processor; therefore, they are not able to meet today's bandwidth requirements. The third-generation, or current-generation, routers were introduced to solve the bottlenecks of the second generation [3]. The switch fabric replaces the shared bus: it is a crossbar connecting multiple cards together, thus providing ample bandwidth for transmitting packets simultaneously among line cards. These routers have a set of line cards, a set of forwarding engines, and a single control card that are interconnected through a switch fabric. The header of an incoming packet entering a line card interface is sent through the switch fabric to the appropriate forwarding engine. The forwarding engine determines to which outgoing interface the packet should be sent. This information is sent back to the line card through the switch fabric, which forwards the packet to the egress line card. Other functionality, such as resource reservation and maintenance of the routing table, is handled by the modules running on the control card.

The architecture for next-generation routers is essentially switch-based. However, the switching capacity is enhanced up to petabits per second [4]. The hardware architecture of these routers is based on three types of cards (Fig. 1a):

The line card provides multiple gigabit interfaces. The ingress network processor (iNP) is programmable with parallel processing capability. It does packet forwarding, classification, and flow policing. The iNP contains a FIT that is used to determine the destination of data packets. Control packets can be filtered and forwarded to the CPU for processing. The ingress traffic manager (iTM) forwards the packets from the iNP to the switch fabric while maintaining traffic load balancing using traffic access control, buffer management, and packet scheduling mechanisms. Data packets travel through the switch fabric to the egress line card, and control packets are sent to the control card. The egress traffic manager (eTM) receives packets from the switch fabric plane directly connected to its line card, performs packet re-ordering, and controls congestion. The egress network processor (eNP) sends out the packets with per-egress-port output scheduling mechanisms.

The CPU is multi-purpose and able to perform control plane functions with the help of the built-in memory.

The control card or route processor is designed to run the main routing protocol modules (i.e., BGP, OSPF, IS-IS, and multiprotocol label switching [MPLS]), the RTM, and the command line interface (CLI). The control card architecture is similar to a line card, but its processing power and storage capabilities are far superior, and there is no interface to external devices. The control card has one iTM chip and one eTM chip to provide interfaces between the local processor and the switch fabric planes. They are responsible for managing flows of control packets.

The control and line cards are interconnected by a scalable switch fabric that is distributed into identical and independent switching planes. The switch fabric is made of so-called matrix cards that provide data switching functions. Per-flow scheduling, path balancing, and congestion management within the switch fabric are achieved by the fabric traffic manager chipsets integrated on the matrix card. Each line card or control card has an ingress port and an egress port connecting to a matrix card. Each switching plane is made of the same number of matrix cards. Several topologies may be used to connect the matrix cards. The Benes topology [4] is recommended, due to its non-blocking characteristics.

One of the most important software components of the router is the RTM. It builds the FIT from the routing database that stores all routes learned by the different routing and signaling protocols, including the best and the non-best routes. For a set of routes having the same destination prefix, only one route is deemed the best, based on a pre-configured preference value assigned to each routing protocol. For example, if static routes have a high preference value and OSPF routes have a low preference value, and if a route entry having the same destination prefix was recorded by each protocol, the static route is considered to be the best route and is added to the FIT (Fig. 1b). However, some services, such as Resource Reservation Protocol (RSVP), can use non-best routes to forward data with respect to user-defined parameters. Therefore, the RTM must keep all routes; allow users or requesting modules to access the route database and make routing decisions based on requested next-hop and explicit route resolution; notify any change in the routing tables generated by the underlying routing protocols (e.g., Routing Information Protocol [RIP], OSPF, IS-IS, BGP); alert the routing protocols about the current state of physical links, such as the up/down status, available bandwidth, and so on, to manage associated link states and, indirectly, route status; communicate with a policy manager module for making route filtering decisions for routing protocols (e.g., OSPF or BGP); and alert the routing protocols about resource reservation failures.

Another requirement for the RTM is to contain a very large number of routes, such as the ever-increasing BGP routes. Because router vendors do not increase the memory of the main control card by much, Internet service providers (ISPs) are very careful about the amount of information routers must store.
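The preference-based best-route selection described above (Fig. 1b) can be sketched as follows. This is a minimal illustration rather than the article's implementation; the protocol names and preference values are assumed examples.

```csharp
// Sketch (illustrative): build a FIT by keeping, for each destination
// prefix, the route learned by the protocol with the highest preference.
using System;
using System.Collections.Generic;
using System.Linq;

class Route
{
    public string Prefix;    // destination prefix, e.g. "10.1.0.0/16"
    public string Protocol;  // protocol that learned the route
    public string NextHop;
}

static class RtmSketch
{
    // Pre-configured preference per protocol; higher wins (assumed values).
    static readonly Dictionary<string, int> Preference = new Dictionary<string, int>
    {
        { "static", 100 }, { "BGP", 20 }, { "OSPF", 10 }, { "RIP", 5 }
    };

    // The routing database keeps every route; the FIT keeps only the best.
    public static Dictionary<string, Route> BuildFit(IEnumerable<Route> routeDatabase)
    {
        return routeDatabase
            .GroupBy(r => r.Prefix)
            .ToDictionary(g => g.Key,
                          g => g.OrderByDescending(r => Preference[r.Protocol]).First());
    }

    static void Main()
    {
        var db = new List<Route>
        {
            new Route { Prefix = "10.1.0.0/16", Protocol = "OSPF",   NextHop = "192.0.2.1" },
            new Route { Prefix = "10.1.0.0/16", Protocol = "static", NextHop = "192.0.2.2" },
        };
        // As in the text: the static route wins over the OSPF route.
        Console.WriteLine(BuildFit(db)["10.1.0.0/16"].Protocol);
    }
}
```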

FEASIBILITY STUDY

All projects are feasible given unlimited resources and infinite time. It is both necessary and prudent to evaluate the feasibility of the project at the earliest possible time. Feasibility and risk analysis are related in many ways: if project risk is great, each of the feasibility criteria listed below is equally important.

The following feasibility techniques have been used in this project:

    Operational Feasibility

    Technical Feasibility

    Economic Feasibility

    Operational Feasibility:

The proposed system is operationally feasible, since it provides an information system for analyzing the traffic that will meet the organization's operating requirements.

For security, the file is transferred to the destination and an acknowledgement is given to the server; bulk data transfers are completed without traffic congestion.

    Technical Feasibility:

Technical feasibility centers on the existing computer system (hardware, software, etc.) and the extent to which it can support the proposed addition. For example, if the current computer is operating at 80% capacity, then additional hardware (RAM and processor) will be needed to increase the speed of the process.

    Economic Feasibility:

Economic feasibility is the most frequently used method for evaluating the effectiveness of a candidate system. More commonly known as cost/benefit analysis, the procedure is to determine the benefits and savings that are expected from a candidate system and compare them with the costs. If the benefits outweigh the costs, the decision is made to design and implement the system; otherwise, the system is dropped.

This system has been implemented such that it can be used to analyze the traffic, so it does not require any extra equipment or hardware to implement. It is therefore economically feasible to use.


2.1 OBJECTIVES:

VIABLE PACKET FORWARDING

TASK SHARING MECHANISM

TO AVOID MEMORY COMPLEXITY OF CONTROL CARD

RTM UPDATES OF LINE CARD

3. SYSTEM SPECIFICATION

3.1 HARDWARE SPECIFICATION:

Processor : Pentium III
Speed : 1.1 GHz
RAM : 512 MB
Hard Disk : 40 GB
General : Keyboard, Monitor, Mouse

3.2 SOFTWARE SPECIFICATION:

Operating System : Windows XP
Software : Visual Studio 5.0
Back End : SQL Server

4. LANGUAGE DESCRIPTION

Active Server Pages.NET


    ASP.NET is a programming framework built on

    the common language runtime that can be used

    on a server to build powerful Web applications.

    ASP.NET offers several important advantages

    over previous Web development models:

    Enhanced Performance. ASP.NET is

    compiled common language runtime code

    running on the server. Unlike its interpreted

    predecessors, ASP.NET can take advantage

    of early binding, just-in-time compilation,

    native optimization, and caching services

    right out of the box. This amounts to

    dramatically better performance before you

    ever write a line of code.

    World-Class Tool Support. The

    ASP.NET framework is complemented by a

    rich toolbox and designer in the Visual

Studio integrated development environment. WYSIWYG editing, drag-and-drop server controls, and automatic deployment are just a few of the features this powerful tool provides.

    Power and Flexibility. Because

    ASP.NET is based on the common language

    runtime, the power and flexibility of that

    entire platform is available to Web

    application developers. The .NET Framework

class library, Messaging, and Data Access solutions are all seamlessly accessible from

    the Web. ASP.NET is also language-

    independent, so you can choose the

    language that best applies to your

    application or partition your application

    across many languages. Further, common

    language runtime interoperability

    guarantees that your existing investment in

    COM-based development is preserved when

    migrating to ASP.NET.

    Simplicity. ASP.NET makes it easy to

    perform common tasks, from simple form

submission and client authentication to deployment and site configuration. For

    example, the ASP.NET page framework

    allows you to build user interfaces that

    cleanly separate application logic from

    presentation code and to handle events in a

    simple, Visual Basic - like forms processing

    model. Additionally, the common language

    runtime simplifies development, with

    managed code services such as automatic

    reference counting and garbage collection.

    Manageability. ASP.NET employs a

    text-based, hierarchical configuration

    system, which simplifies applying settings to

    your server environment and Web

    applications. Because configuration

    information is stored as plain text, new

    settings may be applied without the aid of

    local administration tools. This "zero local

    administration" philosophy extends to

    deploying ASP.NET Framework applications

    as well. An ASP.NET Framework application

    is deployed to a server simply by copying


    the necessary files to the server. No server

    restart is required, even to deploy or

    replace running compiled code.

    Scalability and Availability. ASP.NET

    has been designed with scalability in mind,

    with features specifically tailored to improve

    performance in clustered and

multiprocessor environments. Further, processes are closely monitored and

    managed by the ASP.NET runtime, so that if

    one misbehaves (leaks, deadlocks), a new

    process can be created in its place, which

    helps keep your application constantly

    available to handle requests.

    Customizability and Extensibility.

    ASP.NET delivers a well-factored

    architecture that allows developers to "plug-

    in" their code at the appropriate level. In

    fact, it is possible to extend or replace any

    subcomponent of the ASP.NET runtime with

    your own custom-written component.


    Implementing custom authentication or

    state services has never been easier.

    Security. With built in Windows

    authentication and per-application

    configuration, you can be assured that your

    applications are secure.

    Language Support

    The Microsoft .NET Platform currently offers

    built-in support for three languages: C#, Visual

    Basic, and JScript.

    What is ASP.NET Web Forms?

    The ASP.NET Web Forms page framework is a

    scalable common language runtime

    programming model that can be used on the

    server to dynamically generate Web pages.

    Intended as a logical evolution of ASP (ASP.NET

    provides syntax compatibility with existing

    pages), the ASP.NET Web Forms framework has


    been specifically designed to address a number

    of key deficiencies in the previous model. In

    particular, it provides:

    The ability to create and use reusable

    UI controls that can encapsulate common

    functionality and thus reduce the amount

    of code that a page developer has to write.

The ability for developers to cleanly structure their page logic in an orderly

    fashion (not "spaghetti code").

    The ability for development tools to

    provide strong WYSIWYG design support

    for pages (existing ASP code is opaque to

    tools).

    ASP.NET Web Forms pages are text files with

    an .aspx file name extension. They can be

    deployed throughout an IIS virtual root directory

tree. When a browser client requests .aspx resources, the ASP.NET runtime parses and

    compiles the target file into a .NET Framework

    class. This class can then be used to dynamically


    process incoming requests. (Note that the .aspx

    file is compiled only the first time it is accessed;

    the compiled type instance is then reused across

    multiple requests).

    An ASP.NET page can be created simply by

    taking an existing HTML file and changing its file

    name extension to .aspx (no modification of

code is required). For example, the following sample demonstrates a simple HTML page that

    collects a user's name and category preference

    and then performs a form postback to the

    originating page when a button is clicked:
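The sample itself did not survive in this copy; a minimal page of the kind being described might look like the following. The file name and field names are illustrative assumptions.

```html
<!-- Illustrative reconstruction of the referenced sample. -->
<html>
<body>
  <!-- Posts back to the originating page when the button is clicked. -->
  <form action="intro.aspx" method="post">
    Name: <input name="Name" type="text" />
    Category: <select name="Category">
      <option>psychology</option>
      <option>business</option>
    </select>
    <input type="submit" value="Lookup" />
  </form>
</body>
</html>
```

As the text notes, renaming such a file with an .aspx extension is all that is needed for ASP.NET to process it.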

    ASP.NET provides syntax compatibility with

existing ASP pages. This includes support for <% %> code render blocks that can be

    intermixed with HTML content within an .aspx

    file. These code blocks execute in a top-down

    manner at page render time.

    Code-Behind Web Forms


    ASP.NET supports two methods of authoring

    dynamic pages. The first is the method shown in

    the preceding samples, where the page code is

    physically declared within the originating .aspx

    file. An alternative approach--known as the

    code-behind method--enables the page code to

    be more cleanly separated from the HTML

    content into an entirely separate file.

    Introduction to ASP.NET Server Controls

    In addition to (or instead of) using code

    blocks to program dynamic content, ASP.NET

    page developers can use ASP.NET server

    controls to program Web pages. Server controls

    are declared within an .aspx file using custom

    tags or intrinsic HTML tags that contain a

    runat="server" attribute value. Intrinsic HTML

    tags are handled by one of the controls in the

System.Web.UI.HtmlControls namespace. Any tag that doesn't explicitly map to one of the controls is assigned the type of System.Web.UI.HtmlControls.HtmlGenericControl.

    Server controls automatically maintain any

    client-entered values between round trips to the

    server. This control state is not stored on the

server (it is instead stored within a hidden form field that is round-tripped between requests). Note also that no client-side script is required.

In addition to supporting standard HTML input controls, ASP.NET enables developers to utilize richer custom controls on their pages. For example, the AdRotator control can be used to dynamically display rotating ads on a page.

    1. ASP.NET Web Forms provide an easy

    and powerful way to build dynamic Web

    UI.

    2. ASP.NET Web Forms pages can target

    any browser client (there are no script

    library or cookie requirements).


    3. ASP.NET Web Forms pages provide

    syntax compatibility with existing ASP

    pages.

    4. ASP.NET server controls provide an

    easy way to encapsulate common

    functionality.

    5. ASP.NET ships with 45 built-in server

    controls. Developers can also use controls

    built by third parties.

    6. ASP.NET server controls can

    automatically project both uplevel and

    downlevel HTML.

    7. ASP.NET templates provide an easy

    way to customize the look and feel of list

    server controls.

    8. ASP.NET validation controls provide an

    easy way to do declarative client or server

    data validation.


    Crystal Reports

Crystal Reports for Visual Basic .NET is the standard reporting tool for Visual Basic .NET; it brings to the .NET platform the ability to create the interactive, presentation-quality content that has been the strength of Crystal Reports for years.

    With Crystal Reports for Visual Basic.NET, you can

    host reports on Web and Windows platforms and

    publish Crystal reports as Report Web Services on a

    Web server.

    To present data to users, you could write code to

    loop through recordsets and print them inside your

    Windows or Web application. However, any work

    beyond basic formatting can be complicated:

    consolidations, multiple level totals, charting, and

    conditional formatting are difficult to program.

    With Crystal Reports for Visual Studio .NET, you can

    quickly create complex and professional-looking

    reports. Instead of coding, you use the Crystal


    Report Designer interface to create and format the

    report you need. The powerful Report Engine

    processes the formatting, grouping, and charting

    criteria you specify.

    Report Experts

    Using the Crystal Report Experts, you can quickly

    create reports based on your development needs:

    Choose from report layout options ranging from

    standard reports to form letters, or build your

    own report from scratch.

    Display charts that users can drill down on to

    view detailed report data.

    Calculate summaries, subtotals, and

    percentages on grouped data.

    Show TopN or BottomN results of data.

    Conditionally format text and rotate text objects.


ACTIVEX DATA OBJECTS .NET (ADO.NET)

    ADO.NET Overview

    ADO.NET is an evolution of the ADO data access

    model that directly addresses user requirements for

    developing scalable applications. It was designed

    specifically for the web with scalability,

    statelessness, and XML in mind.

    ADO.NET uses some ADO objects, such as the

    Connection and Command objects, and also

    introduces new objects. Key new ADO.NET objects

    include the DataSet, DataReader, and

    DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data architectures is that

    there exists an object -- the DataSet -- that is

    separate and distinct from any data stores. Because


    of that, the DataSet functions as a standalone

    entity. You can think of the DataSet as an always

    disconnected recordset that knows nothing about the

    source or destination of the data it contains. Inside a

    DataSet, much like in a database, there are tables,

    columns, relationships, constraints, views, and so

    forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects back

    to the database to update the data there, based on

    operations performed while the DataSet held the

    data. In the past, data processing has been primarily

    connection-based. Now, in an effort to make multi-

    tiered apps more efficient, data processing is turning

    to a message-based approach that revolves around

    chunks of information. At the center of this approach

    is the DataAdapter, which provides a bridge to

    retrieve and save data between a DataSet and its

    source data store. It accomplishes this by means of

    requests to the appropriate SQL commands made

    against the data store.


    The XML-based DataSet object provides a consistent

    programming model that works with all models of

    data storage: flat, relational, and hierarchical. It

    does this by having no 'knowledge' of the source of

    its data, and by representing the data that it holds

    as collections and data types. No matter what the

    source of the data within the DataSet is, it is

    manipulated through the same set of standard APIs

    exposed through the DataSet and its subordinate

    objects.

    While the DataSet has no knowledge of the source

    of its data, the managed provider has detailed and

    specific information. The role of the managed

    provider is to connect, fill, and persist the DataSet

    to and from data stores. The OLE DB and SQL Server

    .NET Data Providers (System.Data.OleDb and

    System.Data.SqlClient) that are part of the .Net

    Framework provide four basic objects: the

    Command, Connection, DataReader and

    DataAdapter. In the remaining sections of this

    document, we'll walk through each part of the

    DataSet and the OLE DB/SQL Server .NET Data


    Providers explaining what they are, and how to

    program against them.

    The following sections will introduce you to some

    objects that have evolved, and some that are new.

    These objects are:

    Connections. For connection to and

    managing transactions against a database.

    Commands. For issuing SQL commands

    against a database.

    DataReaders. For reading a forward-only

    stream of data records from a SQL Server data

    source.

    DataSets. For storing, remoting and

    programming against flat data, XML data and

    relational data.

    DataAdapters. For pushing data into a

    DataSet, and reconciling data against a

    database.

    When dealing with connections to a database, there

    are two different options: SQL Server .NET Data

    Provider (System.Data.SqlClient) and OLE DB .NET


    Data Provider (System.Data.OleDb). In these

    samples we will use the SQL Server .NET Data

    Provider. These are written to talk directly to

    Microsoft SQL Server. The OLE DB .NET Data

    Provider is used to talk to any OLE DB provider (as it

    uses OLE DB underneath).

    Connections

    Connections are used to 'talk to' databases, and are

represented by provider-specific classes such as SqlConnection. Commands travel over connections

    and resultsets are returned in the form of streams

    which can be read by a DataReader object, or

    pushed into a DataSet object.

    Commands

    Commands contain the information that is submitted

    to a database, and are represented by provider-

specific classes such as SqlCommand. A command can be a stored procedure call, an UPDATE

    statement, or a statement that returns results. You

    can also use input and output parameters, and


    return values as part of your command syntax. The

    example below shows how to issue an INSERT

    statement against the Northwind database.
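The example referenced above was lost from this copy. A hedged reconstruction of the kind of code being described follows; the connection string and the column values are illustrative assumptions.

```csharp
// Sketch (illustrative): issue an INSERT against the Northwind database
// using SqlConnection and SqlCommand with parameters.
using System.Data.SqlClient;

class InsertSample
{
    static void Main()
    {
        // Assumed connection string; adjust for your server.
        string connString = "server=(local);database=Northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Customers (CustomerID, CompanyName) VALUES (@id, @name)", conn))
        {
            // Parameters keep values out of the SQL text itself.
            cmd.Parameters.AddWithValue("@id", "NEWCO");
            cmd.Parameters.AddWithValue("@name", "New Company");
            conn.Open();
            cmd.ExecuteNonQuery(); // returns the number of rows affected
        }
    }
}
```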

    DataReaders

    The DataReader object is somewhat synonymous

    with a read-only/forward-only cursor over data. The

    DataReader API supports flat as well as hierarchical

    data. A DataReader object is returned after

    executing a command against a database. The

    format of the returned DataReader object is

    different from a recordset. For example, you might

    use the DataReader to show the results of a search

    list in a web page.
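The forward-only pattern described above can be sketched as follows, reusing the same illustrative Northwind connection string:

```csharp
// Sketch (illustrative): stream rows with a read-only/forward-only
// SqlDataReader returned by ExecuteReader.
using System;
using System.Data.SqlClient;

class ReaderSample
{
    static void Main()
    {
        string connString = "server=(local);database=Northwind;Integrated Security=SSPI";
        using (SqlConnection conn = new SqlConnection(connString))
        using (SqlCommand cmd = new SqlCommand(
            "SELECT CustomerID, CompanyName FROM Customers", conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Each Read() advances to the next row; the cursor never moves back.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
            }
        }
    }
}
```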

    DataSets and DataAdapters

    DataSets

    The DataSet object is similar to the ADO Recordset

object, but more powerful, and with one other important distinction: the DataSet is always

    disconnected. The DataSet object represents a

    cache of data, with database-like structures such as


    tables, columns, relationships, and constraints.

    However, though a DataSet can and does behave

    much like a database, it is important to remember

    that DataSet objects do not interact directly with

    databases, or other source data. This allows the

    developer to work with a programming model that is

    always consistent, regardless of where the source

    data resides. Data coming from a database, an XML

    file, from code, or user input can all be placed into

    DataSet objects. Then, as changes are made to the

    DataSet they can be tracked and verified before

    updating the source data. The GetChanges method

    of the DataSet object actually creates a second

DataSet that contains only the changes to the data.

    This DataSet is then used by a DataAdapter (or

    other objects) to update the original data source.
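The disconnected behavior described above, including GetChanges, can be shown with a purely in-memory DataSet; the table and column names here are made up for illustration.

```csharp
// Sketch (illustrative): a DataSet is a disconnected, database-like cache.
// GetChanges() yields a second DataSet holding only the modified rows.
using System;
using System.Data;

class DataSetSample
{
    static void Main()
    {
        var ds = new DataSet("Inventory");
        DataTable table = ds.Tables.Add("Products");
        table.Columns.Add("Name", typeof(string));
        table.Columns.Add("Units", typeof(int));

        table.Rows.Add("Chai", 39);
        ds.AcceptChanges();                // mark current contents as unchanged

        table.Rows[0]["Units"] = 25;       // an edit tracked by the DataSet
        DataSet changes = ds.GetChanges(); // contains only the one changed row
        Console.WriteLine(changes.Tables["Products"].Rows.Count); // prints 1
    }
}
```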

    The DataSet has many XML characteristics,

    including the ability to produce and consume XML

    data and XML schemas. XML schemas can be used to

describe schemas interchanged via Web Services. In

    fact, a DataSet with a schema can actually be

    compiled for type safety and statement completion.


DataAdapters (OLEDB/SQL)

The DataAdapter object works as a bridge between the DataSet and the source data. Using the provider-specific SqlDataAdapter (along with its associated SqlCommand and SqlConnection) can increase overall performance when working with a Microsoft SQL Server database. For other OLE DB-supported databases, you would use the OleDbDataAdapter object and its associated OleDbCommand and OleDbConnection objects.

The DataAdapter object uses commands to update the data source after changes have been made to the DataSet. Calling the Fill method of the DataAdapter executes the SELECT command; calling the Update method executes the INSERT, UPDATE, or DELETE command for each changed row. You can explicitly set these commands in order to control the statements used at runtime to resolve changes, including the use of stored procedures. For ad hoc scenarios, a CommandBuilder object can generate these at run time based upon a select statement.


However, this run-time generation requires an extra round-trip to the server in order to gather the required metadata, so explicitly providing the INSERT, UPDATE, and DELETE commands at design time will result in better run-time performance.
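The Fill/Update division of labour described above can be sketched in a few lines of Python with sqlite3 (a hypothetical mini "adapter", used here only as an analogy for the DataAdapter pattern): Fill runs a SELECT into a local cache, and Update writes one UPDATE statement per row whose cached value changed.

```python
import sqlite3

# Hypothetical example table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [(1, 9.99), (2, 19.99)])

def fill():
    # "Fill": SELECT the rows into a disconnected, in-memory structure.
    return {pid: price for pid, price in
            conn.execute("SELECT id, price FROM products")}

def update(original, edited):
    # "Update": issue one UPDATE per row whose cached value changed.
    for pid, price in edited.items():
        if original[pid] != price:
            conn.execute("UPDATE products SET price = ? WHERE id = ?",
                         (price, pid))
    conn.commit()

cache = fill()
edited = dict(cache)
edited[2] = 17.49          # change made offline, against the cache only
update(cache, edited)
print(fill())              # the source now reflects the cached edit
```

Supplying the UPDATE statement explicitly, as here, avoids the extra metadata round-trip that run-time command generation would need.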

1. ADO.NET is the next evolution of ADO for the .NET Framework.

2. ADO.NET was created with n-tier, statelessness, and XML in the forefront. Two new objects, the DataSet and DataAdapter, are provided for these scenarios.

3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.

4. There is a lot more information about ADO.NET in the documentation.

5. Remember, you can execute a command directly against the database in order to do inserts, updates, and deletes. You don't need to first put data into a DataSet in order to insert, update, or delete it.


6. Also, you can use a DataSet to bind to the data, move through the data, and navigate data relationships.

2.2 About Microsoft SQL Server 7.0

Microsoft SQL Server is a Structured Query Language (SQL) based, client/server relational database. Each of these terms describes a fundamental part of the architecture of SQL Server.

Database

A database is similar to a data file in that it is a storage place for data. Like a data file, a database does not present information directly to a user; the user runs an application that accesses data from the database and presents it to the user in an understandable format.


A database typically has two components: the files holding the physical database and the database management system (DBMS) software that applications use to access data. The DBMS is responsible for enforcing the database structure, including:

Maintaining the relationships between data in the database.
Ensuring that data is stored correctly, and that the rules defining data relationships are not violated.
Recovering all data to a point of known consistency in case of system failures.

Relational Database

There are different ways to organize data in a database, but relational databases are one of the most effective. Relational database systems are an application of mathematical set theory to the problem of effectively organizing data. In a relational database, data is collected into tables (called relations in relational theory).

When organizing data into tables, you can usually find many different ways to define tables. Relational database theory defines a process, normalization, which ensures that the set of tables you define will organize your data effectively.
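As a small illustration of what normalization achieves (a hypothetical schema, not one from this project): a flat order list that repeats the customer name on every row can be split into a customers table and an orders table that refers to customers by key, so each customer is stored exactly once.

```python
import sqlite3

# Unnormalized input: the customer name repeats on every order row.
flat_orders = [
    ("Ann", "widget"), ("Ann", "gadget"), ("Bob", "widget"),
]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT UNIQUE)")
conn.execute("""CREATE TABLE orders (id INTEGER PRIMARY KEY,
                customer_id INTEGER REFERENCES customers(id), item TEXT)""")

for name, item in flat_orders:
    # Insert the customer once; later duplicates are ignored.
    conn.execute("INSERT OR IGNORE INTO customers (name) VALUES (?)", (name,))
    (cid,) = conn.execute("SELECT id FROM customers WHERE name = ?",
                          (name,)).fetchone()
    conn.execute("INSERT INTO orders (customer_id, item) VALUES (?, ?)",
                 (cid, item))

# Each customer is now stored once; orders reference customers by id.
print(conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0])  # 2
print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])     # 3
```

Updating a customer's name now touches one row instead of every order that mentions it, which is exactly the kind of anomaly normalization removes.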

Client/Server

In a client/server system, the server is a relatively large computer in a central location that manages a resource used by many people. When individuals need to use the resource, they connect over the network from their computers, or clients, to the server.

In a client/server database architecture, the database files and DBMS software reside on a server. A communications component is provided so applications can run on separate clients and communicate to the database server over a network.


languages that can be used with relational databases; the most common is SQL. Both the American National Standards Institute (ANSI) and the International Organization for Standardization (ISO) have defined standards for SQL. Most modern DBMS products support the Entry Level of SQL-92, the latest SQL standard (published in 1992).

SQL Server Features

Microsoft SQL Server supports a set of features that result in the following benefits:

Ease of installation, deployment, and use
SQL Server includes a set of administrative and development tools that improve your ability to install, deploy, manage, and use SQL Server across several sites.

Scalability
The same database engine can be used across platforms ranging from laptop computers running Microsoft Windows 95/98 to large, multiprocessor servers running Microsoft Windows NT, Enterprise Edition.

Data warehousing
SQL Server includes tools for extracting and analyzing summary data for online analytical processing (OLAP). SQL Server also includes tools for visually designing databases and analyzing data using English-based questions.

System integration with other server software
SQL Server integrates with e-mail, the Internet, and Windows.

Databases

A database in Microsoft SQL Server consists of a collection of tables that contain data, and other objects, such as views, indexes, stored procedures, and triggers, defined to support activities performed with the data. The data stored in a database is usually related to a particular subject or process, such as inventory information for a manufacturing warehouse.

SQL Server can support many databases, and each database can store either interrelated data or data unrelated to that in the other databases. For example, a server can have one database that stores personnel data and another that stores product-related data. Alternatively, one database can store current customer order data, and another, related database can store historical customer orders that are used for yearly reporting. Before you create a database, it is important to understand the parts of a database and how to design these parts to ensure that the database performs well after it is implemented.


5. SYSTEM DESIGN AND DEVELOPMENT:

5.1 DESCRIPTION OF A SYSTEM:


Network:

A network is a set of devices (often referred to as nodes) connected by media links. A node can be a computer, printer, or any other device capable of sending and/or receiving data generated by other nodes on the network. The links connecting the devices are often called communication channels.

Distributed Processing:

Networks use distributed processing, in which a task is divided among multiple computers. Advantages of distributed processing include the following:

Security/encapsulation.
Distributed databases.
Faster problem solving.
Security through redundancy.


OSI Model:

An ISO standard that covers all aspects of network communications is the Open Systems Interconnection (OSI) model. The Open Systems Interconnection model is a layered framework for the design of network systems that allows for communication across all types of computer systems. It consists of seven ordered layers, each of which defines a segment of the process of moving information across a network.

The seven layers are:

Physical Layer
Data Link Layer
Network Layer
Transport Layer
Session Layer
Presentation Layer


    Application Layer

Functions of the Layers:

Physical Layer:

The physical layer coordinates the functions required to transmit a bit stream over a physical medium. It deals with the mechanical and electrical specifications of the interface and transmission medium. It also defines the procedures and functions that physical devices and interfaces have to perform for transmission to occur.

Data Link Layer:

The data link layer transforms the physical layer, a raw transmission facility, into a reliable link and is responsible for node-to-node delivery. It makes the physical layer appear error-free to the network layer. The data link layer divides the stream of bits received from the network layer into manageable data units called frames. The data link layer adds a header to the frame to define the physical address of the sender or receiver of the frame.
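As a toy illustration of this framing step (hypothetical frame size and addresses, not taken from any real link-layer protocol), the following sketch divides a byte stream into fixed-size frames and prepends a small header carrying the sender and receiver addresses:

```python
def make_frames(payload: bytes, src: int, dst: int, frame_size: int = 4):
    """Split a byte stream into frames of at most frame_size bytes,
    each prefixed with a 2-byte header: [src address, dst address]."""
    frames = []
    for i in range(0, len(payload), frame_size):
        chunk = payload[i:i + frame_size]
        header = bytes([src, dst])      # physical addresses in the header
        frames.append(header + chunk)
    return frames

frames = make_frames(b"HELLOWORLD", src=0x0A, dst=0x0B)
print(len(frames))    # 3 frames: payload chunks of 4 + 4 + 2 bytes
print(frames[0])
```

The receiver's data link layer would strip the two header bytes from each frame before passing the payload upward.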


Network Layer:

The network layer is responsible for the source-to-destination delivery of a packet, possibly across multiple networks. The network layer ensures that each packet gets from its point of origin to its final destination. The network layer includes the logical addresses of the sender and receiver.
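Since this project centres on routing tables, the network layer's forwarding decision can be sketched as a simple longest-matching-prefix lookup (hypothetical addresses and table entries, for illustration only):

```python
def next_hop(dest: str, table: dict) -> str:
    """Return the next hop whose prefix matches the destination,
    preferring the longest (most specific) matching prefix."""
    best = table.get("default")     # fall back to the default route
    best_len = -1
    for prefix, hop in table.items():
        if prefix != "default" and dest.startswith(prefix) \
                and len(prefix) > best_len:
            best, best_len = hop, len(prefix)
    return best

# Hypothetical routing table: prefix -> next-hop router
table = {"10.1.": "routerA", "10.1.2.": "routerB", "default": "routerC"}
print(next_hop("10.1.2.7", table))     # routerB (longest matching prefix)
print(next_hop("192.168.0.1", table))  # routerC (default route)
```

A real RTM performs this lookup against a forwarding information table built from the routing protocols, but the prefix-matching idea is the same.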

Transport Layer:

The transport layer is responsible for source-to-destination delivery of the entire message. The network layer oversees end-to-end delivery of individual packets; it does not recognize any relationship between those packets. It treats each one independently. The transport layer creates a connection between the two end ports. A connection is a single logical path between the source and destination that is associated with all packets in a message. In this layer, the message is divided into transmittable segments, each containing a sequence number.
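The segmenting-and-numbering idea can be sketched as follows (hypothetical segment size, for illustration): the sequence numbers let the receiver rebuild the message even if segments arrive out of order.

```python
def segment(message: bytes, seg_size: int = 3):
    """Divide a message into numbered segments, as the transport layer does."""
    return [(seq, message[i:i + seg_size])
            for seq, i in enumerate(range(0, len(message), seg_size))]

def reassemble(segments):
    """Rebuild the message from its segments, using the sequence
    numbers to restore the original order."""
    return b"".join(data for _, data in sorted(segments))

segs = segment(b"DISTRIBUTED")
segs.reverse()              # simulate out-of-order arrival
print(reassemble(segs))     # b'DISTRIBUTED'
```

This is why the transport layer, unlike the network layer, must track the relationship between packets belonging to one message.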


Application Layer:

The application layer enables the user, whether human or software, to access the network. A network virtual terminal is a software version of a physical terminal and allows a user to log on to a remote host.

A client is defined as a requester of services and a server is defined as the provider of services. A single machine can be both a client and a server, depending on the software configuration.

NETWORK MANAGEMENT

5.1.1 Client/Server Architecture:

In this architecture we describe how the secure streaming technique is used in the end-to-end ARMS system. The main components of the architecture are illustrated. The components consist of the broadcaster, which is the source of encrypted content packaged for adaptation; the Video Store, to store the possibly multiply encoded content; the Streaming Server, which uses a simple and efficient stream-switching technique for adaptation; and finally the playback clients. The figure illustrates a simple configuration with one instance of each of the main components. In large-scale deployments, the streaming servers can be networked for distribution, and there can be multiple Broadcasters and Video Stores.

    Client/Server Architecture


MODULES:

SOURCE/CLIENT MODULE
ROUTER
LINE CARD
INGRESS PORT
CONTROL CENTRE/CARD
PACKET FORWARDING CLASS
EGRESS PORT
DESTINATION/CLIENT MODULE


DATA FLOW DIAGRAM:

CLIENT SOCKET
CLIENT FILE SEND
PACKET SENDING FILE
ROUTER SOCKET
INGRESS PORT
LINE CARD CLASS
MASTER LINE CARD
ROUTE CALC CLASS
ROUTER CLASS
PACKET FORWARD CLASS
PACKET SEND CLASS
ROUTER REQUEST CLASS
CONTROL CENTRE CLASS
EGRESS PORT
TO CLIENT / TO ROUTER / ADVERTISING


CLIENT MODULE / SOURCE:

Prepare Packet
File Send

CLIENT MODULE / DESTINATION:

File Receive
Store


INGRESS PORT

Receive Incoming from Router/Client

LINE CARD

Interface Between Router and Client
Contains Sub-RTM


CONTROL CENTRE/CARD

Contains Routing Table

ROUTER REQUEST CLASS:

Sending Advertise Packets to Routers


EGRESS PORT:

Outgoing Port to Routers/Client


UML - Use Case Diagram

Actors: client, ingress router, router, egress router
Use cases: Sending File, Router Socket, Line Card Class, Router Class, Egress Port, Receive Packet


    COLLABORATION DIAGRAM:



6. TESTING AND IMPLEMENTATION

6.1 TESTING:

Testing is a process of executing a program with the intent of finding an error. Testing presents an interesting anomaly for software engineering. The goal of software testing is to convince system developers and customers that the software is good enough for operational use. Testing is a process intended to build confidence in the software. Testing is a set of activities that can be planned in advance and conducted systematically. Software testing is often referred to as verification and validation.

TYPES OF TESTING:

The various types of testing are:

White Box Testing
Black Box Testing
Alpha Testing
Beta Testing
WinRunner
LoadRunner

WHITE BOX TESTING:

It is also called glass-box testing. It is a test-case design method that uses the control structure of the procedural design to derive test cases. Using white box testing methods, the software engineer can derive test cases that:

1. Guarantee that all independent paths within a module have been exercised at least once.


2. Exercise all logical decisions on their true and false sides.
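For instance (a hypothetical function, used here only to illustrate the point), exercising a decision on both its true and false sides means supplying at least one input for each outcome:

```python
def classify_packet(size: int, limit: int = 1500) -> str:
    # One logical decision: the test cases below exercise
    # both its true side and its false side.
    if size > limit:
        return "fragment"
    return "forward"

# True side of the decision (size exceeds the limit):
assert classify_packet(2000) == "fragment"
# False side of the decision (size within the limit):
assert classify_packet(500) == "forward"
print("both branches exercised")
```

A white box test suite that reached only one of the two return statements would leave a branch of the control structure untested.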

BLACK BOX TESTING:

It is also called behavioral testing. It focuses on the functional requirements of the software. It is a complementary approach that is likely to uncover a different class of errors than white box testing. Black box testing enables a software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program.

ALPHA TESTING:

Alpha testing is the software prototype stage when the software is first able to run. It will not have all the intended functionality, but it will have core functions and will be able to accept inputs and generate outputs. An alpha test usually takes place in the developer's offices on a separate system.


BETA TESTING:

The beta test is a live application of the software in an environment that cannot be controlled by the developer. The beta test is conducted at one or more customer sites by the end user of the software.

WIN RUNNER & LOAD RUNNER:

We use WinRunner as a load testing tool operating at the GUI layer, as it allows us to record and play back user actions from a vast variety of user applications as if a real user had manually executed those actions.

LOAD RUNNER TESTING:

With LoadRunner, you can obtain an accurate picture of end-to-end system performance and verify that new or upgraded applications meet specified performance requirements.


6.1.1 TESTING USED IN THIS PROJECT:

6.1.2 SYSTEM TESTING:

Testing of the debugged programs is one of the most critical aspects of computer programming; without programs that work, the system would never produce the output for which it was designed. Testing is best performed when users and developers are asked to assist in identifying all errors and bugs. Sample data are used for testing. It is not the quantity but the quality of the data used that matters in testing. Testing is aimed at ensuring that the system works accurately and efficiently before live operation commences.


6.1.3 UNIT TESTING:

In this testing we test each module individually and then integrate it with the overall system. Unit testing focuses verification efforts on the smallest unit of software design, the module. This is also known as module testing. Each module of the system is tested separately. This testing is carried out during the programming stage itself. In this testing step, each module is found to be working satisfactorily as regards the expected output from the module. There are also validation checks for some fields. It is very easy to find and debug errors in the system.



6.1.4 VALIDATION TESTING:

At the culmination of black box testing, the software is completely assembled as a package, interfacing errors have been uncovered and corrected, and a final series of software tests begins. That is, validation tests begin. Validation testing can be defined many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably expected by the customer. After validation tests have been conducted, one of two possible conditions exists.

TEST CASE NO: 1
EXPECTED OUTPUT: Displays file size, number of frames, transmission time, and frame latency based on the input data given.
OBTAINED OUTPUT: Displays file size and transmission time, but not reception time and frame latency.
REMARKS: Error occurs in transmission of files.

8. CONCLUSION:

The RTM is one of the most important components of a router. It plays a decisive role in the routing performance and connectivity of the network. In this article, we presented a novel distributed architecture model for the RTM for next-generation IP routers. The model we propose can exploit the additional computing resources of a distributed router.

Figure 6. Performance comparison between the centralized and the proposed distributed architectures: a) memory used by RTMs in our proposed distributed architecture and in the centralized architecture; b) CPU resources used by RTMs in our proposed distributed architecture.

Authorized licensed use limited to: Sakthi Engineering College. Downloaded on January 20, 2009 at 10:55 from IEEE Xplore. Restrictions apply.

