Cisco Router Network Design in Hierarchical Structure

The hierarchical structure of the Cisco router network design model is based on the type of services provided at each layer. The notion of using layers creates a modular architecture enabling growth and flexibility for new technologies at each layer. The Cisco hierarchical design model consists of three layers. 

 

The core layer provides the high-speed backbone for moving data between the other layers. This layer is geared toward the delivery of packets, not packet inspection or manipulation.

The distribution layer provides policy-based networking between the core and access layers. The distribution layer provides boundaries to the network topology and delivers several services.

These services are:

  • Address or area aggregation
  • Departmental or workgroup access
  • Broadcast/multicast domain definition
  • Virtual LAN (VLAN) routing
  • Any media transitions that need to occur
  • Security

The access layer is the edge of the network. Being on the edge, the access layer is the entry point to the network for the end-user community. Devices participating in the access layer may perform the following functions:

  • Shared bandwidth
  • Switched bandwidth
  • MAC layer filtering
  • Microsegmentation

It is important to remember that the Cisco hierarchical design model addresses the functional services of a network. The different layers described may be found in routers or switches, and each device may partake in the functions of more than one layer. Separation of functional layers is not mandatory; however, maintaining a hierarchical design fosters a network optimized for performance and management.

      1. The Network Infrastructure Life-Cycle

Every corporation has a network infrastructure in place as the framework supporting its business processes. Just as applications and systems have life cycles, so does a network infrastructure. This section highlights a network infrastructure life cycle that may be used as a general guideline for designing and implementing Cisco-based networks.

        1. Executive Corporate Vision

Corporate organizational restructuring through regional consolidation or through business group integration will certainly have an effect on the network infrastructure. Aligning the corporate vision with the business directives builds the foundation for the network infrastructure.

        2. Gather Network Infrastructure Information

This involves research and discovery of the current network WAN topology as well as corporate and branch office LAN topologies. A full understanding of the end-to-end network configuration is required. Additionally, bandwidth allocations and usage costs must be determined to provide the complete picture.

        3. Determine Current Network Requirements

Communication protocols, client/server architectures, e-mail, distributed processing, Internet and intranet, voice and video: each has its own unique characteristics and can place demands on the network. These demands have to be recognized and understood when planning an enterprise-wide solution. The result of this study is a network profile for each business process and for the network itself.

        4. Assess Current Network Operational Processes

Network operational processes involve not just daily troubleshooting but the other disciplines of network management: Inventory, Change, Configuration, Fault, Security, Capacity/Performance, and Accounting. Documenting the processes in place today will assist in evaluating the current baseline of service provided and identify areas that may need re-engineering to meet the changing business requirements.

        5. Research Plans for New Applications

The effect of new applications on network characteristics must be discovered before business groups move into development, testing, and production. Desktop video conferencing and voice communications alongside data traffic require up-front knowledge to re-engineer a network. Business group surveys and interviews, along with each group's strategic plan, will provide input for creating a requirements matrix.

        6. Identify Networking Technologies

The selection of appropriate technologies, and how they can be used to meet current and future networking requirements, relies on vendor offerings and their support structure. Paramount to this success is the partnership with and management of the vendors through an agreed-upon working relationship.

        7. Define a Flexible Strategic/Tactical Plan

The strategic plan in today's rapidly changing technology environment requires flexibility. A successful strategic plan requires business continuity through tactical choices. The strategic plan must demonstrate networking needs in relation to business processes, both current and future.

        8. Develop Implementation Plan

This is the most visible of all the previous objectives. The planning and research performed prior can be for naught if the implementation does not protect current business processes from unscheduled outages. The implementation must meet current business requirements and demands while migrating the network infrastructure to the strategic/tactical design. The perception to the business community must be business as usual.

        9. Management and Review

The effectiveness of the new infrastructure is achieved through management and review. Reports highlighting the network health measured against expected service levels based on the strategic/tactical plan and design reflect the ability of the network to meet business objectives. The tools and analysis used here provide the basis for future network infrastructures.

      2. Design Criteria (Internetwork Design Basics)

In planning your network design there are many criteria to consider. These criteria are based on the current network design and performance requirements, measured against the business direction and compared with internetworking design trends.

The trends of internetworking design affect the four distinct components of an enterprise internetwork. These components are:

Local Area Networks – These are networks within a single location that connect local end users to the services provided by the entire enterprise network.

Campus networks – These are networks within a small geographic area interconnecting the buildings that make up the corporate or business entity for the area.

Wide-area networks (WAN) – These networks span large geographic areas and interconnect campus networks.

Remote networks – These types of networks connect branch offices, mobile users or telecommuters to a campus or the Internet.

Figure 3.2 illustrates today’s typical enterprise-wide corporate network topology.

        1. The Current LAN/Campus Trend

LANs and campus networks are grouped together for the simple reason that they share many of the same networking issues and requirements. Depending on the technologies used, a LAN may be focused within a building or span buildings. The spanning of a LAN makes up the campus network. Figure 3.3 diagrams a LAN/Campus network topology.

Campus networks are a hybrid of LANs and WANs. From LAN/WAN technologies, campus networks use Ethernet, Token Ring, Fiber Distributed Data Interface (FDDI), Fast Ethernet, Gigabit Ethernet, and Asynchronous Transfer Mode (ATM).

Two LAN technologies that serve to optimize bandwidth and increase flexibility for LAN design are Layer 2 and Layer 3 switching.

In short, Layer 2 switching occurs at the data link layer of the OSI Reference Model and Layer 3 switching occurs at the network layer of the OSI Reference Model.

Both switching algorithms increase performance by providing higher bandwidth to attached workgroups, local servers and workstations. The switches replace LAN hubs and concentrators in the wiring closets of the building.

The ability to switch end user traffic between ports on the device has enabled the concept of Virtual LANs (VLANs). Defining VLANs on the physical LAN enables logical groupings of end user segments or workstations. This enables traffic specific to this VLAN grouping to remain on this virtual LAN rather than use bandwidth on LAN segments that are not interested in the grouped traffic.
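The traffic isolation a VLAN provides can be sketched in a few lines of Python (an illustrative model, not a switch implementation; the port-to-VLAN assignments are hypothetical):

```python
# Hypothetical port-to-VLAN assignment for a four-port switch.
vlan_of = {1: "Finance", 2: "Finance", 3: "Engineering", 4: "Engineering"}

def broadcast_ports(in_port):
    """A broadcast entering in_port reaches only ports in the same VLAN."""
    vlan = vlan_of[in_port]
    return [p for p, v in vlan_of.items() if v == vlan and p != in_port]

print(broadcast_ports(1))   # [2] -- Finance traffic never reaches ports 3 and 4
```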

For example, the Finance VLAN traffic does not affect the Engineering VLAN traffic. Table 3.x lists the important technologies affecting LAN and campus network design.

Routing technologies – Routing has long been the basis for creating internetworks. For use in a LAN/Campus environment, routing can be combined with Layer 3 switching. Layer 3 switching may also replace the entire function of a router.

LAN switching technologies:

Ethernet switching – Ethernet switching is Layer 2 switching. Layer 2 switching can enable improved performance through dedicated Ethernet segments for each connection.

Token Ring switching – Token Ring switching is also Layer 2 switching. Switching Token Ring segments offers the same functionality as Ethernet switching. Token Ring switching operates as either a transparent bridge or a source-route bridge.

ATM switching technologies – ATM switching offers a high-speed switching technology that integrates voice, video, and data. Its operation is similar to LAN switching technologies for data operations.
        2. Wide Area Network Design Trends

Routers are typically the connection points to WANs. Being at this juncture, routers have become an important decision point for the delivery of traffic. With the advent of switching, however, routers are slowly moving away from being the primary WAN device.

WAN services are now being handled by switches using three types of switching technology: circuit, packet, and cell switching.

Circuit switching provides dedicated bandwidth, while packet switching enables efficient use of bandwidth with the flexibility to service multiple requirements. Cell switching combines the best of both circuit-switched and packet-switched networks. ATM is the leading cell-switched technology used in the WAN today.

Because the WAN links end up servicing all traffic from one location to another, it is important that bandwidth and performance be optimized. The need for optimization is due in part to the explosive growth of remote site connectivity, enhanced application architectures such as client/server and intranets, and the recent trend of consolidating servers in a centralized location to ease administration and management.

These factors have reversed the rules for traffic profiles from 80% LAN and 20% WAN to 80% WAN and 20% LAN. This flip-flop of traffic characteristics has increased the requirement for WAN traffic optimization, path redundancy, dial backup, and Quality of Service (QoS) to ensure application service levels over the WAN.

The technologies available today that enable effective and efficient use of WANs are summarized in Table 3.x. Coming on the horizon are such technologies as Digital Subscriber Line (DSL), Low-Earth Orbit (LEO) satellites, and advanced wireless technologies.

Analog modem – Analog modems are typically used for temporary dial-up connections or for backup of another type of link. The bandwidth is typically 9.6 Kbps to 56 Kbps.

Leased line – Leased lines have been the traditional technology for implementing WANs. These are links “leased” from communications services companies for exclusive use by your corporation.

Integrated Services Digital Network (ISDN) – ISDN is a dial-up solution for temporary access to the WAN but adds the advantage of supporting voice/video/fax on the same physical connection. As a WAN technology, ISDN is typically used for dial-backup support at 56, 64, or 128 Kbps bandwidth.

Frame Relay – Frame Relay charges are distance insensitive, making it very cost effective. It is used in both private and carrier-provided networks and most recently is being used to carry voice/video/fax/data.

Switched Multimegabit Data Service (SMDS) – SMDS provides high-speed, high-performance connections across public data networks. It can also be deployed in Metropolitan Area Networks (MANs). It typically runs at 45 Mbps bandwidth.

X.25 – X.25 can provide a reliable WAN circuit; however, it does not provide the high bandwidth required of a backbone technology.

WAN ATM – WAN ATM is used as the high-bandwidth backbone for supporting multiservice requirements. The ATM architecture supports multiple QoS classes for applications with differing delay and loss requirements.

Packet over SONET (POS) – POS is an emerging technology that transports IP packets encapsulated in SONET or SDH frames. POS meets the high-bandwidth capabilities of ATM and, through vendor implementations, supports QoS.
        3. Remote Network Trends

Branch offices, telecommuters, and mobile users constitute remote networks. Some of these may use dial-up solutions with ISDN or analog modems. Others may require dedicated lines allowing access to the WAN 24 hours a day, 7 days a week (24×7). A study of the users' business requirements will dictate the type of connection for these remote locations. Using ISDN and vendor functionality, a remote location can be serviced with 128 Kbps bandwidth to the WAN only when traffic is destined out of the remote location.

Analysis of the ISDN dial-up cost based on up time to the WAN, as compared to the cost of a dedicated line to the WAN, must be determined for each location. This analysis will provide a break-even point on temporary versus dedicated WAN connectivity. Any of the various technologies discussed for the WAN may be well suited for remote network connectivity.
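The break-even analysis described above is simple arithmetic. A minimal sketch in Python, using hypothetical tariff figures (real costs vary by carrier and location):

```python
# Hypothetical tariff figures for illustration; real costs vary by carrier.
ISDN_COST_PER_HOUR = 4.00        # dial-up usage charge per connected hour
LEASED_LINE_PER_MONTH = 800.00   # flat monthly charge for a dedicated line

def break_even_hours(per_hour, per_month):
    """Connected hours per month above which the dedicated line is cheaper."""
    return per_month / per_hour

hours = break_even_hours(ISDN_COST_PER_HOUR, LEASED_LINE_PER_MONTH)
print(f"Break-even at {hours:.0f} connected hours per month")  # 200
```

A location that stays connected for more hours than the break-even point is a candidate for a dedicated line; one that connects only briefly is better served by dial-up.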

      3. Application Availability versus Cost Effectiveness

It is the job of the network to connect end users with their applications. If the network is not available, then the end users are not working and the company loses money. Application availability is driven by the importance of the application to the business. This factor is then compared against the cost of providing application availability using:

  • Redundant lines for alternate paths
  • Dial-backup connectivity
  • Redundant devices with redundant power supplies for connecting the end users
  • On-site or remote technical support
  • Network management reach into the network for troubleshooting
  • Disaster recovery connectivity of remote locations to the disaster recovery center

Designing an internetwork therefore has the main objective of providing availability and service balanced with acceptable costs for providing the service.

The costs are generally dominated by three elements of supporting a network infrastructure. These are:

  • The number and location of hosts, servers, terminals and other devices accessing the network; the traffic generated by these devices and the service levels required to meet the business needs.
  • The reliability of the network infrastructure and traffic throughput that inherently affect availability and performance thereby placing constraints on meeting the service levels required.
  • The ability of the network equipment to interoperate, the topology of the network, the capacity of the LAN and WAN media and the service required by the packets all affect the cost and availability factor.

The ultimate goal is to minimize the cost of these elements while at the same time delivering higher availability. The total cost of ownership (TCO), however, depends on understanding the application profiles.

      4. Application Profile

Each application that drives a business network has a profile. Some profiles are based on corporate department requirements and others may be a directive for the entire company. A full understanding of the underlying architecture of the application and its use of the network is required for creating an application profile. Three basic components drive a network profile. Figure 3.4 illustrates these graphically. These are:

  • Response time
  • Throughput
  • Reliability

Response time is a perceived result for the end user and a measured function for the network engineer. From a user standpoint, it is the reduced “think time” of interactive applications that mandates acceptable response time. However, a network design that improves response time is relative to what the end user has perceived as normal response time.

A network engineer will break response time down into two components: host time and network time. Host time is application processing, be this disk access to retrieve data or analysis of data. Network time is the transit time as measured from leaving the host to the network interface of the end-user device.

Host time is then computed again on the workstation. Typically, host time on a workstation is based on presentation to the end user. Online interactive applications require low response times. These applications are usually referred to as time-sensitive applications.

Applications that rely on the delivery of large amounts of data are termed throughput-intensive applications. Typically, these applications perform file transfers. They require efficient throughput; however, many of these applications also depend on the delivery of the data within a time window. This is where they can adversely affect interactive application response times due to their throughput.

Reliability is often referred to as uptime. Applications requiring high reliability inherently require high accessibility and availability. This in turn requires hardware and topology redundancy, not only on the network side but also on the application host or server side. The importance of the function served by the application is weighed against the cost of downtime incurred by the business. The higher the cost of downtime, the higher the requirement for reliability.

Creating an application profile becomes paramount to understanding the needs of a network design.

Application profiles are assessed through exercising some or all of the following methods:

  • Profile the user community – Determine corporate versus departmental internetworking requirements by separating common applications from specific applications for each community. If possible, develop the application flow from the end user to the host/server for each common and specific application. Using network management tools, gather network traffic profiles to parallel the user community.
  • Interviews, focus groups and surveys – These methods provide insight into current perceptions and planned requirements. This process is key to developing the current baseline of the network in addition to coalescing information about planned requirements shared by independent departments. Data gathered here, in combination with the community profiles, is used for developing the new network design.
  • Design Testing – This is the proof-of-concept stage for the resulting design. Using simulated testing methods or real-time lab environments the design is measured against the requirements for response-time, throughput and reliability.
      5. Cost Efficiency

The network is now an asset to all corporations. As such, investment in the network must be viewed in terms of total cost of ownership (TCO). These costs are not only equipment investment but also include:

Total cost of equipment – this includes not only hardware but also software, installation costs, maintenance costs, and upgrade costs.

Cost of performance – the variable against which you measure improved network performance and reliability against the increase in business conducted. The ratio between the two determines the effectiveness of the investment.

Installation cost – the physical cabling infrastructure to support the new design becomes a large one-time investment cost. Implement a physical cabling infrastructure that meets current and future networking technologies and requirements.

Growth costs – Reduce growth costs by implementing technologies today that can meet the direction of technologies tomorrow.

Administrative and support costs – Limit the complexity of the internetwork design. The more complicated the design, the higher the cost for training, administration, management, and maintenance.

Cost of downtime – Analyze the cost of limited, reduced, or inaccessible application hosts, servers, and databases. A high downtime cost may require a redundant design.

Opportunity costs – Network design proposals should provide a minimum of two designs with a list of pros and cons for each design. Opportunity costs are the costs that may be realized by not choosing a design option. These costs are measured more in a negative way: not moving to a new technology may result in competitive disadvantage, higher productivity costs, and poor performance.

Investment protection – The current network infrastructure is often salvaged due to the large investment in cabling, network equipment, hosts, and servers. For most networks, however, investment costs are recovered within three years. Understand the cycle of cost recovery at your corporation. Apply this understanding to the design as a corporate advantage in the design proposal.

Keep in mind that the objective of any network design is the delicate balance of meeting business and application requirements while minimizing the cost to meet the objective.

      6. Network Devices and Capabilities

The phenomenal growth of internetworks has driven the move from bridges to routers and now switches. There are four basic devices used in building an internetwork, and understanding the functions of each is important in determining the network design. These four devices are hubs, bridges, routers, and switches.

Hubs, often called concentrators, made possible centralized LAN topologies. All the LAN devices are connected to the hub, which essentially regenerates the signal received on one port out the others, acting as a repeater. These devices operate at the physical layer (Layer 1) of the OSI Reference Model.

Bridges connect autonomous LAN segments together as a single network and operate at the data link layer (Layer 2) of the OSI Reference Model. These devices use the Media Access Control (MAC) address of the end station when making a forwarding decision. Bridges are protocol independent.

Routers performing a routing function operate at the network layer (Layer 3) of the OSI Reference Model. These devices connect different networks and separate broadcast domains. Routers are protocol dependent.

Switches were at first advanced multiport bridges with the ability to separate collision domains. Layer 2 switches, enhancing performance and functionality through virtual LANs, have replaced hubs. The second incarnation of switches enables them to make Layer 3 routing decisions, thereby performing the function of a router.

        1. Bridging and Routing

Bridging for this discussion is concerned with transparent bridging, as opposed to Source-Route Bridging (SRB), which is closer to routing than bridging. Bridging occurs at the MAC sublayer of the IEEE 802.3/802.5 standards, applied to the data link layer of the OSI Reference Model.

Routing takes place at the network layer of the OSI Reference Model. Bridging views the network as a single logical network with one hop to reach the destination, whereas routing enables multiple hops to and between multiple networks. This leads to four distinct differences between routing and bridging:

Data link layer headers do not contain the same information fields as network layer packets.

Bridges do not use handshaking protocols to establish connections. Network layer devices utilize handshaking protocols.

Bridges do not reorder packets from the same source, while network layer protocols expect reordering due to fragmentation.

Bridges use MAC addresses for end node identification. Network layer devices, such as routers, use a network layer address associated with the wire to which the device is attached.

While there are these differences between bridging and routing, there are times when bridging may be required or preferred over routing, and vice versa.

Advantages of bridging over routing:

Transparent bridges are self-learning and therefore require minimal, if any, configuration. Routing requires definitions on each interface for the assignment of a network address. These network addresses must be unique within the network.

Bridging has less overhead for handling packets than does routing.

Bridging is protocol independent while routing is protocol dependent.

Bridging will forward all LAN protocols. Routing uses only network layer information and therefore can forward only routable protocols.

In contrast, routing has the following advantages over bridging:

Routing allows the best path to be chosen between source and destination. Bridging is limited to a specific path.

Routing is the result of keeping updated, complete network topology information in routing tables on every routing node, while bridging maintains only a table of devices found off its interfaces. Bridges therefore learn the network more slowly than routers, enabling routing to provide a higher level of service.

Routing uses network layer addressing, which enables a routing device to group addresses into areas or domains, creating a hierarchical address structure. This allows a virtually unlimited number of supported end nodes. Bridging devices maintain data link layer MAC addresses, which cannot be grouped, resulting in a limited number of supported end nodes.

Routing devices will block broadcast storms from being propagated to all interfaces. Bridging spans the physical LAN segment across multiple segments and therefore forwards a broadcast to all attached LAN segments.

Routing devices will fragment large packets to the smallest packet size for the selected route and then reassemble the packet to the original size for delivery to the end device. Bridges drop packets that are too large to send on the LAN segment without notification to the sending device.
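The fragmentation behavior described for routers can be sketched as follows (a simplified model that splits a payload at an MTU boundary; real IP fragmentation also rewrites headers and fragment offsets):

```python
def fragment(payload, mtu):
    """Split a payload into MTU-sized fragments, as a router would."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments):
    """Rebuild the original payload from its fragments."""
    return b"".join(fragments)

packet = b"x" * 4000
frags = fragment(packet, 1500)            # 1500-byte Ethernet MTU
print(len(frags))                         # 3 fragments: 1500 + 1500 + 1000
assert reassemble(frags) == packet        # lossless round trip
```

A bridge, lacking this capability, would simply drop the 4000-byte frame if the outbound segment could not carry it.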

Routing devices will notify transmitting end stations to slow down (congestion feedback) the transmission of data when the network itself becomes congested. Bridging devices do not possess that capability.

The general rule of thumb in deciding whether to route or bridge: bridge only when needed; route whenever possible.

        2. Switching

The process of switching is the movement of packets from the receiving interface to a destination interface. Layer 2 switching uses the MAC address found within the frame. Layer 3 switching uses the network address found within the packet.

Layer 2 switching is essentially transparent bridging. A table is kept within the switching device mapping each MAC address to its associated interface. The table is built by examining the source MAC address of each frame as it enters an interface. The switching function occurs when the destination MAC address is examined and compared against the switching table.

If a match is found, the frame is sent out the corresponding interface. A frame that contains a destination MAC address not found in the switching table is flooded out all interfaces on the switching device. The returned frame allows the switching device to learn the interface and therefore place the MAC address in the switching table.
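The learn-and-flood behavior described above can be modeled in a few lines of Python (an illustrative sketch, not a switch implementation):

```python
class Layer2Switch:
    """Minimal transparent-bridge model: learn on source, forward or flood."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}                 # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port   # learn the sender's port
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]            # known: one port
        return [p for p in self.ports if p != in_port]  # unknown: flood

switch = Layer2Switch(ports=[1, 2, 3, 4])
print(switch.receive(1, "AA", "BB"))   # BB unknown -> flood to [2, 3, 4]
print(switch.receive(2, "BB", "AA"))   # AA was learned on port 1 -> [1]
```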

MAC addresses are predetermined by the manufacturers of network interface cards (NICs). These cards have unique manufacturer codes assigned by the IEEE combined with a unique identifier assigned by the manufacturer. This method virtually ensures unique MAC addresses. These manufacturer addresses are often referred to as burned-in addresses (BIAs) or Universally Administered Addresses (UAAs). Some vendors, however, allow the UAA to be overridden with a Locally Administered Address (LAA). Layer 2 switched networks are inherently considered flat networks.
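The structure of a MAC address, and the universally/locally administered bit that distinguishes a UAA from an LAA, can be illustrated as follows (00:00:0C is the IEEE-assigned OUI for Cisco):

```python
def split_mac(mac):
    """Split a MAC into the IEEE-assigned OUI and the vendor-assigned id."""
    octets = mac.split(":")
    return ":".join(octets[:3]), ":".join(octets[3:])

def is_locally_administered(mac):
    """True when the U/L bit (0x02) of the first octet is set -- an LAA."""
    return bool(int(mac.split(":")[0], 16) & 0x02)

oui, nic = split_mac("00:00:0C:12:34:56")    # 00:00:0C is Cisco's OUI
print(oui, nic)                              # 00:00:0C 12:34:56
print(is_locally_administered("02:00:00:00:00:01"))   # True -- an LAA
```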

In contrast, Layer 3 switching is essentially the function of a router. Layer 3 switching devices build a table similar to the Layer 2 switching table, except that the entries map network layer addresses to interfaces. Since network layer addresses assign a logical connection to the physical network, a hierarchical topology is created with Layer 3 switching.

As packets enter an interface on a Layer 3 switch, the source network layer address is stored in a table that cross-references the network layer address with the interface. Layer 3 switches also carry with them the router functions of separating broadcast domains and maintaining network topology tables for determining optimal paths.
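The forwarding lookup a Layer 3 switch performs amounts to a longest-prefix match against its table. A minimal sketch using Python's standard ipaddress module (the routes and interface names are hypothetical):

```python
import ipaddress

def longest_prefix_match(table, dst):
    """Pick the most specific matching route, as a Layer 3 device would."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, iface) for net, iface in table
               if dst in ipaddress.ip_network(net)]
    # The longest prefix (most specific network) wins
    return max(matches, key=lambda m: ipaddress.ip_network(m[0]).prefixlen)[1]

routes = [("10.0.0.0/8", "Serial0"),
          ("10.1.0.0/16", "Ethernet0"),
          ("0.0.0.0/0", "Serial1")]          # default route
print(longest_prefix_match(routes, "10.1.2.3"))     # Ethernet0
print(longest_prefix_match(routes, "192.168.1.1"))  # Serial1
```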

Combining Layer 2 and Layer 3 switching within a single device reduces the burden on a router to route the packet from one location to another. Switching therefore increases throughput due to the decisions being made in silicon, reduces CPU overhead on the router, and eliminates hops between the source and destination device.

      7. Backbone Considerations

The network backbone is the core of the three-layer hierarchical model. Many factors affect the performance of the backbone. These factors are:

  • Path optimization
  • Traffic prioritization
  • Load balancing
  • Alternate paths
  • Switched access
  • Encapsulation (Tunneling)

Path optimization is generally a function of a router that occurs using the routing table created by the network layer protocols. Cisco routers support all of the widely implemented IP routing protocols. These include: Open Shortest Path First (OSPF), RIP, IGRP, EIGRP, Border Gateway Protocol (BGP), Exterior Gateway Protocol (EGP), and HELLO. Each of these routing protocols calculates the optimal path from the information provided within the routing tables.

The calculation is based on metrics such as bandwidth, delay, load, and hops. When changes occur in the network, the routing tables are updated throughout all the routers within the network. The process of all the routers updating their tables and recalculating the optimal paths is called convergence. With each new generation of IP routing protocols, the convergence time is reduced. Currently, the IP routing protocols with the smallest convergence times are the Cisco proprietary routing protocols IGRP and EIGRP.
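The optimal-path calculation these protocols perform can be illustrated with a shortest-path computation over per-link metrics (a generic Dijkstra sketch, not any particular protocol's implementation; the link costs are hypothetical):

```python
import heapq

def best_path(graph, src, dst):
    """Dijkstra over per-link costs -- the shortest-path idea behind OSPF."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, link_cost in graph.get(node, {}).items():
            if nxt not in seen:
                heapq.heappush(pq, (cost + link_cost, nxt, path + [nxt]))
    return None

# Illustrative link costs (e.g., inversely related to bandwidth)
graph = {"A": {"B": 10, "C": 1}, "C": {"B": 2}, "B": {}}
print(best_path(graph, "A", "B"))    # (3, ['A', 'C', 'B'])
```

The direct A-to-B link loses to the two-hop path through C because the cumulative metric, not the hop count, decides the optimal path.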

Traffic prioritization is a form of policy-based routing that prioritizes the network traffic. This allows time-sensitive and mission-critical traffic to take precedence over throughput-sensitive traffic. Cisco routers employ three types of traffic prioritization. These are priority queuing, custom queuing, and weighted fair queuing.

Priority queuing is the simplest form of traffic prioritization. It is designed primarily for low-speed links. The traffic under priority queuing is classified based on criteria such as protocol and subprotocol type. The criteria profile is then assigned to one of four output queues: high, medium, normal, and low.

In IP-based networks, the IP type-of-service (TOS) feature and the Cisco IOS software's ability to prioritize IBM logical unit traffic enable priority queuing for intraprotocol prioritization.
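The strict-priority dispatch rule over the four queues can be sketched as follows (an illustrative model, not the Cisco IOS implementation):

```python
from collections import deque

QUEUES = ["high", "medium", "normal", "low"]   # strict priority order

def next_packet(queues):
    """Strict priority: always drain the highest non-empty queue first."""
    for name in QUEUES:
        if queues[name]:
            return queues[name].popleft()
    return None

queues = {name: deque() for name in QUEUES}
queues["low"].append("file-transfer")
queues["high"].append("interactive")
print(next_packet(queues))    # interactive -- high beats low every time
```

Note that the low queue is served only when every higher queue is empty, which is exactly the fairness problem custom queuing was designed to address.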

Custom queuing answers a fairness problem that arises with priority queuing, under which low-priority queues may receive minimal service, if any. Custom queuing addresses this problem by reserving bandwidth for a particular type of traffic.

Cisco custom queuing therefore allows the prioritization of multiprotocol traffic over a single link: the greater the reserved bandwidth for a particular protocol, the more service it receives. This provides a minimal level of service to all traffic over a shared medium.

The exception to this is underutilization of the reserved bandwidth. If traffic is not consuming its reserved bandwidth percentage, the remaining percentage of reserved bandwidth is shared by the other protocols. Custom queuing may use up to 16 queues. The queues are serviced sequentially until the configured byte count has been sent or the queue is empty.
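The byte-count round-robin service described above can be modeled roughly like this (an illustrative sketch; the queue names and byte counts are hypothetical):

```python
from collections import deque

def custom_queue_round(queues, byte_counts):
    """One service cycle: each queue sends up to its configured byte count."""
    sent = []
    for name, q in queues.items():
        budget = byte_counts[name]
        while q and budget >= q[0][1]:        # entries are (packet, size)
            pkt, size = q.popleft()
            sent.append(pkt)
            budget -= size
    return sent

queues = {"ip": deque([("ip1", 1500), ("ip2", 1500)]),
          "sna": deque([("sna1", 500)])}
# Reserve roughly 3000 bytes per cycle for IP and 1000 for SNA
print(custom_queue_round(queues, {"ip": 3000, "sna": 1000}))
# ['ip1', 'ip2', 'sna1']
```

Unlike strict priority, every queue is guaranteed some service each cycle in proportion to its configured byte count.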

Weighted fair queuing uses an algorithm similar to time-division multiplexing. Each session over an interface is placed into a queue and allocated a slice of time for transmitting over the shared media. The process occurs in a round-robin fashion. Allowing each session to default to the same weighting parameters ensures that each session receives a fair share of the bandwidth.

This use of weighting protects time-sensitive traffic by ensuring available bandwidth and therefore consistent response times during heavy traffic loads.
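The per-session round-robin idea can be sketched as below. This is a simplification under stated assumptions: flows are keyed only by source and destination, and the precedence-based weighting of real weighted fair queuing is omitted, leaving plain fair queuing.

```python
from collections import deque

# One queue per conversation, built dynamically as packets arrive.
flows = {}

def enqueue(packet):
    # Flows are identified dynamically from the packet header
    # (simplified here to a source/destination pair).
    key = (packet["src"], packet["dst"])
    flows.setdefault(key, deque()).append(packet)

def drain_round_robin():
    """Service each flow's queue in turn so every flow gets an
    equal share of the transmission opportunities."""
    sent = []
    while any(flows.values()):
        for q in flows.values():
            if q:
                sent.append(q.popleft())
    return sent
```

Because the queues are keyed per data stream, two different conversations multiplexed inside one stream land in the same queue, which is the limitation the following paragraphs raise for encapsulated SNA traffic.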

The weighted fair algorithm identifies the data streams over an interface dynamically. Because the algorithm is based on separating the data streams into logical queues, it cannot discern the requirements of the different conversations that may occur within a single session.

This is an important point when considering queuing methods for protecting IBM SNA traffic. Weighted fair queuing becomes a disadvantage for SNA traffic when the SNA traffic is encapsulated in DLSw+ or RSRB.

The choice among the three queuing methods depends on the needs of the network. From an administrative point of view, however, weighted fair queuing is far easier to manage because its queues are built dynamically, whereas priority and custom queuing both require the definition of access lists, pre-allocated bandwidth, and predefined priorities.

Load balancing for IP traffic occurs when there are two to four paths to the destination network.

It is not necessary for these paths to be of equal cost. The load balancing of IP traffic may occur on a per-packet or per-destination basis. Bridged traffic over multiple serial links is balanced by employing a Cisco IOS software feature called circuit groups. This feature logically groups the multiple links into a single link.

Redundancy is a major design criterion for mission-critical processes. The use of alternate paths not only requires alternate links but also requires terminating these links in different routers.

Alternate paths are only valuable when single points of failure are avoided.

Recovery of dedicated leased connections is mandatory for ensuring availability and service. This function is often termed switched access or switched connection; however, it does not relate to the Layer 2 or Layer 3 switching function. Switched access calls for the instantaneous recovery of WAN connectivity after an outage on the dedicated leased line.

It is also used to supplement bandwidth requirements through a Cisco IOS software feature called bandwidth-on-demand (BOD), which uses dial-on-demand routing (DDR). Using DDR along with the dedicated leased WAN connection, a remote location can send large amounts of traffic in a smaller time frame.

Encapsulation techniques are used for transporting non-routable protocols. IBM's SDLC or SNA is a non-routable protocol. They are also used when the design calls for a single-protocol backbone. These techniques are also referred to as tunneling.

      1. Distributed Services

Within the router network, services may be distributed to maximize bandwidth utilization, routing domains, and policy networking. The Cisco IOS software supports these distributed services through:

  • Effective backbone bandwidth management
  • Area and service filtering
  • Policy-based distribution
  • Gateway services
  • Route redistribution
  • Media translation

Preserving valuable backbone bandwidth is accomplished using the following features of Cisco IOS software:

  • Adjusting priority output queue lengths so that overflows are minimized.
  • Adjusting routing metrics such as bandwidth and delay to facilitate control over path selection.
  • Terminating local polling, acknowledgment, and discovery frames at the router using proxy services to minimize high-volume, small-packet traffic over the WAN.

Traffic filtering provides policy-based access control into the backbone from the distribution layer. The access control is based on area or service. Typically, service access controls are used as a means of limiting an application service to a particular segment on the router.

Traffic filtering is based on Cisco IOS software access control lists. These access control lists can affect inbound and outbound traffic of a specific interface or interfaces.

In either direction, traffic may be permitted or denied.
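The permit/deny evaluation can be modeled with a short sketch. The rule fields are heavily simplified and the addresses are invented; what the sketch preserves is the essential access-list behavior: rules are evaluated top down, the first match wins, and an unmatched packet falls to the implicit deny at the end.

```python
import ipaddress

# Toy access control list: (action, source prefix) pairs in order.
ACL = [
    ("permit", "10.1.1.0/24"),
    ("deny",   "10.1.0.0/16"),
]

def check(src_ip):
    """Return 'permit' or 'deny' for a packet's source address."""
    addr = ipaddress.ip_address(src_ip)
    for action, prefix in ACL:
        if addr in ipaddress.ip_network(prefix):
            return action  # first match wins
    return "deny"          # implicit deny, as in Cisco IOS access lists
```

Because the more specific permit precedes the broader deny, hosts on 10.1.1.0/24 are admitted while the rest of 10.1.0.0/16 is blocked; reversing the two rules would block everything.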

Policy-based networking is a set of rules that determines the end-to-end distribution of traffic to the backbone. Policies may be defined to affect a specific department, a protocol, or a corporate policy for bandwidth management. The CiscoAssure initiative is a policy-based networking direction that enables various network equipment to work together to enforce end-to-end policies.

Gateway functions of the router enable different versions of the same networking protocol to internetwork. An example of this is connecting a DECnet Phase V network with a DECnet Phase IV network. These DECnet versions have implemented different addressing schemes.

Cisco IOS within the router performs as an address translation gateway (ATG) for transporting the traffic between the two networks.

Another example is AppleTalk translational routing between different versions of AppleTalk.

Route redistribution enables multiple IP routing protocols to interoperate by redistributing routing tables between the routing protocols running within the same router.

There are times in corporate networks when communication between different media is a requirement. This is increasingly common with the expansion of networks and newer technologies. For the most part, media translation occurs between Ethernet frames and token-ring frames. The translation is not one-for-one, since an Ethernet frame does not use many of the fields used in a token-ring frame.

Another common translation is from IBM SDLC to Logical Link Control 2 (LLC2) frames. This enables serial-attached IBM SDLC connections to access LAN-attached devices.

      1. Local Services

At the access layer of the three-layer model, features provided by the Cisco IOS software within the router add management and control over access to the distribution layer. These features are:

  • Value-added Network Addressing
  • Network Segmentation
  • Broadcast and Multicast Capabilities
  • Naming, Proxy, and Local Cache Capabilities
  • Media Access Security
  • Router Discovery

The discovery of servers and other services may sometimes cause broadcasts within the local area network.

A feature of Cisco IOS software directs these requests to specific network-layer addresses. This feature is called helper addressing. Using this feature limits the broadcast to only the segments with helper addresses defined for that service. It is best used when protocols such as Novell IPX or DHCP would otherwise search the entire network for a server using broadcast messages. Helper addresses thereby preserve bandwidth on segments that do not connect to the requested server.

Network congestion is typically the result of a poorly designed network. Congestion is manageable by segmenting networks into smaller, more manageable pieces. Using multiple IP subnets, DECnet areas, and AppleTalk zones further segments the network so that traffic belonging to a segment remains on that segment.

Virtual LANs further enhance this concept by spanning the segmentation between network equipment.

While routers control data link (MAC address) broadcasts, they allow network layer (Layer 3) broadcasts. Layer 3 broadcasts are often used for locating servers and services required by the host. The advent of video broadcasts has proliferated the use of multicast packets over a network.

Cisco IOS software reduces broadcast packets over IP networks by using directed broadcasts to specific networks rather than broadcasting to the entire network. In addition, the Cisco IOS software employs a spanning-tree technique when flooded broadcasts are recognized, minimizing excessive traffic while still enabling delivery of the broadcast to all networks.

IP multicast traffic moves from a single source to multiple destinations.

IP multicast is supported by a router running Cisco IOS with the Internet Group Management Protocol (IGMP) implemented. Using IGMP, the router can serve as a multicast distribution point, delivering packets only to segments that are members of the multicast group and ensuring loop-free paths that eliminate duplicate multicast packets.

The Cisco IOS software contains many features for further reducing bandwidth utilization using naming, proxy, and local cache functions. These functions drastically reduce the discovery, polling, and searching traffic that many popular protocols would otherwise place on the backbone.

The following is a list of the features available with Cisco IOS that limit these types of traffic from the backbone:

  • Name services – NetBIOS, DNS, and AppleTalk Name Binding Protocol
  • Proxy services – NetBIOS, SNA XID/Test, polling, IP ARP, Novell ARP, AppleTalk NBP
  • Local caching – SRB RIF, IP ARP, DECnet, Novell IPX

    1. Selecting a Routing Protocol

Routing protocols determine the paths traffic takes through IP-based networks. Examples of routing protocols are:

  • Routing Information Protocol (RIP)
  • Routing Information Protocol 2 (RIP2)
  • Interior Gateway Routing Protocol (IGRP)
  • Enhanced Interior Gateway Routing Protocol (EIGRP)
  • Open Shortest Path First (OSPF)
  • Intermediate System-to-Intermediate System (IS-IS)

In selecting a routing protocol for the network, the characteristics of the application protocols and services must be taken into consideration. Network designs enabling a single routing protocol are best for network performance, maintenance, and troubleshooting. There are six characteristics of a network to consider when selecting a routing protocol. These are:

  • Network Topology
  • Addressing and Route Summarization
  • Route Selection
  • Convergence
  • Network Scalability
  • Security
        1. Network Topology

Routing protocols view the network topology in one of two ways: flat or hierarchical. The physical network topology is the set of connections among all the routers within the network. Flat routing topologies use network addressing to segregate the physical network into smaller interconnected flat networks. Examples of routing protocols that use a non-hierarchical flat logical topology are RIP, RIP2, IGRP, and EIGRP.

OSPF and IS-IS routing networks are hierarchical in design. As shown in Figure 3.6, hierarchical routing networks assign routers to a routing area or domain. The common area is considered the top of the hierarchy, through which the other routing areas communicate. Hierarchical routing topologies assign routers to areas.

These areas are the routing network addresses used for delivering data from one subnet to another. The areas are a logical grouping of contiguous networks and hosts. Each router maintains a topology map of its own area, but not of the whole network.

        1. Addressing and Route Summarization

Some of the IP routing protocols have the ability to automatically summarize routing information. Using summarization, the route table updates that flow between routers are greatly reduced, thereby saving bandwidth, router memory, and router CPU utilization. As shown in Figure 3.7, a network of 1,000 subnets requires 1,000 routes. Each of the routers within the network must therefore maintain a 1,000-route table.

If we assume the network uses a Class B addressing scheme with a subnet mask of 255.255.255.0, summarization reduces the number of routes within each router to 253: three routes in each router describing the paths to the subnets on the other routers, plus 250 routes describing the subnets connected to the router itself.
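The route-count arithmetic can be checked with a short sketch. It assumes, per the figure's layout, that the 1,000 subnets are split evenly across four routers (250 each), so each router keeps its connected routes plus one summary route per remote router.

```python
# Route-count arithmetic for the summarization example.
subnets_total = 1000
routers = 4
subnets_per_router = subnets_total // routers      # 250 connected subnets each

# Without summarization, every router carries one route per subnet.
routes_without_summary = subnets_total

# With summarization, a router keeps its own connected routes plus
# one summary route for each of the remaining routers.
routes_with_summary = subnets_per_router + (routers - 1)

print(routes_without_summary, routes_with_summary)  # 1000 253
```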

        1. Route Selection

In networks where high availability and redundancy are requirements, the route selection algorithm of the routing protocol becomes an important factor in maintaining acceptable availability. Each of the routing protocols uses some type of metric to determine the best path between the source and the destination of a packet. The available metrics are combined to produce a “weight” or “cost” reflecting the efficiency of the route.

Depending on the routing protocol in use, multiple paths of equal cost may provide load balancing between the source and destination, thereby spreading the load across the network. Some protocols, such as EIGRP, can also use unequal-cost paths to load balance.

This ability to load balance further improves the management of network bandwidth.

Load balancing over multiple paths is performed on a per-packet or per-destination basis. Per-packet balancing distributes the load across the possible paths in proportion to the routing metrics of the paths. For equal-cost paths this results in a round-robin distribution. Per-packet load balancing, however, carries the risk that packets are received out of order. Per-destination load balancing distributes the packets over the multiple paths based on the destination.

For instance, as shown in Figure 3.8, packets destined for subnets attached to router R2 from router R1 use a round-robin technique based on the destination: packets destined for subnet 1 flow over link 20 while packets destined for subnet 2 flow over link 21, versus the per-packet basis of alternating the packets for subnet 1 and subnet 2 over the two links.
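The two modes in the example can be contrasted with a small sketch. The link names mirror the figure, but the hash-based selection is illustrative only (a real router uses its own switching path): per-destination keeps all of one destination's packets on one link, preserving order, while per-packet round robin spreads them across both.

```python
import zlib

# Two hypothetical parallel links between R1 and R2, as in Figure 3.8.
LINKS = ["link20", "link21"]

def pick_link_per_destination(dst_ip):
    # Per-destination: hash the destination address so every packet
    # for the same destination always takes the same link.
    return LINKS[zlib.crc32(dst_ip.encode()) % len(LINKS)]

def pick_link_per_packet(packet_number):
    # Per-packet: strict round robin; higher utilization, but packets
    # for one destination may arrive out of order.
    return LINKS[packet_number % len(LINKS)]
```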

        1. The Concept of Convergence

Convergence is the time it takes a router to recognize a network topology change, calculate the change within its own table, and then distribute the table to adjacent routers. The adjacent routers then perform the same functions. The total time it takes for the routers to begin using the newly calculated route is called the convergence time. The time for convergence is critical for time-sensitive traffic.

If a router takes too long to detect, recalculate and then distribute the new route, the time-sensitive traffic may experience poor performance or the end nodes of the connection may then drop.

In general, the concern with convergence is not the addition of new links or subnets to the network. The concern is the failure of connectivity within the network. Routers recognize physical connection losses rapidly. The issue behind long convergence times is the failure to detect poor connections within a reasonable amount of time. Poor connections, such as those with line errors or high collision rates, require some customization on the router to detect these types of problems faster.

        1. Network Scalability

The ability of routing protocols to scale to a growing network is not so much a weakness of the protocols as a function of the critical resources of the router hardware. Routers require memory, CPU, and adequate bandwidth to properly service the network.

Routing tables and the network topology are stored in router memory. Using a route summarization technique as described earlier reduces the memory requirement. In addition, routing protocols that use areas or domains in a hierarchical topology require the network design to use small areas rather than large areas to help reduce memory consumption.

Calculation of the routes is a CPU-intensive process. Through route summarization and the use of link-state routing protocols, CPU utilization is greatly reduced, since the number of routes needing recomputation is reduced.

Bandwidth on the connections to each router becomes a factor in not only scaling the network but in convergence time. Routing protocols learn of neighbor routers for the purpose of receiving and sending routing table updates.

The type of routing protocol in use determines its effect on the bandwidth.

Distance-vector routing protocols such as RIP and IGRP send their routing tables at regular intervals. The distance-vector routing protocol waits for the time interval before sending its update even when a network change has occurred.

In stable networks this type of updating mechanism wastes bandwidth; however, it protects the bandwidth from an excessive routing-update load when a change has occurred. Due to the periodic update mechanism, distance-vector protocols tend to have slow convergence times.

Link-state IP routing protocols such as OSPF and IS-IS address the bandwidth wastefulness and slow convergence of distance-vector routing protocols.

However, due to the complexity of providing these enhancements, link-state protocols are CPU intensive and require more memory and bandwidth during convergence. During periods of network stability, link-state protocols use minimal network bandwidth. After start-up and initial convergence, updates are sent to neighbors only when the network topology changes.

During a recognized topology change, the router floods its neighbors with updates. This may cause excessive load on the bandwidth, CPU, and memory of each router. However, convergence time is lower than that of distance-vector protocols.

Cisco’s proprietary routing protocol EIGRP is an advanced version of distance-vector protocols with properties of link-state protocols. From distance-vector protocols, EIGRP has taken many of the metrics for route calculation.

From link-state protocols, EIGRP has taken the advantage of sending routing updates only when changes occur.

While EIGRP preserves CPU, memory and bandwidth during a stable network environment, it does have high CPU, memory and bandwidth requirements during convergence.

The convergence ability of the routing protocols and their effect on CPU, memory, and bandwidth has resulted in guidelines from Cisco on the number of neighbors that can be effectively supported.

Table 3.x lists the suggested neighbors for each protocol.

Routing Protocol                     Neighbors per Router
Distance vector (RIP, IGRP)          50
Link state (OSPF, IS-IS)             30
Advanced distance vector (EIGRP)     30
        1. Security

Routing protocols can be used to provide a minimal level of security. Some of the security functions available with routing protocols are:

  • Filtering route advertisements
  • Authentication

Using filtering, routing protocols can prohibit the advertisement of routes to neighbors, thereby protecting certain parts of the network. Some routing protocols authenticate their neighbors before engaging in routing table updates. Though this is protocol specific and generally a weak form of security, it does protect against unwanted connectivity from other networks using the same routing protocol.
