The Multilayer Campus Design

The multilayer campus design consists of a number of building blocks connected across a campus backbone. Several alternative backbone designs are discussed in the next section. See Figure 2 to view the generic campus design. Note the three characteristic layers: access, distribution, and core. In the most general model, Layer 2 switching is used in the access layer and Layer 3 switching in both the distribution layer and the core.

One advantage of the multilayer campus design is scalability. New buildings and server farms can be easily added without changing the design. The redundancy of the building block is extended with redundancy in the backbone. If a separate backbone layer is configured, it should always consist of at least two separate switches. Ideally, these switches should be located in different buildings to maximize the redundancy benefits.

The multilayer campus design takes maximum advantage of many Layer 3 services, including segmentation, load balancing, and failure recovery. IP multicast traffic is handled by Protocol Independent Multicast (PIM) routing in all the Layer 3 switches. Access lists are applied at the distribution layer for granular policy control. Broadcasts are kept off the campus backbone. Protocol-aware features such as DHCP forwarding convert broadcasts to unicasts before packets leave the building block.
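
The following is a minimal sketch of how these Layer 3 services might be combined on a distribution-layer interface. The VLAN number, addresses, DHCP server address, and access list are hypothetical placeholders, and the exact syntax varies by platform and software release.

    ! Hypothetical distribution-switch sketch: VLAN, addresses, and the
    ! DHCP server at 10.1.100.5 are illustrative only.
    ip multicast-routing
    !
    interface Vlan10
     description Access-layer client subnet in this building block
     ip address 10.1.10.1 255.255.255.0
     ip pim sparse-mode            ! PIM handles IP multicast at Layer 3
     ip helper-address 10.1.100.5  ! DHCP broadcasts forwarded as unicasts
     ip access-group 110 in        ! policy control applied at distribution
    !
    access-list 110 permit ip 10.1.10.0 0.0.0.255 any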


In the generic campus model in Figure 2, each module has two equal-cost paths to every other module. See Figure 3 for a more highly redundant connectivity model. In this model, each distribution-layer switch has two equal-cost paths into the backbone. This model provides fast failure recovery because each distribution switch maintains two equal-cost paths in its routing table to every destination network. When one connection to the backbone fails, all routes switch over to the remaining path within about one second of the link failure being detected.
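
As a concrete illustration, a distribution switch with two equal-cost uplinks might be configured as in the sketch below. The interface names and addressing are hypothetical, and OSPF is an assumed choice of routing protocol; the design works with other Layer 3 protocols as well.

    ! Hypothetical sketch: two equal-cost OSPF uplinks from a
    ! distribution switch into the backbone. Identical link speeds give
    ! identical OSPF costs, so both routes stay in the routing table and
    ! traffic shifts to the surviving path when one uplink fails.
    interface GigabitEthernet1/1
     ip address 10.1.1.2 255.255.255.252
    !
    interface GigabitEthernet2/1
     ip address 10.1.2.2 255.255.255.252
    !
    router ospf 1
     network 10.1.0.0 0.0.255.255 area 0
     maximum-paths 2   ! equal-cost multipath (IOS installs several by default)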

An alternative design that also achieves high availability is to use the design model in Figure 2, but with EtherChannel links everywhere. EtherChannel achieves a high degree of availability as well as load balancing across the bundle. A benefit of the Catalyst 6x00 is that availability can be improved further by attaching the links to different line cards in the switch. In addition, the Catalyst 6x00 and Catalyst 8500 products support IP-based load balancing across EtherChannel. The advantage of this approach over the design in Figure 3 is that the number of routing neighbors is smaller. The advantage of the design in Figure 3 is the greater physical diversity of two links to different switches.
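
A minimal sketch of the EtherChannel alternative follows, in IOS-style syntax. The interface names and addressing are hypothetical, and switches of this era running CatOS use set port channel commands instead, so treat the exact commands as illustrative.

    ! Hypothetical Layer 3 EtherChannel sketch. The two member links are
    ! placed on different line cards to improve availability.
    interface Port-channel1
     ip address 10.1.3.1 255.255.255.252
    !
    interface GigabitEthernet1/1
     no switchport
     channel-group 1 mode on
    !
    interface GigabitEthernet2/1
     no switchport
     channel-group 1 mode on
    !
    port-channel load-balance src-dst-ip  ! IP-based hashing across the bundle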

Campus Backbone Design

Five alternative campus backbone designs are described. These designs differ in scalability while maintaining the advantages of Layer 3 services.

Collapsed Backbone—Small Campus Design

The collapsed backbone consists of two or more Layer 3 switches, as in the building network. This design lends itself well to a small- or medium-sized campus network or a large building network, but is not recommended for a larger campus network. Scalability is limited primarily by manageability concerns. Another consideration is that the Layer 3 switches in the backbone must maintain Address Resolution Protocol (ARP) entries for every active networked device in the campus. Excessive ARP activity is CPU-intensive and can affect overall backbone performance. From a risk and performance point of view, it is desirable to break larger campus networks into several smaller collapsed modules and connect them with a core layer.


Figure 4 illustrates the collapsed backbone model. The server farm is incorporated directly into the collapsed backbone. Use the passive-interface command on the wiring-closet subnet interfaces of the backbone switches to reduce routing protocol overhead.
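
For example, assuming OSPF and SVI-based wiring-closet subnets (both hypothetical choices here), the passive-interface command might be applied as follows.

    ! Hypothetical collapsed-backbone switch sketch: suppress routing
    ! protocol hellos on wiring-closet subnets, where no routing peer exists.
    router ospf 1
     network 10.1.0.0 0.0.255.255 area 0
     passive-interface Vlan20   ! wiring-closet subnet: no OSPF adjacency
     passive-interface Vlan21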

Full-Mesh Backbone—Small Campus Design

 

A full-mesh backbone consists of up to three modules with Layer 3 switches linked directly together, forming a full-connectivity mesh. Figure 5 shows a small campus network with the full-mesh backbone. The full-mesh design is ideal for connecting two or three modules together. However, as more modules are added, the number of links required to maintain a full mesh rises roughly as the square of the number of modules: n modules require n(n-1)/2 links, so four modules need 6 links and eight modules need 28. As the number of links increases, the number of subnets and routing peers also grows, and the complexity rises.

The full-mesh design also makes upgrading bandwidth more difficult. To upgrade one particular module from Fast Ethernet links to Gigabit Ethernet links, all the other modules must be upgraded at the same time where they mesh together. Therefore, upgrades and changes are required everywhere. This approach is in contrast to using a dedicated Layer 2 or Layer 3 core to interconnect the distribution modules.

Partial Mesh—Small Campus Design

Figure 6: Partial-Mesh Campus Backbone

The partial-mesh backbone is similar to the full-mesh backbone with some of the trunks removed. Figure 6 depicts the partial-mesh campus backbone. The partial-mesh backbone is appropriate for a small campus where the traffic goes predominantly to one centralized server farm module. Place high-capacity trunks from the Layer 3 switches in each building directly into the Layer 3 switches in the server farm.

One minor consideration with the partial-mesh design is that traffic between client modules requires three logical hops through the backbone.
