4 Concept of Label-Switching: The Basis of ATM and MPLS




Labelling schemes can alternatively be completely arbitrary. As long as the sequence of relays represented by {in-port, label} to {out-port,

new-label} at each node is written coherently, the label values themselves can be completely arbitrary tags. The special significance of the

labeling scheme used for the example will become more apparent later when we begin to consider actual SONET/SDH timeslots or

DWDM circuit establishment under generalized MPLS (GMPLS). In those cases the label sequence becomes a specification of literally

what time slots, wavelengths or fibers we want cross-connected to realize a hard physical circuit, rather than strictly just a packet or cell

relay function such as a label switched path.

Table 1-5. Label swapping table entries to effect the paths in Figure 1-13

Path 1 (route 1 = {port A, label E}):

    in port         out port        new label
    (dest = F)      to A            E
    from B          to E
    from A          to F
    from E          e (=egress)

Path 2 (route 2 = {port C, label D}):

    in port         out port        new label
    (dest = F)      to C            D
    from B          to D
    from C          to E
    from D          to F
    from E          e (=egress)

Another important point to observe is that from the point of view of the entry node B in the first example, the label E on port A is completely

equivalent to a single tag for "the route to node F." Since the mapping between labels is constant at each node, the complete path is

determined by the initial label (and port) value. Node B need not know any other details; anything it puts on port A with label E will egress

at node F. This illustrates the sense in which a label-switched path truly establishes a dedicated circuit or tunnel through the network

between the respective end-node pairs. In fact in the example with two different paths between nodes B and F, node B could consider that

it has two separate permanent circuits that it can use to send traffic to node F: route 1 = {port A, label E}, route 2 = {port C, label D}. A pair

of such virtual circuits or label-switched paths can be used in practice for load balancing over the two routes, or as a form of protection.
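The chain of {in-port, label} to {out-port, new-label} relays described above can be sketched as a simple lookup simulation. The topology, port names, and label values below are illustrative placeholders, not the actual entries of Figure 1-13:

```python
# A minimal sketch of label switching. Each node's swap table maps
# (in-port, label) -> (out-port, new label); topology, port names, and
# label values are illustrative, not the actual entries of Figure 1-13.

tables = {
    "A": {("from B", "E"): ("to E", "F")},
    "E": {("from A", "F"): ("to F", "e")},
    "F": {("from E", "e"): ("egress", None)},
}

# Physical adjacency: sending on "to X" arrives at X on the matching in-port.
links = {
    ("B", "to A"): ("A", "from B"),
    ("A", "to E"): ("E", "from A"),
    ("E", "to F"): ("F", "from E"),
}

def trace(start_node, out_port, label):
    """Follow the chain of label swaps fixed by an initial {port, label}."""
    hops = [start_node]
    node, port = start_node, out_port
    while (node, port) in links:
        node, in_port = links[(node, port)]
        hops.append(node)
        port, label = tables[node][(in_port, label)]
        if port == "egress":
            break
    return hops

# Node B chooses only the initial {port, label}; the rest of the path
# is determined entirely by the swap tables written along the route.
print(trace("B", "to A", "E"))  # ['B', 'A', 'E', 'F']
```

Note that node B appears nowhere in the swap tables beyond its initial choice: the path is wholly encoded in the downstream tables, which is exactly the "dedicated circuit" property discussed above.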


Label-switching was adopted for both ATM and MPLS out of a fundamental need for a circuit-like logical construct for many networking

purposes. Pure datagram flows cannot be controlled enough to support QoS assurances, and effective, fast schemes for restoration are greatly enabled by the manipulation of either physical or logical circuit-like quantities, rather than by redirection of every single packet or

cell through conventional IP routing tables. The reason failure recovery is faster within a circuit-oriented paradigm is ultimately the same

reason that basic delay, throughput and loss rates of cell or packet transport are improved by switching as opposed to routing.

Routing involves determining a next-hop decision at every node based on the absolute global destination (and sometimes source and

packet type) information of the packet in conjunction with globally-determined routing tables at each node. This is fundamentally a more

time-consuming and unreliable process than label-switching. A conventional IP router has to perform a maximum length address-matching

algorithm on every packet and rely on continual topology updates to ensure a correct routing table entry for every possible destination IP

address in the administrative area. The maximal length aspect of the address-matching refers to the fact that a packet's destination

address may produce no exact match in the routing table. In such cases the packet is forwarded toward the subnetwork with the maximal


partial address match.

In label switching, however, the next hop decision is made with entirely local information established previously in an explicit connection

setup process. The local information is the label swapping tables in an LSR, or simply the physical connection state between input and

output timeslots or wavelengths in a cross-connect. In a physical fiber or metallic switch, or in a timeslot-changing switch, or a wavelength

cross-connect, the switching information (i.e., the outport and next timeslot or wavelength) is stored in the physical connection state. In an

LSR or ATM switch, the forwarding port and next-label are stored in tables that are typically much smaller than a full IP routing table, and

operate on the basis of an exact label match, which is fundamentally faster, more autonomous, and more reliable than IP datagram routing.
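The contrast between the two lookup disciplines can be illustrated with a toy example. The prefixes, ports, and labels are invented, and real routers use tries or TCAMs rather than a linear scan:

```python
# Contrast of the two lookup disciplines. The prefixes, ports, and labels
# are invented; real routers use tries or TCAMs, not a linear scan.
import ipaddress

routes = {  # destination prefix -> out-port
    "10.0.0.0/8": "port 1",
    "10.1.0.0/16": "port 2",
    "10.1.2.0/24": "port 3",
}

def longest_prefix_match(dst):
    """Conventional IP forwarding: find the most specific matching prefix."""
    addr = ipaddress.ip_address(dst)
    best = None
    for prefix, port in routes.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, port)
    return best[1] if best else None

# Label switching replaces the scan with a single exact-match lookup:
label_table = {17: ("port 3", 42)}  # in-label -> (out-port, new label)

print(longest_prefix_match("10.1.2.9"))  # port 3
print(label_table[17])                   # ('port 3', 42)
```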

1.4.1 Multi-Protocol Label Switching (MPLS)

In MPLS, a relatively short "label" replaces the full interpretation of an IP packet's header as necessary under conventional forwarding. In

traditional IP forwarding, packets are processed on the basis of the global destination address at every router along the path. This is also

known as hop-by-hop routing. In contrast, MPLS encapsulates IP packets with a label when the packet first arrives into an MPLS routing

domain. The MPLS edge router will look at the information in the IP header and select an appropriate label for it. The label selected by the

MPLS edge router can be based on QoS and explicit routing considerations, not just the destination address in the IP header. This is in

contrast to conventional IP routing, where the destination address will determine the next hop for the packet. All subsequent routers in the

MPLS domain will then forward the packet based on the label, not the IP header information. Then, as the labeled packets exit the MPLS

domain, the edge router through which they leave will remove the label. The value of the label usually changes at each LSR in the path as

the incoming label is looked up and swapped for another. Thus labels only have meaning between neighboring nodes. With this, MPLS

can be used to forward data between neighboring nodes even if no other nodes on the network support MPLS.

MPLS uses two categories of label-based routers: label edge routers (LERs) and label switching routers (LSRs). An LSR is used in the

core of an MPLS network and participates in the establishment of LSPs and high-speed switching of flows over existing LSPs. Notably

many LSRs are implemented with high capacity ATM label-switching switch hardware as their core with a conventional router added to

participate in the LSP establishment protocol and to perform routing functions for the residual traffic of a normal IP datagram nature (i.e.,

packet traffic that does not constitute significant flows per se to warrant establishment of an LSP). The LER supports access to the MPLS

core network. Edge routers are classified as either ingress or egress. An MPLS ingress router performs the initial routing lookup and LSP

assignment for launching the packets into the fabric of logical pipes through the MPLS core. The egress LER removes labels and performs

a further routing table lookup to continue the packet on its way under conventional routing. LERs can have many types of access interface

(frame relay, ATM, Ethernet for example).

MPLS allows a packet flow to be fully "routed" only once, at the entry to an IP network, and thereafter label-switched all the way to its

egress router. Aside from the fact that the packet flows remain asynchronous and statistically multiplexed, in all logical regards, once the IP packets for that destination are encapsulated with a label and put in the appropriate outgoing queue at the ingress node, it is as if they have been dropped into a dedicated pipe or tunnel direct to the destination. The train of switching relationships written in the

label-swapping tables en route creates a hard-wired sequence of relays that is fully amenable to hardware implementation at each node,

and is referred to as a Label Switched Path (LSP). This improves router network throughput and delay performance compared to routing

every packet at every node en route based on its ultimate destination IP address.

In assigning IP packets to LSPs, an LER makes use of the concept of forwarding equivalence. A Forwarding Equivalence Class (FEC) is a

set of IP destination addresses that, from the standpoint of the given entry node, all share the same local routing decision. In conventional

IP forwarding, each router needs to maintain FEC tables to indicate for each output port the set of IP destination addresses that should be

sent out that port. In MPLS the FEC tables need be kept only at LERs and record for each IP address what LSP to put it on (i.e., what

initial label and outgoing port to assign it to). FECs at the LERs are based on class of service requirements and on IP address. The table

at the LER that lists the initial label and outgoing port for each FEC is called its label information base (LIB).
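An LER's FEC-to-LSP mapping can be sketched as a small LIB lookup. All prefixes, labels, port names, and class-of-service names below are hypothetical:

```python
# Sketch of an LER's label information base (LIB): a FEC, here a destination
# prefix plus a class of service, maps to an initial label and out-port.
# All prefixes, labels, port names, and class names are hypothetical.
import ipaddress

lib = {
    ("192.168.0.0/16", "gold"):        (101, "port A"),
    ("192.168.0.0/16", "best-effort"): (102, "port C"),
    ("172.16.0.0/12",  "best-effort"): (103, "port A"),
}

def assign_lsp(dst, cos):
    """Classify a packet into a FEC; return its initial label and out-port."""
    addr = ipaddress.ip_address(dst)
    for (prefix, klass), (label, port) in lib.items():
        if klass == cos and addr in ipaddress.ip_network(prefix):
            return label, port
    return None  # no matching FEC: fall back to conventional routing

# Same destination, different class of service -> different LSP:
print(assign_lsp("192.168.5.7", "gold"))         # (101, 'port A')
print(assign_lsp("192.168.5.7", "best-effort"))  # (102, 'port C')
```

The two entries for the same prefix illustrate the point in the text that FECs at the LER reflect class of service as well as destination address.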

For more detailed background on MPLS the book by Davie and Rekhter [DaRe00], and references therein, are suggested. The relevance

here is twofold. In Chapter 2 we see how MPLS is extended into Generalized MPLS (GMPLS), with applications in the automatic

provisioning of service paths through a transport network at any level such as DS-1 through to whole lightpaths. As with ATM, MPLS is

also a transport technique that involves circuit-oriented logical constructs for the transport of stochastic packet flows, and hence is also

amenable to the controlled oversubscription design strategy of Chapter 7.


1.5 Network Planning Aspects of Transport Networks

1.5.1 Modularity and Economy-of-Scale

SONET provides an example of a hierarchy of standard rates and formats on which multiplexing and transmission system products can be

based. Some industry experts expect that an SDH-like progression (4, 16, 64…) of wavelengths per fiber may also arise for DWDM

transmission. The adoption of a discrete set of modular capacities aids both users and system designers. It allows specific technology

choices (such as frequency spacing, carrier generation, EDFA noise, gain, bandwidth, etc.) to be combined and optimized to realize a

product offering at each price-performance point that the market needs. Some vendors may specialize in the market for 16-λ systems while another may use quite different technology to specialize in, say, 768-λ systems. As a result, the actual installed capacity in a SONET (or an

optical network as well) is inevitably modular in nature. The relevance to network design is that although we may often compare design

approaches on the basis of total unit-capacity requirements, we have to keep in mind that the real capacities and costs will be stair-step

functions of cost versus capacity. Moreover, cost rarely depends linearly on capacity, so there is an economy-of-scale effect. In

detailed studies it can be important to consider the modular and nonlinear cost structure of the actual equipment to be specified. As an


example, consider a bidirectional 48-λ DWDM system. (This discussion is adapted for use here from [DoGr00].) The cost structure will reflect all the following physical items:

a. One fiber pair and pro-rated cost of right-of-way acquisition, duct and cable, installation, and repeater/amplifier housing costs

b. 48 electrical (transmit) channel interfaces

c. Generation and modulation of 48 optical carriers

d. Optical WDM multiplexor

e. In-line optical amplifiers with bandwidth and power capabilities suitable for 48 wavelengths, every 60 to 100 km typically

f. An average distance-amortized cost for 48 regenerators, every 300 km, say

g. Optical WDM demultiplexor

h. 48 electrical (receive) channel interfaces

i. Redundant common power, maintenance processor, rack, cabling, and equipment bay installation costs

Items (a), (d), (e), (g), (i), and a large part of (f), are one-time costs required for the system's existence, even if just one channel is

operated. Such "get started" cost items are traditionally called the "common equipment" in telecommunications and are a major contributor

to the nonlinear cost structure. To turn up the first channel, all the common equipment must be present. In principle, all subsequent

electrical and optical per-channel circuit packs can then be provisioned on a one-by-one basis as needed, but if growth is high or

on-demand path establishment is desired, then all of these may also be preprovisioned. In the limit of full preprovisioning there is a large

step in cost to establish the system, but no further cost depending on how many channels are actually in use. This is the modularity

aspect. The economy of scale aspect is that, in this example, a 192-λ system may be only two or three times the cost of the 48-λ system (not

four times).

If the per-channel costs are actually equipped only on an as-needed basis, then we actually have two levels of capacity-cost modularity.

Say for example that for current working requirements a given facility requires 37 wavelength channels. Then the common equipment for a

standard 48-λ system may be required, but only 37 of the per-channel costs need actually be incurred. Thus the cost structure for transport

capacity actually has at least two scales on which it is modular. A third sense in which capacity cost is modular arises if we recognize the "get-started" cost of having the facility right-of-way in the first place. This is usually only a factor in considering topology design itself.
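The two-scale modular cost structure can be sketched numerically. The module size matches the 48-λ example above, but the cost figures are arbitrary illustrative units:

```python
# Sketch of the two-scale modular cost structure described in the text:
# a one-time "common equipment" cost per installed module plus per-channel
# equipment costs. The cost figures are arbitrary illustrative units.
import math

MODULE_SIZE = 48      # channels (wavelengths) per DWDM system
COMMON_COST = 100.0   # get-started cost per module (fiber, amps, mux/demux)
CHANNEL_COST = 2.0    # per equipped channel (transmit + receive interfaces)

def cost(channels_in_use, fully_preprovisioned=False):
    """Total cost: common equipment per module, channels as equipped."""
    modules = math.ceil(channels_in_use / MODULE_SIZE)
    equipped = modules * MODULE_SIZE if fully_preprovisioned else channels_in_use
    return modules * COMMON_COST + equipped * CHANNEL_COST

# 37 working channels require one 48-channel system's common equipment,
# but only 37 per-channel costs if equipped on an as-needed basis:
print(cost(37))        # 174.0
print(cost(37, True))  # 196.0 (fully preprovisioned 48 channels)
```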


There are several cases where modeling a single scale of modularity can be justified, however, corresponding to the full capacity of each

basic module size. First, if the growth rate is high and the cost of dispatching maintenance crews to populate new channel cards

one-by-one is also high, then it may make more sense to fully equip the system when installed. Secondly, to support truly dynamic path

provisioning the systems may be fully preprovisioned when installed. Additionally, in any case where the average system utilization is high

following some planning design study, then the "fully equipped" model also tends to be justified in its own right. Another way to limit the

complexity to a single modularity scale while recognizing only partial channel equipping is to approximate the average fill factor of

transmission systems and lump the total per-channel costs of this average fill level into the module cost.

A notation used to represent economy of scale is "NxYx," meaning that an N-times increase in capacity results in a Y-times increase in cost. For instance, 3x2x means that a tripling of capacity results in a doubling of cost. "3x2x" is fairly characteristic of mature SONET transmission technology over the OC-24 to OC-192 range. Similar or even

stronger economy of scale effects may characterize mature DWDM transmission systems.
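If the NxYx relationship is modeled as a power law, the implied cost exponent can be computed directly. The power-law form is a modeling assumption layered on top of the notation, not something the text prescribes:

```python
# If the "NxYx" economy-of-scale rule is modeled as a power law,
# cost = k * capacity**alpha, then alpha = log(Y) / log(N). The power-law
# form is a modeling assumption layered on top of the notation.
import math

def scale_exponent(n, y):
    """Cost exponent implied by 'an N-times capacity increase costs Y times'."""
    return math.log(y) / math.log(n)

alpha = scale_exponent(3, 2)   # 3x2x: tripling capacity doubles cost
# Implied relative cost of quadrupling capacity (e.g., OC-48 -> OC-192):
print(round(alpha, 3), round(4 ** alpha, 2))  # 0.631 2.4
```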

1.5.2 Fiber Route Structures

In transport network planning we refer to the fiber route structure on which physical facilities can be built as the network graph. This is the

actual set of rights-of-way on which permission to lay fiber cables is in hand. Several recently built networks are based on rights-of-way

obtained from a cooperating, co-owned or defunct power, gas, or railway utility and so inherit some of the topological properties of those

networks. Typical transport network graphs tend to have an irregular mesh interconnection pattern with an average number of

geographically diverse rights-of-way at junction sites that varies from two to at most about seven at any individual node. This measure of

nodal route-diversity is called the span-degree or just the "degree" of a node. The corresponding average over all nodes is also referred to

as the network average nodal degree, d̄.

In some North American networks d̄ may be as low as 2.3, indicating a quite sparse topology. A degree of two at every node is the minimum possible in any network that has a fully restorable topology. In contrast, some European networks have d̄ as high as 4.5. Figure

1-14 allows visualization of these two extremes, using examples from published data on real networks. Note, however, that even the most richly connected transport graphs are quite sparse relative to truly random graphs or full mesh graphs. Transport network graphs tend to look neither like fully connected networks nor like random graphs where any node may with equal probability be connected to any other. In addition to the characteristic d̄ in the range of 2 to 4.5, real networks tend to show a high "locality" in that nodes tend to be connected to

other nodes that are near them in the plane. Related to this locality characteristic is planarity. Typical transport graphs are often strictly

planar or have only a few non-planar spans. (Planarity is treated further in Chapter 4.) Metropolitan area networks in both North America

and Europe tend to be similar to the more richly connected European national backbone example in Figure 1-14(b), whereas long-haul

networks in North America tend to be the most sparse. In contrast to the service layer networks realized over the transport network, the

transport graph is typically much sparser. Sometimes statements about nodal degree cause confusion because the reference is to

different layers. For instance, viewed at the STS-3c connectivity layer, a given location might logically have a degree of 160. But the same

node could be a degree-2 site in the facilities graph. Unless explicitly stated otherwise in this book, any reference to the network graph or to a nodal degree refers to the physical layer facilities graph.

Figure 1-14. The range of fiber network topologies: (a) a sparse North American long-haul route-structure (d̄ = 2.2), (b) a more richly connected European fiber backbone route structure (d̄ = 4.4), (c) another European regional network.
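The span-degree of each node and the network average nodal degree are easily computed for any facilities graph; the small graph below is invented for illustration:

```python
# Span-degree of each node and the network average nodal degree for a small
# invented facilities graph; spans are the edges between adjacent nodes.
from collections import defaultdict

spans = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C")]

degree = defaultdict(int)
for u, v in spans:
    degree[u] += 1
    degree[v] += 1

# Handshake lemma: average degree = 2 * (number of spans) / (number of nodes)
d_bar = 2 * len(spans) / len(degree)
print(dict(degree), d_bar)  # {'A': 3, 'B': 2, 'C': 3, 'D': 2} 2.5
```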



1.5.3 Network Redundancy

Another important characteristic of a transport network, and a measure of architectural efficiency for a survivable transport network, is its

redundancy. Redundancy is always a consideration in conjunction with survivability design because it indicates how "lean" or efficient the

design is in its use of spare capacity. Redundancy is in a broad sense the price one has to pay to achieve a target level of restorability or

other measures of survivability. Capacity efficiency is defined as the reciprocal of the redundancy. It is ultimately cost that matters and this

is not always directly proportional to capacity-efficiency, however. High redundancy in ring-based metro networks does not always mean

they are economically inferior to a less redundant mesh network design, for example, because nodal costs may be lower with ring ADMs

than with mesh cross-connects. But what is always true is that to serve and protect a common base of demands, where all other factors

are the same, a more redundant design requires more capacity and hence does cost more. Especially in long-haul networks, cost does

correlate with capacity.

Often in research or basic comparative planning studies, exact cost data is also not available, so redundancy is used as a surrogate for

cost in comparing alternatives and as an intrinsic measure of network design efficiency. It should also be noted that although redundancy

is usually computed from capacity quantities, each unit of capacity on a span has related nodal equipment at its ends that is either a

per-channel cost to start with or is a common cost that can be pro-rated to each channel of capacity. So redundancy is perhaps the single

most generic surrogate that we have for actual cost effectiveness of an overall survivable network design.

The logical redundancy of a network is defined as the pure ratio of spare to working channel counts and does not take any distance effects

into account.

Equation 1.2

$$\eta = \frac{\sum_{m \in S} s_m}{\sum_{m \in S} w_m}$$

where S is the set of spans in the network, w_m is the number of units of working capacity on span m, and s_m is the corresponding number of spare capacity units on span m. Spare capacity may be the protection fibers of APS or ring systems or designed-in spare channels for restoration in a mesh network. Note that the definition of redundancy that we use differs from the other possible definition, which is the ratio of spare to total capacity present, i.e., spare / (working + spare). It is sometimes important to clarify which definition of redundancy is being used. The geographical redundancy is of the same form as Equation 1.2, but each s_m, w_m term is weighted by a coefficient representing either the distance of a span or the cost per channel on the span.

Both of these redundancy measures are somewhat idealized in that they allow working and spare capacities to be present in any assumed

quantity. But as discussed, the capacities of real transmission systems are modular. In such cases the modular redundancy is defined as:

Equation 1.3

$$\eta_{\text{modular}} = \frac{\sum_{m \in S} (C_m - w_m)}{\sum_{m \in S} w_m}$$

where C_m is the total installed (modular) capacity on span m. In some contexts the non-working capacity of modules on a span is viewed as containing two sub-parts: (C_m - w_m) = s_m + o_m, where o_m is called the provisioning overhead due to modularity. The modular geographic redundancy follows the same pattern as above: it is given by Equation 1.3, but with every capacity term again weighted by the distance or cost of span m.
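Equations 1.2 and 1.3 can be applied to a small set of invented span data:

```python
# Logical, geographic, and modular redundancy (Equations 1.2 and 1.3)
# computed over invented span data: (w_m, s_m, C_m, length_km) per span m.
spans = [
    (30, 18, 48, 120.0),
    (40, 20, 96, 250.0),
    (10, 14, 48,  80.0),
]

logical = sum(s for _, s, _, _ in spans) / sum(w for w, _, _, _ in spans)
geographic = (sum(s * d for _, s, _, d in spans)
              / sum(w * d for w, _, _, d in spans))
modular = (sum(c - w for w, _, c, _ in spans)
           / sum(w for w, _, _, _ in spans))

print(round(logical, 3), round(geographic, 3), round(modular, 3))
# 0.65 0.575 1.4
```

Note how the modular redundancy exceeds the logical redundancy: the unused headroom of the installed modules counts against the design even though no spare channels were deliberately placed there.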

Note that when using redundancy as a figure of merit for comparison of survivability techniques, it is only meaningful to compare

alternatives in which the working demands are routed in an identical, or at least capacity-equivalent, way. In many problems this is the

case because demands follow shortest paths in all alternatives and the only comparative questions are about the spare capacity design to

protect such working flows. (These are later called "non-joint" design problems and redundancy is always a meaningful comparative

measure.) In cases where the working routes and working capacity may be manipulated or altered as well as spare capacity as part of the

overall design strategy, the simple redundancy measures above can be misleading. The reason is that if one increases working route

lengths, more total capacity may be required, but it will appear that redundancy per se is lower. For this reason the standard redundancy

of a network is also defined as:

Equation 1.4

$$\eta_{\text{std}} = \frac{\sum_{m \in S} s_m + \left(\sum_{m \in S} w_m - W_0\right)}{W_0}$$

where W_0 is the total working capacity needed to support shortest-path routing of all demands (i.e., the minimum possible working capacity of a network) and the sum of the w_m is the actual amount of working capacity used to support working paths in the network. Any excess of working capacity above the minimum for shortest-path routing is thus equivalent to additional spare capacity used in the design. This

allows meaningful comparison between more general design strategies (later called "jointly" optimized designs). Under standard

redundancy measures any two designs serving the same demands on the same graph can be fairly compared.
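The standard redundancy of Equation 1.4 can be illustrated with assumed capacity totals:

```python
# Standard redundancy (Equation 1.4) with assumed capacity totals: working
# capacity in excess of the shortest-path minimum is counted as spare.
w_min = 80    # total working capacity under shortest-path routing of demands
w_used = 92   # actual working capacity in the design (some longer routes)
s_total = 40  # total spare capacity in the design

standard = (s_total + (w_used - w_min)) / w_min
print(standard)  # 0.65
```

A design that lengthened its working routes to need less spare capacity would see that trade reflected honestly here, whereas the simple spare-to-working ratio would make it look misleadingly lean.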

1.5.4 Shared-Risk Entities and Fault Multiplication


Generally, network operators who own and operate their own physical facilities ("facilities-based operators") pay considerable attention to

the physical separation or "diversity" of cable routings. Particular efforts are made to ensure that important backbone fiber cables are

routed over physically disjoint ducts, pole lines, or buried trenches, or at least on opposite sides of the same streets. The point is to

mitigate the impact that any one physical structure failure might have in terms of the number of transmission paths it brings down. It is

tempting to think that this is a new concern only associated with recent emphasis on automatic network-level rerouting for survivability.

Physical cable diversity has, however, been a general historical concern. In some senses diversity was even more important without an

embedded network restoration mechanism because the outage would last as long as the cable repairs. Care would be taken then that

important services, backbone transmission paths, and key trunk groups would be split and routed over diverse physical cable or

microwave routes. Telephone central office buildings typically have two to four separate underground cable vault entry points on opposite

sides or all compass points of the building footprint, precisely for diversity.

In more recent practice, where spare capacity is explicitly designed into the network for survivability, it is important to take the existence of

any "shared risk entities" into account in the capacity design. A shared risk group (SRG) or shared risk link group (SRLG) defines a set of

transport channels or paths that have something in common: they all fail together if a certain single item of physical equipment fails

[StCh01] [LiYa02] [SeYa01]. The most common ways in which multiple different paths share a risk of simultaneous disruption from a single

common physical failure are:


The paths share the same cable (or fiber if on WDM carriers).


The paths traverse separate cables that share a common duct or other structure such as a bridge crossing.


The paths are routed through a common node.

That paths should share the same cables extensively throughout the network is inevitable and expected. But in certain types of path

protection, the set of end-to-end paths that share each cable must be grouped together for failure planning purposes. Such groups of paths

are called shared risk link groups (SRLGs). The importance to path restoration schemes is that each SRLG that exists defines a specific

set of end-to-end path failures that must be simultaneously restored. More generally any failure of a cable, a duct, or a node defines a set

of network paths that are simultaneously disrupted. Each such physical element failure defines another SRLG. The SRLG concept is thus

the most encompassing category of shared risk failure scenarios. In practice, because single cable cuts are still the most frequent failure

event, the set of end-to-end paths affected by single cable cuts is also called the "default set of SRLGs."
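Grouping paths by the cables they traverse yields this default set of SRLGs directly; the path routings below are invented for illustration:

```python
# Build the "default set of SRLGs": for each cable, the set of end-to-end
# paths that fail together if that cable is cut. Routings are invented.
from collections import defaultdict

paths = {
    "P1": ["cable AB", "cable BC"],
    "P2": ["cable AB", "cable BD"],
    "P3": ["cable BC", "cable CE"],
}

srlgs = defaultdict(set)
for path, cables in paths.items():
    for cable in cables:
        srlgs[cable].add(path)

# A cut of "cable AB" forces simultaneous restoration of P1 and P2:
print(sorted(srlgs["cable AB"]))  # ['P1', 'P2']
```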

Another class of restoration schemes are span-oriented, however, and do not need to consider or know the end-to-end details of all the

paths affected, because these schemes recover from the failure by replacement routing directly between the end nodes of the failed span

(i.e., the nodes adjacent to a failed span or node). In this context, isolated but complete failure of an individual cable span connecting

adjacent nodes is the basic single-fault model. The aspect of shared risk that is of concern under span-oriented restoration moves up from

planning oriented on individual links within paths to planning that is oriented to shared risk between cables that define the spans between

cross-connect nodes. We refer to these as shared risk span groups (SRSGs). Whereas SRLGs are strictly quite numerous (because each

cable segment defines at least one SRLG), there may be many fewer SRSGs in a network because each SRSG represents an unintended

lapse in physical layer cable or duct diversity. In other words, an SRSG is a situation where entire cable spans that are at least notionally

separate are actually routed together over some common structure. This is different than the routine and numerous forms of shared risk

between all end-to-end paths that traverse the same cable together. The classic case of an SRSG is that of a bridge crossing near a node

that is common to two spans. All of the transmission channels in both spans share a common risk of failure on the bridge crossing,

although to network planners the logical view may be of two separate cables, which survivability planning would normally consider only as separate individual failure scenarios. This type of physical layer diversity lapse may be relatively rare, but the importance of an SRSG in

network planning is that it allows for the simultaneous failure of what are otherwise nominally independent spans of the transport graph. In

other words, an SRSG sets up a situation where one physical failure can multiply into two or more logical span failures.

But certain fault multiplication effects can also arise without involvement of an SRSG. Figure 1-15 illustrates a true SRSG on the left, and a

nodal bypass arrangement, often referred to as express routes or glass-throughs, on the right. The nodal bypass sets up a particular type of logical dual-failure situation. In the nodal bypass, one or more paths bypass node B but follow the same ducts as other channels on

spans A-B and B-C. Such bypass arrangements are usually established when the demand (in the example, between nodes A and C)

warrants its own dedicated fiber(s) but a separate geographical route is not available. In this case the A-C express fibers can be well

enough loaded that they can be dedicated to the A-C direct logical span, saving OEO termination costs at the intermediate node B. But

this sets up certain logical dual failure situations. For instance, if duct A-B is cut, spans A-B and A-C appear to fail simultaneously.

Figure 1-15. Shared risk span group in the physical layer and closely related concept of

physical-to-logical fault escalation.


Perhaps because it might be considered sensitive data, or because it is hard for the network operator to determine, it is difficult to obtain

network models that are complete with data on SRLGs. Getting rid of span SRLGs has been a historical preoccupation of network

planners regardless of the survivability strategy because SRLGs inherently complicate and undermine any scheme for protection or

restoration, even manual patch recovery or 1+1 APS. It is an embarrassment when it is found out (now and then) that some ultra-important

service such as a Defense Department backbone link, planned on a diverse 1+1 basis, in fact shares the same bridge crossing (for

example) at some unexpected locale in the middle of the country. An interesting anecdote is about a pair of "mission critical" telemetry

links used to monitor high pressure gas pipelines in the Canadian North. For 3,000 km these 1+1 redundant signal paths were indeed physically diverse, at considerable expense to establish and audit the path realization. The story goes, however, that after

some years of operation it was discovered that 100 meters from the company operations center in a city 3,000 km to the south both signal

feeds crossed a bridge together. So the problem of SRLGs is not new, although the term is recent. The issue has existed for decades and

has traditionally haunted special-services efforts to establish 1+1 APS arrangements with confidence in the intended diversity in the

physical layer. Even when operators lease carrier services from other service providers to try and assure diversity, SRLGs can manifest

themselves. An outcome from Hurricane Hugo in the US was the apparent discovery that up to five different network operators had

transport spans crossing one particular, seemingly unimportant, small bridge in the rural southwest. This SRLG was discovered when the

bridge was washed out by the flood surge!

The effect of SRSGs and the logical dual-failure scenarios that arise from bypass are both studied further in Chapter 8. In Chapter 3 the related but more general concept of dependent edges is discussed in terms of its implications for the choice of level (physical, logical, or services layer) at which various survivability principles can be applied.

1.5.5 Demand Patterns

We previously described how various service layer traffic types are combined into standard "demand units" for delivery via the transport

network. Thus, different service layer architectures and traffic handling strategies influence the pattern of demand requirements seen by

the transport network. When many nodes have a strong demand flow to (from is always implied as well) one or two specific nodes the

pattern is described as a "hub-like" demand pattern. In a metropolitan network this propensity can arise when telephony traffic from many

sites is trunked to just one or two major centers for switching. In addition there may be only one or two toll-connecting offices to access the

long-distance switching network from each metro area. Within the metro area network it therefore appears that there is a strong demand to

these "hub nodes." The establishment of regional hubs for grooming also causes a hub-like demand pattern to arise from bringing many

DS-3 or STS-1 access connections back to a W-DCS for grooming into well-filled OC-48s (for example) for long haul transport. In the

limiting case, this leads to the notion of a perfectly hubbed demand pattern in which all nodes of a region exchange demands only with the

hub site(s).


As illustrated in Figure 1-12, such hub sites represent each region on the backbone inter-regional transport network. At this level the

regional hub sites are typically exchanging demand at the OC-48 or OC-192 level in a much more uniform random mesh-like pattern. In

practice, there may be detailed historical, demographic and economic explanations for the exact amount of demand exchanged between

each regional hub node, but the overall effect is that of an essentially random distribution of demands on each node pair. The random

mesh type of demand pattern may also arise in dense metropolitan area networks where the community of interest among all nodes is essentially equal and a hubbing architecture is not used. Calls and other forms of communication from each node are as likely to go to any one node as to any other in the rest of the network.

A historically recognized effect for long haul demand is referred to as a "gravity-like" inverse-distance effect on demand intensities. The

intensity of voice communication in particular depends proportionately on the size of the two centers and (at least historically) inversely on

the distance between the population centers. There is a classic anecdote of the Severn Bridge in Britain that illustrates this tendency. The Severn Bridge was a major construction effort to span a large estuary that previously separated two cities. The effect of the bridge was to reduce the travel distance between the two communities from hundreds of miles to only a few. Telephone calling volumes between these centers went up markedly after the bridge was built, giving experimental evidence for the inverse-distance effect when all other factors were equal. When the bridge brought the two nodes closer together, their mutual attraction (and hence mutual traffic) rose, just as a gravity-like model would predict.
Of course, any real situation will reflect the individual historical and demographic circumstances affecting demand. In general, there will

always be some blend of uniform, hubbing, and gravity type effects in a real demand pattern seen at the transport layer. Recognition of

these three constituent pattern types can allow us to numerically model and systematically vary demand patterns for a variety of purposes

in network planning or research studies.

Numerical Modeling of Demand Patterns

Often network planning or research studies have to be conducted without the luxury of a known or reliably forecast demand matrix.

Obtaining accurate forecasts of the real demand is a whole discipline of its own. In other contexts, however, one is conducting comparative

network planning or research studies and needs only a representative demand model on which to base comparative design studies. While

there is an infinite number of demand patterns that could be tested, the practical researcher or planner has to use only a few test-case

demand patterns to try to reach generally true comparative conclusions about alternatives. A variety of different test-case demand patterns

can be made up from the following models:


Pure logical mesh with equal number of demands on all node pairs


Random distributed demand: uniform random number of demands on each pair


Hub-like demand pattern


Dual hub demand pattern


Mutual attraction demand models


"Gravity" demand (mutual attraction with inverse distance dependency)


Weighted superpositions of the above component patterns
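As a concrete illustration (a hypothetical sketch, not taken from the text; the function names and the dictionary representation of the demand matrix are invented for this example), the simpler of these test-case patterns might be generated as follows:

```python
import random

def uniform_mesh(n, d=1):
    """Pure logical mesh: the same number of demand units on every node pair."""
    return {(i, j): d for i in range(n) for j in range(i + 1, n)}

def random_mesh(n, lo=1, hi=10, seed=0):
    """Uniform random number of demand units on each node pair."""
    rng = random.Random(seed)
    return {(i, j): rng.randint(lo, hi) for i in range(n) for j in range(i + 1, n)}

def hub_pattern(n, hubs, d_hub=10, d_other=1):
    """Hub or dual-hub pattern: strong demand on any pair involving a hub,
    a small constant demand between non-hub pairs."""
    return {(i, j): (d_hub if (i in hubs or j in hubs) else d_other)
            for i in range(n) for j in range(i + 1, n)}
```

Weighted superpositions can then be formed by summing several such matrices, pair by pair, with chosen weights.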

The first two models are self explanatory. The hub and dual hub patterns often characterize metro or regional area networks involving one

or two main hub sites. In general, the demand between a hub and a non-hub node may also be inversely proportional to the distance

between them. All other node pairs (i.e., between two non-hub nodes) exchange either a constant or uniform random number of demand

units. A numerical model that can be used is:

Equation 1.5


Equation 1.6

where d0 and l0 are demand and distance scale constants, respectively, li,hub is the distance from node i to the respective hub, and di,hub is the (integer) number of demands generated. The second part of the model generates a constant mutual demand between all other non-hub nodes.
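The equation images for (1.5) and (1.6) did not survive the text extraction. A plausible reconstruction, consistent with the variable definitions just given (this is an assumption about the intended form, with d1 an assumed constant for the non-hub pairs, not necessarily the book's exact notation):

```latex
% Demand from node i to the hub, inversely proportional to distance:
d_{i,\mathrm{hub}} = \left\lceil d_0 \, \frac{l_0}{l_{i,\mathrm{hub}}} \right\rceil \tag{1.5}

% Constant mutual demand between all other (non-hub) node pairs:
d_{i,j} = d_1 \qquad \forall \, i, j \ne \mathrm{hub} \tag{1.6}
```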

A principle that is believed to underlie real traffic intensities at long-distance scales is the concept of mutual attraction, independent of

distance. For example, New York City and Los Angeles are two of the most important centers in the U.S. New York City also exchanges traffic with Ottawa, but that traffic is lower because of Ottawa's smaller population and economic status compared to Los Angeles. Population data

can be one measure of this notion of importance in the real world, but to generate test-case demand patterns with the same overall

characteristics, "importance factors" can be assigned to nodes at random. Another approach is to view nodal degree as a surrogate for the

presumed demographic importance. The argument is that large centers will tend to have higher nodal degree. In either case, once the

"importance factors" Ii are assigned to each node i, the mutual attraction demand model is:

Equation 1.7
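The equation image for (1.7) was lost in extraction. A plausible form consistent with the surrounding description (an assumption, not necessarily the book's exact notation):

```latex
% Mutual attraction: demand proportional to the product of importance factors
d_{i,j} = \left\lceil d_0 \, I_i \, I_j \right\rceil \tag{1.7}
```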

The pure mutual attraction model is independent of distance between centers. The "gravity" demand model adds an inverse distance

dependency, partially offsetting the mutual importance effect. For example, extending our prior example to include Paris, we might agree

that Paris, New York and Los Angeles are all of the same "importance" class, but "distance" (including geographical, cultural, time-zone

offset, etc.) tends to attenuate the Los Angeles–Paris demand relative to the Los Angeles–New York demand. The fully general "gravity"

model produces a number of demands for each (i,j) node pair that responds to both importance and distance as follows:

Equation 1.8
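The equation image for (1.8) was lost in extraction. A plausible form consistent with the variables defined around it (an assumption, not necessarily the book's exact notation):

```latex
% Gravity model: mutual attraction attenuated by an inverse power of distance
d_{i,j} = \left\lceil d_0 \, \frac{I_i \, I_j}{l_{i,j}^{\,a}} \right\rceil \tag{1.8}
```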

where a is an exponent modeling the inverse-distance effect and d0 is a scaling constant set so that the individual demand quantities are in the intended range. Of course the name "gravity" suggests a = 2, but in practice, where this method has been compared to actual demand patterns, values of a much closer to one (e.g., 1.1 to 1.3) have typically been found to be more accurate.
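To make the discussion concrete, a small sketch of such a gravity-style generator (hypothetical code; the function name, the coordinate-based distance, and the random assignment of importance factors are all assumptions for illustration, not from the text):

```python
import math
import random

def gravity_demands(coords, importance=None, a=1.2, d0=1.0, seed=0):
    """Generate an integer number of demand units for each node pair,
    proportional to the product of nodal "importance" factors and
    attenuated by distance raised to the exponent a."""
    rng = random.Random(seed)
    n = len(coords)
    if importance is None:
        # assign importance factors at random when no demographic data exists
        importance = [rng.uniform(1.0, 10.0) for _ in range(n)]
    demands = {}
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.dist(coords[i], coords[j])
            demands[(i, j)] = round(d0 * importance[i] * importance[j] / dist ** a)
    return demands
```

Setting a = 0 recovers the pure mutual-attraction (distance-independent) model, while a near 2 gives the literal gravity analogy.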

There is evidence that as IP data volumes outpace voice, demand in real backbone networks is tending even more toward distance independence (i.e., a → 0). This may also be due in part to flat-rate (distance-independent) voice calling plans. A 2001 study of North American long-haul demand evolution showed that pair-wise total demand between nodes was almost uncorrelated with geographical distance. This can be seen in Figure 1-16(a). In a sense, with the Internet, all points are now an insignificant "distance" apart from a social or commercial standpoint. This gives increasing justification for network planning studies to use a distance-independent demand model.

Figure 1-16. Evolution of demand patterns to be almost independent of distance within a

national network implying that most nodal flow is also transiting traffic (adapted from [FrSo01]).


An implication of distance-independent demand is that most traffic at backbone network nodes tends to be transiting traffic. To illustrate,

Figure 1-16(b) shows the distribution of physical span distances between adjacent (i.e., directly connected) nodes in the same network to

which the data of Figure 1-16(a) pertain. Together, Figure 1-16(a) and (b) imply that at the average backbone node about 70% of the flow transits the site; on average only 30% originates (or terminates) there. (This is another compelling argument for the role of cross-connects in a transport layer: to pass such flow through at low cost and high speed, without any electrical payload manipulation at transiting nodes.)

Note that distance-independence does not imply that all demands are also equal in their magnitude. Figure 1-17(a) illustrates this with data

from an inter-exchange carrier network. The histogram shows the statistical frequency of various numbers of STS-1 equivalents of

demand between node pairs. (Data is only plotted for those node pairs exchanging a non-zero demand quantity.) About 90% of all

possible demand pairs exchange at least one STS-1. What the data show is that there tend to be three components to the overall demand model:

A fraction of node pairs exchanging no transport-level demands.

A large number of demand pairs exchanging demands that are distributed consistent with the pure attraction (product of

importance factors) model.

A small number of demand pairs exchanging relatively huge demand values lying far out on the tail of the overall distribution.

Figure 1-17. (a) Histogram of demand magnitudes in an inter-exchange carrier network, (b)

Histogram of the product of uniformly distributed "importance" numbers for comparison.
