There are basically two types of data transfer: With connectionless data transfer, data can be sent at any time and in any quantity from a source device to a target system without the path of the packets being determined in advance. Each interconnected network node (usually a router) decides on its own how to forward the data stream. Connectionless data transfer offers a high degree of flexibility, but no guarantee that the necessary resources will be available.

In contrast, with connection-oriented data transfer the path of the data packets is fixed from the start. The network nodes involved (generally switches) receive the corresponding forwarding information from the preceding station, until the packets have reached the target computer at the end of their path. In this way, the time-consuming routing process required for connectionless transfer is avoided, which considerably accelerates forwarding. It also allows for optimal control of available network resources and their distribution among individual participants. So-called multiprotocol label switching (MPLS) makes this approach usable for TCP/IP networks as well, even though these technically belong to the connectionless network category.

What is MPLS (multiprotocol label switching)?

In the mid-1990s, large communication networks carried a much higher proportion of voice communication (telephony) than data communication (internet). Telecommunication providers at that time still operated separate networks for the two transfer types, which was not only expensive but also failed to ensure comprehensive quality of service. The high-quality, connection-oriented communication networks stood in contrast to connectionless data networks, which lacked the necessary bandwidth. The introduction of the ATM protocol (Asynchronous Transfer Mode) largely solved this problem by allowing voice and data to be transferred over a shared infrastructure. But it was multiprotocol label switching, in the late 1990s, that provided a real solution for using the available bandwidth efficiently.

To accomplish this, MPLS relieved the overloaded routing systems: Instead of determining the optimal route of a data packet at each individual intermediate station, as before, the new method made it possible to pre-define paths that set a packet's way from the entry point (ingress router) to the exit point (egress router). The relay points (label switching routers) recognize these paths by evaluating labels that contain the appropriate routing and service information and are attached to the respective data packet. The evaluation takes place in suitable hardware (e.g. a switch) just above the data link layer (layer 2), while the time-consuming routing on the network layer (layer 3) is skipped.

Thanks to the generalized MPLS (GMPLS) extension, the technology originally developed only for IP networks is now also available for other network types, such as SONET/SDH (Synchronous Optical Networking / Synchronous Digital Hierarchy) or WSON (Wavelength Switched Optical Network).

How does multiprotocol label switching work?

The use of MPLS in IP networks requires a logical and physical infrastructure consisting of MPLS-capable routers. The labeling process operates primarily within an autonomous system (AS) – a collection of IP networks that are managed as a unit and connected via at least one common interior gateway protocol (IGP). Administrators of such systems are generally internet providers, universities, or international companies.

Before the individual paths can be built, the IGP in use needs to ensure that all routers of the autonomous system can reach one another. Then the end points of the paths, which are referred to as label switched paths (LSP), are defined. The previously mentioned ingress and egress routers usually sit at the entry and exit points of the system. Activation of the LSPs is then either manual, semi-automatic, or fully automatic:

  • Manual configuration: Each node that an LSP runs through needs to be individually configured; this approach is impractical for large networks.
  • Semi-automatic configuration: Only some intermediate stations (for example, the first three hops) need to be configured manually, while the rest of the LSP receives its information from the interior gateway protocol.
  • Fully automatic configuration: The interior gateway protocol determines the entire path in the fully automatic variant; no path optimization is achieved, though.

Data packets sent in a configured MPLS network receive an additional MPLS header from the ingress router. This header is inserted between the information of the second and third layers, which is referred to as a push operation. During the transfer, each hop involved exchanges the label for a customized version with its own connection information (e.g. latency, bandwidth, and destination hop) – this procedure is called a swap operation. At the end of the path, the label is removed from the packet as part of a pop operation.
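The push, swap, and pop operations described above can be sketched in a few lines of Python. This is only an illustrative simulation – the router names, label values, and packet structure are invented for the example, not part of any real MPLS implementation:

```python
# Minimal sketch of MPLS label handling along a label switched path (LSP).
# All labels and values are illustrative assumptions.

def push(packet, label):
    """Ingress router: prepend an MPLS label to the packet's label stack."""
    packet["labels"].insert(0, label)

def swap(packet, label_map):
    """Transit router: replace the top label using the router's own label table."""
    packet["labels"][0] = label_map[packet["labels"][0]]
    packet["ttl"] -= 1  # each hop decrements the packet's time to live

def pop(packet):
    """Egress router: remove the top label; ordinary IP routing takes over."""
    return packet["labels"].pop(0)

packet = {"dst": "203.0.113.7", "labels": [], "ttl": 64}

push(packet, 100)         # ingress router assigns the initial label
swap(packet, {100: 200})  # first transit router swaps 100 -> 200
swap(packet, {200: 300})  # second transit router swaps 200 -> 300
pop(packet)               # egress router removes the label
```

Note that each transit router only needs to know the mapping for its own incoming labels – this per-hop table lookup is what replaces the full layer 3 routing decision.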

The structure of multiprotocol label switching headers

MPLS extends the normal IP header with the so-called MPLS label stack entry, also known as the MPLS shim header. This entry is very short, with a length of 4 bytes (32 bits), which is why it can be processed quickly. The corresponding header, inserted between the layer 2 and layer 3 headers, looks as follows:

Bits 0-19   Bits 20-22   Bit 23   Bits 24-31
Label       TC           S        TTL

The additional 32 bits of the MPLS label stack entry add four pieces of information to an IP packet for the next network hop:

  • Label: The label contains the core information of the MPLS entry, which makes it the largest field with a length of 20 bits. As mentioned before, a label is only valid between two specific routers on the path, and is swapped accordingly when the data is forwarded to the next intermediate station.
  • Traffic Class (TC): Using the traffic class field, the header provides information for differentiated services (DiffServ). This field can be used for the classification of IP packets to guarantee service quality. For example, the 3 bits can tell the network scheduler whether a data packet is to be prioritized or may be subordinated.
  • Bottom of Stack (S): The bottom of stack flag defines whether the underlying transmission path is a simple path or whether multiple LSPs are nested. In the latter case, a packet can carry multiple labels grouped together in the so-called label stack. The bottom of stack flag then informs the router whether further labels follow or whether the entry is the last MPLS label in the stack.
  • Time to Live (TTL): The last 8 bits of the MPLS label stack entry indicate the lifespan of the data packet. In this way, it's possible to control how many routers the packet can pass through on its path (the limit is 255 routers).
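The bit layout above can be made concrete by packing and unpacking a label stack entry with simple bit operations. This is a sketch of the 32-bit field structure described in the table, not code from any MPLS implementation:

```python
# Pack/unpack the 4-byte MPLS label stack entry:
# label (20 bits), traffic class (3 bits), bottom-of-stack flag (1 bit), TTL (8 bits).

def pack_entry(label, tc, s, ttl):
    """Combine the four fields into one 32-bit integer."""
    assert 0 <= label < 2**20 and 0 <= tc < 8 and s in (0, 1) and 0 <= ttl < 256
    return (label << 12) | (tc << 9) | (s << 8) | ttl

def unpack_entry(entry):
    """Split a 32-bit label stack entry back into its fields."""
    return {
        "label": entry >> 12,
        "tc": (entry >> 9) & 0b111,
        "s": (entry >> 8) & 0b1,
        "ttl": entry & 0xFF,
    }

# With nested LSPs, several entries form a label stack;
# only the last (bottom) entry sets the S flag.
stack = [pack_entry(100, 0, 0, 255), pack_entry(200, 0, 1, 255)]
assert unpack_entry(stack[0])["s"] == 0  # more labels follow
assert unpack_entry(stack[1])["s"] == 1  # bottom of stack
```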

The role of multiprotocol label switching today

In the 1990s, MPLS helped providers with the rapid development and growth of their networks. The initial speed advantage for data transfer, however, has been pushed into the background by the new generation of high-performance routers with integrated network processors. As a procedure that can guarantee quality of service, though, MPLS is still used by many service providers today. This has to do with so-called traffic engineering – a process that deals with the analysis and optimization of data streams. In addition to the classification of individual data connections, the bandwidths and capacities of individual network elements are also analyzed. Based on the results, the data load can then be distributed optimally to strengthen the entire network.

Another important area of application is virtual private networks (VPN) – self-contained, virtual communication networks that use public network infrastructures like the internet as a transport medium. In this way, devices can be merged into one network without being physically connected to one another. There are two different types of such MPLS networks:

  • Layer 2 VPNs: Virtual private networks on the data link layer can be designed either for point-to-point connections or for remote access. Layer 2 logically serves the user of such a VPN as the interface for establishing a connection. The point-to-point tunneling protocol (PPTP) or the layer 2 tunneling protocol (L2TP) serve as the underlying protocols. This gives service providers the option of offering their customers SDH-like services as well as Ethernet.
  • Layer 3 VPNs: Network-based layer 3 VPNs represent a simple solution for service providers to offer various customers completely routed network structures based on a single IP infrastructure (regardless of the private IP address ranges). The quality of service is ensured by managing customers separately with individual MPLS labels and predefined packet paths. The network hops are also spared the routing work.

Operators of large wide area networks (WAN) profit from provider offers based on multiprotocol label switching: Correctly configured, the strategic label switched paths optimize data traffic and ensure as far as possible that all users get the bandwidth they need at any time – while their own effort remains limited. For campus networks, such as university or enterprise networks, the method is also a suitable solution, provided the necessary budget is available.

An overview of the benefits of MPLS VPNs

Multiprotocol label switching competes as a technology for virtual networks with, among others, the IP protocol stack extension IPsec. This security upgrade of the internet protocol is characterized, in particular, by its own encryption mechanisms and low costs. Unlike with the MPLS method, however, the implementation of an IPsec infrastructure is not the responsibility of the provider but of the user, which requires a higher level of effort and gives the MPLS method an advantage. This isn't the only benefit of 'label' networks, as the following list shows:

  • Low operating effort: Operating the MPLS network, like the IP configuration and the routing, is the task of the provider. Customers profit from a finished infrastructure and save the considerable effort that setting up their own network would otherwise incur.
  • First-class performance: The predefined data paths ensure very fast transfer rates that are subject to only small fluctuations. Service level agreements (SLA) between providers and customers guarantee the desired bandwidth and quick assistance with problems.
  • High flexibility: VPNs based on multiprotocol label switching give internet providers a lot of leeway in the distribution of resources, which also pays off for their customers. Very specific performance packages can be agreed upon, and networks can easily be extended at any time.
  • Option to prioritize services: Thanks to the MPLS infrastructure, providers can offer various quality of service levels. The leased bandwidth is by no means static, but can be classified (class of service). This way, the desired services, such as VoIP, can be prioritized to guarantee a stable transfer.

How secure are MPLS networks?

The advantages of MPLS and the VPN technology based on it are especially interesting for companies and institutions that are spread over several sites and want to grant their customers access to their network. As a result, such virtual networks are often the first choice when building an IT infrastructure. They allow users to be combined into one network without requiring a physical connection or a public, routed IP address on the internet. Basically, a multiprotocol label switching VPN is only available to users who have the appropriate data for setting up the connection. This fact alone doesn't make the virtual networks immune to unauthorized access, though: The 'private' attribute in such networks doesn't stand for secrecy and encryption, but merely means that the IP addresses are only accessible internally. Without additional encryption, all information is transferred in plaintext. And even this separation doesn't offer one hundred percent protection, although all traffic between the MPLS network and the customer LAN – including normal internet traffic – runs via the transfer router (also called the provider edge (PE)). Some possible risks when using MPLS infrastructures are listed below:

  • MPLS packets land in the wrong VPN: Software errors and misconfigurations are commonly the cause of IP packets with MPLS labels leaving the actual VPN and becoming visible in another network. In this case, the router falsely forwards the packet to an untrustworthy system to which an IP route exists. It's also possible for data packets with deliberately altered labels (MPLS label spoofing) to be smuggled into a foreign VPN if the provider edge router accepts the corresponding packets.
  • Connection of an unauthorized transfer router: If various VPNs are connected to the MPLS infrastructure, there is also the risk that a provider edge router will be wrongfully integrated into another customer's VPN. This can happen either through an unintended misconfiguration or through a targeted attack. As a result, further network-based attacks can easily be carried out by the foreign user.
  • Logical structure of the provider network is visible: If an attacker gets a look at the logical structure of the MPLS network that the service provider has built, then attacks on the transfer routers become far more probable – especially if their addresses are visible.
  • Denial of service attack on the PE router: As an important node for the networks involved, the provider edge router is a particularly vulnerable target for denial of service attacks that compromise the availability of the VPN service. An attacker can, for example, overload the router with a deliberate flood of small packets, such as continuous routing updates via EIGRP (enhanced interior gateway routing protocol) or OSPF (open shortest path first).
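The standard countermeasure against the label spoofing risk described above is that a provider edge router only accepts labeled packets on trusted, core-facing interfaces. The following sketch illustrates that rule; the interface names and the packet representation are assumptions made for the example:

```python
# Sketch of a label-spoofing check at a provider edge router:
# MPLS-labeled packets arriving on customer-facing interfaces are dropped,
# because customers should only ever send plain IP packets.
# Interface names below are illustrative.

TRUSTED_CORE_INTERFACES = {"core0", "core1"}

def accept_packet(interface, packet):
    """Return True if the packet may be processed, False if it must be dropped."""
    if packet.get("labels") and interface not in TRUSTED_CORE_INTERFACES:
        return False  # labeled packet from an untrusted side: possible spoofing
    return True

# Labeled traffic from the core is fine; from a customer port it is rejected.
assert accept_packet("core0", {"labels": [100]}) is True
assert accept_packet("cust3", {"labels": [100]}) is False
assert accept_packet("cust3", {"labels": []}) is True
```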

In addition to encryption, each VPN should have additional security mechanisms to protect the provider edge router against external attacks. The primary recommendations are the establishment of a demilitarized zone between two firewalls and the use of network monitoring systems. Beyond that, regular software and hardware updates, as well as security measures against unauthorized physical access to the gateways, should be standard.
