

Title:
APPARATUS AND METHODS FOR VIRTUAL TOPOLOGIES IN IP ROUTING
Document Type and Number:
WIPO Patent Application WO/2022/167068
Kind Code:
A1
Abstract:
The present invention relates to methods (1, 2) of operating a routing node (Formula I) or a network manager node (Formula II), respectively, in a network (G = V, A) deploying an Interior Gateway Protocol (IGP), corresponding devices (3, 4) configured to operate as said nodes, and a corresponding computer program. The method (1) of operating the routing node (Formula I) comprises computing (103) virtual link metrics (Formula IV) of the links (A) of the network (G) in respective virtual IGP topologies, and computing (104) trees (Formula V) covering said virtual IGP topologies. Computing (103) the virtual link metrics (Formula IV) is based on IGP advertisements comprising link metrics (Formula VI) of the links (A) of the network (G) in respective IGP topologies, and on scalar vectors (Formula VII) associated with the virtual IGP topologies and respectively comprising scalar components (Formula VIII) associated with the corresponding IGP topologies. The method (2) of operating the network manager node (Formula II) comprises computing (204) said scalar vectors (Formula VII).

Inventors:
HUIN NICOLAS (DE)
MARTIN SEBASTIEN (DE)
LEGUAY JEREMIE (DE)
CAI SHENGMING (DE)
Application Number:
PCT/EP2021/052530
Publication Date:
August 11, 2022
Filing Date:
February 03, 2021
Assignee:
HUAWEI TECH CO LTD (CN)
HUIN NICOLAS (DE)
International Classes:
H04L12/715; H04L12/721
Foreign References:
EP1913731B12015-02-25
Other References:
GARDNER M TODD ET AL: "Using Multi-Topology Routing to improve routing during geographically correlated failures", 2014 10TH INTERNATIONAL CONFERENCE ON THE DESIGN OF RELIABLE COMMUNICATION NETWORKS (DRCN), IEEE, 1 April 2014 (2014-04-01), pages 1 - 8, XP032595646, DOI: 10.1109/DRCN.2014.6816134
STEVEN S W LEE ET AL: "Link weight assignment and loop-free routing table update for link state routing protocols in energy-aware internet", FUTURE GENERATION COMPUTER SYSTEMS, ELSEVIER SCIENCE PUBLISHERS. AMSTERDAM, NL, vol. 28, no. 2, 7 May 2011 (2011-05-07), pages 437 - 445, XP028306109, ISSN: 0167-739X, [retrieved on 20110518], DOI: 10.1016/J.FUTURE.2011.05.003
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims:
CLAIMS

1. A method (1) of operating a routing node (v ∈ V) in a network (G = (V, A)) deploying an Interior Gateway Protocol (IGP), the method (1) comprising receiving (101) a plurality of IGP advertisements respectively comprising a link metric of a respective one (a ∈ A) of a plurality of links (A) of the network (G) in a respective one (k ∈ K) of a plurality (K) of IGP topologies in the network (G); receiving (102) a number of scalar vectors associated with a corresponding number (N) of virtual IGP topologies in the network (G) and respectively comprising a plurality of scalar components associated with the corresponding plurality (K) of IGP topologies in the network (G); computing (103) a virtual link metric of a respective one (a ∈ A) of the plurality of links (A) of the network (G) in a respective one (n ∈ N) of the number (N) of virtual IGP topologies in the network (G) in dependence of the plurality (K) of link metrics of the respective one (a ∈ A) of the plurality of links (A) of the network (G) in the corresponding plurality (K) of IGP topologies in the network (G) and the scalar vector (λn) associated with the respective one (n ∈ N) of the number (N) of virtual IGP topologies in the network (G); and computing (104) a tree covering the respective one (n ∈ N) of the number (N) of virtual IGP topologies.

2. The method (1) of claim 1, the computing (103) of the virtual link metric comprising adding (103A) the plurality (K) of link metrics of the respective one (a ∈ A) of the plurality of links (A) of the network (G) in the corresponding plurality (K) of IGP topologies in the network (G) respectively multiplied by the corresponding one of the plurality of scalar components associated with the plurality (K) of IGP topologies in the network (G), or adding (103B) the plurality (K) of link metrics of the respective one (a ∈ A) of the plurality of links (A) of the network (G) in the corresponding plurality (K) of IGP topologies in the network (G) respectively raised to the corresponding one of the plurality of scalar components associated with the plurality (K) of IGP topologies in the network (G), or adding (103C) the plurality (K) of link metrics of the respective one (a ∈ A) of the plurality of links (A) of the network (G) in the corresponding plurality (K) of IGP topologies in the network (G) and the corresponding one of the plurality of scalar components associated with the plurality (K) of IGP topologies in the network (G).

3. The method (1) of claim 1 or claim 2, further comprising responsive to receiving an updated one of the number of scalar vectors associated with the corresponding number (N) of virtual IGP topologies in the network (G), triggering the computing (103) of the virtual link metric.

4. The method (1) of any of the preceding claims, further comprising responsive to receiving an IGP advertisement comprising an updated link metric of a respective one (a ∈ A) of the plurality of links (A) of the network (G) in a respective one (k ∈ K) of the plurality (K) of IGP topologies in the network (G), triggering the computing (103) of the virtual link metric.

5. The method (1) of any of the preceding claims, the tree comprising a shortest path tree rooted in the routing node (v).

6. The method (1) of any of the preceding claims, further comprising receiving (105) a locator information associated with a respective one (i) of the number of services (si, di) to the routing node ( v ∈ V) of the network (G); and forwarding (106), based on the received locator information, the respective one (i) of the number of services (si, di) over a respective one of the plurality (N) of virtual IGP topologies in the network (G).

7. The method (1) of claim 6, the locator information comprising a DSCP/ToS field in an IP packet header, or the locator information comprising a locator field in an SRv6 packet header.

8. A method (2) of operating a network manager node (m ∈ V) in a network (G = (V, A)) deploying an Interior Gateway Protocol (IGP), the method (2) comprising receiving (201) a plurality of IGP advertisements respectively comprising a link metric of a respective one (a ∈ A) of a plurality of links (A) of the network (G) in a respective one (k ∈ K) of a plurality (K) of IGP topologies in the network (G); computing (204) a number of scalar vectors associated with a corresponding number (N) of virtual IGP topologies in the network (G) and respectively comprising a plurality of scalar components associated with the corresponding plurality (K) of IGP topologies in the network (G); and sending (205) the number of scalar vectors to at least one routing node (v ∈ V) of the network (G).

9. The method (2) of claim 8, further comprising adding (202) one or more IGP topologies to the plurality (K) of IGP topologies in the network (G).

10. The method (2) of claim 8 or claim 9, further comprising modifying (203) one or more of the plurality (K) of IGP topologies in the network (G).

11. The method (2) of any one of the claims 8 to 10, the computing (204) of the number of scalar vectors comprising computing (204A) the number of scalar vectors in accordance with a number of Shared Risk Link Group (SRLG) constraints respectively associated with a subset of the plurality of links (A) of the network (G).

12. The method (2) of any one of the claims 8 to 11, further comprising receiving (206) a traffic matrix defining a number of services (si, di) between respective source and destination nodes (si, di ∈ V) of the network (G); and mapping (207) each (i) of the number of services (si, di) to a respective one of the plurality (K + N) of IGP topologies or virtual IGP topologies in the network (G) in accordance with the traffic matrix.

13. The method (2) of claim 12, the mapping (207) of each (i) of the number of services (si, di) comprising duplicating (207A) a respective one (i) of the number of services (si, di) and mapping (207B) each (i1, i2) of the duplicate services (si, di) to disjoint ones of the plurality (K + N) of IGP topologies or virtual IGP topologies in the network (G) in accordance with a Frame Replication and Elimination for Reliability (FRER) scheme.

14. The method (2) of claim 12 or claim 13, the mapping (207) of each (i) of the number of services (si, di) comprising mapping (207C) a respective one (i) of the number of services (si, di) to an operative one of the plurality (K + N) of IGP topologies or virtual IGP topologies in the network (G) in accordance with a Loop-Free Alternate (LFA) Fast Reroute (FRR) scheme.

15. The method (2) of any of the claims 12 to 14, the mapping (207) of each (i) of the number of services (si, di) comprising mapping (207D) a respective one (i) of the number of services (si, di) to disjoint ones of the plurality (K + N) of IGP topologies or virtual IGP topologies in the network (G) in accordance with a load balancing scheme.

16. The method (2) of any of the claims 12 to 15, the mapping (207) of each (i) of the number of services (si, di) further comprising sending (208) a locator information associated with a respective one (i) of the number of services (si, di) to the routing node (v ∈ V) of the network (G) for indicating a forwarding of the respective one (i) of the number of services (si, di) over a respective one of the plurality (N) of virtual IGP topologies in the network (G) via the routing node (v ∈ V) of the network (G).

17. The method (2) of claim 16, the locator information comprising a DSCP/ToS field in an IP packet header, or the locator information comprising a locator field in an SRv6 packet header.

18. A device (3) configured to operate as a routing node ( v ∈ V) of a network (G = (V, A )) deploying an IGP, the device (3) comprising a virtual metric updater (301) configured to perform the method (1) of one of the claims 1 to 7.

19. A device (4) configured to operate as a network manager node (m ∈ V) of a network (G = (V, A)) deploying an IGP, the device (4) comprising a virtual metric designer (401) configured to perform the method (2) of one of the claims 8 to 17.

20. A computer program, comprising executable instructions which, when executed by a processor, cause the processor to perform the method (1) of one of the claims 1 to 7 or the method (2) of one of the claims 8 to 17.

Description:
APPARATUS AND METHODS FOR VIRTUAL TOPOLOGIES IN IP ROUTING

TECHNICAL FIELD

The present disclosure relates to communication networks, and in particular to a method of operating a routing node, a method of operating a network manager node, corresponding devices configured to operate as the mentioned nodes, and a corresponding computer program.

BACKGROUND

In Internet Protocol (IP) networks, a routing of data packets typically involves an Interior Gateway Protocol (IGP) used for exchanging routing information between routing nodes (i.e., routers). In a popular link-state IGP known as Open Shortest Path First (OSPF), routers periodically share link metrics relating to their adjacent links with adjacent routers. In turn, the adjacent routers periodically share link metrics relating to their adjacent links and to the adjacent links of their adjacent routers, and so on. For reasons of efficiency, a plurality of link metrics shared with an adjacent router is aggregated into a Link State Advertisement (LSA) message. Upon reception of LSA messages, routers store their content in a Link State Database (LSDB) and process this information using a shortest path algorithm to maintain a Shortest Path Tree (SPT) towards all the other routers. As an outcome of this computation, a next hop router is identified for all possible destinations. This information is stored in a Forwarding Information Base (FIB) and used to forward packets.
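The LSDB-to-FIB computation described above can be sketched as follows. This is a minimal illustration of the principle, not an OSPF implementation; the dictionary-based LSDB layout, the router names and the function name are assumptions made for the example only:

```python
import heapq

def shortest_path_next_hops(lsdb, root):
    """Dijkstra over a link-state database; returns {destination: next hop},
    i.e. a sketch of the Forwarding Information Base derived from the SPT."""
    dist = {root: 0}
    next_hop = {}
    queue = [(0, root, None)]  # (cost, node, first hop after the root)
    while queue:
        cost, node, hop = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry
        for neighbour, metric in lsdb.get(node, {}).items():
            new_cost = cost + metric
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                # the first hop is the neighbour itself when leaving the root
                first = neighbour if node == root else hop
                next_hop[neighbour] = first
                heapq.heappush(queue, (new_cost, neighbour, first))
    return next_hop

# hypothetical 4-router LSDB: lsdb[u][v] = metric of link u -> v
lsdb = {"r1": {"r2": 1, "r3": 4},
        "r2": {"r1": 1, "r3": 1, "r4": 5},
        "r3": {"r1": 4, "r2": 1, "r4": 1},
        "r4": {"r2": 5, "r3": 1}}
print(shortest_path_next_hops(lsdb, "r1"))
# {'r2': 'r2', 'r3': 'r2', 'r4': 'r2'}
```

All traffic from r1 leaves via r2 here because the path r1-r2-r3 (cost 2) beats the direct link r1-r3 (cost 4).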

Multi-Topology Routing (MTR) is an extension of IGP that has been devised to increase routing options, to support different application needs or to better load balance traffic in the network. MTR enables each router to execute multiple concurrent IGP instances, each one having its own IP prefixes, its own set of link states and its own topology as maintained in a separate Routing Information Base (RIB). MTR further enables each router to maintain its own Shortest-Path Tree (SPT) towards all destinations, from which a separate Forwarding Information Base (FIB) is derived. MTR allows the use of different shortest paths to consider different criteria (cost, hop, delay, ...). However, once chosen, a shortest path must be used end-to-end. FlexAlgo is an extension of MTR that adds operator-defined routing algorithms, comprising an optimization objective (such as a minimization of a specified metric: IGP metric, delay metric, TE metric, ...) and constraints (such as an exclusion of certain link properties: link affinity, TE affinity, SRLG, ...). As an example, an operator-defined routing algorithm may mandate to “minimize delay metric and avoid link affinity ‘blue’”. Routers advertise FlexAlgo(s) they are participating in. Accordingly, there are more forwarding options than real topologies - more specifically, as many forwarding options as FlexAlgo(s) - but each FlexAlgo may use a single metric only.

Segment Routing (SR) can operate on top of IGP/MTR/FlexAlgo and leverages the source routing paradigm to steer a packet end-to-end through a network topology, using an ordered list of instructions called segments and represented by respective Segment Identifiers (SID). Each one of these instructions represents a function to be called at a specific location (i.e., router) in the network. In its simplest form, such a function may relate to moving forward in the ordered list of segments (i.e., forwarding the packet to the next hop router). SR enables concatenation of segments from different topologies at inflexion points as identified by the ordered list of segments. SR applied to the IPv6 data plane (SRv6) encodes the ordered list of segments as a stack of IPv6 addresses in a Segment Routing Header (SRH) (see RFC8754), i.e., in a particular IPv6 Routing Extension Header (see RFC8200). This requires one IPv6 address (128 bit) per inflexion point. Nonetheless, SR may not be available when lightweight Traffic Engineering (TE) with MTR/FlexAlgo is preferred.

While MTR/FlexAlgo provides a lightweight TE solution compared to Segment Routing (SR), it entails a linearly increasing signaling overhead as LSA messages are sent for each concurrent IGP instance. In addition, packet routing is tied to only a few (generally 3 to 5) available IGP instances. Besides, a utilization of specific LSAs for each topology may affect convergence to a new stable routing state (i.e., a time to convergence of the LSDBs of all routers is increased). Indeed, LSA messages are used to update a topology every time a local modification occurs on a link (e.g., modification of a link delay metric).

Accordingly, there is a need to improve routing flexibility, reduce signaling overhead and accelerate routing convergence in multi-topology routing.

SUMMARY

This need is met by the embodiments defined in the enclosed independent claims. Advantageous implementations of the embodiments are further defined in the dependent claims.

A first aspect of the present disclosure relates to a method of operating a routing node in a network deploying an Interior Gateway Protocol (IGP). The method comprises: receiving a plurality of IGP advertisements respectively comprising a link metric of a respective one of a plurality of links of the network in a respective one of a plurality of IGP topologies in the network; receiving a number of scalar vectors associated with a corresponding number of virtual IGP topologies in the network and respectively comprising a plurality of scalar components associated with the corresponding plurality of IGP topologies in the network; computing a virtual link metric of a respective one of the plurality of links of the network in a respective one of the number of virtual IGP topologies in the network in dependence of the plurality of link metrics of the respective one of the plurality of links of the network in the corresponding plurality of IGP topologies in the network and the scalar vector associated with the respective one of the number of virtual IGP topologies in the network; and computing a tree covering the respective one of the number of virtual IGP topologies.
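As an illustration only (the data layout, names and values are assumptions for the example and not part of the claims), the two computing steps of the first aspect can be sketched for the weighted-sum variant:

```python
# metrics[k][a]: link metric of link a in IGP topology k (K = 2 real topologies)
metrics = {0: {("u", "v"): 10, ("v", "w"): 5},   # e.g. a delay topology
           1: {("u", "v"): 1,  ("v", "w"): 1}}   # e.g. a hop-count topology

# one scalar ("lambda") vector per virtual topology n,
# with one scalar component per real topology k
lambdas = {0: (2.0, 3.0)}

def virtual_metrics(metrics, lam):
    """Weighted-sum combination: virtual metric of link a = sum_k lam[k] * metrics[k][a]."""
    links = metrics[0].keys()
    return {a: sum(lam[k] * metrics[k][a] for k in metrics) for a in links}

vm = virtual_metrics(metrics, lambdas[0])
print(vm)  # {('u', 'v'): 23.0, ('v', 'w'): 13.0}
```

A shortest-path tree for the virtual topology would then be computed over `vm` exactly as over any real topology.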

In an implementation of the first aspect, the computing of the virtual link metric may comprise: adding the plurality of link metrics of the respective one of the plurality of links of the network in the corresponding plurality of IGP topologies in the network respectively multiplied by the corresponding one of the plurality of scalar components associated with the plurality of IGP topologies in the network; or adding the plurality of link metrics of the respective one of the plurality of links of the network in the corresponding plurality of IGP topologies in the network respectively raised to the corresponding one of the plurality of scalar components associated with the plurality of IGP topologies in the network; or adding the plurality of link metrics of the respective one of the plurality of links of the network in the corresponding plurality of IGP topologies in the network and the corresponding one of the plurality of scalar components associated with the plurality of IGP topologies in the network.

In an implementation of the first aspect, the method may further comprise: responsive to receiving an updated one of the number of scalar vectors associated with the corresponding number of virtual IGP topologies in the network, triggering the computing of the virtual link metric.

In an implementation of the first aspect, the method may further comprise: responsive to receiving an IGP advertisement comprising an updated link metric of a respective one of the plurality of links of the network in a respective one of the plurality of IGP topologies in the network, triggering the computing of the virtual link metric.

In an implementation of the first aspect, the tree may comprise a shortest path tree rooted in the routing node.

In an implementation of the first aspect, the method may further comprise: receiving a locator information associated with a respective one of the number of services to the routing node of the network; and forwarding, based on the received locator information, the respective one of the number of services over a respective one of the plurality of virtual IGP topologies in the network.

In an implementation of the first aspect, the locator information may comprise a DSCP/ToS field in an IP packet header, or a locator field in an SRv6 packet header.

A second aspect of the present disclosure relates to a method of operating a network manager node in a network deploying an Interior Gateway Protocol. The method comprises: receiving a plurality of IGP advertisements respectively comprising a link metric of a respective one of a plurality of links of the network in a respective one of a plurality of IGP topologies in the network; computing a number of scalar vectors associated with a corresponding number of virtual IGP topologies in the network and respectively comprising a plurality of scalar components associated with the corresponding plurality of IGP topologies in the network; and sending the number of scalar vectors to at least one routing node of the network.

In an implementation of the second aspect, the method may further comprise: adding one or more IGP topologies to the plurality of IGP topologies in the network.

In an implementation of the second aspect, the method may further comprise: modifying one or more of the plurality of IGP topologies in the network.

In an implementation of the second aspect, the computing the number of scalar vectors may comprise: computing the number of scalar vectors in accordance with a number of Shared Risk Link Group (SRLG) constraints respectively associated with a subset of the plurality of links of the network.

In an implementation of the second aspect, the method may further comprise: receiving a traffic matrix defining a number of services between respective source and destination nodes of the network; and mapping each of the number of services to a respective one of the plurality of IGP topologies or virtual IGP topologies in the network in accordance with the traffic matrix.

In an implementation of the second aspect, the mapping of each of the number of services may comprise: duplicating a respective one of the number of services and mapping each of the duplicate services to disjoint ones of the plurality of IGP topologies or virtual IGP topologies in the network in accordance with a Frame Replication and Elimination for Reliability (FRER) scheme.

In an implementation of the second aspect, the mapping of each of the number of services may comprise: mapping a respective one of the number of services to an operative one of the plurality of IGP topologies or virtual IGP topologies in the network in accordance with a Loop-Free Alternate (LFA) Fast Reroute (FRR) scheme.

In an implementation of the second aspect, the mapping of each of the number of services may comprise: mapping a respective one of the number of services to disjoint ones of the plurality of IGP topologies or virtual IGP topologies in the network in accordance with a load balancing scheme.

In an implementation of the second aspect, the mapping of each of the number of services may further comprise: sending a locator information associated with a respective one of the number of services to the routing node of the network for indicating a forwarding of the respective one of the number of services over a respective one of the plurality of virtual IGP topologies in the network via the routing node of the network.

In an implementation of the second aspect, the locator information may comprise a DSCP/ToS field in an IP packet header, or a locator field in an SRv6 packet header.

A third aspect of the present disclosure relates to a device configured to operate as a routing node of a network deploying an IGP. The device comprises: a virtual metric updater configured to perform the method according to the first aspect of the present disclosure or any of its implementations.

A fourth aspect of the present disclosure relates to a device configured to operate as a network manager node of a network deploying an IGP. The device comprises: a virtual metric designer configured to perform the method according to the second aspect of the present disclosure or any of its implementations.

A fifth aspect of the present disclosure relates to a computer program comprising executable instructions which, when executed by a processor, cause the processor to perform the method according to the first aspect of the present disclosure or any of its implementations, or the method according to the second aspect of the present disclosure or any of its implementations.

The present disclosure provides the following advantages:

- virtual IGP topologies may be designed based on existing ones, and further user requirements in terms of QoS and protection may be taken into account,

- compact “lambda vectors” are communicated to each router to derive virtual topologies from the existing ones,

- the virtual topologies provide more routing flexibility than MTR,

- the virtual topologies do not entail any additional signaling overhead (they are silent, i.e. no link states or prefixes are exchanged between routers), and

- faster convergence (a stable routing state may be obtained more quickly).

It has to be noted that all devices, elements, units and means described in the present application could be implemented in software or hardware elements or any kind of combination thereof. All steps which are performed by the various entities described in the present application, as well as the functionalities described as being performed by the various entities, are intended to mean that the respective entity is adapted to or configured to perform the respective steps and functionalities. Even if, in the following description of specific embodiments, a specific functionality or step to be performed by external entities is not reflected in the description of a specific detailed element of that entity which performs that specific step or functionality, it should be clear to a skilled person that these methods and functionalities can be implemented in respective software or hardware elements, or any kind of combination thereof.

BRIEF DESCRIPTION OF DRAWINGS

The above-described aspects and implementations will now be explained in relation to the enclosed drawings.

The features of these implementations may be combined with each other unless specified otherwise.

The drawings are to be regarded as being schematic representations, and elements illustrated in the drawings are not necessarily shown to scale. Rather, the various elements are represented such that their function and general purpose become apparent to those skilled in the art.

FIG. 1 illustrates a flow diagram of a method in accordance with the present disclosure of operating a routing node.

FIG. 2 illustrates a flow diagram of a method in accordance with the present disclosure of operating a network manager node.

FIG. 3 illustrates a device in accordance with the present disclosure configured to operate as a network manager node.

FIG. 4 illustrates a device in accordance with the present disclosure configured to operate as a routing node.

DETAILED DESCRIPTION OF EMBODIMENTS

FIG. 1 illustrates a flow diagram of a method 1 in accordance with the present disclosure of operating a routing node.

The method 1 is for operating a routing node v ∈ V in a network G = (V, A) formed of a plurality of nodes V and a plurality of links (or edges or arcs) A and deploying an Interior Gateway Protocol (IGP).

A network as used herein may refer to a system comprising one or more nodes V interconnected by one or more links A and thus forming a topology or graph G.

A node as used herein may refer to a cross-roads of a network.

A link (or edge, or arc) as used herein may refer to a communication medium through which nodes interconnected by the particular link may communicate.

The method 1 is represented as a sequence of steps in connection with a device 4 configured to operate as a routing node v. In other words, the device 4 is configured to perform the method 1 according to the present disclosure.

A routing node as used herein may refer to a node participating in an exchange of routing information in accordance with an IGP and making forwarding decisions.

An Interior Gateway Protocol (IGP) as used herein may refer to a communication protocol used for exchanging routing information between routing nodes under a same administrative authority.

A Link State Advertisement (LSA) or IGP advertisement as used herein may refer to a particular message type used by an IGP for exchanging routing information between adjacent routing nodes.

To the right of the method 1, FIG. 1 further indicates a plurality K of IGP instances in the network G representing IGP instances provided by IGP/MTR and being in communication with the device 4. That is to say, the device 4, when operating as a routing node v, participates in routing in accordance with IGP/MTR. The plurality K of IGP instances advertises respective sets of IP prefixes and respective sets of link states, and thus maintains respective topologies. FIG. 1 shows different exemplary topologies for K=3 indicated IGP instances.

To the right of the plurality K of IGP instances, FIG. 1 further indicates a device 3 configured to operate as a network manager node m ∈ V. The device 3 is configured to perform a method 2 according to the present disclosure which is explained in more detail in connection with FIG. 2 below.

A network manager node as used herein may refer to a node making traffic engineering decisions.

Traffic Engineering (TE) as used herein may refer to assigning the services (i.e., traffic demands) of a network to particular routes through the network such that a network performance is optimized.

A service (i.e., demand, or traffic demand) as used herein may refer to a communications relation between endpoints of a network. Particular services may be associated with particular requirements in terms of Quality of Service (QoS), such as bandwidth/capacity, end-to-end delay, and/or protection against failures, such as end-to-end availability.

Turning to the actual flow diagram in FIG. 1, it may be appreciated that steps 101 - 104 are highlighted in grey to indicate that these are the basic steps to be performed by the virtual metric updater 401 of the device 4 (see FIG. 4) configured as the routing node v so as to generate virtual IGP topologies from actual IGP topologies. All other steps 105 - 106 are optional.

In a nutshell, the method 1 creates virtual IGP topologies from existing IGP topologies by applying multipliers to link metrics of existing IGP topologies and combining them linearly. For instance, a virtual IGP topology may be created from a delay topology and a hop count topology by creating virtual link metrics in accordance with 2 • delay + 3 • hop count. The multipliers 2 and 3 may be called “lambda multipliers”, and collectively a “lambda vector”. In this manner, each routing node updates its Link State Database and creates virtual link states/metrics from existing ones. For example, a lambda vector (2, 3, 1) for K=3 topologies applied to a link having associated link costs/metrics (10, 5, 7) yields a virtual link cost/metric of 2 • 10 + 3 • 5 + 1 • 7 = 42. Once all the virtual links are created in the LSDB, the routing node applies a routing algorithm to obtain paths towards all destinations in the new virtual topology. To avoid routing loops, it may be required to ensure that the resulting virtual metrics are positive.
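The lambda-vector combination just described can be checked directly; a one-line sketch of the weighted sum for the example values λ = (2, 3, 1) and link metrics (10, 5, 7):

```python
lam = (2, 3, 1)   # lambda vector for K = 3 real topologies
m = (10, 5, 7)    # link costs/metrics of one link in the 3 topologies

# weighted-sum combination of the per-topology metrics
virtual_metric = sum(l * c for l, c in zip(lam, m))
print(virtual_metric)  # 2*10 + 3*5 + 1*7 = 42
```

Since every component of λ and every metric is positive here, the resulting virtual metric is positive as well, satisfying the loop-avoidance condition mentioned above.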

A link metric as used herein may refer to an administrative cost, such as price, delay, etc., associated with a particular link.

A virtual topology as used herein may refer to a topology formed by one or more nodes interconnected by one or more virtual links.

A virtual link as used herein may refer to a link having a virtual link metric.

A virtual link metric as used herein may refer to an administrative cost formed by combination of link metrics of one or more other links and associated with a particular virtual link.

The method 1 comprises a step of receiving 101 a plurality of IGP advertisements. Each of the plurality of received IGP advertisements comprises a (i.e., at least one) link metric of a respective one a ∈ A of the plurality of links A of the network G in a respective one k ∈ K of the plurality K of IGP topologies in the network G. In other words, an IGP advertisement (or LSA message) may comprise the link metrics of one or more a ∈ A of the plurality of links A in a particular one k ∈ K of the plurality K of IGP topologies.

The method 1 further comprises a step of receiving 102 a number of (i.e., one or more) scalar vectors. The number of received scalar vectors is associated with a corresponding number N of virtual IGP topologies in the network G. That is to say, each scalar vector provided relates to a corresponding virtual IGP topology. The number of received scalar vectors respectively comprises a plurality of scalar components associated with the corresponding plurality K of IGP topologies in the network G. In other words, each scalar vector provided comprises a number of scalar components in accordance with the plurality K of IGP topologies.

The method 1 further comprises a step of computing 103 a virtual link metric of a respective one a ∈ A of the plurality of links A of the network G in a respective one n ∈ N of the number N of virtual IGP topologies in the network G.

On the one hand, the virtual link metric depends on the plurality K of link metrics of the respective one a ∈ A of the plurality of links A of the network G in the corresponding plurality K of IGP topologies in the network G. On the other hand, the virtual link metric depends on the scalar vector associated with the respective one n ∈ N of the number N of virtual IGP topologies in the network G.

The computing 103 of the virtual link metric may be implemented in a number of alternative ways.

As a first alternative, the computing 103 of the virtual link metric may comprise a step of adding 103A the plurality K of link metrics of the respective one a ∈ A of the plurality of links A of the network G in the corresponding plurality K of IGP topologies in the network G respectively multiplied by the corresponding one of the plurality of scalar components associated with the plurality K of IGP topologies in the network G. In other words, the resulting virtual link metric may be a sum of the link metrics multiplied by their corresponding scalar components.

Beyond multiplication, other algebraic operations can be used.

Thus, as a second alternative, the computing 103 of the virtual link metric may comprise a step of adding 103B the plurality K of link metrics of the respective one a ∈ A of the plurality of links A of the network G in the corresponding plurality K of IGP topologies in the network G respectively raised to the corresponding one of the plurality of scalar components associated with the plurality K of IGP topologies in the network G. That is to say, the resulting virtual link metric may be a sum of the link metrics raised to their corresponding scalar components.

As a third alternative, the computing 103 of the virtual link metric may comprise a step of adding 103C the plurality K of link metrics of the respective one a ∈ A of the plurality of links A of the network G in the corresponding plurality K of IGP topologies in the network G and the corresponding one of the plurality of scalar components associated with the plurality K of IGP topologies in the network G. In other terms, the resulting virtual link metric may simply be a sum of the link metrics and their corresponding scalar components.
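The three alternatives (103A, 103B, 103C) can be summarized in one hedged sketch; the mode names and the function signature are illustrative, not taken from the disclosure:

```python
def combine(link_metrics, scalar_components, mode="multiply"):
    """Combine per-topology link metrics with scalar components.

    mode "multiply" ~ step 103A (sum of products),
    mode "power"    ~ step 103B (sum of powers),
    mode "add"      ~ step 103C (plain sum).
    """
    pairs = list(zip(link_metrics, scalar_components))
    if mode == "multiply":
        return sum(c * s for c, s in pairs)
    if mode == "power":
        return sum(c ** s for c, s in pairs)
    if mode == "add":
        return sum(c + s for c, s in pairs)
    raise ValueError(f"unknown mode: {mode}")
```

For instance, `combine((10, 5, 7), (2, 3, 1), "multiply")` reproduces the worked example from the beginning of this section.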

The method 1 further comprises a step of computing 104 a tree covering the respective one n ∈ N of the number N of virtual IGP topologies.

In particular, the tree may comprise a shortest path tree rooted in the routing node v.

From the tree covering the respective one n ∈ N of the number N of virtual IGP topologies, a FIB may be derived for forwarding within the particular virtual IGP topology. This is no different from deriving FIBs for forwarding within respective existing IGP/MTR topologies.
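As a sketch of this step, a shortest path tree over the virtual link metrics can be computed with Dijkstra's algorithm, and a FIB of next hops derived from it; the graph representation and function names are illustrative assumptions:

```python
import heapq

def shortest_path_tree(graph, root):
    """Dijkstra over (virtual) link metrics.

    graph: {node: [(neighbor, metric), ...]}
    Returns a predecessor map describing the shortest path tree.
    """
    dist, pred = {root: 0}, {}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, ()):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(heap, (nd, v))
    return pred

def derive_fib(pred, root):
    """For each destination in the tree, record the first hop out of the root."""
    fib = {}
    for dest in pred:
        hop = dest
        while pred[hop] != root:
            hop = pred[hop]
        fib[dest] = hop
    return fib
```

Running `shortest_path_tree` on a small topology rooted at a node v and then `derive_fib` yields, for every reachable destination, the next-hop router to install in the FIB.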

In summary, the above-described method 1 generates virtual IGP topologies, as described by virtual link metrics, based on actual IGP topologies, as described by their respective link metrics, and on scalar components. As will be explained in more detail in connection with FIG. 2, the scalar components are provided by a virtual metric designer 301 of the device 3 configured to operate as the network manager node m ∈ V.

Evidently, the computed virtual IGP topologies are accurate as long as the underlying link metrics and scalar vectors remain unchanged. However, once the routing information changes, it is re-advertised within the network G. Upon reception of an updated link metric, the affected virtual link metrics may need to be re-computed. The same applies if the aforementioned virtual metric designer 301 provides updated scalar vectors.

As such, the virtual metric updater 401 of a respective device 4 is in charge of updating the LSDB when a new scalar vector is received from the virtual metric designer 301 of the device 3 or when link metrics of an existing IGP/MTR topology change.

As virtual IGP topologies are based on existing IGP/MTR topologies, no specific LSAs need to be advertised, so that the virtual IGP topologies may be called silent topologies. A virtual IGP topology may be available almost immediately once the corresponding lambda vector is received by the involved routing nodes. When a metric in an existing IGP/MTR topology is updated, all the virtual metrics and shortest path trees based on this metric are updated locally by the routing nodes.

Accordingly, the method 1 may further comprise: triggering the computing 103 of the virtual link metric responsive to receiving an updated one of the number of scalar vectors associated with the corresponding number N of virtual IGP topologies in the network G, and/or triggering the computing 103 of the virtual link metric responsive to receiving an IGP advertisement comprising an updated link metric of a respective one a ∈ A of the plurality of links A of the network G in a respective one k ∈ K of the plurality K of IGP topologies in the network G.

In the data plane, each IGP/MTR topology has its own FIB, as already mentioned. The same applies to virtual IGP topologies once they are computed. IP forwarding over a specific tree (topology) may be achieved by mapping traffic to the specific tree (topology). This may be realized by furnishing the individual packets forming the traffic with relevant mapping information. The method 1 may thus further comprise: receiving 105 a locator information associated with a respective one i of the number of services (s i , d i ) to the routing node v ∈ V of the network G and forwarding 106, based on the received locator information, the respective one i of the number of services (s i , d i ) over a respective one of the plurality N of virtual IGP topologies in the network G.

More specifically, the mapping of packets to virtual IGP topologies may be implemented in a number of alternative ways.

As a first alternative, the locator information may comprise a DSCP/ToS field in an IP packet header.

A Differentiated Services Code Point (DSCP) / Type of Service (ToS) field as used herein may refer to a particular bit field in an IPv4 packet header.

The solution may also be used with Segment Routing when a new SRv6 locator specifies which virtual IGP topology should be used at each segment (SR extension).

Thus, as a second alternative, the locator information may comprise a locator field in an SRv6 packet header. In SRv6, each packet comprises one or more Segment Identifiers (SID), and each SID in turn comprises function information (which points to any possible function, such as moving forward in the ordered list of segments), locator information (which identifies the router performing the function), and argument information (i.e., immutable parameters passed to the function). Within this locator information, a number of bits (“Topology ID”) may be designated to identify at the router specified by the locator information which topology needs to be used for forwarding the particular packet.
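As a purely illustrative sketch (the actual SRv6 locator bit layout is deployment-specific and not fixed by the text), a Topology ID carried in designated low-order bits of a locator value could be read out as follows; both the field position and its width are assumptions:

```python
def topology_id_from_locator(locator: int, topo_bits: int = 8) -> int:
    """Extract a hypothetical Topology ID from the low-order bits of an
    SRv6 locator value, selecting which (virtual) topology's FIB to use
    for forwarding. Field position and width are illustrative only."""
    return locator & ((1 << topo_bits) - 1)
```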

FIG. 2 illustrates a flow diagram of a method 2 of operating a network manager node in accordance with the present disclosure.

The method 2 is for operating a network manager node m ∈ V in a network G = (V, A) formed of a plurality of nodes V and a plurality of links (or edges or arcs) A and deploying an Interior Gateway Protocol (IGP). The method 2 is represented as a sequence of steps in connection with a device 3 configured to operate as a network manager node m. In other words, the device 3 is configured to perform the method 2 according to the present disclosure.

To the left of the method 2, FIG. 2 further indicates the plurality K of IGP instances in the network G already introduced in connection with FIG. 1. The plurality K of IGP instances represents IGP instances provided by IGP/MTR and being in communication with the device 3. That is to say, the device 3, when operating as a network manager node m, participates in routing in accordance with IGP/MTR. The plurality K of IGP instances advertises respective sets of IP prefixes, respective sets of link states and thus maintains respective topologies. FIG. 1 shows different exemplary topologies for K=3 indicated IGP instances.

To the left of the plurality K of IGP instances, FIG. 2 further indicates the device 4 configured to operate as a routing node v ∈ V already introduced and explained in more detail in connection with FIG. 1.

Turning to the actual flow diagram in FIG. 2, it may be appreciated that steps 201, 204 and 205 are highlighted in grey to indicate that these are the basic steps to be performed by the virtual metric designer 301 of the device 3 (see FIG. 3) configured as the network manager node m so as to generate scalar vectors for generating virtual IGP topologies from actual IGP topologies. All other steps 202 - 203 and 206 - 208 are optional.

In a nutshell, to create and manage virtual IGP topologies, the virtual metric designer 301 residing in a device 3 configured as the network manager node m or in the network management system (NMS) takes as input a network topology and the link metrics used by the existing IGP topologies. Based on this, the virtual metric designer 301 may calculate lambda multipliers and transmit them to devices 4 configured as routing nodes v.

The method 2 comprises a step of receiving 201 a plurality of IGP advertisements respectively comprising a link metric of a respective one a ∈ A of a plurality of links A of the network G in a respective one k ∈ K of a plurality K of IGP topologies in the network G. This corresponds to the step of receiving 101 of the method 1. The method 2 further comprises a step of computing 204 a number of (i.e., one or more) scalar vectors. The number of computed scalar vectors is associated with a corresponding number N of virtual IGP topologies in the network G. That is to say, each scalar vector provided relates to a corresponding virtual IGP topology. Each of the number of computed scalar vectors comprises a plurality of scalar components associated with the corresponding plurality K of IGP topologies in the network G. In other words, each scalar vector provided comprises a number of scalar components in accordance with the plurality K of IGP topologies. To create a set of virtual IGP topologies based on a diverse set of scalar vectors, the following exemplary algorithm based on permutations may be used. Its general idea is to begin with a uniform scalar vector (having components of value 1) and to generate N virtual metrics by adding 1 sequentially to each position of the vector, avoiding those vectors whose component values are all equal. This procedure generates enough diversity, according to the number of virtual topologies, for any possible traffic matrix.
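One possible reading of this permutation-based procedure is sketched below, under the assumption that the additions are cumulative (the text leaves this detail open); the function name is illustrative:

```python
def generate_lambda_vectors(k, n):
    """Generate n diverse lambda vectors of length k (k >= 2): start from
    the uniform vector (1, ..., 1), repeatedly add 1 to each position in
    turn, and keep every resulting vector whose components are not all
    equal. One hedged interpretation of the procedure in the text."""
    assert k >= 2, "at least two topologies are needed to avoid all-equal vectors"
    vectors = []
    vec = [1] * k
    pos = 0
    while len(vectors) < n:
        vec[pos] += 1
        pos = (pos + 1) % k
        if len(set(vec)) > 1:  # skip vectors with all-equal components
            vectors.append(tuple(vec))
    return vectors
```

For K=3 topologies and N=4 virtual topologies this produces (2, 1, 1), (2, 2, 1), (3, 2, 2) and (3, 3, 2), skipping the all-equal vector (2, 2, 2).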

It will be appreciated that if the design of virtual metrics is to take into account protection, load balancing and SLA requirements, the algorithm needs information about the traffic matrix, including SLA requirements per service. The method 2 further comprises a step of sending 205 the number of scalar vectors to at least one routing node v ∈ V of the network G. This corresponds to the step of receiving 102 of the method 1.

In case the existing IGP instances are not a good base, the existing IGP instances may be modified and/or new IGP instances may be added (keeping them as small as possible).

Thus, the method 2 may further comprise adding 202 one or more IGP topologies to the plurality K of IGP topologies in the network G, and/or modifying 203 one or more of the plurality K of IGP topologies in the network G.

A design of scalar vectors (i.e., virtual link metrics) may take into account SRLG failures.

A Shared Risk Link Group (SRLG) as used herein may refer to a set of network resources (here: links) which share a risk of suffering from a common failure. For example, a plurality of communication lines crossing a river via a same bridge share a risk of suffering from the common failure of breakdown of the bridge.

As such, the computing 204 of the number of scalar vectors may comprise: computing 204A the number of scalar vectors in accordance with a number of SRLG constraints respectively associated with a subset of the plurality of links A of the network G.

In case a traffic matrix is provided to the virtual metric designer 301, it can either assign/map services to created virtual IGP topologies or jointly optimize the creation of virtual topologies and the assignment.

To assign/map services to a virtual IGP topology, the following mathematical model may be solved using mathematical programming.

In this model, the objective function minimizes a cost induced by each assignment. If a shortest-path tree is invalid (according to an SLA requirement) for a demand, the associated cost is set to a large value (as a penalty). The constraints ensure that a demand can be assigned to only one shortest-path tree and that the link capacity is respected for each link.
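Under assumed notation (binary variable x_{i,t} assigning demand i to tree t, cost c_{i,t} set to a large penalty when tree t is SLA-invalid for demand i, bandwidth b_i of demand i, set P_{i,t} of links of tree t used by demand i, and capacity u_a of link a; these symbols are illustrative, not taken from the disclosure), the model just described may be written as:

```latex
\begin{aligned}
\min \quad & \sum_{i} \sum_{t} c_{i,t}\, x_{i,t} \\
\text{s.t.} \quad & \sum_{t} x_{i,t} = 1 && \forall i \\
& \sum_{i} \; \sum_{t \,:\, a \in P_{i,t}} b_i\, x_{i,t} \;\le\; u_a && \forall a \in A \\
& x_{i,t} \in \{0, 1\} && \forall i, t
\end{aligned}
```

The first constraint enforces a unique tree per demand; the second enforces the link capacities.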

Fault protection mechanisms compatible with IGP/MTR topologies may also be deployed in connection with virtual IGP topologies.

As such, the method 2 may further comprise: receiving 206 a traffic matrix defining a number of services (s i , d i ) between respective source and destination nodes s i , d i ∈ V of the network G and mapping 207 each i of the number of services (s i , d i ) to a respective one of the plurality K + N of IGP topologies or virtual IGP topologies in the network G in accordance with the traffic matrix.

As a first fault protection example, the mapping 207 of each i of the number of services (s i , d i ) may comprise: duplicating 207A a respective one i of the number of services (s i , d i ) and mapping 207B each i1, i2 of the duplicate services (s i , d i ) to disjoint ones of the plurality K + N of IGP topologies or virtual IGP topologies in the network G in accordance with a FRER scheme.

This implements 1+1 protection by duplicating the traffic with FRER over two disjoint shortest-path trees (existing or virtual). Frame Replication and Elimination for Reliability (FRER) as used herein may refer to an arrangement of two nodes of a network involving an exchange of duplicate traffic between the nodes on disjoint routes.

As a second fault protection example, the mapping 207 of each i of the number of services (s i , d i ) may comprise: mapping 207C a respective one i of the number of services (s i , d i ) to an operative one of the plurality K + N of IGP topologies or virtual IGP topologies in the network G in accordance with a LFA FRR scheme.

This implements a failover by rerouting traffic on another tree (existing or virtual) until convergence, similar to LFA.

Loop-Free Alternate (LFA) as used herein may refer to an arrangement (see RFC 5286) that allows a router whose local link has failed to forward traffic via a pre-computed alternate next-hop routing node until the IGP has recomputed new primary next-hop routing nodes for all affected IP prefixes.

Fast Reroute (FRR) as used herein may refer to a framework (see RFC 5714) that provides protection against link or node failure by invoking locally determined repair paths.

In order to provide a FRR mechanism with SR in case of failure, the Maximally Redundant Trees (MRT) extension has been proposed. Routers create for each destination two shortest path trees called MRT-Blue and MRT-Red. The two shortest path trees can be computed in O(e + n log n). MRT-FRR may be realized by using multi-topology (MT) forwarding. There is an MRT-Blue forwarding topology and an MRT-Red forwarding topology. The MRTs are computed locally without exchanging link metrics.

This may be applied to virtual IGP topologies by tailoring the virtual topologies for FRR and hard-coding the algorithm into the routers. This offers more flexibility by building a number of silent virtual topologies, not limited to 2, which can be used for various purposes (such as meeting SLAs, load balancing traffic, protecting against failures, etc.) and which may be updated dynamically.

As a third fault protection example, the mapping 207 of each i of the number of services (s i , d i ) may comprise: mapping 207D a respective one i of the number of services (s i , d i ) to disjoint ones of the plurality K + N of IGP topologies or virtual IGP topologies in the network G in accordance with a load balancing scheme.

This implements a load balancing of traffic over multiple shortest-path trees (existing or virtual).

Load balancing as used herein may refer to an arrangement between two nodes of a network involving a concurrent exchange of traffic on a plurality of disjoint routes.

In case a traffic matrix with a list of services and their attributes in terms of source, destination, protection requirements (e.g., list of SRLGs) and QoS requirements (e.g., bandwidth, end-to-end latency, loss) is provided, the virtual metric designer 301 may output, in addition to the scalar vectors, a mapping of the different services to the different topologies (existing or virtual). This information may be used to classify traffic and route services using the appropriate topology (i.e., FIB).

As such, the mapping 207 of each i of the number of services (s i , d i ) may further comprise: sending 208 a locator information associated with a respective one i of the number of services (s i , d i ) to the routing node v ∈ V of the network G for indicating a forwarding of the respective one i of the number of services (s i , d i ) over a respective one of the plurality N of virtual IGP topologies in the network G via the routing node v ∈ V of the network G.

The locator information may comprise a DSCP/ToS field in an IP packet header, or a locator field in an SRv6 packet header.

FIG. 3 illustrates a device 3 in accordance with the present disclosure configured to operate as a network manager node m ∈ V.

The network manager node m is a node of the network G deploying an IGP. The device 3 comprises a virtual metric designer 301 and is configured to perform the method 2 according to the present disclosure or any of its implementations. In particular, the virtual metric designer 301 is configured to send 205 a number of scalar vectors to at least one device 4 configured to operate as a routing node v ∈ V of the network G.

FIG. 4 illustrates such a device 4 in accordance with the present disclosure that is configured to operate as a routing node v ∈ V.

The routing node v is a node of a network G deploying an IGP. The device 4 comprises a virtual metric updater 401 and is configured to perform the method 1 according to the present disclosure or any of its implementations.

The device 4 may further comprise a LSDB 402, in which the device 4 may store link metrics when receiving LSA messages (indicated by an arrow approaching the LSDB 402 from the left in FIG. 4) from the plurality K of IGP topologies in the network G.

In addition, the virtual metric updater 401 may compute 103 and store virtual link metrics for a number N of virtual IGP topologies in the network G when receiving 102 a corresponding number of scalar vectors as mentioned in connection with FIG. 3 above (indicated by an arrow approaching the virtual metric updater 401 from above in FIG. 4).

The device 4 may further comprise a processing entity 403 configured to execute a shortest path algorithm based on the (virtual) link metrics stored in the LSDB 402, in order to obtain an SPT 404 towards all the routing nodes in the respective (virtual) IGP topology. As an outcome of this computation, a next hop router is identified for all possible destinations. This information is stored in a FIB 405 and used to forward incoming packets (indicated by an arrow approaching the FIB 405 from the left in FIG. 4).

The respective device 3, 4 may comprise a processor or processing circuitry (not shown) configured to perform, conduct or initiate the various operations of the respectively corresponding method 2, 1 described herein. The processing circuitry may comprise hardware and/or the processing circuitry may be controlled by software. The hardware may comprise analog circuitry or digital circuitry, or both analog and digital circuitry. The digital circuitry may comprise components such as application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), or multi-purpose processors.

The respective device 3, 4 may further comprise memory circuitry (not shown) configured to store one or more instruction(s) that can be executed by the processor or by the processing circuitry, in particular under control of the software. For instance, the memory circuitry may comprise a non-transitory storage medium storing executable software code which, when executed by the processor or the processing circuitry, causes the various operations of the respectively corresponding method 2, 1 to be performed.

In an implementation, the processing circuitry may comprise one or more processors and a non-transitory memory connected to the one or more processors. The non-transitory memory may carry executable program code which, when executed by the one or more processors, causes the respective device 3, 4 to perform, conduct or initiate the operations or methods described herein.

The present disclosure has been described in conjunction with various embodiments as examples as well as implementations. However, other variations can be understood and effected by persons skilled in the art practicing the claimed subject-matter, from a study of the drawings, this disclosure and the independent claims. In the claims as well as in the description, the word “comprising” does not exclude other elements or steps and the indefinite article “a” or “an” does not exclude a plurality. A single element or other unit may fulfill the functions of several entities or items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used in an advantageous implementation.