

Title:
PATH SCOPE FLOODING IN IS-IS NETWORKS
Document Type and Number:
WIPO Patent Application WO/2021/035013
Kind Code:
A1
Abstract:
The disclosure relates to distributing link-state information over a network. One aspect includes a node comprising network interfaces and one or more processors. The one or more processors are configured to receive an LSP on a first network interface. The LSP indicates a flooding scope of explicit path and contains explicit path information. The explicit path information includes a path identifier of an explicit path and sequentially ordered topological path description elements (PDEs) that describe the explicit path. The one or more processors are configured to select, responsive to a determination that the node is on the explicit path, one or more of the network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs. The one or more processors are configured to distribute the LSP on the selected one or more of the network interfaces.

Inventors:
CHUNDURI UMA S (US)
LI RENWEI (US)
RETANA ALVARO (US)
Application Number:
PCT/US2020/047116
Publication Date:
February 25, 2021
Filing Date:
August 20, 2020
Assignee:
FUTUREWEI TECHNOLOGIES INC (US)
International Classes:
H04L45/42
Foreign References:
US 62/890,434 P (filed 2019-08-22)
Other References:
ECKERT TOERLESS ET AL: "Preferred Path Routing (PPR) Graphs - Beyond Signaling Of Paths To Networks", 2018 14TH INTERNATIONAL CONFERENCE ON NETWORK AND SERVICE MANAGEMENT (CNSM), IFIP, 5 November 2018 (2018-11-05), pages 384 - 390, XP033483368
GINSBERG L, PREVIDI S, YANG Y (CISCO SYSTEMS): "IS-IS Flooding Scope LSPs; draft-ietf-isis-fs-lsp-02.txt", INTERNET ENGINEERING TASK FORCE, IETF; STANDARD WORKING DRAFT, INTERNET SOCIETY (ISOC), 4, rue des Falaises, CH-1205 Geneva, Switzerland, 4 June 2014 (2014-06-04), pages 1 - 22, XP015099404
Attorney, Agent or Firm:
POMERENKE, Ronald M. (US)
Claims:
CLAIMS

What is claimed is:

1. A node for distributing link-state information over a network using protocol data units (PDUs), comprising:
a plurality of network interfaces; and
one or more processors in communication with the plurality of network interfaces, wherein the one or more processors are configured to cause the node to:
receive a link-state PDU (LSP) on a first interface of the plurality of network interfaces, wherein the LSP indicates a flooding scope of explicit path and contains explicit path information, wherein the explicit path information includes a path identifier of an explicit path comprising nodes in the network, wherein the explicit path information further includes sequentially ordered topological path description elements (PDEs) that describe the explicit path;
determine that the node is on the explicit path and, in response, select one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path; and
distribute the LSP on the selected one or more of the plurality of network interfaces.

2. The node of claim 1, wherein the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to a next node or next nodes in the sequentially ordered topological PDEs that describe the explicit path.

3. The node of claim 2, wherein the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs that describe the explicit path.

4. The node of any of claims 1 to 3, wherein: the sequentially ordered topological PDEs describe a path of three or more nodes in the network; and the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to a next node in the path described by the path identifier.

5. The node of any of claims 1 to 3, wherein: the sequentially ordered topological PDEs describe a graph of three or more nodes in the network; and the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces to distribute the LSP to a next node or next nodes in the graph described by the path identifier.

6. The node of any of claims 1 to 5, wherein the one or more processors are further configured to cause the node to: store the path identifier and the sequentially ordered topological PDEs in a path scope link-state database (PS-LSDB).

7. The node of any of claims 1 to 6, wherein the sequentially ordered topological PDEs comprise: a preferred path routing (PPR) of a preferred path for routing packets through the network.

8. The node of claim 7, wherein the preferred path for routing packets represents a data path from a source node to a destination node in the network, wherein the data path has at least one intermediate node between the source node and the destination node.

9. The node of claim 8, wherein the preferred path for routing packets represents a data path from a plurality of source nodes in the network to at least one destination node in the network, wherein the data path has at least one intermediate node between the plurality of source nodes and the destination node.

10. The node of any of claims 8 and 9, wherein the preferred path for routing packets represents a data path from at least one source node in the network to a plurality of destination nodes in the network, wherein the data path has at least one intermediate node between the source node and the plurality of destination nodes.

11. The node of any of claims 7 to 10, wherein the one or more processors are further configured to cause the node to forward packets that contain the path identifier on the preferred path to a next PDE in the sequentially ordered topological PDEs.

12. The node of any of claims 7 to 11, wherein the preferred path for routing packets comprises a strict path that contains all nodes on which a packet that contains the path identifier is to be forwarded.

13. The node of any of claims 1 to 6, wherein the sequentially ordered topological PDEs identified by the path identifier describe: a traffic engineering (TE) path through the network.

14. The node of any of claims 1 to 3, wherein the one or more processors are further configured to cause the node to: advertise that the node supports the flooding scope of explicit path.

15. The node of any of claims 1 to 14, wherein the one or more processors are further configured to cause the node to: advertise path identifiers for sequentially ordered topological PDEs having a flooding scope of explicit path, wherein the node is one of the PDEs.

16. The node of any of claims 1 to 15, wherein the one or more processors are further configured to cause the node to: send an intermediate system (IS) to IS hello message that advertises that the node supports the flooding scope of explicit path.

17. The node of any of claims 1 to 16, wherein the flooding scope of explicit path is a level-1 explicit path flooding scope that includes only intermediate system (IS) to IS level-1 nodes.

18. The node of any of claims 1 to 16, wherein the flooding scope of explicit path is a level-2 explicit path flooding scope that includes only intermediate system (IS) to IS level-2 nodes.

19. The node of any of claims 1 to 16, wherein the flooding scope of explicit path is an extended domain path flooding scope that includes both intermediate system (IS) to IS level-1 and level-2 nodes.

20. The node of any of claims 1 to 19, wherein the one or more processors are further configured to cause the node to: participate in synchronization of the sequentially ordered topological PDEs stored in a first scope qualified flooding (SQF) LSP at the node with sequentially ordered topological PDEs identified by the path identifier stored in a second SQF-LSP by an adjacent node in the network.

21. The node of any of claims 1 to 5 or 7 to 19, wherein the node comprises a plurality of path scope link-state databases (PS-LSDB), wherein the one or more processors are further configured to cause the node to: store the path identifier and the sequentially ordered topological PDEs in a first PS-LSDB of the PS-LSDBs; store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs; and participate in synchronization of all sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs in one or more of the PS-LSDBs that are shared by the node and an adjacent node.

22. The node of any of claims 1 to 5 or 7 to 19, wherein the node comprises a plurality of path scope link-state databases (PS-LSDB), wherein the one or more processors are further configured to cause the node to: store the path identifier and the sequentially ordered topological PDEs in a first PS-LSDB of the plurality of PS-LSDBs; store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs; participate in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the one or more of the plurality of PS-LSDBs with a designated intermediate system (DIS); and advertise any path identifiers of the sequentially ordered topological PDEs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

23. The node of any of claims 1 to 5 or 7 to 19, wherein the node comprises a plurality of path scope link-state databases (PS-LSDB), wherein the one or more processors are further configured to cause the node to: store the path identifier and the sequentially ordered topological PDEs in a first of the link-state databases; store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in others of the link-state databases; participate in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS); and participate in synchronization of any sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

24. The node of any of claims 1 to 23, wherein the LSP is a first LSP containing first explicit path information, wherein the path identifier is a first path identifier of a first explicit path, and wherein the one or more processors are further configured to cause the node to: receive a second LSP on an interface of the plurality of network interfaces, wherein the second LSP specifies the flooding scope of explicit path and contains second explicit path information, wherein the second explicit path information includes a second path identifier of a second explicit path and sequentially ordered topological path description elements (PDEs) that describe the second explicit path; and drop the second LSP responsive to a determination that the node is not on the second explicit path.

25. A method for distributing link-state information over a network using protocol data units (PDUs), comprising:
receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of explicit path and contains explicit path information, wherein the explicit path information includes a path identifier of an explicit path comprising nodes in the network, wherein the explicit path information includes sequentially ordered topological path description elements (PDEs) that describe the explicit path;
determining whether the node is on the explicit path;
selecting, responsive to a determination that the node is on the explicit path, one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path; and
distributing the LSP on the selected one or more of the plurality of network interfaces.

26. The method of claim 25, wherein selecting the one or more of the plurality of network interfaces comprises: selecting the one or more of the plurality of network interfaces that correspond to a next node or next nodes in the sequentially ordered topological PDEs for the path identifier.

27. The method of claim 26, wherein selecting the one or more of the plurality of network interfaces comprises: selecting the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs for the path identifier.

28. The method of any of claims 25 to 27, wherein: the sequentially ordered topological PDEs describe a path of nodes in the network; and selecting the one or more of the plurality of network interfaces comprises selecting the one or more of the plurality of network interfaces that correspond to a next node in the path described by the path identifier.

29. The method of any of claims 25 to 28, wherein: the sequentially ordered topological PDEs describe a graph of nodes in the network; and selecting the one or more of the plurality of network interfaces comprises selecting the one or more of the plurality of network interfaces to distribute the LSP to a next node or next nodes in the graph described by the path identifier.

30. The method of any of claims 25 to 29, further comprising: storing the path identifier and the sequentially ordered topological PDEs in a path scope link-state database (PS-LSDB).

31. The method of any of claims 25 to 30, wherein the sequentially ordered topological PDEs comprise a preferred path routing (PPR) of a preferred path for routing packets through the network, and further comprising: forwarding packets from the node to a next node on the explicit path.

32. The method of any of claims 25 to 31, further comprising: advertising that the node supports the flooding scope of explicit path.

33. The method of any of claims 25 to 32, further comprising: sending, by the node, a packet in a hello message that advertises that the node supports the flooding scope of explicit path and that contains the path identifier.

34. The method of any of claims 25 to 33, further comprising: participating in synchronization of the sequentially ordered topological PDEs with sequentially ordered topological PDEs identified by the path identifier of an adjacent node in the network.

35. The method of any of claims 25 to 29 or 31 to 34, further comprising: storing the path identifier and the sequentially ordered topological PDEs in a first of a plurality of link-state databases; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in others of the plurality of link-state databases; and participating in synchronization of all sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs in the plurality of link-state databases that are shared by the node and an adjacent node.

36. The method of any of claims 25 to 29 or 31 to 34, further comprising: storing the path identifier and the sequentially ordered topological PDEs in a first path scope link-state database (PS-LSDB) of a plurality of PS-LSDBs; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in others of the plurality of PS-LSDBs; participating in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS); and advertising any sequentially ordered topological PDEs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

37. The method of any of claims 25 to 29 or 31 to 34, further comprising: storing the path identifier and the sequentially ordered topological PDEs in a first path scope link-state database (PS-LSDB) of a plurality of PS-LSDBs; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in others of the plurality of PS-LSDBs; participating in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS); and participating in synchronization, with other nodes, of any sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

38. The method of any of claims 25 to 37, wherein the LSP is a first LSP containing first explicit path information, wherein the path identifier is a first path identifier of a first explicit path, and further comprising: receiving a second LSP on an interface of the plurality of network interfaces, wherein the second LSP specifies the flooding scope of explicit path and contains second explicit path information, wherein the second explicit path information includes a second path identifier of a second explicit path and sequentially ordered topological path description elements (PDEs) that describe the second explicit path in the second LSP; and dropping the second LSP responsive to a determination that the node is not on the second explicit path in the second LSP.

39. A non-transitory computer-readable medium storing computer instructions for distributing link-state information in a network using protocol data units (PDUs), that when executed by one or more processors, cause the one or more processors to perform the steps of:
receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of explicit path and contains explicit path information, wherein the explicit path information includes a path identifier of an explicit path comprising nodes in the network, wherein the explicit path information further includes sequentially ordered topological path description elements (PDEs) that describe the explicit path;
determining whether the node is on the explicit path;
selecting, responsive to a determination that the node is on the explicit path, one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path; and
distributing the LSP on the selected one or more of the plurality of network interfaces.

40. The non-transitory computer-readable medium of claim 39, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: selecting the one or more of the plurality of network interfaces that correspond to a next node or next nodes in the sequentially ordered topological PDEs for the path identifier.

41. The non-transitory computer-readable medium of claim 40, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: selecting the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs for the path identifier.

42. The non-transitory computer-readable medium of any of claims 39 to 41, wherein the sequentially ordered topological PDEs describe a path of nodes in the network, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: selecting the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs for the path identifier.

43. The non-transitory computer-readable medium of any of claims 39 to 42, wherein the sequentially ordered topological PDEs describe a graph of nodes in the network, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: selecting the one or more of the plurality of network interfaces to distribute the LSP to a next node or next nodes in the graph described by the path identifier.

44. The non-transitory computer-readable medium of any of claims 39 to 43, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the path identifier and the sequentially ordered topological PDEs in a link-state database.

45. The non-transitory computer-readable medium of any of claims 39 to 44, wherein the sequentially ordered topological PDEs comprise a preferred path routing (PPR) of a preferred path for routing packets through the network, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: forwarding packets from the node to a next node on the explicit path.

46. The non-transitory computer-readable medium of any of claims 39 to 45, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: advertising that the node supports the flooding scope of explicit path.

47. The non-transitory computer-readable medium of any of claims 39 to 46, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: participating in synchronization of the sequentially ordered topological PDEs identified by the path identifier with the sequentially ordered topological PDEs identified by the path identifier of an adjacent node in the network.

48. The non-transitory computer-readable medium of any of claims 39 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the path identifier and the sequentially ordered topological PDEs in a first path scope link-state database (PS-LSDB) of a plurality of PS-LSDBs; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs; and participating in synchronization of all sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs in the plurality of PS-LSDBs that are shared by the node and an adjacent node.

49. The non-transitory computer-readable medium of any of claims 39 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the path identifier and the sequentially ordered topological PDEs in a first path scope link-state database (PS-LSDB) of a plurality of PS-LSDBs; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs; participating in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS); and advertising any sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

50. The non-transitory computer-readable medium of any of claims 39 to 47, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: storing the path identifier and the sequentially ordered topological PDEs in a first path scope link-state database (PS-LSDB) of a plurality of PS-LSDBs; storing identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs for other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs; participating in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS); and participating in synchronization, with nodes other than the DIS, of any sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

51. The non-transitory computer-readable medium of any of claims 39 to 50, wherein the LSP is a first LSP containing first explicit path information, wherein the path identifier is a first path identifier of a first explicit path, wherein the instructions, that when executed by one or more processors, further cause the one or more processors to perform the steps of: receiving a second LSP on an interface of the plurality of network interfaces, wherein the second LSP specifies the flooding scope of explicit path and contains second explicit path information, wherein the second explicit path information includes a second path identifier of a second explicit path and sequentially ordered topological path description elements (PDEs) that describe the second explicit path; and dropping the second LSP responsive to a determination that the node is not on the second explicit path.

Description:
PATH SCOPE FLOODING IN IS-IS NETWORKS

Inventors:

Uma S. Chunduri
Renwei Li
Alvaro Retana

CLAIM FOR PRIORITY

[0001] This application claims the benefit of priority to U.S. Provisional App. 62/890,434, filed August 22, 2019, the entire contents of which are hereby incorporated by reference.

FIELD

[0002] The present disclosure relates to the field of routing in a network, and in particular, to path scope flooding in an Intermediate System to Intermediate System (IS-IS).

BACKGROUND

[0003] In a network comprising a single autonomous system (AS), each node needs to be aware of the topological relationships (i.e., adjacencies) of all other nodes, such that all nodes may build a topological map (topology) of the AS. Nodes may learn about one another's adjacencies by distributing (e.g., flooding) link-state information throughout the network according to an Interior Gateway Protocol (IGP), such as intermediate system (IS) to IS (IS-IS).

[0004] The IS-IS protocol uses several different types of protocol data units (PDUs). A PDU may also be referred to herein as a packet or a message. Hello PDUs are exchanged periodically between nodes (e.g., routers) to establish adjacency. An IS-IS adjacency can be either point-to-point (P2P) or broadcast (also referred to herein as Local Area Network (LAN)). A link-state PDU (LSP) contains the link-state information. LSPs are flooded periodically throughout the AS to distribute the link-state information. A node that receives the LSPs constructs and maintains a link-state database (LSDB) based on the link-state information. A complete sequence number PDU (CSNP) contains a complete list of all LSPs in the LSDB. CSNPs are sent periodically by a node on all interfaces. The receiving nodes use the information in the CSNP to update and synchronize their LSDBs. Partial sequence number PDUs (PSNPs) are sent by a receiver when it detects that it is missing a link-state PDU (e.g., when its LSDB is out of date). The receiver sends the PSNP to a node that transmitted the CSNP.
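As an illustration only (not part of the disclosure), the CSNP/PSNP database-synchronization exchange described above can be sketched as follows; the class and field names are hypothetical simplifications of the real PDU formats:

```python
from dataclasses import dataclass

# Hypothetical, simplified models of the PDU roles described above;
# field names and structures are illustrative, not from the IS-IS spec.

@dataclass
class LSP:                  # link-state PDU: carries link-state information
    lsp_id: str
    seq_num: int
    adjacencies: dict       # neighbor -> cost

@dataclass
class CSNP:                 # complete sequence number PDU
    summaries: dict         # lsp_id -> latest sequence number known to sender

@dataclass
class PSNP:                 # partial sequence number PDU
    requested: list         # lsp_ids the receiver is missing or has stale

def handle_csnp(lsdb: dict, csnp: CSNP) -> PSNP:
    """Compare a received CSNP against the local LSDB and request any
    LSPs that are missing or stale (lower local sequence number)."""
    missing = [lsp_id for lsp_id, seq in csnp.summaries.items()
               if lsp_id not in lsdb or lsdb[lsp_id].seq_num < seq]
    return PSNP(requested=missing)
```

For example, a node holding only sequence 2 of LSP "A" that receives a CSNP summarizing "A" at sequence 3 and "B" at sequence 1 would send a PSNP requesting both.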

[0005] IS-IS is a link-state protocol that uses a reliable flooding mechanism to distribute the link-state information across the entire area or domain. Each IS-IS router distributes information about its local state (e.g., usable interfaces, reachable neighbors, and the cost of using each interface) to other routers using an LSP. Each router uses the received link-state information to build up identical link-state databases (LSDBs) that describe the topology of the AS. From its LSDB, each router calculates its own routing table using a Shortest Path First (SPF) or Dijkstra algorithm. This routing table contains all the destinations the routing protocol learns, associated with a next hop node. The protocol recalculates routes when the network topology changes, using the SPF or Dijkstra algorithm, and minimizes the routing protocol traffic that it generates.
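For illustration, the SPF calculation over an LSDB can be sketched in Python. This is a minimal Dijkstra that returns next hops from a plain adjacency map, not the full IS-IS SPF (it omits equal-cost paths, pseudonodes, and overload handling):

```python
import heapq

def spf_next_hops(lsdb: dict, source: str) -> dict:
    """Minimal Shortest Path First over an LSDB modeled as
    {node: {neighbor: cost}}; returns {destination: next_hop}."""
    dist = {source: 0}
    next_hop = {}
    pq = [(0, source, None)]          # (cost, node, first hop from source)
    visited = set()
    while pq:
        cost, node, hop = heapq.heappop(pq)
        if node in visited:
            continue
        visited.add(node)
        if hop is not None:
            next_hop[node] = hop
        for nbr, w in lsdb.get(node, {}).items():
            if nbr not in visited and cost + w < dist.get(nbr, float("inf")):
                dist[nbr] = cost + w
                # the first hop is the neighbor itself when leaving the source
                heapq.heappush(pq, (cost + w, nbr, nbr if hop is None else hop))
    return next_hop
```

In a triangle A-B (cost 1), B-C (cost 1), A-C (cost 5), node A reaches C via B, since the two-hop cost 2 beats the direct cost 5.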

[0006] Thus, every node in the network is to have the same LSDB to have a consistent decision process. The reliable flooding mechanism of the IS-IS protocol uses a simple rule to distribute the link-state information. Specifically, the reliable flooding mechanism sends or floods the link-state information on all the interfaces except the interface from where the link-state information was received. Though this is inefficient, it ensures the consistent decision process.

BRIEF SUMMARY

[0007] According to one aspect of the present disclosure, there is provided a node for distributing link-state information over a network using protocol data units (PDUs). The node comprises a plurality of network interfaces, and one or more processors in communication with the plurality of network interfaces. The one or more processors are configured to cause the node to receive a link-state PDU (LSP) on a first interface of the plurality of network interfaces. The LSP specifies a flooding scope of explicit path and contains explicit path information. The explicit path information includes a path identifier of an explicit path comprising nodes in the network. The explicit path information further includes sequentially ordered topological path description elements (PDEs) that describe the explicit path. The one or more processors are configured to cause the node to determine that the node is on the explicit path and, in response, select one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path. The one or more processors are configured to cause the node to distribute the LSP on the selected one or more of the plurality of network interfaces.

[0008] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to a next node or next nodes in the sequentially ordered topological PDEs that describe the explicit path.
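The contrast between the classic flooding rule of paragraph [0006] and the explicit path scope selection of this aspect can be sketched as follows; the interface names and the neighbor-to-interface map are illustrative assumptions, not from the disclosure:

```python
def classic_flood(receiving_interface: str, interfaces: list) -> list:
    """Standard IS-IS rule: re-flood the LSP on every interface except
    the one it arrived on."""
    return [ifc for ifc in interfaces if ifc != receiving_interface]

def path_scope_select(node_id, receiving_interface, pdes, neighbor_interface):
    """Sketch of the path scope rule: if this node appears in the
    sequentially ordered PDEs of the explicit path, select only the
    interface(s) toward the next PDE; a node not on the path selects
    nothing. `neighbor_interface` maps a neighbor node ID to the local
    interface that reaches it (an illustrative assumption)."""
    if node_id not in pdes:
        return []                      # node is not on the explicit path
    idx = pdes.index(node_id)
    selected = []
    if idx + 1 < len(pdes):            # forward toward the next PDE only
        ifc = neighbor_interface.get(pdes[idx + 1])
        if ifc is not None and ifc != receiving_interface:
            selected.append(ifc)
    return selected
```

With PDEs [R1, R2, R3], node R2 receiving the LSP from R1 selects only the interface toward R3, rather than every interface as classic flooding would.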

[0009] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs that describe the explicit path.

[0010] Optionally, in any of the preceding aspects, the sequentially ordered topological PDEs for the path identifier describe a path of three or more nodes in the network. The one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces that correspond to a next node in the path described by the path identifier.

[0011] Optionally, in any of the preceding aspects, the sequentially ordered topological PDEs for the path identifier describe a graph of three or more nodes in the network. The one or more processors are further configured to cause the node to select the one or more of the plurality of network interfaces to distribute the LSP to a next node or next nodes in the graph described by the path identifier.

[0012] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to store the path identifier and the sequentially ordered topological PDEs in a path scope link-state database (PS-LSDB).

[0013] Optionally, in any of the preceding aspects, the sequentially ordered topological PDEs for the path identifier comprise a preferred path routing (PPR) of a preferred path for routing packets through the network.

[0014] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from a source node to a destination node in the network, wherein the data path has at least one intermediate node between the source node and the destination node.

[0015] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from a plurality of source nodes in the network to at least one destination node in the network, wherein the data path has at least one intermediate node between the plurality of source nodes and the destination node.

[0016] Optionally, in any of the preceding aspects, the preferred path for routing packets represents a data path from at least one source node in the network to a plurality of destination nodes in the network, wherein the data path has at least one intermediate node between the source node and the plurality of destination nodes.

[0017] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to forward packets that contain the path identifier on the preferred path to a next PDE in the sequentially ordered topological PDEs.

[0018] Optionally, in any of the preceding aspects, the preferred path for routing packets comprises a strict path that contains all nodes on which a packet that contains the path identifier is to be forwarded.

[0019] Optionally, in any of the preceding aspects, the sequentially ordered topological PDEs identified by the path identifier describe a traffic engineering (TE) path through the network.

[0020] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to advertise that the node supports the flooding scope of explicit path.

[0021] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to advertise path identifiers for sequentially ordered topological PDEs having a flooding scope of explicit path, wherein the node is one of the PDEs.

[0022] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to send an intermediate system (IS) to IS hello message that advertises that the node supports the flooding scope of explicit path.

[0023] Optionally, in any of the preceding aspects, the flooding scope of explicit path is a level-1 explicit path flooding scope that includes only intermediate system (IS) to IS level-1 nodes.

[0024] Optionally, in any of the preceding aspects, the flooding scope of explicit path is a level-2 explicit path flooding scope that includes only intermediate system (IS) to IS level-2 nodes.

[0025] Optionally, in any of the preceding aspects, the flooding scope of explicit path is an extended domain path flooding scope that includes both intermediate system (IS) to IS level-1 and level-2 nodes.

[0026] Optionally, in any of the preceding aspects, the one or more processors are further configured to cause the node to participate in synchronization of the sequentially ordered topological PDEs stored in a first scope qualified flooding (SQF) LSP at the node with sequentially ordered topological PDEs identified by the path identifier stored in a second SQF-LSP by an adjacent node in the network.

[0027] Optionally, in any of the preceding aspects, the node comprises a plurality of path scope link-state databases (PS-LSDB), wherein the one or more processors are further configured to cause the node to store the path identifier and the sequentially ordered topological PDEs in a first PS-LSDB of the PS-LSDBs. The one or more processors are further configured to cause the node to store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs. The one or more processors are further configured to cause the node to participate in synchronization of all sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs in one or more of the PS-LSDBs that are shared by the node and an adjacent node.

[0028] Optionally, in any of the preceding aspects, the node comprises a plurality of path scope link-state databases (PS-LSDBs), wherein the one or more processors are further configured to cause the node to store the path identifier and the sequentially ordered topological PDEs in a first PS-LSDB of the plurality of PS-LSDBs. The one or more processors are further configured to cause the node to store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in other PS-LSDBs of the plurality of PS-LSDBs. The one or more processors are further configured to cause the node to participate in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the one or more of the plurality of PS-LSDBs with a designated intermediate system (DIS). The one or more processors are further configured to cause the node to advertise any path identifiers of the sequentially ordered topological PDEs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

[0029] Optionally, in any of the preceding aspects, the node comprises a plurality of path scope link-state databases (PS-LSDB), wherein the one or more processors are further configured to cause the node to store the path identifier and the sequentially ordered topological PDEs in a first of the link-state databases. The one or more processors are further configured to cause the node to store identifiers of other paths associated with the flooding scope of explicit path along with sequentially ordered topological PDEs identified by other path identifiers in others of the link-state databases. The one or more processors are further configured to cause the node to participate in synchronization of zero or more of the sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs with a designated intermediate system (DIS). The one or more processors are further configured to cause the node to participate in synchronization of any sequentially ordered topological PDEs in scope qualified flooding (SQF) LSPs stored in the plurality of PS-LSDBs that were not synchronized with the DIS.

[0030] Optionally, in any of the preceding aspects, the LSP is a first LSP containing first explicit path information. The path identifier is a first path identifier of a first explicit path. The one or more processors are further configured to cause the node to receive a second LSP on an interface of the plurality of network interfaces, wherein the second LSP specifies the flooding scope of explicit path and contains second explicit path information, wherein the second explicit path information includes a second path identifier of a second explicit path and sequentially ordered topological path description elements (PDEs) that describe the second explicit path. The one or more processors are further configured to cause the node to drop the second LSP responsive to a determination that the node is not on the second explicit path.

[0031] According to yet another aspect of the disclosure, there is provided a method for distributing link-state information over a network using protocol data units (PDUs). The method comprises receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of explicit path and contains explicit path information. The explicit path information includes a path identifier of an explicit path comprising nodes in the network. The explicit path information includes sequentially ordered topological path description elements (PDEs) that describe the explicit path. The method comprises determining whether the node is on the explicit path. The method comprises selecting, responsive to a determination that the node is on the explicit path, one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path. The method comprises distributing the LSP on the selected one or more of the plurality of network interfaces.

[0032] According to still one other aspect of the disclosure, there is provided a non-transitory computer-readable medium storing computer instructions for distributing link-state information in a network using protocol data units (PDUs), that when executed by one or more processors, cause the one or more processors to perform the steps of: receiving, at a first network interface of a plurality of network interfaces of a node in the network, a link-state PDU (LSP) that indicates a flooding scope of explicit path and contains explicit path information. The explicit path information includes a path identifier of an explicit path comprising nodes in the network. The explicit path information further includes sequentially ordered topological path description elements (PDEs) that describe the explicit path. The steps further comprise determining whether the node is on the explicit path. The steps further comprise selecting, responsive to a determination that the node is on the explicit path, one or more of the plurality of network interfaces other than the first interface to distribute the LSP based on the sequentially ordered topological PDEs that describe the explicit path. The steps further comprise distributing the LSP on the selected one or more of the plurality of network interfaces.

[0033] This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter. The claimed subject matter is not limited to implementations that solve any or all disadvantages noted in the Background.

BRIEF DESCRIPTION OF THE DRAWINGS

[0034] Aspects of the present disclosure are illustrated by way of example and are not limited by the accompanying figures, in which like references indicate like elements.

[0035] FIG. 1 illustrates a network configured to implement an embodiment of path scope flooding.

[0036] FIG. 2A depicts a TLV for announcing scope flooding support.

[0037] FIG. 2B depicts a table that contains information for a number of different types of path scope flooding, in accordance with one embodiment.

[0038] FIG. 3 is a flowchart of one embodiment of a process of nodes exchanging hello PDUs to advertise path scope flooding capability.

[0039] FIG. 4 depicts a flowchart of one embodiment of a process of path scope flooding.

[0040] FIG. 4A depicts a flowchart of one embodiment of a process of path scope flooding to a next node in a path.

[0041] FIG. 4B depicts a flowchart of one embodiment of a process of path scope flooding to a next node in a graph.

[0042] FIG. 5 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment.

[0043] FIG. 6A depicts a flowchart of one embodiment of a process of two nodes synchronizing PS-LSDBs.

[0044] FIG. 6B provides further details of one embodiment of synchronizing PS-LSDBs between two nodes.

[0045] FIG. 6C depicts a format for an embodiment of an FS-CSNP.

[0046] FIG. 6D depicts a format for one embodiment of an FS-PSNP.

[0047] FIG. 7 depicts a flowchart of one embodiment of a process of synchronizing PS-LSDBs for two nodes having a P2P adjacency.

[0048] FIG. 8A depicts nodes in a LAN, as well as link-state databases maintained by the nodes in the LAN, in accordance with one embodiment.

[0049] FIG. 8B depicts a flowchart of one embodiment of a process of synchronizing PS-LSDBs in a LAN.

[0050] FIG. 9A depicts one embodiment of a network showing a state of link-state databases immediately after a node restarts.

[0051] FIG. 9B depicts a flowchart of one embodiment of a process of a graceful restart of a node having a PS-LSDB.

[0052] FIG. 9C depicts a flowchart of one embodiment of a process of re-establishing a PS-LSDB during a graceful restart of a node.

[0053] FIG. 10 illustrates an embodiment of a node.

[0054] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system.

DETAILED DESCRIPTION

[0055] The present disclosure, which generally relates to distributing link-state information in a network, will now be described with reference to the figures. In an embodiment of path scope flooding, link-state information associated with an explicit path is sent to nodes that are part of the explicit path in the network. An explicit path is identified by a path identifier and may be described by sequentially ordered topological path description elements (PDEs). A PDE represents a segment of the explicit path. A PDE can be, but is not limited to, a node, a backup node, a link, etc. An explicit path comprises at least three nodes, but includes a subset of fewer than all of the nodes in the network. In an embodiment of path scope flooding, link-state information associated with the explicit path is distributed to nodes based on whether the node is on the explicit path. In one embodiment, the link-state information associated with the explicit path is distributed to only nodes that are part of the explicit path in the network. Hence, by distributing the link-state information based on whether the nodes are on the explicit path, the link-state information need not be sent to all nodes in the network.

[0056] It is understood that the present embodiments of the disclosure may be implemented in many different forms and that claim scope should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the inventive embodiment concepts to those skilled in the art. Indeed, the disclosure is intended to cover alternatives, modifications and equivalents of these embodiments, which are included within the scope and spirit of the disclosure as defined by the appended claims. Furthermore, in the following detailed description of the present embodiments of the disclosure, numerous specific details are set forth in order to provide a thorough understanding. However, it will be clear to those of ordinary skill in the art that the present embodiments of the disclosure may be practiced without such specific details.

[0057] FIG. 1 illustrates a network 100 configured to implement an embodiment of path scope flooding. In an embodiment of path scope flooding, link-state information associated with an explicit path is only sent to nodes that are part of the explicit path.

[0058] The network 100 includes a number of nodes R1 - R12, which may also be referred to herein as “network nodes” or “network elements.” In one embodiment, the network 100 includes one or more areas, as the term is used in the IS-IS protocol. In one embodiment, a node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level 1/2 router.

[0059] The nodes R1 - R12 are connected by links 106 (the links between R2/R3, R5/R7 and R6/R7 are labeled with reference numeral 106). Each of the nodes R1 - R12 may be a physical device, such as a router, a bridge, a virtual machine, a network switch, or a logical device configured to perform switching, routing, as well as distributing and maintaining link-state information as described herein. In an embodiment, any of nodes R1 - R12 may be headend nodes positioned at an edge of the network 100, an egress node from which traffic is transmitted, or an intermediate node or any other type of node. The links 106 may be wired or wireless links or interfaces interconnecting respective pairs of the nodes R1 - R12 together. The network 100 may include any number of nodes. In an embodiment, the nodes R1 - R12 are configured to implement various packet forwarding protocols, such as, but not limited to, MPLS, IPv4, IPv6, and Big Packet Protocol.

[0060] Two explicit paths 108-1, 108-2 are depicted. Explicit path 108-1 includes nodes R1, R2, R3 and R7. Thus, explicit path 108-1 contains a subset of the nodes R1 - R12. Herein, the term “subset” means fewer than all elements in the set. Thus, the subset of the nodes R1 - R12 will not include every one of nodes R1 - R12. Moreover, there is a sequential order of the nodes in the explicit path 108-1. In one embodiment, the sequential order is bi-directional. Explicit path 108-2 includes nodes R1, R2, R6, R10, R11, R7, R8 and R4. Thus, explicit path 108-2 contains a subset of the nodes R1 - R12. Moreover, there is a sequential order of the nodes in the explicit path 108-2. In one embodiment, an explicit path 108 includes at least three nodes. In one embodiment, an explicit path 108 includes at least four nodes. In one embodiment, an explicit path 108 includes at least five nodes. In one embodiment, a path (e.g., path 108-1, 108-2) is identified by a path identifier and is described by sequentially ordered topological path description elements (PDEs). A PDE represents a segment of the path. A PDE can be, but is not limited to, a node, a backup node, a link 106, etc. Each path is assigned a unique path identifier.
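The structure described above (a unique path identifier plus sequentially ordered PDEs) can be sketched as follows. This is an illustrative sketch only; names such as `ExplicitPath` and `PDE` are hypothetical and not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class PDE:
    """One segment of an explicit path: a node, a backup node, a link, etc."""
    kind: str    # e.g. "node", "backup-node", "link"
    value: str   # e.g. "R2"

@dataclass
class ExplicitPath:
    path_id: int     # unique path identifier
    pdes: List[PDE]  # the sequential order of the PDEs matters

    def nodes(self) -> List[str]:
        """Return the node names on the path, in sequential order."""
        return [p.value for p in self.pdes if p.kind == "node"]

# Explicit path 108-1 from FIG. 1: R1-R2-R3-R7, a subset of the nodes R1 - R12.
path_108_1 = ExplicitPath(1081, [PDE("node", n) for n in ("R1", "R2", "R3", "R7")])
```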

[0061] The explicit paths 108-1, 108-2 are what are referred to herein as “strict paths”. A strict path explicitly lists all nodes from one end of the path to the other end of the path. This is in contrast to a “loose path”, which does not explicitly list all nodes from one end of the path to the other end of the path. An example of a loose path is R1-R2-R7-R8-R4. This loose path does not list all nodes on a path between R1 and R4. In one embodiment, path scope flooding is used to provide link-state information for strict paths.
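The strict/loose distinction above can be checked mechanically: a path is strict if every consecutive pair of listed nodes is directly linked. The topology fragment below is an assumed partial adjacency map for FIG. 1, for illustration only.

```python
def is_strict(path: list, topology: dict) -> bool:
    """A path is strict if every consecutive pair of nodes is directly linked."""
    return all(b in topology.get(a, ()) for a, b in zip(path, path[1:]))

# Assumed fragment of FIG. 1's topology (illustrative; not the full network 100).
topo = {
    "R1": {"R2"}, "R2": {"R1", "R3", "R6"}, "R3": {"R2", "R7"},
    "R6": {"R2", "R7"}, "R7": {"R3", "R6", "R8"}, "R8": {"R7", "R4"},
    "R4": {"R8"},
}

# R1-R2-R3-R7 is strict; R1-R2-R7-R8-R4 is loose because R2 and R7 are not
# directly linked, so intermediate nodes are omitted from the listing.
```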

[0062] In an embodiment, link-state information about the explicit paths is only distributed by a node that received the link-state information to a next node on the explicit path. The next node on the explicit path refers to the node that is next in the sequence from the node that received the link-state information. The node that is next on the path will be relative to the node from which the link-state information was received. For example, if node R2 receives an LSP from node R1 containing link-state information for path 108-1, node R2 will identify node R3 as the next node on path 108-1 and send an LSP containing the link-state information for path 108-1 to node R3. However, node R2 will not send such an LSP to any other node. For example, node R2 will not send an LSP containing link-state information for path 108-1 to node R6, as node R6 is not on path 108-1. On the other hand, if node R2 receives an LSP from node R3 that contains link-state information for path 108-1, node R2 will identify node R1 as the next node on path 108-1 and send an LSP having link-state information for path 108-1 to node R1. Thus, note that a node does not necessarily distribute the link-state information on all of its interfaces (e.g., links 106) to nodes other than the node from which the link-state information was received. Moreover, the decision on which interface(s) to distribute the link-state information is based on the explicit path.

[0063] Note that there may be many more nodes in the network 100. Also, each node may have interfaces with many additional nodes. For example, a node could have interfaces with ten or even more nodes. Hence, by limiting the sending of the LSPs having link-state information for an explicit path based on the explicit path (e.g., to only nodes on that explicit path), unnecessary link-state information need not be transferred in the network 100. Moreover, since nodes such as R6 need not have link-state information for path 108-1, the fact that all nodes do not have the exact same link-state information does not present problems. In other words, a consistent decision process with respect to, for example, routing packets is still achieved.
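The bidirectional next-node rule in paragraph [0062] can be sketched as follows: the next node is always taken relative to the node from which the LSP arrived. This is an illustrative sketch, not the disclosed implementation.

```python
from typing import List, Optional

def next_node(path: List[str], me: str, received_from: str) -> Optional[str]:
    """Return the neighbor on `path` to forward the LSP to, or None.

    `path` is the sequentially ordered list of nodes on the explicit path;
    the sequential order is bi-directional, so forwarding proceeds away
    from the sender.
    """
    if me not in path:
        return None  # this node is not on the explicit path: do not forward
    i = path.index(me)
    prev_n = path[i - 1] if i > 0 else None
    next_n = path[i + 1] if i + 1 < len(path) else None
    if received_from == prev_n:
        return next_n  # LSP came from upstream: forward downstream
    if received_from == next_n:
        return prev_n  # LSP came from downstream: forward upstream
    return None

# Explicit path 108-1 from FIG. 1.
path_108_1 = ["R1", "R2", "R3", "R7"]
```

For example, R2 receiving from R1 forwards toward R3, and R2 receiving from R3 forwards toward R1, matching the two cases in the paragraph above.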

[0064] In one embodiment, each node maintains a conventional LSDB 102. Conventional LSDBs 102 are depicted at each of the nodes R1 to R12. Each node could be an IS-IS level-1 router, an IS-IS level-2 router, or an IS-IS level 1/2 router. If the node is an IS-IS level-1 router, it may maintain a conventional IS-IS level-1 LSDB. If the node is an IS-IS level-2 router, it may maintain a conventional IS-IS level-2 LSDB. If the node is an IS-IS level 1/2 router, it may maintain a conventional IS-IS level-1 LSDB and a conventional IS-IS level-2 LSDB.

[0065] In one embodiment, link-state information for a path is maintained separately from the link-state information in the conventional level-1 LSDBs and the conventional level-2 LSDBs. In one embodiment, if a node participates in a path, it maintains a path scope (PS) LSDB for that path. The PS-LSDB for a given path stores link-state information for that path. In one embodiment, the PS-LSDB for a given path contains the topology for that path. For example, a node that is on path 108-1 maintains a PS-LSDB 104-1 for path 108-1. Thus, PS-LSDB 104-1 contains link-state information for path 108-1. A node that is on path 108-2 maintains a PS-LSDB 104-2. Thus, PS-LSDB 104-2 contains link-state information for path 108-2. The reference numeral 104 is used herein to refer to a PS-LSDB in general, without reference to a specific PS-LSDB. The reference numeral 108 is used herein to refer to an explicit path in general, without reference to a specific explicit path. In some embodiments, a PS-LSDB 104 contains scope qualified flooding (SQF) LSPs for an explicit path 108. The term scope qualified flooding, as the term is used herein, refers to flooding of link-state information based on an explicit path. In one embodiment, the link-state information is flooded to only nodes on the explicit path. Each SQF-LSP contains link-state information for an explicit path. For example, PS-LSDB 104-1 contains one or more SQF-LSPs for explicit path 108-1, and PS-LSDB 104-2 contains one or more SQF-LSPs for explicit path 108-2. In one embodiment, the SQF-LSP for an explicit path 108 describes the topology of that explicit path. In one embodiment, the SQF-LSP for an explicit path 108 contains PDEs that describe that explicit path.
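The database layout described in paragraphs [0064] and [0065] can be sketched as follows: conventional level-1/level-2 LSDBs kept separate from per-path PS-LSDBs keyed by path identifier, each PS-LSDB holding the SQF-LSPs for one explicit path. Class and attribute names are hypothetical.

```python
from collections import defaultdict

class PathScopeRouter:
    """Illustrative node keeping conventional LSDBs separate from PS-LSDBs."""

    def __init__(self, name: str):
        self.name = name
        self.lsdb_level1 = {}              # conventional IS-IS level-1 LSDB 102
        self.lsdb_level2 = {}              # conventional IS-IS level-2 LSDB 102
        self.ps_lsdbs = defaultdict(dict)  # path_id -> {lsp_id: SQF-LSP PDEs}

    def store_sqf_lsp(self, path_id: int, lsp_id: str, pdes: list) -> bool:
        """Store an SQF-LSP only if this node participates in the path."""
        if self.name in pdes:
            self.ps_lsdbs[path_id][lsp_id] = pdes
            return True
        return False  # not on the path: no PS-LSDB is maintained for it

# R2 is on path 108-1 (PS-LSDB 104-1) but not on a hypothetical path R5-R7.
r2 = PathScopeRouter("R2")
```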

[0066] The link-state information in the PS-LSDB 104 for a given explicit path may be used to forward packets on that explicit path. In one embodiment, the SQF-LSP for an explicit path describes a route for forwarding data packets that contain a path identifier for the explicit path. In one embodiment, the sequentially ordered topological PDEs comprise a preferred path routing (PPR) of a preferred path for routing packets containing the PPR-ID through the network. A PPR indicates a preferred path over which packets containing the PPR-ID should be forwarded. In one embodiment, a node updates a locally stored forwarding database to indicate that data packets including this particular PPR-ID should be routed along the path identified by the PPR information instead of the predetermined shortest path, calculated using SPF. In one embodiment, the PPR represents a data path from a source node to a destination node in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path from multiple source nodes to a single destination node in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path from a single source node to multiple destination nodes in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path from at least one source node to multiple destination nodes in the network 100, and includes at least one intermediate node. In one embodiment, the PPR represents a data path from multiple source nodes to at least one destination node in the network 100, and includes at least one intermediate node.

[0067] In one embodiment, when a node receives a data packet, the node inspects the data packet to determine whether a PPR-ID is included in the data packet. In an embodiment, the PPR-ID may be included in a header of the data packet. If a PPR-ID is included in the data packet, the node performs a lookup on the locally stored forwarding database to determine the next PDE associated with the PPR-ID identified in the data packet. The PDE in the locally stored forwarding database indicates a next hop (another network element, link, or segment) by which to forward the data packet. The node forwards the data packet to the next hop based on the PDE indicated in the locally stored forwarding database. In this way, the nodes in the network are configured to transmit data packets via the PPR instead of the shortest path. Further details of PPR are described in the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated July 8, 2019, by U. Chunduri, et al. (hereinafter, “Chunduri 2019”); the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Routing (PPR) in IS-IS,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri PPR 2020”); and the link state routing (LSR) Working Group Draft Document entitled “Preferred Path Route Graph Structure,” dated March 8, 2020, by U. Chunduri, et al. (hereinafter, “Chunduri Graph”), all of which are incorporated by reference herein in their entirety.
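The per-packet lookup in paragraph [0067] can be sketched as follows: if the packet carries a PPR-ID, consult the locally stored forwarding database for the next PDE; otherwise fall back to the SPF-computed next hop. The data structures and names are hypothetical, for illustration only.

```python
def forward(packet: dict, fdb: dict, spf_next_hop: str) -> str:
    """Return the next hop for `packet` at this node.

    `fdb` maps PPR-ID -> next PDE (node, link, or segment) on the
    preferred path; `spf_next_hop` is the shortest-path fallback.
    """
    ppr_id = packet.get("ppr_id")  # PPR-ID carried in the packet header, if any
    if ppr_id is not None and ppr_id in fdb:
        return fdb[ppr_id]   # forward along the PPR instead of the shortest path
    return spf_next_hop      # no PPR-ID: use the SPF-computed path

# Assumed forwarding database at R2 for path 108-1 (PPR-ID 1081): next PDE is R3.
fdb_r2 = {1081: "R3"}
```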

[0068] In one embodiment, an explicit path corresponds to a traffic engineering (TE) path. Thus, in one embodiment, the sequentially ordered topological PDEs describe a traffic engineering (TE) path. Thus, the explicit path may be used to distribute TE characteristics of the links and nodes. The TE information is not required to be on all nodes in the network 100. In one embodiment, the explicit path includes nodes that act as a TE path computation element (PCE). The TE PCE may implement a decentralized path computation model used by a Resource Reservation Protocol (RSVP-TE). Further details of traffic engineering (TE) are described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7810, entitled “IS-IS Traffic Engineering (TE) Metric Extensions,” dated May 2016, by S. Previdi, et al.; and the Internet Engineering Task Force (IETF) Request for Comments (RFC) 8570, entitled “IS-IS Traffic Engineering (TE) Metric Extensions,” dated March 2019, by L. Ginsberg, et al., both of which are incorporated by reference herein in their entirety.

[0069] As noted above, in the IS-IS protocol, nodes may exchange a hello PDU periodically to establish adjacency. In one embodiment, a node sends a hello PDU in order to inform its neighbors that the node is configured to perform path scope flooding. The node may also receive hello PDUs from its neighbors to learn which neighbors support path scope flooding. As will be described more fully below, in some embodiments, there are multiple types of path scope flooding to cover different IS-IS levels (e.g., IS-IS level-1 router, IS-IS level-2 router, IS-IS level 1/2 router), as well as to cover standard Type Length Value (TLV) versus extended TLV. Thus, in some embodiments, the hello PDU advertises the type of path scope flooding supported by the node.

[0070] In one embodiment, the node sends an IS-IS Hello Packet (IIH) to advertise the type of path scope flooding supported by the node. In one embodiment, the hello message contains a Type Length Value (TLV) to indicate the supported flooding scope(s). In one embodiment, the TLV is an extension of a TLV described in Internet Engineering Task Force (IETF), Request for Comments (RFC): 7356, entitled “IS-IS Flooding Scope Link State PDUs (LSPs),” by L. Ginsberg et al. (hereinafter RFC 7356), dated September 2014, which is incorporated herein in its entirety. RFC 7356 describes a TLV (TLV type 243) for announcing scope flooding support.

[0071] FIG. 2A depicts the TLV that is described in RFC 7356 for announcing scope flooding support. The TLV 200 has a number of fields 202a - 202n for specifying a supported flooding scope. A path scope identifier may be placed into one of the fields 202 to indicate a path scope that is supported by a node. There is a reserved field (R) 204a - 204n associated with each corresponding flooding scope field 202a - 202n.

[0072] FIG. 2B depicts a table 220 that contains information for a number of different types of path scope flooding, in accordance with one embodiment. Each row in table 220 is for a different type of path scope flooding, in accordance with one embodiment. In one embodiment, there are six different types of path scope flooding, as depicted in table 220. In one embodiment, there are six types of path scope flooding in order to cover each of IS-IS level-1, IS-IS level-2, and IS-IS level 1/2 for both a standard length TLV and an extended length TLV.

[0073] Table 220 has an LSP Scope Identifier column 222. The table 220 has example values for the entries in the LSP Scope Identifier column 222. However, other values may be used. The values in column 222 may be assigned by the Internet Assigned Numbers Authority (IANA). The LSP Scope Identifier is a unique identifier (e.g., a unique number) for the type of path scope for that row. In one embodiment, the unique identifier is inserted in a supported scope field 202 in the TLV 200 depicted in FIG. 2A.

[0074] Table 220 contains a description column 224, which contains a description of the path scope flooding for each row. PS-Level-1 Explicit Path SQF-LSP refers to path scope flooding for IS-IS level-1. PS-Level-2 Explicit Path SQF-LSP refers to path scope flooding for IS-IS level-2. PS-Level-2 Extended Domain Explicit Path SQF-LSP refers to path scope flooding for IS-IS level 1/2.

[0075] Table 220 contains an LSPID Format/TLV Format column 226. The LSPID Format for each entry is extended. The TLV format for the top three table entries is Standard, whereas the TLV format for the bottom three table entries is Extended. The difference is the length of the TLV entry. As noted above, example values are provided for the LSP Scope Identifiers in table 220. RFC 7356 indicates that Scope Identifiers between 1 - 63 are reserved for flooding scopes with standard TLVs. RFC 7356 indicates that Scope Identifiers between 64 - 127 are reserved for flooding scopes with extended TLVs.
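The scope identifier ranges in paragraph [0075] (1 - 63 for standard TLVs, 64 - 127 for extended TLVs, per RFC 7356) can be illustrated with a small sketch. The octet layout used below (one supported-scope octet per field 202, high bit as the reserved R field) is an assumption for illustration, not the normative encoding.

```python
def encode_scope_tlv(scope_ids: list) -> bytes:
    """Encode an illustrative Scope Flooding Support TLV (type 243).

    Assumed layout: Type octet (243), Length octet, then one octet per
    supported scope with the R bit left clear and the low 7 bits carrying
    the LSP Scope Identifier.
    """
    for sid in scope_ids:
        if not 1 <= sid <= 127:
            raise ValueError(f"scope identifier out of range: {sid}")
    body = bytes(sid & 0x7F for sid in scope_ids)
    return bytes([243, len(body)]) + body

def uses_extended_tlvs(scope_id: int) -> bool:
    """Per RFC 7356: identifiers 1 - 63 use standard TLVs, 64 - 127 extended."""
    return 64 <= scope_id <= 127
```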

[0076] FIG. 3 is a flowchart of one embodiment of a process 300 of nodes exchanging hello PDUs to advertise path scope flooding capability. The process 300 may be used to establish adjacencies, as well as to inform neighbors about support for path scope flooding. Step 310 includes a node sending a hello message to its neighbor nodes. The hello message advertises that the node supports path scope flooding. In one embodiment, the hello message contains a TLV such as depicted in FIG. 2A. However, the hello message is not limited to the TLV depicted in FIG. 2A. In one embodiment, the TLV is an extension of TLV 243 described in RFC 7356. However, the TLV is not limited to being an extension of TLV 243 described in RFC 7356. Note that the node may specify more than one type of supported path scope flooding. With reference to table 220 in FIG. 2B, in one embodiment, the hello message specifies one or more of the types of path scope flooding. Thus, with reference to FIG. 2A, one or more of the supported scope fields 202 contains an LSP Scope Identifier that identifies a type of path scope flooding that is supported by the node.

[0077] Step 320 includes the node receiving Hello messages from neighbor nodes. The Hello messages indicate what path scope flooding is supported by each neighbor. These Hello messages may be similar to the Hello message sent by the node in step 310.

[0078] Step 330 includes the node storing the type of path scope flooding supported by the respective neighbor nodes. This information may be recorded in a “neighbor database.” The neighbor nodes will also record similar information.
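The neighbor database of steps 320-330 can be sketched as a simple mapping from neighbor system ID to the set of advertised scope identifiers. The class and method names are illustrative, not part of the specification.

```python
class NeighborDatabase:
    """Records which path-scope flooding types each neighbor advertised
    in its hello messages (steps 320-330 of process 300)."""

    def __init__(self):
        # neighbor system ID -> set of supported LSP Scope Identifiers
        self._supported = {}

    def record_hello(self, neighbor_id, scope_ids):
        """Store the scope identifiers carried in a received hello."""
        self._supported.setdefault(neighbor_id, set()).update(scope_ids)

    def supports(self, neighbor_id, scope_id):
        """True if the neighbor advertised this path-scope flooding type."""
        return scope_id in self._supported.get(neighbor_id, set())
```

A node would consult this database before flooding a path-scope LSP toward a neighbor, so that such LSPs are only sent to neighbors that advertised support.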

[0079] Path scope flooding, as described herein, floods link-state information based on nodes on the explicit path. FIG. 4 depicts a flowchart of one embodiment of a process 400 of path scope flooding. In some embodiments, the process 400 is performed in a network that uses the IS-IS protocol.

[0080] Step 410 includes a node receiving an LSP on a first network interface. The LSP specifies a flooding scope of explicit path. In one embodiment, the flooding scope is one of the six path scope flooding scopes depicted in table 220 of FIG. 2B. The LSP also contains explicit path information for a particular path. The explicit path information may include a path identifier of an explicit path and sequentially ordered topological path description elements (PDEs) that describe the explicit path. In one embodiment, the sequentially ordered topological PDEs include a list of nodes that describe the explicit path. However, the PDEs are not required to be nodes. In one embodiment, the LSP contains explicit path information for only one explicit path.

[0081] Step 420 includes selecting one or more network interfaces on which to distribute the LSP based on the explicit path information in the LSP. In one embodiment, the node selects the one or more network interfaces that correspond to a next node or nodes in the explicit path. For example, the node selects one or more network interfaces that correspond to a next node or next nodes in the sequentially ordered topological PDEs for the path identifier. In one embodiment, the node selects the one or more of the plurality of network interfaces that correspond to only a next node or only next nodes in the sequentially ordered topological PDEs for the path identifier. The one or more interfaces do not include the interface on which the LSP was received. In one embodiment, the one or more interfaces will be fewer than all of the remaining interfaces of the node. For example, the node may have ten interfaces, which for the sake of discussion will be referred to as interfaces 1 - 10. If the LSP is received on interface 1, then the node will select one or more of interfaces 2 - 10 based on the explicit path information. Typically, the one or more interfaces that correspond to the next node or nodes on the path will be fewer than all of interfaces 2 - 10. Hence, typically the node will not distribute the LSP on all interfaces (in the set of interfaces not including the receiving interface). In one embodiment, the node verifies that it is in the explicit path. If the node that received the LSP in step 410 is not on the explicit path, then the node drops the LSP. That is, the node does not distribute the LSP if the node (that received the LSP in step 410) is not on the explicit path. Also, in the event that the node is the last node on the explicit path, the node will not distribute the LSP.

[0082] Step 430 includes the node distributing the LSP on the one or more network interfaces that were selected in step 420. Step 430 includes flooding the link-state information in the received LSP based on the nodes in the path. In one embodiment, the node sends an LSP that specifies the flooding scope of explicit path and contains the explicit path information to only a next PDE or next PDEs in the explicit path information. In one embodiment, the node distributes the link-state information to only a next node in the explicit path. In some embodiments, the explicit path comprises a graph, in which case there are one or more next nodes in the graph. A graph may also be referred to as a tree. In some embodiments, the tree has a root node and branches. The last node on a branch (that is not connected to any other node) is referred to as a leaf node. In one embodiment, the root node is a destination node and leaf nodes are source nodes. In one embodiment, the root node is a source node and leaf nodes are destination nodes. Hence, the node floods the link-state information to the next node or the next nodes in the graph.
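Steps 410-430 for the list-of-nodes case can be sketched as follows. The function assumes the PDEs are an ordered list of node identifiers and that the node knows which neighbor sits behind each interface; all names are illustrative.

```python
def select_flood_interfaces(node_id, path_pdes, interface_to_neighbor, rx_interface):
    """Select the interfaces on which to re-flood an explicit-path LSP.

    node_id: this node's identifier.
    path_pdes: sequentially ordered topological PDEs (node IDs) for the path.
    interface_to_neighbor: mapping of interface -> neighbor node ID.
    rx_interface: interface the LSP arrived on (never re-used).
    Returns [] when the node is off-path or is the last node, in which
    case the LSP is dropped rather than distributed.
    """
    if node_id not in path_pdes:
        return []  # node not on the explicit path: drop the LSP
    idx = path_pdes.index(node_id)
    if idx == len(path_pdes) - 1:
        return []  # last node on the path: nothing further to flood
    next_node = path_pdes[idx + 1]
    # Only interfaces toward the next PDE, excluding the receiving one.
    return [ifc for ifc, nbr in interface_to_neighbor.items()
            if nbr == next_node and ifc != rx_interface]
```

Note how this differs from conventional flooding: instead of sending on all interfaces except the receiving one, the LSP follows only the interfaces toward the next PDE.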

[0083] FIGs. 4A and 4B depict further details of a node distributing link-state information in accordance with embodiments of path scope flooding. FIG. 4A depicts a flowchart of one embodiment of a process 440 of path scope flooding to a next node in a path. Step 450 includes the node examining the explicit path information in the LSP to identify the network interface to the next node in the path. The next node on the path refers to the node that is next in the sequence from the node that received the LSP. The node that is next on the path will be relative to the node from which the LSP was received. In one embodiment, the path includes at least three nodes. In one embodiment, the path includes at least four nodes. In one embodiment, the path includes at least five nodes.

[0084] Step 460 includes the node sending the LSP on only the interface to the next node in the path. For example, the node might have ten interfaces to a corresponding ten other nodes. However, only one of those other nodes is on the path. Hence, the node sends the LSP only on the interface to the node on the path. However, the node does not send the LSP on any of the other interfaces.

[0085] FIG. 4B depicts an embodiment in which the explicit path information describes a graph. FIG. 4B depicts a flowchart of one embodiment of a process 470 of path scope flooding to a next node in a graph. Step 480 includes the node examining the explicit path information in the LSP to identify the interface(s) to the next node(s) in the graph. In one embodiment, the graph includes at least three nodes. In one embodiment, the graph includes at least four nodes. In one embodiment, the graph includes at least five nodes. Step 490 includes the node sending the LSP on only the interface(s) to the next node(s) in the graph. For example, the node might have ten interfaces to a corresponding ten other nodes. However, only two of those other nodes are next nodes on the graph. Hence, the node sends the LSP only on the two interfaces to the nodes on the graph. However, the node does not send the LSP on any of the other eight interfaces.
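For the graph case of process 470, the "next node(s)" are the children of the current node in the tree. The parent/child edge-list encoding below is an assumption for illustration; the specification does not fix how a graph is encoded in the PDEs.

```python
def next_nodes_in_graph(node_id, edges):
    """Return the next node(s) for node_id in a tree-shaped explicit path.

    edges: list of (parent, child) pairs describing the graph; an
    illustrative stand-in for the actual PDE encoding of a graph.
    A leaf node (no children) yields an empty list, so the LSP is not
    distributed further, mirroring the last node on a linear path.
    """
    return [child for parent, child in edges if parent == node_id]
```

Combining this with the interface lookup of step 480 yields the two-interface example in the text: a node with two children floods the LSP on exactly the two interfaces toward those children.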

[0086] As noted above, a node that participates in, or is a member of, a path maintains a table or database of link-state information for that path. In some embodiments, a node maintains a PS-LSDB for each path in which it participates. These PS-LSDBs are separate and independent from conventional LSDBs maintained by the nodes. In some embodiments, the PS-LSDBs are synchronized independently from the conventional LSDBs.

[0087] FIG. 5 depicts two nodes and link-state databases maintained by the nodes, in accordance with one embodiment. The two nodes Ra, Rb have a point-to-point (P2P) adjacency. The nodes Ra, Rb could be two of the nodes in network 100 (see FIG. 1), but are not limited to that example. Node Ra has a conventional IS-IS link-state database (LSDB) 102a, and four path state link-state databases (PS-LSDB) 104-3a, 104-4a, 104-5a, and 104-6a. Node Rb has a conventional IS-IS link-state database (LSDB) 102b, and six PS-LSDBs 104-3b, 104-4b, 104-5b, 104-6b, 104-7b, and 104-8b. The notation being used for the PS-LSDBs is for the number following “104” to refer to a path number, and for the letter to refer to the node. Hence, both nodes Ra, Rb have a PS-LSDB for paths 3, 4, 5, and 6, which are four different paths in the network. Node Rb has PS-LSDBs for paths 7 and 8. The paths are not depicted in FIG. 5.

[0088] Each PS-LSDB 104 contains one or more SQF-LSPs. In one embodiment, each SQF-LSP contains a description of one path in the network. For example, PS-LSDB 104-3a contains SQF-LSP 504-3; PS-LSDB 104-4a contains SQF-LSP 504-4; PS-LSDB 104-5a contains SQF-LSP 504-5; and PS-LSDB 104-6a contains SQF-LSP 504-6. PS-LSDB 104-3b contains SQF-LSP 504-3; PS-LSDB 104-4b contains SQF-LSP 504-4; PS-LSDB 104-5b contains SQF-LSP 504-5; PS-LSDB 104-6b contains SQF-LSP 504-6; PS-LSDB 104-7b contains SQF-LSP 504-7; and PS-LSDB 104-8b contains SQF-LSP 504-8. When nodes Ra and Rb synchronize their link-state databases, LSDB 102a and LSDB 102b are synchronized with each other; PS-LSDB 104-3a and PS-LSDB 104-3b are synchronized with each other; PS-LSDB 104-4a and PS-LSDB 104-4b are synchronized with each other; PS-LSDB 104-5a and PS-LSDB 104-5b are synchronized with each other; and PS-LSDB 104-6a and PS-LSDB 104-6b are synchronized with each other. Synchronizing the PS-LSDBs 104 will synchronize the link-state information of the SQF-LSPs 504.
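The arrangement of FIG. 5 can be sketched as a per-node store that keeps the conventional LSDB separate from the per-path PS-LSDBs, keyed by path ID. Representing an SQF-LSP as a (sequence number, payload) pair is an assumption for illustration.

```python
class PathScopeLSDBs:
    """Per-node link-state databases: a conventional LSDB plus one
    PS-LSDB per path the node participates in (FIG. 5 arrangement)."""

    def __init__(self):
        self.conventional = {}   # the ordinary IS-IS LSDB
        self.per_path = {}       # path ID -> SQF-LSP as (seq, payload)

    def install_sqf_lsp(self, path_id, seq, payload):
        """Install or replace the SQF-LSP for one explicit path."""
        self.per_path[path_id] = (seq, payload)

    def common_paths(self, other):
        """Path IDs both nodes maintain -- the only PS-LSDBs that get
        pairwise synchronized between the two nodes."""
        return sorted(set(self.per_path) & set(other.per_path))
```

For the nodes of FIG. 5, only paths 3-6 are common to Ra and Rb, so only those four PS-LSDB pairs are synchronized; Rb's databases for paths 7 and 8 have no counterpart at Ra.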

[0089] FIG. 6A depicts a flowchart of one embodiment of a process 600 of two nodes synchronizing PS-LSDBs. The process is described with respect to the example of FIG. 5 for purpose of illustration. Step 602 includes node Ra informing node Rb of all SQF-LSPs with explicit paths/graphs it supports. With respect to the example of FIG. 5, node Ra supports SQF-LSPs with explicit paths for paths 3, 4, 5, and 6. In one embodiment, node Ra sends node Rb one or more complete sequence number PDUs (CSNPs) to list the SQF-LSPs with explicit paths that it supports.

[0090] Step 604 includes node Rb accessing the identifier of the next SQF-LSP supported by Ra, which can begin with the first of the SQF-LSPs supported by node Ra. Step 606 includes node Rb determining whether it also supports the SQF-LSP for this explicit path. For the sake of example, Rb determines, in step 606, that it supports SQF-LSP 504-3 (with explicit path 3) from the example of FIG. 5. Thus, in step 608, the explicit path information for this path is synchronized in the respective PS-LSDBs. For example, PS-LSDB 104-3a is synchronized with PS-LSDB 104-3b. Synchronizing the respective PS-LSDBs means to harmonize the SQF-LSPs 504 in the respective PS-LSDBs. Thus, SQF-LSP 504-3 in PS-LSDB 104-3a is harmonized with SQF-LSP 504-3 in PS-LSDB 104-3b. If one node has a newer version of an SQF-LSP 504, then the newer version will replace the older version. For any given node to synchronize its SQF-LSP 504 with that of another node, the given node may receive an SQF-LSP 504 from another node, and use the received SQF-LSP 504 to update the given node’s version of the SQF-LSP 504. Alternatively, the given node may send its version of the SQF-LSP 504 to the other node, such that the other node may replace its version of the SQF-LSP 504 with the received SQF-LSP 504.
For example, if the version of SQF-LSP 504-3 in PS-LSDB 104-3a is older than the version of SQF-LSP 504-3 in PS-LSDB 104-3b, then node Rb sends the newer version to node Ra. Ra then replaces its older version of SQF-LSP 504-3 in PS-LSDB 104-3a with the newer version from node Rb. Alternatively, if the version of SQF-LSP 504-3 in PS-LSDB 104-3b is older than the version of SQF-LSP 504-3 in PS-LSDB 104-3a, then node Ra sends the newer version to node Rb. Rb then replaces its older version of SQF-LSP 504-3 in PS-LSDB 104-3b with the newer version from node Ra. Step 610 includes a determination if there are more SQF-LSPs 504 for other explicit paths to consider. For the sake of example, there are three more SQF-LSPs 504 to consider. Hence, steps 604 - 610 are performed for SQF-LSPs 504-4, 504-5, and 504-6.
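The newer-version-wins rule of step 608 can be sketched as follows, assuming (as is conventional for LSPs) that a higher sequence number marks the newer version; the dictionary representation is illustrative.

```python
def synchronize_sqf_lsp(db_a, db_b, path_id):
    """Harmonize one SQF-LSP between two PS-LSDBs (step 608).

    db_a, db_b: PS-LSDB contents as path ID -> (sequence_number, payload).
    The copy with the higher sequence number (assumed to be the newer
    version) replaces the older one; equal versions are left untouched.
    """
    seq_a, _ = db_a[path_id]
    seq_b, _ = db_b[path_id]
    if seq_a > seq_b:
        db_b[path_id] = db_a[path_id]   # node A's version is newer
    elif seq_b > seq_a:
        db_a[path_id] = db_b[path_id]   # node B's version is newer
```

Running this for each common path (steps 604-610) leaves both nodes holding identical SQF-LSPs for paths 3, 4, 5, and 6.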

[0091] The process 600 could also be performed with node Rb sending the list of all SQF-LSPs 504 to node Ra in step 602. In this case, node Ra would determine that it does not support either SQF-LSP 504-7 or SQF-LSP 504-8. Hence, step 608 would not be performed for SQF-LSPs 504-7 or 504-8. In other words, PS-LSDB 104-7b of node Rb would not be synchronized with any database at node Ra. Likewise, PS-LSDB 104-8b of node Rb would not be synchronized with any database at node Ra.

[0092] FIG. 6B provides further details of one embodiment of synchronizing PS-LSDBs between two nodes. Again, an example with respect to nodes Ra and Rb in FIG. 5 will be referenced. FIG. 6B provides further details for one embodiment of the process 600 depicted in FIG. 6A. Step 620 includes node Ra sending one or more flooding scope complete sequence number PDUs (FS-CSNPs) to node Rb. The one or more FS-CSNPs contain a complete list of all SQF-LSPs 504 in all PS-LSDBs 104 maintained by node Ra. The one or more FS-CSNPs may be sent periodically by node Ra to node Rb. In one embodiment, the FS-CSNP is an extension to a PDU as defined in RFC 7356. FIG. 6C depicts a format for an embodiment of an FS-CSNP 640. The format has a number of fields, which will now be briefly discussed. The first field 642 may have a value of 0x83, as defined in ISO/IEC 10589:2002, Second Edition, "Information technology - Telecommunications and information exchange between systems - Intermediate System to Intermediate System intra-domain routeing information exchange protocol for use in conjunction with the protocol for providing the connectionless-mode network service (ISO 8473)", 2002 (hereinafter, “ISO 8473”), which is hereby incorporated by reference.

[0093] The length indicator 644 is the length of the fixed header in octets. The version/protocol ID Extension 646 may be set to 1. The ID length 648 may be as defined in ISO 8473. The PDU type 650 may be 11, as defined in ISO 8473. There are also several reserved bits 652. The version 654 may be 1. Next is a reserved field 656. The scope 658 is used to specify the flooding scope. Referring back to FIG. 2B, an LSP Scope Identifier is placed in the scope field 658 of the FS-CSNP 640. In one embodiment, the LSP Scope Identifier will identify one of the six path scopes in table 220. There may be more or fewer than six path scopes. Note that if a node supports more than one type of path scope flooding, then the node may send a separate FS-CSNP 640 for each type of supported path scope flooding.

[0094] There is a reserved bit “A” 660. The PDU length 662 indicates the entire length of the FS-CSNP 640 (including the header). The Source ID 664 indicates the system ID of the node that generated and sent the FS-CSNP 640. The variable-length fields 669 that are allowed in an FS-CSNP are limited to those TLVs that are supported by a standard CSNP.

[0095] The SQF-LSP IDs 666, 668 are used to identify SQF-LSPs 504 that a node maintains in a PS-LSDB 104. Stated another way, the SQF-LSP IDs serve to identify the paths that the node is on. Stated still another way, the SQF-LSP IDs 666, 668 are used to identify explicit paths for which the node has link-state information. In an embodiment, the SQF-LSP IDs 666, 668 specify a range of path identifiers (IDs). The Start SQF-LSP ID 666 is the SQF-LSP ID of the first SQF-LSP 504 having the specified scope (in scope field 658) in the range covered by this FS-CSNP. The End SQF-LSP ID 668 is the SQF-LSP ID of the last SQF-LSP 504 having the specified scope in the range covered by this FS-CSNP. With reference to the example of FIG. 5, node Ra maintains PS-LSDBs 104 containing SQF-LSPs 504-3, 504-4, 504-5, and 504-6, which correspond to explicit paths 3, 4, 5, and 6. These paths (as well as SQF-LSPs) correspond to the four PS-LSDBs 104-3a, 104-4a, 104-5a, and 104-6a. In one embodiment, the FS-CSNP 640 specifies a range of path IDs. Hence, if the path IDs supported by node Ra have a gap in the ID numbers, then node Ra may send more than one FS-CSNP 640.
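Because each FS-CSNP carries a single Start/End SQF-LSP ID pair, a node whose path IDs are not contiguous sends one FS-CSNP per contiguous range. The following sketch computes those ranges; the function name is illustrative.

```python
def csnp_ranges(path_ids):
    """Split a node's SQF-LSP path IDs into contiguous (start, end)
    ranges. One FS-CSNP 640 is assumed to be sent per range, so a gap
    in the ID numbers yields more than one FS-CSNP."""
    ranges = []
    for pid in sorted(path_ids):
        if ranges and pid == ranges[-1][1] + 1:
            # extend the current contiguous range
            ranges[-1] = (ranges[-1][0], pid)
        else:
            # gap found: start a new range (and hence a new FS-CSNP)
            ranges.append((pid, pid))
    return ranges
```

For node Ra of FIG. 5, paths 3-6 are contiguous and fit in one FS-CSNP; a node supporting paths 3, 4, 6, and 8 would send three.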

[0096] Step 622 in FIG. 6B includes node Rb setting a Send Routing Message (SRM) bit (also referred to as the SRM flag) for all SQF-LSPs 504 that it supports but that are absent in the FS-CSNP 640. The SRM bit indicates what LSPs are to be transmitted, and on what interfaces. For example, node Rb checks the SQF-LSP IDs 666, 668 to determine if there are any SQF-LSPs 504 supported by node Rb that are not listed in the SQF-LSP IDs 666, 668. For each such SQF-LSP 504, node Rb sets the SRM bit. With reference to the example of FIG. 5, node Rb determines that SQF-LSP 504-7 and SQF-LSP 504-8 are absent from the FS-CSNP 640. Hence, the SRM bit is set for SQF-LSP 504-7 and SQF-LSP 504-8.

[0097] Step 622 also includes setting the SRM bit for each SQF-LSP 504 for which node Rb has a newer version. For example, if node Rb has a newer version of any of SQF-LSP 504-3, SQF-LSP 504-4, SQF-LSP 504-5, or SQF-LSP 504-6, then node Rb sets the SRM bit for the respective SQF-LSP 504. For the purpose of illustration, it will be assumed that node Rb has a newer version of SQF-LSP 504-3. Hence, the SRM bit for SQF-LSP 504-3 is set.
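The two SRM-setting rules of step 622 (absent from the FS-CSNP, or locally newer) can be combined into one sketch. Representing sequence numbers in plain dictionaries is an assumption for illustration.

```python
def set_srm_flags(local_lsps, advertised_range, advertised_seqs):
    """Decide which SQF-LSPs get the SRM flag after an FS-CSNP arrives
    (step 622).

    local_lsps: path ID -> sequence number held at this node.
    advertised_range: (start, end) SQF-LSP IDs 666, 668 from the FS-CSNP.
    advertised_seqs: path ID -> sequence number reported by the sender.
    SRM is set for LSPs absent from the CSNP range and for LSPs where
    this node holds a newer (higher-sequence) version.
    """
    start, end = advertised_range
    srm = set()
    for pid, seq in local_lsps.items():
        absent = not (start <= pid <= end)
        newer = pid in advertised_seqs and seq > advertised_seqs[pid]
        if absent or newer:
            srm.add(pid)
    return srm
```

With the FIG. 5 example (Rb holds paths 3-8, a newer copy of path 3, and receives a CSNP covering 3-6), the SRM flag ends up set for paths 3, 7, and 8, matching the updates sent in step 624.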

[0098] Step 624 includes node Rb sending LSP updates to node Ra in accordance with the SRM bits that were set. In accordance with the example provided in step 622, node Rb would send LSP updates for SQF-LSP 504-3, SQF-LSP 504-7 and SQF-LSP 504-8.

[0099] Step 626 includes node Ra installing the SQF-LSPs 504 in which it participates. In the present example, node Ra would install the updates to SQF-LSP 504-3. However, because node Ra does not participate in SQF-LSP 504-7 or SQF-LSP 504-8, node Ra would not install SQF-LSP 504-7 or SQF-LSP 504-8.

[0100] Step 628 includes node Ra setting the Send Sequence Number (SSN) bit (also referred to as the SSN flag). The SSN bit is set to acknowledge the SQF-LSP. Step 630 includes node Ra sending partial sequence number PDUs (PSNPs) to acknowledge the SQF-LSPs that it received. The PSNP will differ depending on whether node Ra is on the path. Stated another way, the PSNP will differ depending on whether node Ra maintains an SQF-LSP 504 for the path. In the present example, node Ra maintains SQF-LSP 504-3 in PS-LSDB 104-3a, but does not maintain SQF-LSP 504-7 or SQF-LSP 504-8 in any PS-LSDB 104.

[0101] FIG. 6D depicts a format for one embodiment of an FS-PSNP 670. In one embodiment, FS-PSNP 670 is an extension of an FS-PSNP described in RFC 7356. The format of FS-PSNP 670 has a number of fields, which will now be briefly discussed. The first field 672 may have a value of 0x83, as defined in ISO 8473. The length indicator 674 is the length of the fixed header in octets. The version/protocol ID Extension 676 may be set to 1. The ID length 678 may be as defined in ISO 8473. The PDU type 680 may be 12, as defined in ISO 8473. There are also several reserved bits 682. The version 684 may be 1. Next is a reserved field 686. The SQF-LSP ID 688 is used to specify the path ID.

[0102] The U bit 690 is used to indicate whether or not the node that sends the FS-PSNP 670 supports the SQF-LSP 504 that is specified by SQF-LSP ID 688. In one embodiment, a value of “0” for the U bit 690 indicates that the node supports the SQF-LSP 504 that is identified in SQF-LSP ID, and a value of “1” indicates that the node does not support the SQF-LSP that is identified in SQF-LSP ID. In another embodiment, the value for the U bit 690 is reversed such that a value of “1” for the U bit 690 indicates that the node supports the SQF-LSP 504 that is identified in SQF-LSP ID, and a value of “0” indicates that the node does not support the SQF-LSP 504 that is identified in SQF-LSP ID.

[0103] The PDU length 692 indicates the entire length of the FS-PSNP 670 (including the header). The Source ID 694 is the system ID of the node that generated and sent the FS-PSNP 670. The variable-length fields 696 that are allowed in an FS-PSNP are limited to those TLVs that are supported by a standard PSNP.

[0104] Step 632 includes node Rb removing the SRM bit upon receiving the PSNP ACK from node Ra, if the U bit 690 is set. Thus, if node Ra does not maintain an SQF-LSP (e.g., SQF-LSP 504-7 and SQF-LSP 504-8), the U bit would be set to 1 such that node Rb removes the SRM bit for SQF-LSP 504-7 and SQF-LSP 504-8.
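The acknowledgement handling of steps 630-632 can be sketched as follows. The sketch assumes the first U-bit encoding described above (U = 1 means the acknowledging node does not support the SQF-LSP), and that the sender stops retransmitting in either case; names are illustrative.

```python
def handle_psnp_ack(srm_flags, path_id, u_bit):
    """Process an FS-PSNP acknowledgement for one SQF-LSP (step 632).

    srm_flags: set of path IDs with the SRM flag currently set.
    u_bit: 1 if the acknowledging node does NOT support the SQF-LSP
    (assumed encoding). The SRM flag is cleared either way, so the
    sender stops retransmitting toward a non-supporting neighbor.
    Returns True if the peer actually installed the LSP.
    """
    srm_flags.discard(path_id)
    return not u_bit
```

In the running example, node Ra's PSNPs for SQF-LSP 504-7 and 504-8 carry U = 1, so node Rb clears their SRM flags without expecting those paths to be installed at Ra.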

[0105] As noted in the discussion of process 600, in some embodiments, a node informs another node of all SQF-LSPs 504 that it supports when synchronizing PS-LSDBs. Thus, the node will inform the other nodes of the explicit paths it is on, and for which link-state information is maintained. In other embodiments, a node informs another node of SQF-LSPs 504 that are in common with the node and an adjacent node when synchronizing their PS-LSDBs 104. FIG. 7 depicts a flowchart of one embodiment of a process 700 of synchronizing PS-LSDBs for two nodes having a P2P adjacency. The process 700 will be described with respect to the example depicted in FIG. 5.

[0106] Step 702 includes node Ra sending to node Rb a list of only SQF-LSPs 504 that node Ra supports that have explicit paths/graphs in common between nodes Ra, Rb. For example, in the example in FIG. 5, node Ra has SQF-LSPs 504-3, 504-4, 504-5, and 504-6 in common with node Rb. In one embodiment, node Ra sends one or more FS-CSNPs to provide the list of SQF-LSPs 504 in common. In one embodiment, the FS-CSNPs have the format depicted in FIG. 6C.

[0107] Step 704 includes node Rb accessing the next SQF-LSP 504 on the list. Step 706 includes synchronizing the link-state information for this SQF-LSP 504 in the PS-LSDBs 104 of the two nodes Ra, Rb.

[0108] Step 708 is a determination of whether there are more SQF-LSPs 504 on the list. If so, the process continues at step 704 to process the next SQF-LSP 504. Process 700 continues until the link-state information for all SQF-LSPs 504 with common explicit paths/graphs has been synchronized.

[0109] The process depicted in FIG. 6B can be modified to cover an embodiment of process 700. To cover an embodiment of process 700, there would be no need to set the U bit 690 in step 630. A reason for this is that node Rb would not send any EP SQF-LSPs to node Ra (in step 624) that node Ra does not support.

[0110] As noted above in connection with the discussion of FIG. 5, in some embodiments nodes have a P2P adjacency. However, in other embodiments, nodes have a broadcast (or LAN) adjacency. FIG. 8A depicts nodes in a LAN 800, as well as link-state databases maintained by the nodes, in accordance with one embodiment.

[0111] Node Rc maintains a conventional IS-IS link-state database (LSDB) 102c, and four path state link-state databases (PS-LSDB) 104-9c (containing SQF-LSP 504-9), 104-10c (containing SQF-LSP 504-10), 104-11c (containing SQF-LSP 504-11), and 104-12c (containing SQF-LSP 504-12). Node Rd maintains a conventional IS-IS link-state database (LSDB) 102d, and six PS-LSDBs 104-9d (containing SQF-LSP 504-9), 104-10d (containing SQF-LSP 504-10), 104-11d (containing SQF-LSP 504-11), 104-12d (containing SQF-LSP 504-12), 104-13d (containing SQF-LSP 504-13), and 104-14d (containing SQF-LSP 504-14). Node Re maintains a conventional IS-IS link-state database (LSDB) 102e, and two PS-LSDBs 104-13e (containing SQF-LSP 504-13) and 104-14e (containing SQF-LSP 504-14).

[0112] In one embodiment of synchronizing PS-LSDBs of nodes on a LAN, one of the nodes serves as a designated intermediate system (DIS). In one embodiment, the DIS serves as a coordinator to synchronize all PS-LSDBs that the DIS has with at least one other node in the LAN 800. For purpose of discussion, an example in which node Rc is the DIS will be discussed. Hence, node Rc would coordinate the synchronizing of: PS-LSDB 104-9c with PS-LSDB 104-9d; PS-LSDB 104-10c with PS-LSDB 104-10d; PS-LSDB 104-11c with PS-LSDB 104-11d; and PS-LSDB 104-12c with PS-LSDB 104-12d. However, this would leave some of the PS-LSDBs un-synchronized. In particular, any type of PS-LSDB in the LAN that node Rc does not maintain would at this point remain un-synchronized. Hence, in one embodiment, other nodes in the LAN 800 advertise their un-synchronized PS-LSDBs 104. Stated another way, the other nodes may announce the path ID associated with the un-synchronized PS-LSDBs 104. Another node that maintains the type of PS-LSDB for which such an advertisement is received may then synchronize its PS-LSDB with the advertiser.

[0113] FIG. 8B depicts a flowchart of one embodiment of a process 820 of synchronizing PS-LSDBs in a LAN. The process 820 will be described with respect to the LAN 800 of FIG. 8A. Step 822 includes synchronizing the SQF-LSPs 504 for all explicit paths (EPs) that the DIS has in common with at least one other node in the LAN. In one embodiment, the DIS sends one or more FS-CSNPs to each node with which the DIS shares an SQF-LSP 504. For example, node Rc may send one or more FS-CSNPs to node Rd for SQF-LSP 504-9, SQF-LSP 504-10, SQF-LSP 504-11, and SQF-LSP 504-12. The other nodes then respond accordingly to synchronize the PS-LSDBs 104.

[0114] Step 824 includes the other nodes in the LAN advertising an SQF-LSP for explicit paths/graphs that were not synchronized with the DIS. In one embodiment, the node sends an FS-CSNP 640 having the format depicted in FIG. 6C. In one embodiment, the A bit 660 is used to advertise the SQF-LSPs that were not synchronized with the DIS. For example, node Rd has PS-LSDB 104-13d (corresponding to SQF-LSP 504-13) and PS-LSDB 104-14d (corresponding to SQF-LSP 504-14), which were not synchronized with node Rc (the DIS in this example). Hence, node Rd may send an FS-CSNP 640 with the A bit 660 set to “1”. SQF-LSP 504-13 and SQF-LSP 504-14 may be specified in fields 666 and 668. Node Re may also send one or more FS-CSNPs 640 with the A bit 660 set to advertise its un-synchronized PS-LSDBs.

[0115] Step 826 includes the other nodes in the LAN that received the advertisements in step 824 synchronizing any PS-LSDBs 104 that were not synchronized with the DIS. Thus, nodes Rd and Re would synchronize PS-LSDB 104-13d with PS-LSDB 104-13e, as well as synchronize PS-LSDB 104-14d with PS-LSDB 104-14e.
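Process 820 can be sketched end-to-end as a planning function: paths the DIS shares with at least one other LAN node are synchronized via the DIS (step 822), while paths the DIS does not maintain are left for the advertising nodes to synchronize among themselves (steps 824-826). The function and its two-phase split are illustrative.

```python
def lan_sync_plan(dis_id, node_paths):
    """Plan PS-LSDB synchronization on a LAN (process 820 sketch).

    dis_id: node ID of the designated intermediate system.
    node_paths: node ID -> set of path IDs that node maintains.
    Returns (paths synchronized via the DIS, paths left for the
    non-DIS nodes to synchronize after the A-bit advertisements).
    """
    dis_paths = node_paths[dis_id]
    others = {n: p for n, p in node_paths.items() if n != dis_id}
    # Step 822: every path the DIS shares with some other node.
    via_dis = {pid for paths in others.values() for pid in paths & dis_paths}
    # Steps 824-826: paths the DIS lacks, advertised with the A bit and
    # synchronized directly between the (at least two) nodes that hold them.
    leftover = {pid for paths in others.values() for pid in paths - dis_paths}
    leftover = {pid for pid in leftover
                if sum(pid in p for p in others.values()) >= 2}
    return via_dis, leftover
```

For the LAN 800 of FIG. 8A with node Rc as DIS, paths 9-12 go through the DIS and paths 13-14 are synchronized directly between nodes Rd and Re.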

[0116] From time to time, a node may go down. Some embodiments include performing a graceful restart after a node restarts. FIG. 9A depicts one embodiment of a network 900 showing a state of link-state databases immediately after a node restarts. In this example, node Rf is just being re-started after having gone down. As such, the conventional LSDB 102f, and the PS-LSDBs 104-15f, 104-16f, 104-17f, and 104-18f all need to be re-built. The link-state information from the databases of nodes Rg and Rh may be used for the re-build. Node Rg has conventional LSDB 102g and PS-LSDBs 104-15g, 104-16g, 104-17g, 104-18g, 104-19g, and 104-20g. Node Rh has conventional LSDB 102h and PS-LSDBs 104-18h and 104-20h. Note that the node being re-started may have a P2P connection or LAN connection with the other nodes. In FIG. 9A, node Rf has a P2P connection with node Rh and a LAN connection with node Rg.

[0117] FIG. 9B depicts a flowchart of one embodiment of a process 910 of a graceful restart of a node having a PS-LSDB. For purpose of illustration, reference will be made to FIG. 9A. Step 912 of process 910 includes the node re-starting. Step 914 includes re-establishing a conventional LSDB 102 at the node. The conventional LSDB 102 may be re-established by synchronizing the LSDB 102 with the other conventional LSDBs 102 in the network 900.

[0118] Step 916 includes re-establishing each PS-LSDB 104 at the node. Further details of one embodiment of re-establishing one PS-LSDB 104 at the node are described in connection with FIG. 9C.

[0119] FIG. 9C depicts a flowchart of one embodiment of a process 920 of re-establishing a PS-LSDB 104 during a graceful restart of a node. The process 920 will use node Rf as an example of the node being re-started. Step 922 includes node Rf entering graceful restart (GR). Step 923 includes synchronizing the conventional LSDBs 102. In one embodiment, the conventional LSDBs 102 are synchronized per procedures in Network Working Group, Request for Comments 5306, entitled “Restart Signaling for IS-IS,” dated October 2008, by M. Shand, et al., which is incorporated by reference herein in its entirety.

[0120] Step 924 includes starting a T2 timer for this PS-LSDB. Step 926 includes initiating synchronization of this PS-LSDB. Steps 928-930 include a determination of whether the T2 timer has expired (step 928) prior to the synchronization of this PS-LSDB being complete (step 930). If the synchronization of this PS-LSDB completes prior to the T2 timer expiring, then a determination is made whether there is another PS-LSDB to synchronize (step 932). If there is another PS-LSDB to synchronize, then the process returns to step 924 upon the node entering GR for the next PS-LSDB to synchronize. If the T2 timer expires prior to a PS-LSDB completing synchronizing (step 928 is yes), then the adjacency is refreshed, in step 934.
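The per-PS-LSDB loop of process 920 can be sketched with simulated synchronization times standing in for real timer and I/O machinery; all names and the time model are illustrative assumptions, not the embodiment's implementation.

```python
def restart_ps_lsdbs(path_ids, sync_seconds, t2_seconds):
    """Graceful-restart sketch for per-path databases (process 920).

    path_ids: PS-LSDBs to re-establish, in order.
    sync_seconds: path ID -> simulated time its synchronization takes
    (an illustrative stand-in for the actual timed exchange).
    t2_seconds: the T2 timer started for each PS-LSDB (step 924).
    Returns (paths synced within T2, paths whose T2 expired and thus
    require the adjacency to be refreshed per step 934).
    """
    completed, refresh_needed = [], []
    for pid in path_ids:
        if sync_seconds[pid] <= t2_seconds:
            completed.append(pid)        # synced before T2 expired
        else:
            refresh_needed.append(pid)   # T2 expired: refresh adjacency
    return completed, refresh_needed
```

For node Rf of FIG. 9A, each of PS-LSDBs 104-15f through 104-18f would pass through this loop after the conventional LSDB 102f is synchronized in step 923.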

[0121] FIG. 10 illustrates an embodiment of a node. The node 1000 may be configured to implement and/or support the path scope flooding, PS-LSDB synchronization, etc. described herein. The node 1000 may be implemented in a single node, or the functionality of node 1000 may be implemented in a plurality of nodes. One skilled in the art will recognize that the term network element (NE) encompasses a broad range of devices, of which node 1000 is merely an example. While node 1000 is described as a physical device, such as a router or gateway, the node 1000 may also be a virtual device implemented as a router or gateway running on a server or on generic routing hardware (a whitebox).

[0122] The node 1000 may comprise a plurality of network interfaces 1010/1030. The network interfaces 1010/1030 may be configured to receive/transmit wirelessly or via wirelines. Hence, the network interfaces 1010/1030 may connect to wired or wireless links. The network interfaces 1010/1030 may include input/output ports. The node 1000 may comprise receivers (Rx) 1012 and transmitters (Tx) 1032 for receiving and transmitting data from other nodes. The node 1000 may comprise a processor 1020 to process data and determine which node to send the data to, and a memory 1022. The node 1000 may also generate and distribute LSPs to describe and flood the various topologies and/or area of a network. Although illustrated as a single processor, the processor 1020 is not so limited and may comprise multiple processors. The processor 1020 may be implemented as one or more central processing unit (CPU) chips, cores (e.g., a multi-core processor), field-programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), and/or digital signal processors (DSPs), and/or may be part of one or more ASICs. Moreover, the processor 1020 may be implemented using hardware, software, or both.

[0123] The processor 1020 includes a path scope engine 1024, which may perform processing functions of the nodes. The path scope engine 1024 may also be configured to perform the steps of the methods discussed herein. As such, the inclusion of the path scope engine 1024 and associated methods and systems provide improvements to the functionality of the node 1000. Further, the path scope engine 1024 effects a transformation of a particular article (e.g., the network) to a different state. In an alternative embodiment, the path scope engine 1024 may be implemented as instructions stored in the memory 1022 and executed by the processor 1020.

[0124] The memory 1022 may comprise a cache for temporarily storing content, e.g., a random-access memory (RAM). Additionally, the memory 1022 may comprise a long-term storage for storing content relatively longer, e.g., a read-only memory (ROM). For instance, the cache and the long-term storage may include dynamic RAMs (DRAMs), solid-state drives (SSDs), hard disks, or combinations thereof. The memory 1022 may be configured to store one or more PS-LSDBs 104 and one or more LSDBs 102. The memory 1022 may also be configured to store a neighbor database 1040, which stores the type of path scope flooding that is supported by neighbor nodes.

[0125] The schemes described above may be implemented on any general- purpose network component, such as a computer or network component with sufficient processing power, memory resources, and network throughput capability to handle the necessary workload placed upon it.

[0126] FIG. 11 illustrates a schematic diagram of a general-purpose network component or computer system. The general-purpose network component or computer system 1100 includes a processor 1102 (which may be referred to as a central processor unit or CPU) that is in communication with memory devices including secondary storage 1104 and memory, such as ROM 1106 and RAM 1108, input/output (I/O) devices 1110, and a network 1112, such as the Internet or any other well-known type of network, which may include network connectivity devices, such as a network interface. Although illustrated as a single processor, the processor 1102 is not so limited and may comprise multiple processors. The processor 1102 may be implemented as one or more CPU chips, cores (e.g., a multi-core processor), FPGAs, ASICs, and/or DSPs, and/or may be part of one or more ASICs. The processor 1102 may be configured to implement any of the schemes described herein. The processor 1102 may be implemented using hardware, software, or both.

[0127] The secondary storage 1104 is typically comprised of one or more disk drives or tape drives and is used for non-volatile storage of data and as an overflow data storage device if the RAM 1108 is not large enough to hold all working data. The secondary storage 1104 may be used to store programs that are loaded into the RAM 1108 when such programs are selected for execution. The ROM 1106 is used to store instructions and perhaps data that are read during program execution. The ROM 1106 is a non-volatile memory device that typically has a small memory capacity relative to the larger memory capacity of the secondary storage 1104. The RAM 1108 is used to store volatile data and perhaps to store instructions. Access to both the ROM 1106 and the RAM 1108 is typically faster than access to the secondary storage 1104. At least one of the secondary storage 1104 or the RAM 1108 may be configured to store routing tables, forwarding tables, or other tables or information disclosed herein.

[0128] It is understood that by programming and/or loading executable instructions onto the node 1000 (see FIG. 10), at least one of the processor 1020 or the memory 1022 is changed, transforming the node 1000 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. Similarly, it is understood that by programming and/or loading executable instructions onto the computer system 1100, at least one of the processor 1102, the ROM 1106, or the RAM 1108 is changed, transforming the computer system 1100 in part into a particular machine or apparatus, e.g., a router, having the novel functionality taught by the present disclosure. It is fundamental to the electrical engineering and software engineering arts that functionality that can be implemented by loading executable software into a computer can be converted to a hardware implementation by well-known design rules. Decisions between implementing a concept in software versus hardware typically hinge on considerations of stability of the design and the number of units to be produced rather than on any issues involved in translating from the software domain to the hardware domain. Generally, a design that is still subject to frequent change may be preferred to be implemented in software, because re-spinning a hardware implementation is more expensive than re-spinning a software design. Generally, a stable design that will be produced in large volume may be preferred to be implemented in hardware, for example in an ASIC, because for large production runs the hardware implementation may be less expensive than the software implementation. Often a design may be developed and tested in a software form and later transformed, by well-known design rules, to an equivalent hardware implementation in an application specific integrated circuit that hardwires the instructions of the software. In the same manner that a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions may be viewed as a particular machine or apparatus.

[0129] The description of the present disclosure has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the disclosure in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the disclosure. The aspects of the disclosure herein were chosen and described in order to best explain the principles of the disclosure and the practical application, and to enable others of ordinary skill in the art to understand the disclosure with various modifications as are suited to the particular use contemplated.

[0130] For purposes of this document, each process associated with the disclosed technology may be performed continuously and by one or more computing devices. Each step in a process may be performed by the same or different computing devices as those used in other steps, and each step need not necessarily be performed by a single computing device.

[0131] Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.