Title:
STATELESS MULTICAST PROTOCOL FOR LOW-POWER AND LOSSY NETWORKS
Document Type and Number:
WIPO Patent Application WO/2017/141076
Kind Code:
A1
Abstract:
Methods and apparatuses for enabling stateless multicast routing in Low-Power and Lossy Networks (LLNs) are described. A first network device of an LLN receives a destination advertisement object (DAO) message from a second network device from a plurality of other network devices of the LLN. The received DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain. The first network device updates a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

Inventors:
PALANKAR GANESH PRASAD (IN)
MATHEW NOBIN (IN)
Application Number:
IB2016/050918
Publication Date:
August 24, 2017
Filing Date:
February 19, 2016
Assignee:
ERICSSON TELEFON AB L M (PUBL) (SE)
International Classes:
H04L12/701; H04L1/00; H04L12/18; H04L12/751; H04L12/761
Foreign References:
US20150131659A12015-05-14
US20150078379A12015-03-19
US20150304118A12015-10-22
Other References:
IJ. Wijnands et al., "Multicast Using Bit Index Explicit Replication," IETF Internet-Draft draft-ietf-bier-architecture-03, Internet Engineering Task Force, 19 January 2016, pages 1-36, XP015110735
M. Townsley, "MPLS over IP-Tunnels," 21 February 2005, XP055313735, retrieved from the Internet on 25 October 2016
Attorney, Agent or Firm:
DE VOS, Daniel M. et al. (99 Almaden Boulevard Suite 71, San Jose California, US)
Claims:
CLAIMS

What is claimed is:

1. A method, in a first network device of a Low-Power and Lossy Network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, of enabling multicast routing, the method comprising:

receiving (402) a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, wherein the DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain; and

updating (404) a forwarding table entry to include an identifier of the second network device from which the DAO message is received, wherein the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

2. The method of claim 1, wherein the DAO message further includes BIER information that identifies the second network device with respect to the BIER domain, and wherein updating the forwarding table entry is performed according to the BIER information.

3. The method of claim 1, wherein (406) the DAO message is received with an identifier of a third network device from the LLN, wherein the third network device is the closest BIER enabled network device of the LLN on a path coupling the first network device with one or more receivers of the multicast domain.

4. The method of claim 3, wherein the identifier of the third network device is a source address of an IP packet header encapsulating the DAO message.

5. The method of claim 3, wherein the first network device is BIER enabled and the second network device is not BIER enabled, and the method further comprises establishing an Internet Protocol (IP) tunnel between the first network device and the third network device.

6. The method of claim 5 further comprising forwarding one or more BIER encapsulated data packets for the multicast domain through the IP tunnel established between the first network device and the third network device to be forwarded towards the one or more receivers of the multicast domain.

7. The method of claim 5 further comprising transmitting a DAO acknowledgment message towards the third network device through the IP tunnel.

8. A first network device of a Low-Power and Lossy Network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, the first network device comprising: a non-transitory computer readable medium to store instructions; and

a processor coupled with the non-transitory computer readable medium to process the stored instructions to:

receive (402) a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, wherein the DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain, and

update (404) a forwarding table entry to include an identifier of the second network device from which the DAO message is received, wherein the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

9. The first network device of claim 8, wherein the DAO message further includes BIER information that identifies the second network device with respect to the BIER domain, and wherein updating the forwarding table entry is performed according to the BIER information.

10. The first network device of claim 8, wherein (406) the DAO message is received with an identifier of a third network device, wherein the third network device is the closest BIER enabled network device on a path coupling the first network device with one or more receivers of the multicast domain.

11. The first network device of claim 10, wherein the identifier of the third network device is a source address of an IP packet header encapsulating the DAO message.

12. The first network device of claim 10, wherein the first network device is BIER enabled and the second network device is not BIER enabled, and the processor is further to establish an Internet Protocol (IP) tunnel between the first network device and the third network device.

13. The first network device of claim 12, wherein the processor is further to forward one or more BIER encapsulated packets for the multicast domain through the IP tunnel established between the first network device and the third network device to be forwarded towards the one or more receivers of the multicast domain.

14. The first network device of claim 12, wherein the processor is further to transmit a DAO acknowledgment message towards the third network device through the IP tunnel.

15. A non-transitory computer readable storage medium that provides instructions, which when executed by a processor of a first network device of a Low-Power and Lossy Network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, cause said processor to perform operations comprising:

receiving (402) a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, wherein the DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain; and

updating (404) a forwarding table entry to include an identifier of the second network device from which the DAO message is received, wherein the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

16. The non-transitory computer readable storage medium of claim 15, wherein the DAO message further includes BIER information that identifies the second network device with respect to the BIER domain, and wherein updating the forwarding table entry is performed according to the BIER information.

17. The non-transitory computer readable storage medium of claim 15, wherein (406) the DAO message is received with an identifier of a third network device, wherein the third network device is the closest BIER enabled network device on a path coupling the first network device with one or more receivers of the multicast domain.

18. The non-transitory computer readable storage medium of claim 17, wherein the identifier of the third network device is a source address of an IP packet header encapsulating the DAO message.

19. The non-transitory computer readable storage medium of claim 17, wherein the first network device is BIER enabled and the second network device is not BIER enabled, and the operations further include establishing an Internet Protocol (IP) tunnel between the first network device and the third network device.

20. The non-transitory computer readable storage medium of claim 19, wherein the operations further comprise forwarding one or more BIER encapsulated packets for the multicast domain through the IP tunnel established between the first network device and the third network device to be forwarded towards the one or more receivers of the multicast domain.

21. The non-transitory computer readable storage medium of claim 19, wherein the operations further comprise transmitting a DAO acknowledgment message towards the third network device through the IP tunnel.

Description:
STATELESS MULTICAST PROTOCOL FOR LOW-POWER AND LOSSY

NETWORKS

FIELD

[0001] Embodiments of the invention relate to the field of packet networks; and more specifically, to a stateless multicast protocol for Low-Power and Lossy Networks.

BACKGROUND

[0002] Low-Power and Lossy Networks (LLNs) are a type of network optimized to save energy while supporting data traffic. In such networks, the network devices forming the network and their interconnects are constrained: the network devices have limited battery power, memory, and processing power, and the links coupling them are characterized by high loss rates, low data rates, and instability. The Internet Engineering Task Force (IETF) Routing Over Low power and Lossy networks (ROLL) working group standardized the Routing Protocol for Low-Power and Lossy Networks (RPL) as a routing protocol for such networks. RPL is an IPv6 routing protocol that builds a Destination Oriented Directed Acyclic Graph (DODAG) topology rooted at a Low-Power and Lossy Network Border Router (LBR), as defined in Request for Comments (RFC) 7102, "Terms Used in Routing for Low-Power and Lossy Networks." A DODAG is a directed graph rooted at the LBR. RPL is used to build an IPv6-based routing topology over a mesh network using an Objective Function (OF) with a set of constraints on the network devices or the environment in which the network devices operate. Each RPL instance contains one or more DODAG roots, each of which can be coupled to another network that does not have the same constraints as the LLN.

[0003] Traditional IP multicast forwarding typically relies on topology maintenance mechanisms to forward multicast messages to all subscribers of a multicast group. However, maintaining such topologies in LLNs is costly and may not be feasible given the available resources.

[0004] Memory constraints may limit the network devices of an LLN to maintaining links/routes to only one or a few neighbors. For this reason, the Routing Protocol for LLNs (RPL), described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 6550, specifies a storing and a non-storing mode. The non-storing mode allows RPL network devices (e.g., routers) to maintain only one or a few default routes towards an LLN Border Router (LBR) and use source routing to forward packets away from the LBR. For the same reasons, an LLN device may not be able to maintain a multicast forwarding topology when operating with limited memory.

[0005] Furthermore, the dynamic properties of wireless networks, which are typical examples of LLNs, can make the cost of maintaining a multicast forwarding topology prohibitively expensive. In wireless environments, topology maintenance may involve selecting a connected dominating set used to forward multicast messages to all network devices in an administrative domain.

[0006] However, existing mechanisms often require two-hop topology information, and the cost of maintaining such information grows polynomially with network density. Multicast is a significant application supported across many types of networks. Given the constraints of LLNs, running traditional multicast protocols on such networks may lead to exhaustion of nodal resources, because those protocols require the nodes to store per-flow state.

SUMMARY

[0007] Methods and apparatuses for enabling stateless multicast routing in LLNs are described.

[0008] In a general aspect, a method, in a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, of enabling multicast routing, is described. The method includes receiving a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain; and updating a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

[0009] In a general aspect, a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN is described. The first network device includes a non-transitory computer readable medium to store instructions; and a processor coupled with the non-transitory computer readable medium to process the stored instructions. The processor is to receive a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain and to update a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

[0010] In another general aspect, a non-transitory computer readable storage medium that provides instructions is described. The instructions when executed by a processor of a first network device of a low-power and lossy network (LLN) to be communicatively coupled to a plurality of other network devices of the LLN, cause said processor to perform operations including receiving a destination advertisement object (DAO) message from a second network device from the plurality of other network devices, where the DAO message includes a bit index explicit replication (BIER) header field including an identifier of a multicast domain; and updating a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. In the drawings:

[0012] Figure 1 illustrates an exemplary Low-Power and Lossy Network (LLN) including BIER-enabled network devices in accordance with some embodiments of the invention.

[0013] Figure 2A illustrates an exemplary DIO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments of the invention.

[0014] Figure 2B illustrates an exemplary DAO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments of the invention.

[0016] Figures 3A-E illustrate block diagrams of an exemplary LLN 300 in which multicast routing is enabled in accordance with some embodiments of the invention.

[0017] Figure 4 illustrates exemplary operations performed at a first network device for enabling stateless multicast routing in an LLN in accordance with some embodiments of the invention.

[0018] Figure 5A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention.

[0019] Figure 5B illustrates an exemplary way to implement a special-purpose network device according to some embodiments of the invention.

[0020] Figure 5C illustrates various exemplary ways in which virtual network elements (VNEs) may be coupled according to some embodiments of the invention.

[0021] Figure 5D illustrates a network with a single network element (NE) on each of the NDs, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention.

[0022] Figure 5E illustrates the simple case of where each of the NDs implements a single NE, but a centralized control plane has abstracted multiple of the NEs in different NDs into (to represent) a single NE in one of the virtual network(s), according to some embodiments of the invention.

[0023] Figure 5F illustrates a case where multiple VNEs are implemented on different NDs and are coupled to each other, and where a centralized control plane has abstracted these multiple VNEs such that they appear as a single VNE within one of the virtual networks, according to some embodiments of the invention.

[0024] Figure 6 illustrates a general purpose control plane device with centralized control plane (CCP) software 650, according to some embodiments of the invention.

DESCRIPTION OF EMBODIMENTS

[0025] The following description describes methods and apparatuses for enabling stateless multicast in Low-Power and Lossy Networks. In the following description, numerous specific details such as logic implementations, opcodes, means to specify operands, resource partitioning/sharing/duplication implementations, types and interrelationships of system components, and logic partitioning/integration choices are set forth in order to provide a more thorough understanding of the present invention. It will be appreciated, however, by one skilled in the art that the invention may be practiced without such specific details. In other instances, control structures, gate level circuits and full software instruction sequences have not been shown in detail in order not to obscure the invention. Those of ordinary skill in the art, with the included descriptions, will be able to implement appropriate functionality without undue experimentation.

[0026] References in the specification to "one embodiment," "an embodiment," "an example embodiment," etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

[0027] Bracketed text and blocks with dashed borders (e.g., large dashes, small dashes, dot-dash, and dots) may be used herein to illustrate optional operations that add additional features to embodiments of the invention. However, such notation should not be taken to mean that these are the only options or optional operations, and/or that blocks with solid borders are not optional in certain embodiments of the invention.

[0028] In the following description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. "Coupled" is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. "Connected" is used to indicate the establishment of communication between two or more elements that are coupled with each other.

[0029] The embodiments described herein present methods and apparatuses for enabling multicast routing in Low-Power and Lossy Networks (LLNs). According to some embodiments, the Routing Protocol for LLNs (RPL) is modified to include support for Bit Index Explicit Replication (BIER) and provide multicast routing in LLNs. In some embodiments, a first network device of an LLN receives a destination advertisement object (DAO) message from a second network device from a plurality of other network devices of the LLN. The received DAO message includes a BIER header field including an identifier of a multicast domain. The first network device updates a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device, BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain.

[0030] In some embodiments, methods and apparatuses are provided for enabling multicast routing in LLNs through RPL extended with the BIER protocol in a network including BIER-capable and non-BIER-capable network devices. In some embodiments, the DAO message received at the first network device further includes an identifier of a third network device from the LLN, wherein the third network device is the closest BIER enabled network device of the LLN on a path coupling the first network device with one or more receivers of the multicast domain. In some embodiments, when the second network device is not BIER enabled, the first network device establishes an Internet Protocol (IP) tunnel between the first network device and the third network device such that the first network device forwards BIER packets destined to the multicast domain through the IP tunnel established between the first network device and the third network device.

[0031] The embodiments described in the foregoing present clear advantages for multicast routing in LLNs. The embodiments avoid building an explicit multicast tree over the LLN network. Further, in these embodiments, there is no need for maintaining per-flow multicast states at each network device, thus conserving the network device's resources. The embodiments present a stateless multicast protocol that overcomes the large memory requirements of standard multicast protocols, providing an efficient multicast mechanism for Low-Power and Lossy Networks which does not exhaust the limited physical resources of the network devices of an LLN (i.e., minimal memory, reduced CPU performance, and a limited power source).

[0032] Figure 1 illustrates an exemplary LLN 100 including BIER-enabled network devices in accordance with some embodiments. The LLN includes a first network device 102, a second network device 104 and a third network device 106. In some embodiments, each one of the network devices is implemented as described in further detail with reference to Figures 5A-F below. The first network device (ND) 102 is coupled with the second network device (ND) 104, and with the third network device 106. The first network device can be a border network device coupling the LLN network to another network (not illustrated), which can be more stable (e.g., the Internet). The second network device 104 may be coupled with one or more electronic devices (receivers 103) which are clients of a multicast group and are operative to request traffic destined for the multicast group. In alternative embodiments, the second network device is not directly coupled with receivers of the multicast group; instead it is coupled with another network device (e.g., ND 106) and is on a path coupling the receivers of the multicast group with the first network device 102. In the illustrated exemplary embodiment, ND 104 is a BIER-capable network device (i.e., ND 104 is operative to receive and forward BIER encapsulated packets).

[0033] In one embodiment, ND 104 receives a request for traffic of a multicast group from at least one of the multicast receivers (e.g., electronic device 103a). For example, the receivers 103 may use a multicast group management protocol, such as the Internet Group Management Protocol (IGMP) or Multicast Listener Discovery (MLD), to transmit the request for the multicast traffic. Upon receipt of the request for the multicast traffic, ND 104 initiates a process for creating a path for the multicast traffic from a multicast source to the receivers of the multicast group. The embodiments use the BIER protocol to provide multicast routing and provide an RPL extension for BIER to enable network devices of a BIER domain to exchange BIER specific information among themselves. RPL is extended to support BIER and to perform the distribution of this information. In the following embodiments, extensions to RPL to distribute BIER specific information are described.

[0034] At operation 1, ND 104 transmits an RPL DODAG Information Solicitation (DIS) message. The DIS message may be used to solicit a DODAG Information Object from an RPL network device. Thus, ND 104 may use DIS to probe its neighborhood for nearby DODAGs. In this illustrated example, ND 104 may transmit the DIS message to probe network device 102 (and optionally network device 106) for a nearby multicast DODAG. In some embodiments, operation 1 is skipped and ND 104 does not transmit the DIS message; instead ND 104 receives at operation 3a, a DODAG Information Object (DIO) message from ND 102 without having transmitted a DIS message.

[0035] At operation 2, ND 102 constructs a DIO message including a BIER header field. The BIER header field included within the DIO message identifies a multicast domain (i.e., a BIER domain). In some embodiments, ND 102 is an LLN border network device (e.g., a border router), which is configured to act as a root of an RPL DODAG. In these embodiments, ND 102 may act as a gateway for communications between multiple multicast domains. In some embodiments, ND 102 can be operative to convert and translate between multiple multicast formats (e.g., between Protocol Independent Multicast (PIM) and BIER). In other embodiments, ND 102 is a BIER-enabled network device which is part of the RPL DODAG (other than the root of the DODAG). At operations 3a and 3b, ND 102 starts advertising DIO messages to neighboring network devices to mark its presence. The DIO messages include the BIER header field used to inform the neighbor devices (i.e., the children of ND 102 in the hierarchy of the DODAG) that ND 102 is part of the multicast domain identified in the BIER header field. As will be described in further detail below with reference to Figure 2A, the BIER header field includes an identifier which uniquely identifies the BIER domain.

[0036] Figure 2A illustrates an exemplary DIO message 220 including a BIER header field 224 for enabling multicast routing in an LLN in accordance with some embodiments. The DODAG Information Object (DIO) message carries information that allows a network device to discover an RPL Instance, learn its configuration parameters, select a DODAG parent set, and maintain the DODAG. The DIO message 220 includes a first portion 222 and a second portion 224.

[0037] The first portion 222 of the DIO message 220 includes standard fields of a DIO message as defined in RFC 6550, which is hereby incorporated by reference. The fields included in portion 222 will be described below with respect to parameter values that enable multicast routing with RPL. "RPLInstanceID" is a field set by the DODAG root that indicates to which RPL Instance the DODAG belongs. "Version Number" is a field set by the DODAG root to indicate the version of the DODAG being formed. "Rank" is a field indicating the DODAG Rank of the network device sending the DIO message. The Grounded 'G' flag indicates whether the DODAG advertised can satisfy the application-defined goal. The Mode of Operation (illustrated as "MOP" in Figure 2A) field identifies the mode of operation of the RPL Instance as administratively provisioned and distributed by the DODAG root. For example, MOP is set to 3 (i.e., "storing mode with multicast support") to enable multicast routing using RPL. "DODAGPreference" (illustrated as "Prf" in Figure 2A) is a field that defines how preferable the root of this DODAG is compared to other DODAG roots within a same RPL instance. Destination Advertisement Trigger Sequence Number (illustrated as DTSN in Figure 2A) is a field set by the network device issuing the DIO message and is used to maintain downward routes. "Flags" is a number of bits unused and reserved for flags. "Reserved" is a field of unused bits. "DODAGID" is an IPv6 address set by a DODAG root that uniquely identifies a DODAG.

[0038] DIO message 220 further includes a second portion 224, which includes a BIER header field. Thus in the present embodiments, a DIO message is extended to include BIER header information to identify the multicast domain to which the transmitting ND belongs (e.g., ND 102). The BIER header field is included in an optional portion of the standard DIO message defined in RFC 6550. The BIER header field 224 includes a "Type" field indicating the type of the packet, a "Capability" field that is a flag indicating BIER capabilities of the transmitting ND (here ND 102), a "Length" field indicating the length of the packet, a "Domain" field identifying the BIER domain to which the network device belongs (the domain field may identify a BIER domain or a BIER sub-domain), a "BFR-Prefix" field indicating the BIER Forwarding Router Prefix of the transmitting ND (ND 102), a "BSL" field indicating a Bit String Length supported by the transmitting network device, and a "BFR-ID" field indicating a BIER Forwarding Router ID assigned to the network device in this BIER domain. These fields represent BIER information which identifies the transmitting ND within the BIER domain and provide information with respect to this ND (e.g., whether the ND is BIER capable or not, and if it is BIER capable, how it can be identified within the BIER domain). This information enables the receiving ND (e.g., ND 104) to update forwarding tables (e.g., a Bit Index Routing Table (BIRT) and/or a Bit Index Forwarding Table (BIFT)) for receiving and forwarding multicast data packets through the BIER domain.
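
As an illustration of the BIER header field just described, the following sketch encodes and decodes the named fields (Type, Capability, Length, Domain, BFR-Prefix, BSL, BFR-ID). The disclosure names the fields but does not fix their widths, so the sizes chosen here (including a 16-byte IPv6 BFR-Prefix) are assumptions for illustration only, not the claimed wire format.

```python
import struct
from dataclasses import dataclass

@dataclass
class BierHeaderField:
    opt_type: int      # "Type" field identifying the option as a BIER header field
    capability: int    # "Capability" flag: 1 if the transmitting ND is BIER capable
    length: int        # "Length" of the option body
    domain: int        # identifier of the BIER domain or sub-domain
    bfr_prefix: bytes  # BFR-Prefix (assumed here to be a 16-byte IPv6 address)
    bsl: int           # Bit String Length supported by the transmitting ND
    bfr_id: int        # BFR-ID assigned to the ND within this BIER domain

    _FMT = "!BBBB16sBH"  # network byte order; all widths are assumptions

    def encode(self) -> bytes:
        return struct.pack(self._FMT, self.opt_type, self.capability,
                           self.length, self.domain, self.bfr_prefix,
                           self.bsl, self.bfr_id)

    @classmethod
    def decode(cls, data: bytes) -> "BierHeaderField":
        return cls(*struct.unpack(cls._FMT, data[:struct.calcsize(cls._FMT)]))
```

An ND constructing a DIO (or DAO) would append the result of encode() to the message's optional portion; a receiver would call decode() on that portion at operation 4b.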

[0039] Referring back to Figure 1, upon receipt of the DIO message including the BIER header field, ND 104 determines, at operation 4a, whether to join the DODAG based on the objective function included in the received DIO message. At operation 4b, ND 104 parses the BIER header field and extracts BIER information related to the multicast domain to update a forwarding table (e.g., a BIRT and/or a BIFT) associated with the BIER domain identified in the BIER header field of the DIO message. For example, ND 104 may use the values of the "BFR-Prefix" field and the "BFR-ID" field of the DIO message to populate its BIRT entry for ND 102. At operation 4c, ND 104 identifies a set of parent network devices according to information carried in the DIO message. In some embodiments, ND 104 may receive additional DIO messages including BIER header fields from other network devices (not shown in Figure 1). In this case, ND 104 then identifies from the plurality of network devices a set of parent network devices coupling ND 104 to the RPL DODAG and consequently to the BIER domain. In these embodiments, adjacencies of BIER-enabled network devices (e.g., ND 102 and ND 104) belonging to a same BIER domain are determined by the adjacencies identified through the RPL protocol.
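
A minimal sketch of operation 4b, reusing the BierHeaderField type from the sketch above: the receiving ND records, for the advertising BFR, its BFR-Prefix and the RPL adjacency through which it was learned. The BIRT layout follows the BIER architecture draft in spirit; the dictionary structure is illustrative, not the claimed data structure.

```python
BIRT = {}  # keyed by (domain, bfr_id) -> how to reach that BFR

def on_dio_received(dio_source_addr: str, hdr: BierHeaderField) -> None:
    if not hdr.capability:
        return  # the transmitting ND is not BIER capable; nothing to record
    # Record the advertising BFR's prefix and the RPL adjacency (the DIO
    # sender) used as the next hop toward it.
    BIRT[(hdr.domain, hdr.bfr_id)] = {
        "bfr_prefix": hdr.bfr_prefix,
        "next_hop": dio_source_addr,
    }
```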

[0040] When ND 104 decides to join the RPL DODAG, it constructs, at operation 4d, an RPL Destination Advertisement Object (DAO) message which includes a BIER header field to establish an upward route towards the root of the DODAG. The BIER header field includes BIER information related to ND 104 with respect to the BIER domain. This BIER information is to be used by ND 102 for forwarding multicast data packets towards ND 104 through the BIER protocol. At operation 5, the DAO message is then transmitted to ND 102 (which was identified as a parent of ND 104 in the RPL DODAG). In some embodiments, ND 102 and ND 104 are configured to operate in a "Storing" mode of operation, which is a fully stateful mode (e.g., the MOP is set to 3 in the RPL messages such that the NDs support multicast in the storing mode of operation). In the storing mode, each network device stores routing tables for its DODAG, and each hop on an upward route examines its routing table to decide on the next hop. In some embodiments, the DAO message is constructed as described in more detail with reference to Figure 2B below.

[0041] Figure 2B illustrates an exemplary DAO message including a BIER header field for enabling multicast routing in an LLN in accordance with some embodiments. The DAO message 230 includes a first portion 232 and a second portion 234. The DAO message 230 is used to propagate destination information upward along the multicast DODAG. In some embodiments, when the LLN operates in a storing mode, the DAO message is unicast by a child network device (e.g., ND 104) to the selected parent(s) (e.g., ND 102). In other embodiments, when the LLN operates in a non-storing mode, the DAO message is unicast to the DODAG root. The DAO message may optionally, upon explicit request or error, be acknowledged by its destination with a Destination Advertisement Acknowledgement (DAO-ACK) message sent back to the sender of the DAO (e.g., a DAO-ACK as described with reference to Figure 3E).

[0042] The first portion 232 of the DAO message 230 includes standard fields of a DAO message as defined in RFC 6550, which is hereby incorporated by reference. The fields included in portion 232 will be described below with respect to parameter values that enable multicast routing with RPL. "RPLInstanceID" is a header field indicating the topology instance associated with the DODAG, as learned from the DIO message (e.g., DIO message 220). The "K" field is a flag which indicates that the recipient is expected to send a DAO-ACK back. The "D" field is a flag which indicates that the DODAGID field is present in the DAO message. The "Flags" field represents the remaining unused bits, which are reserved for flags. The "Reserved" field is an unused field. The "DAOSequence" field is a counter present in the DAO message to correlate the message with a DAO-ACK message. The DAOSequence number is locally significant to the ND that issues a DAO message, for its own consumption, to detect the loss of a DAO message and enable retries. "DODAGID" is an IPv6 address set by a DODAG root that uniquely identifies a DODAG. This field is optional in DAO messages and may not be included in some embodiments.

[0043] DAO message 230 further includes a second portion 234, which includes a BIER header field. Thus in the present embodiments, a DAO message is extended to include BIER header information to identify the BIER domain to which the transmitting ND belongs (e.g., ND 104). The BIER header field is included in an optional portion of the standard DAO message. The BIER header field 234 includes a "Type" field indicating the type of the packet, a "Capability" field that is a flag indicating BIER capabilities of the transmitting ND (here ND 104), a "Length" field indicating the length of the packet, a "Domain" field identifying the BIER domain to which the network device belongs (the domain field may identify a BIER domain or a BIER sub-domain), a "BFR-Prefix" field indicating the BIER Forwarding Router Prefix of the transmitting ND (ND 104), a "BSL" field indicating a Bit String Length supported by the transmitting network device, and a "BFR-ID" field indicating a BIER Forwarding Router ID assigned to the network device in this BIER domain. These fields represent BIER information which identifies the transmitting ND within the BIER domain and provide information with respect to this ND (e.g., whether the ND is BIER capable or not, and if it is BIER capable, how it can be identified within the BIER domain). This information enables the receiving ND (e.g., ND 102) to update forwarding tables (e.g., a BIRT and/or a BIFT) for receiving and forwarding multicast data packets through the BIER domain.

[0044] Referring back to Figure 1, at operation 6, upon receipt of the DAO message including the BIER header field, ND 102 parses the BIER header field and extracts BIER information related to the multicast domain to update one or more forwarding tables (e.g., a BIRT and/or a BIFT) associated with the BIER domain identified in the BIER header field of the DAO message. For example, ND 102 may use the values of the "BFR-Prefix" field and the "BFR-ID" field of the DAO message to populate its BIRT entry for ND 104. In addition, ND 102 may update the Forwarding Bit Mask (F-BM) and BFR Neighbor (BFR-NH) fields in the BIFT entry associated with ND 104. The update of the forwarding table(s) causes BIER encapsulated packets received at ND 102 for the multicast domain to be forwarded toward ND 104. In some embodiments, in addition to updating the forwarding table, ND 102 may transmit, at operation 7, a DAO-Acknowledgment (DAO-ACK) message to ND 104.
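
The BIFT update of operation 6 might look as follows, reusing the BierHeaderField type from the earlier sketch and following the F-BM semantics of the BIER architecture draft: the bit mask stored against a BFR-ID is the OR of the bit positions of all BFR-IDs reached through the same BFR neighbor. The dictionary layout is an illustrative assumption.

```python
BIFT = {}  # keyed by (domain, bfr_id) -> {"f_bm": int, "bfr_nbr": neighbor}

def on_dao_received(dao_source_addr: str, hdr: BierHeaderField) -> None:
    domain = hdr.domain
    # New or refreshed entry: the DAO sender is the neighbor toward this BFR.
    BIFT[(domain, hdr.bfr_id)] = {"f_bm": 0, "bfr_nbr": dao_source_addr}
    # Recompute F-BMs: every BFR-ID reached through the same neighbor
    # contributes its bit position to that neighbor's shared mask.
    masks = {}
    for (d, bfr_id), entry in BIFT.items():
        if d == domain:
            nbr = entry["bfr_nbr"]
            masks[nbr] = masks.get(nbr, 0) | (1 << (bfr_id - 1))
    for (d, _), entry in BIFT.items():
        if d == domain:
            entry["f_bm"] = masks[entry["bfr_nbr"]]
```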

[0045] In some embodiments, when ND 102 is the root of the RPL DODAG coupled with ND 104, it may further act as a gateway network device for communications between multiple multicast domains. For example, ND 102 may be a border router of an LLN. Following the update of the BIFT, ND 102 is now operative to forward BIER encapsulated multicast packets as described with reference to IETF Draft "Multicast using Bit Index Explicit Replication: draft-ietf-bier-architecture-03," which is hereby incorporated by reference. In some embodiments, ND 102 can be operative to convert and translate between multiple multicast formats (e.g., between PIM and BIER). ND 102 may receive multicast data packets and encapsulate the packets with a BIER Header destined to ND 104 to be forwarded to the receivers 103. In other embodiments, ND 102 is a network device on a multicast path between a root of a DODAG and ND 104 which is part of the DODAG. In these alternative embodiments, the root of the DODAG may receive multicast data packets and encapsulate the packets with a BIER Header destined to ND 104 to be forwarded to the receivers 103. The encapsulated packets may be routed through the intermediary ND 102 prior to being forwarded to ND 104.

[0046] In some embodiments, all network devices of an LLN form a separate RPL DODAG in which the network devices are all BIER capable. In these embodiments, the process described with reference to Figure 1 is performed by each ND of the LLN that needs to join the multicast domain. Once the NDs have joined the multicast domain (by exchanging BIER information through RPL DIO and RPL DAO messages), BIER encapsulated packets can be forwarded to the multicast clients served by each ND.

[0047] Figures 3A-E illustrate block diagrams of an exemplary LLN 300 in which multicast routing is enabled in accordance with some embodiments. Multicast routing is enabled by using an RPL extension for BIER. The LLN 300 includes multiple network devices 302R and 302A-302G. In the embodiments described below, the LLN 300 includes BIER-enabled network devices (which can also be referred to as Bit-Forwarding Routers (BFRs)) (e.g., 302R, 302A-C, 302F, and 302G) as well as non-BIER-enabled network devices (e.g., ND 302D and ND 302E). In this example, network devices 302R, 302A, 302B, 302C, 302F, and 302G are part of a same BIER domain. In some embodiments, the BIER domain may include more or fewer network devices than the ones illustrated in Figures 3A-E.

[0048] ND 302R is configured to operate as a root of an RPL DODAG that is operative to support BIER. ND 302R may be configured to act as a Bit-Forwarding Ingress Router (BFIR) of a BIER domain, which receives multicast data packets that enter the BIER domain. In some embodiments ND 302R is a gateway network device for communication between multiple multicast domains. For example, ND 302R may convert between multicast formats (e.g., from PIM to BIER). It is further operative to share the BIER capabilities of the other network devices in its DODAG with all the devices. ND 302G is a BIER-enabled network device coupled with the multicast receivers 303. ND 302G can be referred to as "Bit-Forwarding Egress Router" (BFER) and forwards the multicast data packets that leave the BIER domain. Intermediary BIER-enabled NDs (302A, 302C, 302F) which are part of the BIER domain may be referred to as "transit BFRs." The receivers 303 are multicast clients that have subscribed to receive traffic of a multicast group. ND 302R is operative to receive multicast data packets for the multicast group from the network 301 and transmit the multicast packets towards the receivers 303 through a path in the LLN.

[0049] The operations described with reference to Figures 3A-E are performed at several NDs of the LLN 300 to enable ND 302G to create a multicast path from ND 302R towards the receivers 303 passing through ND 302G using RPL, when the LLN includes BIER-enabled and non-BIER-enabled network devices. In these embodiments, an RPL DAO message extended to include BIER information of ND 302G is propagated upwards towards the root of the RPL DODAG to create the multicast path. The RPL DAO message is further propagated with an identifier of the latest traced BIER-enabled network device such that when it reaches a BIER-enabled network device which has multicast receiver children that are non-BIER network devices, that network device can build an IP2IP/P2MP/MP2MP tunnel to transmit BIER packets.

[0050] At operation 701, ND 302G transmits a DAO message including a BIER header field that includes an identifier of a BIER domain. In some embodiments, the DAO message is constructed as described with reference to Figure 2B. The DAO message is transmitted in response to the receipt of a DIO message advertised by the parent network device ND 302F upon ND 302F joining the RPL DODAG. At operation 702, ND 302F determines whether the current ND (i.e., ND 302F) and the source ND (i.e., the ND from which the DAO message is received, here ND 302G) are BIER-enabled network devices. In some embodiments, ND 302F may determine that the source ND (ND 302G) is BIER-enabled by parsing the BIER header field of the received DAO message 351 and extracting BIER information related to ND 302G. In this case, the two devices ND 302G and ND 302F are BIER enabled. Upon receipt of the DAO message 351 from ND 302G, and determination that the source of the DAO message is BIER enabled, ND 302F adds, at operation 703, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302G. Thus, in contrast to standard forwarding tables, additional information is added in the forwarding table(s) for each entry associated with a network device. This additional information indicates for each ND whether the device is BIER-enabled or not. In an exemplary embodiment, a 1-bit flag is added to the table and set to a value of 1 to indicate that the ND is BIER-enabled and to a value of 0 to indicate that the ND is non-BIER-enabled, as illustrated in the sketch below. One may understand that different implementations can be used to include an indication that the device is BIER-enabled in the forwarding table without departing from the scope of the present invention.
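
A minimal sketch of the capability-flagged forwarding entry of operation 703; the table layout and helper name are illustrative, not the claimed data structure.

```python
forwarding_table = {}  # destination ND -> {"next_hop": ..., "bier_capable": 0 or 1}

def add_downward_route(dest_nd: str, next_hop: str, bier_capable: bool) -> None:
    # A standard downward-route entry augmented with the 1-bit BIER
    # capability flag described above (1 = BIER-enabled, 0 = not).
    forwarding_table[dest_nd] = {
        "next_hop": next_hop,
        "bier_capable": 1 if bier_capable else 0,
    }
```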

[0051] Flow then moves to operation 704, at which ND 302F identifies the latest BIER-enabled ND encountered, which is ND 302F in this example. At operation 705, upon determination that the source ND is BIER-capable, ND 302F encapsulates the DAO message 351 into an IP packet in which the source address is the IP address of the current ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the encapsulated DAO message 353 to its RPL parent (here ND 302E).

[0052] The flow of operations then moves to operation 706 of Figure 3B, at which ND 302E determines, in response to the receipt of the updated DAO message 353, that the current ND (i.e., ND 302E) is non-BIER-enabled and that ND 302F, which is the source of the DAO message 353, is BIER-enabled. Upon receipt of the DAO message 353 from ND 302F, and determination that the source of the DAO message is BIER enabled, ND 302E adds, at operation 707, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302F. Flow then moves to operation 708, at which ND 302E identifies the latest BIER-enabled ND encountered, which is ND 302F in this example. At operation 709, upon determination that the source ND is BIER-enabled and that the current ND is non-BIER-enabled, ND 302E encapsulates the received DAO message into an IP packet to form an IP encapsulated DAO message 355 in which the source address is the IP address of ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 355 to its RPL parents (here ND 302C and ND 302D).

[0053] The flow of operations then moves to operation 710 of Figure 3C, at which ND 302D determines, in response to the receipt of the updated DAO message 355, that the current ND (i.e., ND 302D) and the source ND 302E are non-BIER-enabled. Upon receipt of the DAO message 355 from ND 302E, and determination that the source of the DAO message is non-BIER-enabled, ND 302D adds, at operation 711, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302E. Flow then moves to operation 712, at which ND 302D identifies the latest BIER-enabled ND encountered, which is ND 302F in this example. At operation 713, upon determination that the source ND and the current ND are non-BIER-enabled, ND 302D encapsulates the received DAO message into an IP packet to form an IP encapsulated DAO message 357 in which the source address is the IP address of ND 302F (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 357 to its RPL parent (here ND 302A).

[0054] The flow then moves to operation 714, at which ND 302A determines that the current ND (i.e., ND 302A) is BIER-enabled and that ND 302D, which is the source of the DAO message 357, is non-BIER-enabled. Upon receipt of the DAO message 357 from ND 302D, and determination that the source of the DAO message is non-BIER enabled, ND 302A adds, at operation 715, an entry in its forwarding table(s) for the multicast domain as well as an indication of the BIER capability of ND 302D. Flow then moves to operation 716, at which ND 302A identifies the latest BIER-enabled ND encountered, which is ND 302A in this example. At operation 717, upon determination that the source ND is non-BIER-enabled and that the current ND is BIER-enabled, ND 302A stores a flag indicating that an IP tunnel is to be established between the current node (ND 302A) and the previous BIER-enabled ND encountered (ND 302F). Flow then moves to operation 718, at which ND 302A updates BIER forwarding tables according to the BIER information received in the BIER header field of the DAO message 357. At operation 719, ND 302A encapsulates the received DAO message 357 into an IP packet to form an IP encapsulated DAO message 359 in which the source address is the IP address of ND 302A (which is the latest BIER-enabled network device encountered by the DAO message), and forwards the IP encapsulated DAO message 359 to its RPL parent (here ND 302R), which is the root of the DODAG and the ingress network device of the BIER domain. Thus, in the present embodiments, prior to starting to forward multicast traffic to the multicast receivers 303, the control plane of the LLN is set up dynamically; the per-hop processing is consolidated in the sketch below. The intermediate network devices (e.g., ND 302A) maintain tunnel states until the multicast data packets are forwarded onto the tunnel. In some embodiments, the DAO messages can be used to keep the tunnel and multicast forwarding alive as long as there are hosts requesting the multicast traffic.
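
The per-hop DAO processing of operations 702-719 can be consolidated as in the following sketch, which reuses add_downward_route and on_dao_received from the earlier sketches. How a node learns the immediate sender's BIER capability (the is_bier lookup), the dao.target and dao.bier_header attribute names, and the send signature are assumptions; per the text, the outer IP source address always carries the latest BIER-enabled ND encountered by the DAO.

```python
def process_dao(dao, outer_src, sender, self_is_bier, self_addr, parents,
                is_bier, send, state):
    src_is_bier = is_bier(sender)                        # ops 702/706/710/714
    # Ops 703/707/711/715: downward-route entry plus the BIER capability bit.
    add_downward_route(dao.target, sender, src_is_bier)
    # Ops 704/708/712/716: the latest BIER-enabled ND is this node if it is
    # BIER capable, otherwise the one carried in the outer IP source address.
    latest_bier_nd = self_addr if self_is_bier else outer_src
    if self_is_bier and src_is_bier:
        # As at ND 302F (and ND 102 in Figure 1): BIFT entry toward the child.
        on_dao_received(sender, dao.bier_header)
    if self_is_bier and not src_is_bier:
        # Op 717: remember to build an IP tunnel back to the previously
        # observed BIER-enabled ND once the DAO-ACK comes back.
        state["tunnel_flag"] = True
        state["tunnel_peer"] = outer_src
        # Op 718: BIFT update; the BFR neighbor is the tunnel peer (ND 302F).
        on_dao_received(outer_src, dao.bier_header)
    # Ops 705/709/713/719: re-encapsulate with the latest BIER-enabled ND as
    # the IP source and forward to the RPL parent(s).
    for parent in parents:
        send(ip_src=latest_bier_nd, ip_dst=parent, payload=dao)
```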

[0055] In some embodiments, in order to reduce the number of tunnels being established, DAO-ACK messages are used to initiate the tunnels. In these embodiments, in response to receiving the DAO message 359, a DAO-ACK message 361 is initiated by ND 302R and is forwarded back to the source of the original DAO message 351 (i.e., ND 302G). In some embodiments, the DAO-ACK follows the best path from the root of the DODAG, ND 302R, to the receiver ND 302G as determined through the RPL protocol. Upon receiving this DAO-ACK message 361, all the intermediate network devices that have stored the tunnel states (i.e., the flag indicating that a tunnel is to be established) will initiate the tunnel to the respective previously observed BIER network device. For example, upon receipt of the DAO-ACK message 361 at operation 720, ND 302A determines, at operation 721, that the tunnel flag is set, indicating that an IP tunnel is to be established between the current node and the previous BIER-enabled ND encountered (e.g., ND 302F). ND 302A establishes an IP tunnel 310 at operation 722 between ND 302A and ND 302F, and forwards the DAO-ACK message 361 through the IP tunnel to the previous BIER-enabled ND (e.g., toward ND 302F). In other words, the DAO-ACK message is encapsulated within an IP header destined to the BIER-enabled ND 302F.
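
A sketch of the DAO-ACK handling of operations 720-722, assuming the state dictionary populated by the previous sketch; establish_ip_tunnel and the send/next-hop helpers are illustrative assumptions.

```python
def process_dao_ack(dao_ack, self_addr, state, send, downward_next_hop):
    if state.get("tunnel_flag"):
        # Op 722: bring up the IP tunnel to the previously observed
        # BIER-enabled ND (e.g., tunnel 310 from ND 302A to ND 302F).
        tunnel = establish_ip_tunnel(local=self_addr,
                                     remote=state["tunnel_peer"])
        # The DAO-ACK is encapsulated in the tunnel's IP header, destined to
        # the BIER-enabled peer, and continues toward the original DAO
        # source (e.g., ND 302G).
        send(tunnel=tunnel, payload=dao_ack)
    else:
        # No tunnel pending: forward along the stored downward route.
        send(next_hop=downward_next_hop(dao_ack), payload=dao_ack)
```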

[0056] Once the tunnels are established and the DAO-ACK message 361 is received at ND 302G, multicast traffic can start. The multicast traffic is then forwarded from ND 302R to the receivers 303 passing through ND 302G according to the BIER protocol. In some embodiments, upon receipt of the multicast traffic, ND 302R encapsulates the data packets with a BIER header indicating that the BFER is ND 302G. By looking up its Bit Index Forwarding Table (BIFT), ND 302R determines that the data packets are to be routed through ND 302A. Upon receipt of the packets, ND 302A looks at the BitString of the packet and forwards the packet over the tunnel to ND 302F. ND 302F then forwards the packet to ND 302G by looking up its own BIFT.
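
The data-plane behavior just described follows the standard BIER forwarding loop of the architecture draft, sketched below using the BIFT built in the earlier sketch: for each bit set in the packet's BitString, a copy is sent to the matching BFR neighbor with the BitString ANDed with that entry's F-BM, and the bits covered by the copy are cleared. The per-neighbor send helper (which would transmit over IP tunnel 310 when the neighbor was learned through a tunnel, e.g., ND 302A toward ND 302F) is an assumption.

```python
def forward_bier_packet(domain, bitstring, payload, send):
    remaining = bitstring
    while remaining:
        bfr_id = remaining.bit_length()        # highest set bit, 1-indexed
        entry = BIFT.get((domain, bfr_id))
        if entry is None:
            remaining &= ~(1 << (bfr_id - 1))  # no route to this BFER; skip it
            continue
        # One copy per neighbor covers every BFER reachable through it.
        send(entry["bfr_nbr"], remaining & entry["f_bm"], payload)
        remaining &= ~entry["f_bm"]            # those BFERs are now handled
    # A full BFR would also deliver the payload locally when its own BFR-ID
    # bit is set (it is then the BFER for this packet, e.g., ND 302G).
```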

[0057] The previous embodiments were described with an exemplary scenario in which the gateway network device (ND 302G) directly connected to the multicast receivers is BIER-capable. However, the embodiments of the invention are not so limited, and other implementations fall within the scope of the present invention. For example, if the gateway network device ND 302G is not BIER-capable and cannot initiate the DAO message extended with BIER, any network device that is intermediate on the path of the DAO message can initiate the DAO message extended with the BIER header field, to be forwarded to the root of the RPL DODAG.

[0058] The operations in the flow diagram of Figure 4 will be described with reference to the exemplary embodiments of the other figures. However, it should be understood that the operations of the flow diagram can be performed by embodiments of the invention other than those discussed with reference to the other figures, and the embodiments of the invention discussed with reference to these other figures can perform operations different than those discussed with reference to the flow diagram of Figure 4.

[0059] Figure 4 illustrates exemplary operations performed at a first network device for enabling stateless multicast routing in an LLN in accordance with some embodiments. At operation 402, a first network device (e.g., ND 102 or ND 302A) of an LLN receives a destination advertisement object (DAO) message from a second network device (e.g., ND 104, ND 302C, or ND 302D) from the plurality of other network devices of the LLN. The received DAO message includes a Bit Index Explicit Replication (BIER) header field including an identifier of a multicast domain. For example, the DAO message is constructed as described with reference to Figure 2B. At operation 404, the first network device (e.g., ND 102 or ND 302A) updates a forwarding table entry to include an identifier of the second network device from which the DAO message is received, where the forwarding table entry causes the first network device to forward, to the second network device (e.g., ND 104, ND 302C, or ND 302D), BIER encapsulated packets of the multicast domain which are to be forwarded at the second network device towards one or more receivers of the multicast domain (e.g., receivers 103 or 303).

[0060] In some embodiments, the DAO message is received with an identifier of a third network device (e.g., 302F) from the LLN, wherein the third network device (302F) is the closest BIER enabled network device of the LLN on a path coupling the first network device (302A) with one or more receivers of the multicast domain (303). In some embodiments, when the second network device (e.g., 302D) is not BIER enabled, the first network device (302A) establishes an Internet Protocol (IP) tunnel 310 between the first network device (302A) and the third network device (302F) such that the first network device forwards BIER packets of the multicast domain through the IP tunnel 310 established between the two network devices.

[0061] The embodiments described herein present clear advantages when compared to standard multicast routing protocols in LLNs. In some embodiments, the LLN avoids building an explicit multicast tree. In addition, there is no need to maintain per-flow multicast states at each network device, thus conserving the network device's physical resources (e.g., memory, computing resources, and power source). The embodiments present a stateless multicast protocol that overcomes the large memory requirements of standard multicast protocols, providing Low-Power and Lossy networks with an efficient multicast mechanism which does not exhaust the limited physical resources of the network devices of an LLN (i.e., minimal memory, reduced CPU performance and limited power source).

[0062] Architecture

[0063] An electronic device stores and transmits (internally and/or with other electronic devices over a network) code (which is composed of software instructions and which is sometimes referred to as computer program code or a computer program) and/or data using machine-readable media (also called computer-readable media), such as machine-readable storage media (e.g., magnetic disks, optical disks, read only memory (ROM), flash memory devices, phase change memory) and machine-readable transmission media (also called a carrier) (e.g., electrical, optical, radio, acoustical or other form of propagated signals - such as carrier waves, infrared signals). Thus, an electronic device (e.g., a computer) includes hardware and software, such as a set of one or more processors coupled to one or more machine-readable storage media to store code for execution on the set of processors and/or to store data. For instance, an electronic device may include non-volatile memory containing the code since the non-volatile memory can persist code/data even when the electronic device is turned off (when power is removed), and while the electronic device is turned on that part of the code that is to be executed by the processor(s) of that electronic device is typically copied from the slower non-volatile memory into volatile memory (e.g., dynamic random access memory (DRAM), static random access memory (SRAM)) of that electronic device. Typical electronic devices also include a set of one or more physical network interface(s) to establish network connections (to transmit and/or receive code and/or data using propagating signals) with other electronic devices. One or more parts of an embodiment of the invention may be implemented using different combinations of software, firmware, and/or hardware.

[0064] While embodiments of the invention are described with respect to network devices, one of ordinary skill in the art would understand that each network device may include one or more network elements as described below with respect to Figure 5A. Each network device may perform the operations described with reference to Figures 1-3E and 4. In other words, each network device may include a plurality of network elements, where each network element is operative to receive and forward DIO and DAO messages extended to include BIER header fields and to propagate BIER domain information through the RPL protocol to enable a stateless multicast forwarding in LLNs.

[0065] A network device (ND) is an electronic device that communicatively interconnects other electronic devices on the network (e.g., other network devices, end-user devices). Some network devices are "multiple services network devices" that provide support for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation, session border control, Quality of Service, and/or subscriber management), and/or provide support for multiple application services (e.g., data, voice, and video).

[0066] Figure 5A illustrates connectivity between network devices (NDs) within an exemplary network, as well as three exemplary implementations of the NDs, according to some embodiments of the invention. Figure 5A shows NDs 500A-H, and their connectivity by way of lines between 500A-500B, 500B-500C, 500C-500D, 500D-500E, 500E-500F, 500F-500G, and 500A-500G, as well as between 500H and each of 500A, 500C, 500D, and 500G. These NDs are physical devices, and the connectivity between these NDs can be wireless or wired (often referred to as a link). An additional line extending from NDs 500A, 500E, and 500F illustrates that these NDs act as ingress and egress points for the network (and thus, these NDs are sometimes referred to as edge NDs; while the other NDs may be called core NDs).

[0067] Two of the exemplary ND implementations in Figure 5A are: 1) a special-purpose network device 502 that uses custom application-specific integrated-circuits (ASICs) and a special-purpose operating system (OS); and 2) a general purpose network device 504 that uses common off-the-shelf (COTS) processors and a standard OS.

[0068] The special-purpose network device 502 includes networking hardware 510 comprising compute resource(s) 512 (which typically include a set of one or more processors), forwarding resource(s) 514 (which typically include one or more ASICs and/or network processors), and physical network interfaces (NIs) 516 (sometimes called physical ports), as well as non-transitory machine readable storage media 518 having stored therein networking software 520. A physical NI is hardware in a ND through which a network connection (e.g., wirelessly through a wireless network interface controller (WNIC) or through plugging in a cable to a physical port connected to a network interface controller (NIC)) is made, such as those shown by the connectivity between NDs 500A-H. During operation, the networking software 520 may be executed by the networking hardware 510 to instantiate a set of one or more networking software instance(s) 522. The networking software 520 includes an RPL BIER Routing element 523 which when instantiated as the instance 533A is operative to enable stateless multicast routing as described with reference to Figures 1-4. Each of the networking software instance(s) 522, and that part of the networking hardware 510 that executes that network software instance (be it hardware dedicated to that networking software instance and/or time slices of hardware temporally shared by that networking software instance with others of the networking software instance(s) 522), form a separate virtual network element 530A-R. Each of the virtual network element(s) (VNEs) 530A-R includes a control communication and configuration module 532A-R (sometimes referred to as a local control module or control communication module) and forwarding table(s) 534A-R, such that a given virtual network element (e.g., 530A) includes the control communication and configuration module (e.g., 532A), a set of one or more forwarding table(s) (e.g., 534A), and that portion of the networking hardware 510 that executes the virtual network element (e.g., 530A).

[0069] The special-purpose network device 502 is often physically and/or logically considered to include: 1) a ND control plane 524 (sometimes referred to as a control plane) comprising the compute resource(s) 512 that execute the control communication and configuration module(s) 532A-R; and 2) a ND forwarding plane 526 (sometimes referred to as a forwarding plane, a data plane, or a media plane) comprising the forwarding resource(s) 514 that utilize the forwarding table(s) 534A-R and the physical NIs 516. By way of example, where the ND is a router (or is implementing routing functionality), the ND control plane 524 (the compute resource(s) 512 executing the control communication and configuration module(s) 532A-R) is typically responsible for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) and storing that routing information in the forwarding table(s) 534A-R, and the ND forwarding plane 526 is responsible for receiving that data on the physical NIs 516 and forwarding that data out the appropriate ones of the physical NIs 516 based on the forwarding table(s) 534A-R.
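As a minimal sketch of this division of labor (all names assumed, not taken from the figures), the control plane below selects and programs routes while the forwarding plane only consults the programmed table:

```python
# Minimal sketch of the control plane / forwarding plane split; the class
# and attribute names are assumptions of this sketch.
class NDControlPlane:
    def __init__(self):
        self.routes = {}                       # prefix -> (next_hop, out_ni)

    def learn_route(self, prefix, next_hop, out_ni):
        self.routes[prefix] = (next_hop, out_ni)

    def program(self, forwarding_plane):
        # Store the routing information in the forwarding table(s).
        forwarding_plane.fib = dict(self.routes)

class NDForwardingPlane:
    def __init__(self):
        self.fib = {}

    def forward(self, prefix):
        # Decide the next hop and outgoing NI purely from the programmed table.
        return self.fib.get(prefix)

cp, fp = NDControlPlane(), NDForwardingPlane()
cp.learn_route("2001:db8::/32", "fe80::1", "NI-516-1")
cp.program(fp)
print(fp.forward("2001:db8::/32"))             # ('fe80::1', 'NI-516-1')
```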

[0070] Figure 5B illustrates an exemplary way to implement the special-purpose network device 502 according to some embodiments of the invention. Figure 5B shows a special-purpose network device including cards 538 (typically hot pluggable). While in some embodiments the cards 538 are of two types (one or more that operate as the ND forwarding plane 526 (sometimes called line cards), and one or more that operate to implement the ND control plane 524 (sometimes called control cards)), alternative embodiments may combine functionality onto a single card and/or include additional card types (e.g., one additional type of card is called a service card, resource card, or multi-application card). A service card can provide specialized processing (e.g., Layer 4 to Layer 7 services (e.g., firewall, Internet Protocol Security (IPsec), Secure Sockets Layer (SSL) / Transport Layer Security (TLS), Intrusion Detection System (IDS), peer-to-peer (P2P), Voice over IP (VoIP) Session Border Controller, Mobile Wireless Gateways (Gateway General Packet Radio Service (GPRS) Support Node (GGSN), Evolved Packet Core (EPC) Gateway))). By way of example, a service card may be used to terminate IPsec tunnels and execute the attendant authentication and encryption algorithms. These cards are coupled together through one or more interconnect mechanisms illustrated as backplane 536 (e.g., a first full mesh coupling the line cards and a second full mesh coupling all of the cards).

[0071] Returning to Figure 5A, the general purpose network device 504 includes hardware 540 comprising a set of one or more processor(s) 542 (which are often COTS processors) and network interface controller(s) 544 (NICs; also known as network interface cards) (which include physical NIs 546), as well as non-transitory machine readable storage media 548 having stored therein software 550. The software 550 includes an RPL BIER Routing element 523 which when instantiated is operative to enable stateless multicast routing as described with reference to Figures 1-4. During operation, the processor(s) 542 execute the software 550 to instantiate one or more sets of one or more applications 564A-R. While one embodiment does not implement virtualization, alternative embodiments may use different forms of virtualization. For example, in one such alternative embodiment the virtualization layer 554 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 562A-R called software containers that may each be used to execute one (or more) of the sets of applications 564A-R; where the multiple software containers (also called virtualization engines, virtual private servers, or jails) are user spaces (typically a virtual memory space) that are separate from each other and separate from the kernel space in which the operating system is run; and where the set of applications running in a given user space, unless explicitly allowed, cannot access the memory of the other processes. In another such alternative embodiment the virtualization layer 554 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and each of the sets of applications 564A-R is run on top of a guest operating system within an instance 562A-R called a virtual machine (which may in some cases be considered a tightly isolated form of software container) that is run on top of the hypervisor; the guest operating system and application may not know they are running on a virtual machine as opposed to running on a "bare metal" host electronic device, or through para-virtualization the operating system and/or application may be aware of the presence of virtualization for optimization purposes. In yet other alternative embodiments, one, some or all of the applications are implemented as unikernel(s), which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application. As a unikernel can be implemented to run directly on hardware 540, directly on a hypervisor (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container, embodiments can be implemented fully with unikernels running directly on a hypervisor represented by virtualization layer 554, unikernels running within software containers represented by instances 562A-R, or as a combination of unikernels and the above-described techniques (e.g., unikernels and virtual machines both run directly on a hypervisor, unikernels and sets of applications that are run in different software containers).

[0072] The instantiation of the one or more sets of one or more applications 564A-R, as well as virtualization if implemented, are collectively referred to as software instance(s) 552. Each set of applications 564A-R, corresponding virtualization construct (e.g., instance 562A-R) if implemented, and that part of the hardware 540 that executes them (be it hardware dedicated to that execution and/or time slices of hardware temporally shared), forms a separate virtual network element(s) 560A-R.

[0073] The virtual network element(s) 560A-R perform similar functionality to the virtual network element(s) 530A-R - e.g., similar to the control communication and configuration module(s) 532A and forwarding table(s) 534A (this virtualization of the hardware 540 is sometimes referred to as network function virtualization (NFV)). Thus, NFV may be used to consolidate many network equipment types onto industry standard high volume server hardware, physical switches, and physical storage, which could be located in data centers, NDs, and customer premise equipment (CPE). While embodiments of the invention are illustrated with each instance 562A-R corresponding to one VNE 560A-R, alternative embodiments may implement this correspondence at a finer level of granularity (e.g., line card virtual machines virtualize line cards, control card virtual machines virtualize control cards, etc.); it should be understood that the techniques described herein with reference to a correspondence of instances 562A-R to VNEs also apply to embodiments where such a finer level of granularity and/or unikernels are used.

[0074] In certain embodiments, the virtualization layer 554 includes a virtual switch that provides similar forwarding services as a physical Ethernet switch. Specifically, this virtual switch forwards traffic between instances 562A-R and the NIC(s) 544, as well as optionally between the instances 562A-R; in addition, this virtual switch may enforce network isolation between the VNEs 560A-R that by policy are not permitted to communicate with each other (e.g., by honoring virtual local area networks (VLANs)).

[0075] The third exemplary ND implementation in Figure 5A is a hybrid network device 506, which includes both custom ASICs/special-purpose OS and COTS processors/standard OS in a single ND or a single card within an ND. In certain embodiments of such a hybrid network device, a platform VM (i.e., a VM that implements the functionality of the special-purpose network device 502) could provide for para-virtualization to the networking hardware present in the hybrid network device 506.

[0076] Regardless of the above exemplary implementations of an ND, when a single one of multiple VNEs implemented by an ND is being considered (e.g., only one of the VNEs is part of a given virtual network) or where only a single VNE is currently being implemented by an ND, the shortened term network element (NE) is sometimes used to refer to that VNE. Also in all of the above exemplary implementations, each of the VNEs (e.g., VNE(s) 530A-R, VNEs 560A-R, and those in the hybrid network device 506) receives data on the physical NIs (e.g., 516, 546) and forwards that data out the appropriate ones of the physical NIs (e.g., 516, 546). For example, a VNE implementing IP router functionality forwards IP packets on the basis of some of the IP header information in the IP packet; where IP header information includes source IP address, destination IP address, source port, destination port (where "source port" and "destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., User Datagram Protocol (UDP), Transmission Control Protocol (TCP)), and differentiated services code point (DSCP) values.

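For illustration, the following sketch extracts the header fields just listed from a plain IPv4 packet carrying UDP or TCP; it is a toy parser under stated assumptions, not a complete implementation:

```python
# Illustrative parser for the fields mentioned above; handles only a plain
# IPv4 packet carrying UDP or TCP.
import struct

def flow_key(packet: bytes):
    """Return (src ip, dst ip, src port, dst port, protocol, dscp)."""
    ihl = (packet[0] & 0x0F) * 4                 # IPv4 header length in bytes
    dscp = packet[1] >> 2                        # top six bits of the TOS byte
    protocol = packet[9]                         # e.g., 17 = UDP, 6 = TCP
    src_ip = ".".join(str(b) for b in packet[12:16])
    dst_ip = ".".join(str(b) for b in packet[16:20])
    # UDP and TCP both begin with 16-bit source and destination ports
    # ("protocol ports", not physical ports).
    src_port, dst_port = struct.unpack("!HH", packet[ihl:ihl + 4])
    return src_ip, dst_ip, src_port, dst_port, protocol, dscp

# A hand-built 20-byte IPv4 header followed by an 8-byte UDP header:
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 0, 0, 64, 17, 0,
                 bytes([192, 0, 2, 1]), bytes([203, 0, 113, 9]))
udp = struct.pack("!HHHH", 5000, 53, 8, 0)
print(flow_key(ip + udp))    # ('192.0.2.1', '203.0.113.9', 5000, 53, 17, 0)
```
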
"destination port" refer herein to protocol ports, as opposed to physical ports of a ND), transport protocol (e.g., user datagram protocol (UDP), Transmission Control Protocol (TCP), and differentiated services (DSCP) values. [0077] Figure 5C illustrates various exemplary ways in which VNEs may be coupled according to some embodiments of the invention. Figure 5C shows VNEs 570A.1-570A.P (and optionally VNEs 570A.Q-570A.R) implemented in ND 500A and VNE 570H.1 in ND 500H. In Figure 5C, VNEs 570A.1-P are separate from each other in the sense that they can receive packets from outside ND 500A and forward packets outside of ND 500A; VNE 570A.1 is coupled with VNE 570H.1, and thus they communicate packets between their respective NDs; VNE 570A.2-570A.3 may optionally forward packets between themselves without forwarding them outside of the ND 500A; and VNE 570A.P may optionally be the first in a chain of VNEs that includes VNE 570A.Q followed by VNE 570A.R (this is sometimes referred to as dynamic service chaining, where each of the VNEs in the series of VNEs provides a different service - e.g., one or more layer 4-7 network services). While Figure 5C illustrates various exemplary relationships between the VNEs, alternative embodiments may support other relationships (e.g., more/fewer VNEs, more/fewer dynamic service chains, multiple different dynamic service chains with some common VNEs and some different VNEs).

[0078] The NDs of Figure 5A, for example, may form part of the Internet or a private network; and other electronic devices (not shown; such as end user devices including workstations, laptops, netbooks, tablets, palm tops, mobile phones, smartphones, phablets, multimedia phones, Voice Over Internet Protocol (VOIP) phones, terminals, portable media players, GPS units, wearable devices, gaming systems, set-top boxes, Internet enabled household appliances) may be coupled to the network (directly or through other networks such as access networks) to communicate over the network (e.g., the Internet or virtual private networks (VPNs) overlaid on (e.g., tunneled through) the Internet) with each other (directly or through servers) and/or access content and/or services. Such content and/or services are typically provided by one or more servers (not shown) belonging to a service/content provider or one or more end user devices (not shown) participating in a peer-to-peer (P2P) service, and may include, for example, public webpages (e.g., free content, store fronts, search services), private webpages (e.g., username/password accessed webpages providing email services), and/or corporate networks over VPNs. For instance, end user devices may be coupled (e.g., through customer premise equipment coupled to an access network (wired or wirelessly)) to edge NDs, which are coupled (e.g., through one or more core NDs) to other edge NDs, which are coupled to electronic devices acting as servers. However, through compute and storage virtualization, one or more of the electronic devices operating as the NDs in Figure 5A may also host one or more such servers (e.g., in the case of the general purpose network device 504, one or more of the software instances 562A-R may operate as servers; the same would be true for the hybrid network device 506; in the case of the special-purpose network device 502, one or more such servers could also be run on a virtualization layer executed by the compute resource(s) 512); in which case the servers are said to be co-located with the VNEs of that ND.

[0079] Figure 5D illustrates a network with a single network element on each of the NDs of Figure 5A, and within this straightforward approach contrasts a traditional distributed approach (commonly used by traditional routers) with a centralized approach for maintaining reachability and forwarding information (also called network control), according to some embodiments of the invention. Specifically, Figure 5D illustrates network elements (NEs) 570A-H with the same connectivity as the NDs 500A-H of Figure 5A.

[0080] Figure 5D illustrates that the distributed approach 572 distributes responsibility for generating the reachability and forwarding information across the NEs 570A-H; in other words, the process of neighbor discovery and topology discovery is distributed.

[0081] For example, where the special-purpose network device 502 is used, the control communication and configuration module(s) 532A-R of the ND control plane 524 typically include a reachability and forwarding information module to implement one or more routing protocols (e.g., an exterior gateway protocol such as Border Gateway Protocol (BGP), Interior Gateway Protocol(s) (IGP) (e.g., Open Shortest Path First (OSPF), Intermediate System to Intermediate System (IS-IS), Routing Information Protocol (RIP)), Label Distribution Protocol (LDP), Resource Reservation Protocol (RSVP) (including RSVP-Traffic Engineering (TE): Extensions to RSVP for LSP Tunnels and Generalized Multi-Protocol Label Switching (GMPLS) Signaling RSVP-TE)) that communicate with other NEs to exchange routes and then select those routes based on one or more routing metrics. Thus, the NEs 570A-H (e.g., the compute resource(s) 512 executing the control communication and configuration module(s) 532A-R) perform their responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by distributively determining the reachability within the network and calculating their respective forwarding information. Routes and adjacencies are stored in one or more routing structures (e.g., Routing Information Base (RIB), Label Information Base (LIB), one or more adjacency structures) on the ND control plane 524. The ND control plane 524 programs the ND forwarding plane 526 with information (e.g., adjacency and route information) based on the routing structure(s). For example, the ND control plane 524 programs the adjacency and route information into one or more forwarding table(s) 534A-R (e.g., Forwarding Information Base (FIB), Label Forwarding Information Base (LFIB), and one or more adjacency structures) on the ND forwarding plane 526. For layer 2 forwarding, the ND can store one or more bridging tables that are used to forward data based on the layer 2 information in that data. While the above example uses the special-purpose network device 502, the same distributed approach 572 can be implemented on the general purpose network device 504 and the hybrid network device 506.
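A minimal sketch of this RIB-to-FIB flow (with assumed data structures) might look as follows; note that a real router also weighs administrative distance between protocols, which the sketch omits:

```python
# Minimal sketch of routes accumulating in a RIB and the selected route per
# prefix being installed in the FIB; structures are assumptions of this sketch.
from collections import defaultdict

rib = defaultdict(list)      # prefix -> [(protocol, metric, next_hop), ...]

def learn(prefix, protocol, metric, next_hop):
    rib[prefix].append((protocol, metric, next_hop))

def build_fib():
    # Select one route per prefix by metric; administrative distance between
    # protocols is deliberately omitted for brevity.
    return {prefix: min(routes, key=lambda r: r[1])
            for prefix, routes in rib.items()}

learn("10.0.0.0/8", "OSPF", 20, "NE570B")
learn("10.0.0.0/8", "IS-IS", 30, "NE570C")
print(build_fib())           # {'10.0.0.0/8': ('OSPF', 20, 'NE570B')}
```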

[0082] Figure 5D illustrates a centralized approach 574 (also known as software defined networking (SDN)) that decouples the system that makes decisions about where traffic is sent from the underlying systems that forward traffic to the selected destination. The illustrated centralized approach 574 has the responsibility for the generation of reachability and forwarding information in a centralized control plane 576 (sometimes referred to as a SDN control module, controller, network controller, OpenFlow controller, SDN controller, control plane node, network virtualization authority, or management control entity), and thus the process of neighbor discovery and topology discovery is centralized. The centralized control plane 576 has a south bound interface 582 with a data plane 580 (sometimes referred to as the infrastructure layer, network forwarding plane, or forwarding plane (which should not be confused with a ND forwarding plane)) that includes the NEs 570A-H (sometimes referred to as switches, forwarding elements, data plane elements, or nodes). The centralized control plane 576 includes a network controller 578, which includes a centralized reachability and forwarding information module 579 that determines the reachability within the network and distributes the forwarding information to the NEs 570A-H of the data plane 580 over the south bound interface 582 (which may use the OpenFlow protocol). Thus, the network intelligence is centralized in the centralized control plane 576 executing on electronic devices that are typically separate from the NDs. In some embodiments, the network controller 578 includes RPL BIER Routing Control Element 581 which is operative to enable stateless multicast routing as described with reference to Figures 1-4.
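The contrast with the distributed approach can be sketched as follows; the classes and the push-style southbound call are illustrative stand-ins and do not model the OpenFlow protocol itself:

```python
# Illustrative sketch of the centralized approach: one controller computes
# forwarding information and pushes it to every NE of the data plane.
class NetworkElement:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                 # programmed by the controller

class CentralizedControlPlane:
    def __init__(self, elements):
        self.elements = elements             # the NEs of the data plane

    def push_forwarding_info(self, match, action):
        # Distribute the same forwarding decision southbound to every NE.
        for ne in self.elements:
            ne.flow_table[match] = action

nes = [NetworkElement(f"NE570{c}") for c in "ABCDEFGH"]
CentralizedControlPlane(nes).push_forwarding_info(
    match=("ipv4_dst", "203.0.113.0/24"), action="output:2")
print(nes[0].flow_table)     # {('ipv4_dst', '203.0.113.0/24'): 'output:2'}
```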

[0083] For example, where the special-purpose network device 502 is used in the data plane 580, each of the control communication and configuration module(s) 532A-R of the ND control plane 524 typically includes a control agent that provides the VNE side of the south bound interface 582. In this case, the ND control plane 524 (the compute resource(s) 512 executing the control communication and configuration module(s) 532A-R) performs its responsibility for participating in controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) through the control agent communicating with the centralized control plane 576 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 579 (it should be understood that in some embodiments of the invention, the control communication and configuration module(s) 532A-R, in addition to communicating with the centralized control plane 576, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach; such embodiments are generally considered to fall under the centralized approach 574, but may also be considered a hybrid approach).

[0084] While the above example uses the special-purpose network device 502, the same centralized approach 574 can be implemented with the general purpose network device 504 (e.g., each of the VNEs 560A-R performs its responsibility for controlling how data (e.g., packets) is to be routed (e.g., the next hop for the data and the outgoing physical NI for that data) by communicating with the centralized control plane 576 to receive the forwarding information (and in some cases, the reachability information) from the centralized reachability and forwarding information module 579; it should be understood that in some embodiments of the invention, the VNEs 560A-R, in addition to communicating with the centralized control plane 576, may also play some role in determining reachability and/or calculating forwarding information - albeit less so than in the case of a distributed approach) and the hybrid network device 506. In fact, the use of SDN techniques can enhance the NFV techniques typically used in the general purpose network device 504 or hybrid network device 506 implementations as NFV is able to support SDN by providing an infrastructure upon which the SDN software can be run, and NFV and SDN both aim to make use of commodity server hardware and physical switches.

[0085] Figure 5D also shows that the centralized control plane 576 has a north bound interface 584 to an application layer 586, in which resides application(s) 588. The centralized control plane 576 has the ability to form virtual networks 592 (sometimes referred to as a logical forwarding plane, network services, or overlay networks (with the NEs 570A-H of the data plane 580 being the underlay network)) for the application(s) 588. Thus, the centralized control plane 576 maintains a global view of all NDs and configured NEs/VNEs, and it maps the virtual networks to the underlying NDs efficiently (including maintaining these mappings as the physical network changes either through hardware (ND, link, or ND component) failure, addition, or removal).

[0086] While Figure 5D shows the distributed approach 572 separate from the centralized approach 574, the effort of network control may be distributed differently or the two combined in certain embodiments of the invention. For example: 1) embodiments may generally use the centralized approach (SDN) 574, but have certain functions delegated to the NEs (e.g., the distributed approach may be used to implement one or more of fault monitoring, performance monitoring, protection switching, and primitives for neighbor and/or topology discovery); or 2) embodiments of the invention may perform neighbor discovery and topology discovery via both the centralized control plane and the distributed protocols, and the results compared to raise exceptions where they do not agree. Such embodiments are generally considered to fall under the centralized approach 574, but may also be considered a hybrid approach.

[0087] While Figure 5D illustrates the simple case where each of the NDs 500A-H implements a single NE 570A-H, it should be understood that the network control approaches described with reference to Figure 5D also work for networks where one or more of the NDs 500A-H implement multiple VNEs (e.g., VNEs 530A-R, VNEs 560A-R, those in the hybrid network device 506). Alternatively or in addition, the network controller 578 may also emulate the implementation of multiple VNEs in a single ND. Specifically, instead of (or in addition to) implementing multiple VNEs in a single ND, the network controller 578 may present the implementation of a VNE/NE in a single ND as multiple VNEs in the virtual networks 592 (all in the same one of the virtual network(s) 592, each in different ones of the virtual network(s) 592, or some combination). For example, the network controller 578 may cause an ND to implement a single VNE (a NE) in the underlay network, and then logically divide up the resources of that NE within the centralized control plane 576 to present different VNEs in the virtual network(s) 592 (where these different VNEs in the overlay networks are sharing the resources of the single VNE/NE implementation on the ND in the underlay network).

[0088] On the other hand, Figures 5E and 5F respectively illustrate exemplary abstractions of NEs and VNEs that the network controller 578 may present as part of different ones of the virtual networks 592. Figure 5E illustrates the simple case of where each of the NDs 500A-H implements a single NE 570A-H (see Figure 5D), but the centralized control plane 576 has abstracted multiple of the NEs in different NDs (the NEs 570A-C and G-H) into (to represent) a single NE 570I in one of the virtual network(s) 592 of Figure 5D, according to some embodiments of the invention. Figure 5E shows that in this virtual network, the NE 570I is coupled to NE 570D and 570F, which are both still coupled to NE 570E.

[0089] Figure 5F illustrates a case where multiple VNEs (VNE 570A.1 and VNE 570H.1) are implemented on different NDs (ND 500A and ND 500H) and are coupled to each other, and where the centralized control plane 576 has abstracted these multiple VNEs such that they appear as a single VNE 570T within one of the virtual networks 592 of Figure 5D, according to some embodiments of the invention. Thus, the abstraction of a NE or VNE can span multiple NDs.

[0090] While some embodiments of the invention implement the centralized control plane 576 as a single entity (e.g., a single instance of software running on a single electronic device), alternative embodiments may spread the functionality across multiple entities for redundancy and/or scalability purposes (e.g., multiple instances of software running on different electronic devices).

[0091] Similar to the network device implementations, the electronic device(s) running the centralized control plane 576, and thus the network controller 578 including the centralized reachability and forwarding information module 579, may be implemented in a variety of ways (e.g., a special purpose device, a general-purpose (e.g., COTS) device, or hybrid device). These electronic device(s) would similarly include compute resource(s), a set of one or more physical NICs, and a non-transitory machine-readable storage medium having stored thereon the centralized control plane software. For instance, Figure 6 illustrates a general purpose control plane device 604 including hardware 640 comprising a set of one or more processor(s) 642 (which are often COTS processors) and network interface controller(s) 644 (NICs; also known as network interface cards) (which include physical NIs 646), as well as non-transitory machine readable storage media 648 having stored therein centralized control plane (CCP) software 650.

[0092] In embodiments that use compute virtualization, the processor(s) 642 typically execute software to instantiate a virtualization layer 654 (e.g., in one embodiment the virtualization layer 654 represents the kernel of an operating system (or a shim executing on a base operating system) that allows for the creation of multiple instances 662A-R called software containers (representing separate user spaces and also called virtualization engines, virtual private servers, or jails) that may each be used to execute a set of one or more applications; in another embodiment the virtualization layer 654 represents a hypervisor (sometimes referred to as a virtual machine monitor (VMM)) or a hypervisor executing on top of a host operating system, and an application is run on top of a guest operating system within an instance 662A-R called a virtual machine (which in some cases may be considered a tightly isolated form of software container) that is run by the hypervisor; in another embodiment, an application is implemented as a unikernel, which can be generated by compiling directly with an application only a limited set of libraries (e.g., from a library operating system (LibOS) including drivers/libraries of OS services) that provide the particular OS services needed by the application, and the unikernel can run directly on hardware 640, directly on a hypervisor represented by virtualization layer 654 (in which case the unikernel is sometimes described as running within a LibOS virtual machine), or in a software container represented by one of instances 662A-R). Again, in embodiments where compute virtualization is used, during operation an instance of the CCP software 650 (illustrated as CCP instance 676A) is executed (e.g., within the instance 662A) on the virtualization layer 654. In embodiments where compute virtualization is not used, the CCP instance 676A is executed, as a unikernel or on top of a host operating system, on the "bare metal" general purpose control plane device 604. The instantiation of the CCP instance 676A, as well as the virtualization layer 654 and instances 662A-R if implemented, are collectively referred to as software instance(s) 652.

[0093] In some embodiments, the CCP instance 676A includes a network controller instance 678. The network controller instance 678 includes a centralized reachability and forwarding information module instance 679 (which is a middleware layer providing the context of the network controller 578 to the operating system and communicating with the various NEs), and a CCP application layer 680 (sometimes referred to as an application layer) over the middleware layer (providing the intelligence required for various network operations such as protocols, network situational awareness, and user interfaces). At a more abstract level, this CCP application layer 680 within the centralized control plane 576 works with virtual network view(s) (logical view(s) of the network) and the middleware layer provides the conversion from the virtual networks to the physical view.

[0094] The centralized control plane 576 transmits relevant messages to the data plane 580 based on CCP application layer 680 calculations and middleware layer mapping for each flow. A flow may be defined as a set of packets whose headers match a given pattern of bits; in this sense, traditional IP forwarding is also flow-based forwarding where the flows are defined by, for example, the destination IP address; however, in other implementations, the given pattern of bits used for a flow definition may include more fields (e.g., 10 or more) in the packet headers. Different NDs/NEs/VNEs of the data plane 580 may receive different messages, and thus different forwarding information. The data plane 580 processes these messages and programs the appropriate flow information and corresponding actions in the forwarding tables (sometimes referred to as flow tables) of the appropriate NE/VNEs, and then the NEs/VNEs map incoming packets to flows represented in the forwarding tables and forward packets based on the matches in the forwarding tables.

[0095] Standards such as OpenFlow define the protocols used for the messages, as well as a model for processing the packets. The model for processing packets includes header parsing, packet classification, and making forwarding decisions. Header parsing describes how to interpret a packet based upon a well-known set of protocols. Some protocol fields are used to build a match structure (or key) that will be used in packet classification (e.g., a first key field could be a source media access control (MAC) address, and a second key field could be a destination MAC address).
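As a toy illustration of such a match structure (the field choice follows the MAC-address example just given; the key layout is an assumption of this sketch):

```python
# Toy construction of a two-field match key (source MAC, destination MAC)
# from a raw Ethernet frame.
def build_match_key(frame: bytes):
    dst_mac = frame[0:6].hex(":")            # Ethernet puts the destination first
    src_mac = frame[6:12].hex(":")
    return (src_mac, dst_mac)                # first key field: source MAC

frame = bytes.fromhex("001122334455" "66778899aabb" "0800")
print(build_match_key(frame))
# ('66:77:88:99:aa:bb', '00:11:22:33:44:55')
```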

[0096] Packet classification involves executing a lookup in memory to classify the packet by determining which entry (also referred to as a forwarding table entry or flow entry) in the forwarding tables best matches the packet based upon the match structure, or key, of the forwarding table entries. It is possible that many flows represented in the forwarding table entries can correspond/match to a packet; in this case the system is typically configured to determine one forwarding table entry from the many according to a defined scheme (e.g., selecting a first forwarding table entry that is matched). Forwarding table entries include both a specific set of match criteria (a set of values or wildcards, or an indication of what portions of a packet should be compared to a particular value/values/wildcards, as defined by the matching capabilities - for specific fields in the packet header, or for some other packet content), and a set of one or more actions for the data plane to take on receiving a matching packet. For example, an action may be to push a header onto the packet, forward the packet using a particular port, flood the packet, or simply drop the packet. Thus, a forwarding table entry for IPv4/IPv6 packets with a particular transmission control protocol (TCP) destination port could contain an action specifying that these packets should be dropped.
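The classification scheme just described can be sketched as follows, with None standing in for a wildcard and first-match-wins as the defined scheme; the entry format and the choice of TCP port 23 are assumptions of this sketch:

```python
# Illustrative classification: entries carry match criteria (None = wildcard)
# and an action; the first matching entry in priority order wins.
def entry_matches(criteria: dict, pkt: dict) -> bool:
    return all(pkt.get(k) == v for k, v in criteria.items() if v is not None)

forwarding_table = [
    # (match criteria, action) in priority order
    ({"ip_proto": 6, "tcp_dst": 23, "in_port": None}, "drop"),   # drop a TCP port
    ({"dst_mac": "00:11:22:33:44:55"}, "output:port3"),
]

def classify(pkt: dict):
    for criteria, action in forwarding_table:
        if entry_matches(criteria, pkt):
            return action
    return None   # a "match-miss": typically punted to the control plane

print(classify({"ip_proto": 6, "tcp_dst": 23}))                     # drop
print(classify({"dst_mac": "00:11:22:33:44:55", "ip_proto": 17}))   # output:port3
print(classify({"ip_proto": 17, "udp_dst": 53}))                    # None
```

The None result corresponds to the match-miss case discussed below, in which the packet is handed to the centralized control plane so a new entry can be programmed.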

[0097] Making forwarding decisions and performing actions occurs, based upon the forwarding table entry identified during packet classification, by executing the set of actions identified in the matched forwarding table entry on the packet.

[0098] However, when an unknown packet (for example, a "missed packet" or a "match-miss" as used in OpenFlow parlance) arrives at the data plane 580, the packet (or a subset of the packet header and content) is typically forwarded to the centralized control plane 576. The centralized control plane 576 will then program forwarding table entries into the data plane 580 to accommodate packets belonging to the flow of the unknown packet. Once a specific forwarding table entry has been programmed into the data plane 580 by the centralized control plane 576, the next packet with matching credentials will match that forwarding table entry and take the set of actions associated with that matched entry.

[0099] A network interface (NI) may be physical or virtual; and in the context of IP, an interface address is an IP address assigned to a NI, be it a physical NI or virtual NI. A virtual NI may be associated with a physical NI, with another virtual interface, or stand on its own (e.g., a loopback interface, a point-to-point protocol interface). A NI (physical or virtual) may be numbered (a NI with an IP address) or unnumbered (a NI without an IP address). A loopback interface (and its loopback address) is a specific type of virtual NI (and IP address) of a NE/VNE (physical or virtual) often used for management purposes; where such an IP address is referred to as the nodal loopback address. The IP address(es) assigned to the NI(s) of a ND are referred to as IP addresses of that ND; at a more granular level, the IP address(es) assigned to NI(s) assigned to a NE/VNE implemented on a ND can be referred to as IP addresses of that NE/VNE.

[00100] Some NDs include functionality for authentication, authorization, and accounting (AAA) protocols (e.g., RADIUS (Remote Authentication Dial-In User Service), Diameter, and/or TACACS+ (Terminal Access Controller Access Control System Plus)). AAA can be provided through a client/server model, where the AAA client is implemented on a ND and the AAA server can be implemented either locally on the ND or on a remote electronic device coupled with the ND. Authentication is the process of identifying and verifying a subscriber. For instance, a subscriber might be identified by a combination of a username and a password or through a unique key. Authorization determines what a subscriber can do after being authenticated, such as gaining access to certain electronic device information resources (e.g., through the use of access control policies). Accounting is recording user activity. By way of a summary example, end user devices may be coupled (e.g., through an access network) through an edge ND (supporting AAA processing) coupled to core NDs coupled to electronic devices implementing servers of service/content providers. AAA processing is performed to identify for a subscriber the subscriber record stored in the AAA server for that subscriber. A subscriber record includes a set of attributes (e.g., subscriber name, password, authentication information, access control information, rate-limiting information, policing information) used during processing of that subscriber's traffic.

[00101] Certain NDs (e.g., certain edge NDs) internally represent end user devices (or sometimes customer premise equipment (CPE) such as a residential gateway (e.g., a router, modem)) using subscriber circuits. A subscriber circuit uniquely identifies within the ND a subscriber session and typically exists for the lifetime of the session. Thus, a ND typically allocates a subscriber circuit when the subscriber connects to that ND, and correspondingly deallocates that subscriber circuit when that subscriber disconnects. Each subscriber session represents a distinguishable flow of packets communicated between the ND and an end user device (or sometimes CPE such as a residential gateway or modem) using a protocol, such as the point-to-point protocol over another protocol (PPPoX) (e.g., where X is Ethernet or Asynchronous Transfer Mode (ATM)), Ethernet, 802.1Q Virtual LAN (VLAN), Internet Protocol, or ATM. A subscriber session can be initiated using a variety of mechanisms (e.g., manual provisioning, dynamic host configuration protocol (DHCP), DHCP/client-less internet protocol service (CLIPS), or Media Access Control (MAC) address tracking). For example, the point-to-point protocol (PPP) is commonly used for digital subscriber line (DSL) services and requires installation of a PPP client that enables the subscriber to enter a username and a password, which in turn may be used to select a subscriber record. When DHCP is used (e.g., for cable modem services), a username typically is not provided; but in such situations other information (e.g., information that includes the MAC address of the hardware in the end user device (or CPE)) is provided. The use of DHCP and CLIPS on the ND captures the MAC addresses and uses these addresses to distinguish subscribers and access their subscriber records.

[00102] Each VNE (e.g., a virtual router, a virtual bridge (which may act as a virtual switch instance in a Virtual Private LAN Service (VPLS))) is typically independently administrable. For example, in the case of multiple virtual routers, each of the virtual routers may share system resources but is separate from the other virtual routers regarding its management domain, AAA (authentication, authorization, and accounting) name space, IP address, and routing database(s). Multiple VNEs may be employed in an edge ND to provide direct network access and/or different classes of services for subscribers of service and/or content providers.

[00103] Within certain NDs, "interfaces" that are independent of physical NIs may be configured as part of the VNEs to provide higher-layer protocol and service information (e.g., Layer 3 addressing). The subscriber records in the AAA server identify, in addition to the other subscriber configuration requirements, to which context (e.g., which of the VNEs/NEs) the corresponding subscribers should be bound within the ND. As used herein, a binding forms an association between a physical entity (e.g., physical NI, channel) or a logical entity (e.g., circuit such as a subscriber circuit or logical circuit (a set of one or more subscriber circuits)) and a context's interface over which network protocols (e.g., routing protocols, bridging protocols) are configured for that context. Subscriber data flows on the physical entity when some higher-layer protocol interface is configured and associated with that physical entity.

[00104] While the flow diagrams in the figures show a particular order of operations performed by certain embodiments of the invention, it should be understood that such order is exemplary (e.g., alternative embodiments may perform the operations in a different order, combine certain operations, overlap certain operations, etc.).

[00105] While the invention has been described in terms of several embodiments, those skilled in the art will recognize that the invention is not limited to the embodiments described and can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.