

Title:
SYSTEM AND METHOD TO RESYNC FLOW RULES
Document Type and Number:
WIPO Patent Application WO/2022/085013
Kind Code:
A1
Abstract:
Apparatuses and methods for resyncing flow rules are provided. In one embodiment, a data path node is configured to obtain at least one flow rule from a software-defined network, SDN, controller and store the at least one flow rule in at least one non-persistent storage device; store the at least one flow rule in at least one persistent local storage device associated with the data path node; and perform at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node at the SDN controller. In one embodiment, an SDN controller is configured to maintain a subset of flow rules for the data path node, the subset representing a difference between the flow rules stored at the SDN controller for the data path node and flow rules stored in at least one persistent local storage device associated with the data path node and the subset being associated with a first hash value.

Inventors:
MADANAGOPAL GANAPATHY RAMAN (IN)
PERUMALLA SIVA KUMAR V V K A (IN)
Application Number:
PCT/IN2020/050898
Publication Date:
April 28, 2022
Filing Date:
October 22, 2020
Assignee:
ERICSSON TELEFON AB L M (SE)
MADANAGOPAL GANAPATHY RAMAN (IN)
International Classes:
H04W28/02; H04W56/00; H04W80/00
Foreign References:
US20160261491A1 (2016-09-08)
US20130060736A1 (2013-03-07)
Attorney, Agent or Firm:
SINGH, Manisha (IN)
CLAIMS:

1. A method implemented in a data path node (12), the method comprising: obtaining (S100) at least one flow rule from a software-defined network, SDN, controller and storing the at least one flow rule in at least one non-persistent storage device; storing (S102) the at least one flow rule in at least one persistent local storage device associated with the data path node (12); and performing (S104) at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node (12) at the SDN controller (14).

2. The method of Claim 1, wherein the at least one synchronization action is performed by a synchronization daemon at the data path node (12) and the at least one synchronization action comprises: obtaining the flow rules stored in the at least one persistent local storage device; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; and sending the generated hash value to the SDN controller (14).

3. The method of Claim 2, wherein the generated hash value indicates to the SDN controller (14) whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14).

4. The method of any one of Claims 2 and 3, wherein: obtaining the at least one flow rule comprises at least one of: receiving, by a virtual switch at the data path node (12), the at least one flow rule from the SDN controller (14) via an in-band communication; and reading, by a daemon at the data path node (12), the at least one flow rule from the virtual switch; and sending the generated hash value to the SDN controller (14) comprises sending the generated hash value to the SDN controller (14) via an out-of-band communication.


5. The method of any one of Claims 1-4, further comprising: determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14), the determining comprising: a daemon periodically polling a virtual switch associated with the data path node (12); and when the flow rules at the virtual switch are a same as the flow rules in the at least one persistent local storage device for a predetermined number of consecutive polling instances, determining that the flow rules in the at least one persistent local storage device are synchronized to the flow rules at the SDN controller (14).

6. The method of any one of Claims 1-4, further comprising: determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14), the determining comprising: receiving, from the SDN controller (14), a message indicating whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14).

7. The method of any one of Claims 1-6, further comprising: when the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14), setting a current state of the flow rules in the at least one persistent local storage device as a synchronized state, the current state being associated with a hash value sent to the SDN controller (14).

8. The method of any one of Claims 1-7, further comprising: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): determining that a flow rules table at the virtual switch is empty; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value;

loading the flow rules from the at least one persistent local storage device to the flow rules table at the virtual switch; and sending the hash value to the SDN controller (14).

9. The method of any one of Claims 1-8, further comprising: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receiving a subset of the flow rules for the data path node (12) stored at the SDN controller (14), the subset corresponding to a difference between the flow rules stored at the SDN controller (14) for the data path node (12) and the flow rules stored in the at least one persistent local storage device associated with the data path node (12); and loading the flow rules from the at least one persistent local storage device associated with the data path node (12) to a flow rules table at the virtual switch.

10. The method of any one of Claims 1-9, wherein the at least one synchronization action is performed periodically.

11. A method implemented in a software-defined network, SDN, controller, the method comprising: maintaining (S106) a subset of flow rules for a data path node (12), the subset representing a difference between the flow rules stored at the SDN controller (14) for the data path node (12) and flow rules stored in at least one persistent local storage device associated with the data path node (12) and the subset being associated with a first hash value; receiving (S108) a second hash value from the data path node (12); comparing (S110) the second hash value to the first hash value associated with the subset; and based on the comparison, determining (S112) whether the flow rules stored at the SDN controller (14) for the data path node (12) are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12).

12. The method of Claim 11, wherein maintaining and comparing comprises maintaining, via a plug-in at the SDN controller (14), the subset of flow rules and comparing, via the plug-in, the second hash value to the first hash value.

13. The method of any one of Claims 11 and 12, wherein the subset represents the difference between the flow rules stored at the SDN controller (14) and the flow rules stored in the at least one persistent local storage device from a previous synchronization state to a current expected synchronization state.

14. The method of any one of Claims 11-13, wherein the subset of flow rules representing the difference are stored in a separate storage location from a storage location at which a full set of the flow rules for the data path node (12) are stored.

15. The method of any one of Claims 11-14, wherein determining comprises: when the first hash value is a same as the second hash value, determining that the flow rules stored at the SDN controller (14) for the data path node (12) are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12); and setting a current state of the flow rules for the data path node (12) as a first synchronized state associated with the first hash value and including any subsequent flow rules and modifications to existing flow rules for the data path node (12) within the subset until a next synchronized state is determined.

16. The method of Claim 15, wherein the next synchronized state is associated with a third hash value, the third hash value being different from the first hash value that is associated with the first synchronized state.

17. The method of any one of Claims 11-14, wherein determining comprises: when the first hash value is different from the second hash value, determining that the flow rules stored at the SDN controller (14) for the data path node (12) are not yet synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12).

18. The method of any one of Claims 11-17, further comprising: sending at least one flow rule to a virtual switch at the data path node (12) via an in-band communication; and wherein receiving the second hash value comprises receiving the second hash value from a daemon at the data path node (12) via an out-of-band communication.

19. The method of any one of Claims 11-18, further comprising: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receiving a fourth hash value from the data path node (12); and when the fourth hash value is different from an expected hash value, sending only the subset of flow rules that are one of sent to the data path node (12) and modified between a most recent synchronization state and a current state.

20. The method of any one of Claims 11-19, further comprising: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receiving a fourth hash value from the data path node (12); and when the fourth hash value is a same as an expected hash value, determining that the flow rules stored at the SDN controller (14) for the data path node (12) and the flow rules stored in the at least one persistent local storage device associated with the data path node (12) are a same and forgoing sending any flow rules to the data path node (12) until at least one of at least one new flow rule and at least one modification to at least one existing flow rule is determined for the data path node (12).

21. A data path node (12) comprising processing circuitry (22), the processing circuitry (22) configured to cause the data path node (12) to: obtain at least one flow rule from a software-defined network, SDN, controller and store the at least one flow rule in at least one non-persistent storage device; store the at least one flow rule in at least one persistent local storage device associated with the data path node (12); and perform at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node (12) at the SDN controller (14).

22. The data path node (12) of Claim 21, wherein the at least one synchronization action is performed by a synchronization daemon at the data path node (12) and the at least one synchronization action comprises: obtaining the flow rules stored in the at least one persistent local storage device; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; and sending the generated hash value to the SDN controller (14).

23. The data path node (12) of Claim 22, wherein the generated hash value indicates to the SDN controller (14) whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14).

24. The data path node (12) of any one of Claims 22 and 23, wherein: the processing circuitry (22) is configured to cause the data path node (12) to obtain the at least one flow rule by being configured to cause the data path node (12) to at least one of: receive, by a virtual switch at the data path node (12), the at least one flow rule from the SDN controller (14) via an in-band communication; and read, by a daemon at the data path node (12), the at least one flow rule from the virtual switch; and wherein sending the generated hash value to the SDN controller (14) comprises sending the generated hash value to the SDN controller (14) via an out-of-band communication.

25. The data path node (12) of any one of Claims 21-24, wherein the processing circuitry (22) is further configured to cause the data path node (12) to: determine whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14) by being configured to cause the data path node (12) to:

a daemon periodically poll a virtual switch associated with the data path node (12); and when the flow rules at the virtual switch are a same as the flow rules in the at least one persistent local storage device for a predetermined number of consecutive polling instances, determine that the flow rules in the at least one persistent local storage device are synchronized to the flow rules at the SDN controller (14).

26. The data path node (12) of any one of Claims 21-24, wherein the processing circuitry (22) is further configured to cause the data path node (12) to: determine whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14) by being configured to cause the data path node (12) to: receive, from the SDN controller (14), a message indicating whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14).

27. The data path node (12) of any one of Claims 21-26, wherein the processing circuitry (22) is further configured to cause the data path node (12) to: when the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node (12) at the SDN controller (14), set a current state of the flow rules in the at least one persistent local storage device as a synchronized state, the current state being associated with a hash value sent to the SDN controller (14).

28. The data path node (12) of any one of Claims 21-27, wherein the processing circuitry (22) is further configured to cause the data path node (12) to: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): determine that a flow rules table at the virtual switch is empty; perform a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; load the flow rules from the at least one persistent local storage device to the flow rules table at the virtual switch; and

send the hash value to the SDN controller (14).

29. The data path node (12) of any one of Claims 21-28, wherein the processing circuitry (22) is further configured to cause the data path node (12) to: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receive a subset of the flow rules for the data path node (12) stored at the SDN controller (14), the subset corresponding to a difference between the flow rules stored at the SDN controller (14) for the data path node (12) and the flow rules stored in the at least one persistent local storage device associated with the data path node (12); and load the flow rules from the at least one persistent local storage device associated with the data path node (12) to a flow rules table at the virtual switch.

30. The data path node (12) of any one of Claims 21-29, wherein the at least one synchronization action is performed periodically.

31. A software-defined network, SDN, controller, comprising processing circuitry (30), the processing circuitry (30) configured to cause the SDN controller (14) to: maintain a subset of flow rules for a data path node (12), the subset representing a difference between the flow rules stored at the SDN controller (14) for the data path node (12) and flow rules stored in at least one persistent local storage device associated with the data path node (12) and the subset being associated with a first hash value; receive a second hash value from the data path node (12); compare the second hash value to the first hash value associated with the subset; and based on the comparison, determine whether the flow rules stored at the SDN controller (14) for the data path node (12) are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12).

32. The SDN controller (14) of Claim 31, wherein the processing circuitry (30) is configured to cause the SDN controller (14) to maintain and compare by being configured to cause the SDN controller (14) to maintain, via a plug-in at the SDN

controller (14), the subset of flow rules and compare, via the plug-in, the second hash value to the first hash value.

33. The SDN controller (14) of any one of Claims 31 and 32, wherein the subset represents the difference between the flow rules stored at the SDN controller (14) and the flow rules stored in the at least one persistent local storage device from a previous synchronization state to a current expected synchronization state.

34. The SDN controller (14) of any one of Claims 31-33, wherein the subset of flow rules representing the difference are stored in a separate storage location from a storage location at which a full set of the flow rules for the data path node (12) are stored.

35. The SDN controller (14) of any one of Claims 31-34, wherein the processing circuitry (30) is configured to cause the SDN controller (14) to determine by being configured to cause the SDN controller (14) to: when the first hash value is a same as the second hash value, determine that the flow rules stored at the SDN controller (14) for the data path node (12) are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12); and set a current state of the flow rules for the data path node (12) as a first synchronized state associated with the first hash value and including any subsequent flow rules and modifications to existing flow rules for the data path node (12) within the subset until a next synchronized state is determined.

36. The SDN controller (14) of Claim 35, wherein the next synchronized state is associated with a third hash value, the third hash value being different from the first hash value that is associated with the first synchronized state.

37. The SDN controller (14) of any one of Claims 31-34, wherein the processing circuitry (30) is configured to cause the SDN controller (14) to determine by being configured to cause the SDN controller (14) to: when the first hash value is different from the second hash value, determine that the flow rules stored at the SDN controller (14) for the data path node (12) are not yet synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node (12).


38. The SDN controller (14) of any one of Claims 31-37, wherein the processing circuitry (30) is configured to cause the SDN controller (14) to: send at least one flow rule to a virtual switch at the data path node (12) via an in-band communication; and receive the second hash value by being configured to cause the SDN controller (14) to receive the second hash value from a daemon at the data path node (12) via an out-of-band communication.

39. The SDN controller (14) of any one of Claims 31-38, wherein the processing circuitry (30) is further configured to cause the SDN controller (14) to: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receive a fourth hash value from the data path node (12); and when the fourth hash value is different from an expected hash value, send only the subset of flow rules that are one of sent to the data path node (12) and modified between a most recent synchronization state and a current state.

40. The SDN controller (14) of any one of Claims 31-39, wherein the processing circuitry (30) is further configured to cause the SDN controller (14) to: as a result of one of a reboot of a virtual switch associated with the data path node (12) and a connection flap associated with the data path node (12): receive a fourth hash value from the data path node (12); and when the fourth hash value is a same as an expected hash value, determine that the flow rules stored at the SDN controller (14) for the data path node (12) and the flow rules stored in the at least one persistent local storage device associated with the data path node (12) are a same and forgo sending any flow rules to the data path node (12) until at least one of at least one new flow rule and at least one modification to at least one existing flow rule is determined for the data path node (12).


Description:
SYSTEM AND METHOD TO RESYNC FLOW RULES

TECHNICAL FIELD

The present disclosure relates to wireless communication and, in particular, to methods and apparatuses for resynchronizing (resyncing) flow rules.

BACKGROUND

Software-defined networks (SDN) may facilitate rapid and open innovation at the network layer by providing a programmable network infrastructure. The OpenFlow switching specification may be considered an innovative standard for enabling dynamic programming of flow control policies in production SDN-based networks.

When using SDN, the data plane and the control plane are separated. The data path nodes (DPN or DP node), e.g., an Open virtual Switch (OvS), may be considered forwarding engines that are programmed by the SDN controller using standard southbound protocols such as OpenFlow.

The flow rules are pushed by the SDN Controller to the corresponding OvS to steer the traffic flow.

For an SDN controller, re-synchronization of flow rules is a process-, memory- and network-intensive activity. In a clustered environment, each SDN controller is responsible for managing a set of DP nodes (e.g., OVS/compute nodes). The SDN controller (also referred to as "controller" herein) is responsible for keeping the OVS up to date with flow rules, so that the data path is not impacted.

In case of a link flap or a reset of the SDN controller connection, the SDN controller publishes all the flow rules to the DP node. Any new flow addition or modification waits until the existing flow rules are published. The time taken for publishing flow rules is proportional to the number of flow rules for the DP node. During this time there could be a data-path impact, since all the flow rules have to be pushed from the SDN controller and installed into the corresponding DP node.

1. OvS Reboot:

As the DP node (e.g., OVS) does not persist flow rules, on reboot the OVS has to relearn the flow rules from the SDN controller. This re-learning process might take a few seconds, depending on multiple parameters: the number of flow rules, processor availability, network bandwidth and memory latency. During the restart/learning process there may be a data-path impact. There are scenarios where the SDN controller pushes a hundred thousand flow rules onto a rebooting OvS. In an open-source-based OVS implementation, every restart of the OVS has a data-path impact. The total time of the data-path impact is "restart time" + "flow-provision time"; it may range from a few seconds to hundreds of seconds.

2. Controller Reboot

In case of an SDN controller reboot, there may not be any change in the OpenFlow rules, but the OpenFlow connections between the OvS and the SDN controller will get reset.

Once the SDN controller is up, the OvS will be able to establish a session with the SDN controller. As the OvS is not aware of the SDN controller restart, the usual state machine will get triggered and the SDN controller publishes all the flow rules required for that DP node/OvS. Even though there is (likely) no flow/state change, all the flow rules get exchanged between the SDN controller and the OvS.

However, existing arrangements are inefficient.

SUMMARY

Some embodiments advantageously provide methods and apparatuses for resyncing flow rules.

According to one aspect of the present disclosure, a method implemented in a data path node is provided. The method includes obtaining at least one flow rule from a software-defined network, SDN, controller and storing the at least one flow rule in at least one non-persistent storage device; storing the at least one flow rule in at least one persistent local storage device associated with the data path node; and performing at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node at the SDN controller.

In some embodiments of this aspect, the at least one synchronization action is performed by a synchronization daemon at the data path node and the at least one synchronization action includes obtaining the flow rules stored in the at least one persistent local storage device; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; and sending the generated hash value to the SDN controller. In some embodiments of this aspect, the generated hash value indicates to the SDN controller whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller.

In some embodiments of this aspect, obtaining the at least one flow rule comprises at least one of: receiving, by a virtual switch at the data path node, the at least one flow rule from the SDN controller via an in-band communication; and reading, by a daemon at the data path node, the at least one flow rule from the virtual switch; and sending the generated hash value to the SDN controller comprises sending the generated hash value to the SDN controller via an out-of-band communication.

In some embodiments of this aspect, the method further includes determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, the determining including: a daemon periodically polling a virtual switch associated with the data path node; and when the flow rules at the virtual switch are a same as the flow rules in the at least one persistent local storage device for a predetermined number of consecutive polling instances, determining that the flow rules in the at least one persistent local storage device are synchronized to the flow rules at the SDN controller.

In some embodiments of this aspect, the method further includes determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, the determining includes receiving, from the SDN controller, a message indicating whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller. In some embodiments of this aspect, the method further includes when the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, setting a current state of the flow rules in the at least one persistent local storage device as a synchronized state, the current state being associated with a hash value sent to the SDN controller.

In some embodiments of this aspect, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: determining that a flow rules table at the virtual switch is empty; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; loading the flow rules from the at least one persistent local storage device to the flow rules table at the virtual switch; and sending the hash value to the SDN controller.

In some embodiments of this aspect, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving a subset of the flow rules for the data path node stored at the SDN controller, the subset corresponding to a difference between the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node; and loading the flow rules from the at least one persistent local storage device associated with the data path node to a flow rules table at the virtual switch. In some embodiments of this aspect, the at least one synchronization action is performed periodically.

According to another aspect of the present disclosure, a method implemented in a software-defined network, SDN, controller is provided. The method includes maintaining a subset of flow rules for a data path node, the subset representing a difference between the flow rules stored at the SDN controller for the data path node and flow rules stored in at least one persistent local storage device associated with the data path node and the subset being associated with a first hash value; receiving a second hash value from the data path node; comparing the second hash value to the first hash value associated with the subset; and based on the comparison, determining whether the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node.

In some embodiments of this aspect, maintaining and comparing comprises maintaining, via a plug-in at the SDN controller, the subset of flow rules and comparing, via the plug-in, the second hash value to the first hash value. In some embodiments of this aspect, the subset represents the difference between the flow rules stored at the SDN controller and the flow rules stored in the at least one persistent local storage device from a previous synchronization state to a current expected synchronization state. In some embodiments of this aspect, the subset of flow rules representing the difference are stored in a separate storage location from a storage location at which a full set of the flow rules for the data path node are stored.

In some embodiments of this aspect, determining includes when the first hash value is a same as the second hash value, determining that the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node; and setting a current state of the flow rules for the data path node as a first synchronized state associated with the first hash value and including any subsequent flow rules and modifications to existing flow rules for the data path node within the subset until a next synchronized state is determined.

In some embodiments of this aspect, the next synchronized state is associated with a third hash value, the third hash value being different from the first hash value that is associated with the first synchronized state. In some embodiments of this aspect, determining includes when the first hash value is different from the second hash value, determining that the flow rules stored at the SDN controller for the data path node are not yet synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node.

In some embodiments of this aspect, the method further includes sending at least one flow rule to a virtual switch at the data path node via an in-band communication; and wherein receiving the second hash value comprises receiving the second hash value from a daemon at the data path node via an out-of-band communication. In some embodiments of this aspect, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving a fourth hash value from the data path node; and when the fourth hash value is different from an expected hash value, sending only the subset of flow rules that are one of sent to the data path node and modified between a most recent synchronization state and a current state.

In some embodiments of this aspect, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving a fourth hash value from the data path node; and when the fourth hash value is a same as an expected hash value, determining that the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node are a same and forgoing sending any flow rules to the data path node until at least one of at least one new flow rule and at least one modification to at least one existing flow rule is determined for the data path node.

In some embodiments of this aspect, a data path node includes processing circuitry. The processing circuitry is configured to cause the data path node to: obtain at least one flow rule from a software-defined network, SDN, controller and store the at least one flow rule in at least one non-persistent storage device; store the at least one flow rule in at least one persistent local storage device associated with the data path node; and perform at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node at the SDN controller.

In some embodiments of this aspect, the at least one synchronization action is performed by a synchronization daemon at the data path node and the at least one synchronization action includes obtaining the flow rules stored in the at least one persistent local storage device; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; and sending the generated hash value to the SDN controller. In some embodiments of this aspect, the generated hash value indicates to the SDN controller whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller.

In some embodiments of this aspect, the processing circuitry is configured to cause the data path node to obtain the at least one flow rule by being configured to cause the data path node to at least one of: receive, by a virtual switch at the data path node, the at least one flow rule from the SDN controller via an in-band communication; and read, by a daemon at the data path node, the at least one flow rule from the virtual switch; and wherein sending the generated hash value to the SDN controller comprises sending the generated hash value to the SDN controller via an out-of-band communication.

In some embodiments of this aspect, the processing circuitry is further configured to cause the data path node to: determine whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller by being configured to cause the data path node to: a daemon periodically poll a virtual switch associated with the data path node; and when the flow rules at the virtual switch are a same as the flow rules in the at least one persistent local storage device for a predetermined number of consecutive polling instances, determine that the flow rules in the at least one persistent local storage device are synchronized to the flow rules at the SDN controller.

In some embodiments of this aspect, the processing circuitry is further configured to cause the data path node to: determine whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller by being configured to cause the data path node to: receive, from the SDN controller, a message indicating whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller. In some embodiments of this aspect, the processing circuitry is further configured to cause the data path node to when the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, set a current state of the flow rules in the at least one persistent local storage device as a synchronized state, the current state being associated with a hash value sent to the SDN controller.

In some embodiments of this aspect, the processing circuitry is further configured to cause the data path node to: as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: determine that a flow rules table at the virtual switch is empty; perform a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; load the flow rules from the at least one persistent local storage device to the flow rules table at the virtual switch; and send the hash value to the SDN controller.

In some embodiments of this aspect, the processing circuitry is further configured to cause the data path node to: as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receive a subset of the flow rules for the data path node stored at the SDN controller, the subset corresponding to a difference between the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node; and load the flow rules from the at least one persistent local storage device associated with the data path node to a flow rules table at the virtual switch. In some embodiments of this aspect, the at least one synchronization action is performed periodically.

According to yet another aspect of the present disclosure, a software-defined network, SDN, controller is provided. The SDN controller includes processing circuitry. The processing circuitry is configured to cause the SDN controller to: maintain a subset of flow rules for a data path node, the subset representing a difference between the flow rules stored at the SDN controller for the data path node and flow rules stored in at least one persistent local storage device associated with the data path node and the subset being associated with a first hash value; receive a second hash value from the data path node; compare the second hash value to the first hash value associated with the subset; and based on the comparison, determine whether the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node.

In some embodiments of this aspect, the processing circuitry is configured to cause the SDN controller to maintain and compare by being configured to cause the SDN controller to maintain, via a plug-in at the SDN controller, the subset of flow rules and compare, via the plug-in, the second hash value to the first hash value. In some embodiments of this aspect, the subset represents the difference between the flow rules stored at the SDN controller and the flow rules stored in the at least one persistent local storage device from a previous synchronization state to a current expected synchronization state. In some embodiments of this aspect, the subset of flow rules representing the difference are stored in a separate storage location from a storage location at which a full set of the flow rules for the data path node are stored.

In some embodiments of this aspect, the processing circuitry is configured to cause the SDN controller to determine by being configured to cause the SDN controller to: when the first hash value is a same as the second hash value, determine that the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node; and set a current state of the flow rules for the data path node as a first synchronized state associated with the first hash value and including any subsequent flow rules and modifications to existing flow rules for the data path node within the subset until a next synchronized state is determined.

In some embodiments of this aspect, the next synchronized state is associated with a third hash value, the third hash value being different from the first hash value that is associated with the first synchronized state. In some embodiments of this aspect, the processing circuitry is configured to cause the SDN controller to determine by being configured to cause the SDN controller to: when the first hash value is different from the second hash value, determine that the flow rules stored at the SDN controller for the data path node are not yet synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node. In some embodiments of this aspect, the processing circuitry is configured to cause the SDN controller to: send at least one flow rule to a virtual switch at the data path node via an in-band communication; and receive the second hash value by being configured to cause the SDN controller to receive the second hash value from a daemon at the data path node via an out-of-band communication.

In some embodiments of this aspect, the processing circuitry is further configured to cause the SDN controller to: as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receive a fourth hash value from the data path node; and when the fourth hash value is different from an expected hash value, send only the subset of flow rules that are one of sent to the data path node and modified between a most recent synchronization state and a current state.

In some embodiments of this aspect, the processing circuitry is further configured to cause the SDN controller to: as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receive a fourth hash value from the data path node; and when the fourth hash value is a same as an expected hash value, determine that the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node are a same and forgo sending any flow rules to the data path node until at least one of at least one new flow rule and at least one modification to at least one existing flow rule is determined for the data path node.

BRIEF DESCRIPTION OF THE DRAWINGS

A more complete understanding of the present embodiments, and the attendant advantages and features thereof, will be more readily understood by reference to the following detailed description when considered in conjunction with the accompanying drawings wherein:

FIG. 1 illustrates an example system architecture according to some embodiments of the present disclosure;

FIG. 2 illustrates yet another example system architecture and example hardware arrangements for devices in the system, according to some embodiments of the present disclosure;

FIG. 3 is a flowchart of an exemplary process in a first network node according to some embodiments of the present disclosure;

FIG. 4 is a flowchart of an exemplary process in a second network node according to some embodiments of the present disclosure;

FIG. 5 illustrates yet another example system architecture according to some embodiments of the present disclosure; and

FIG. 6 is a schematic diagram showing an example illustration of states maintained by a SDN synchronizer (e.g., Sync Plugin), a data path (DP) node (e.g., open virtual switch, also called OvS) and a data path (DP) synchronizer (e.g., Sync Daemon) at a given time instance according to some embodiments of the present disclosure.

DETAILED DESCRIPTION

For a controller, re-synchronization of flow rules is a process-, memory- and network-intensive activity. The SDN controller is responsible for keeping the OVS up to date with flow rules, so that the data path will not be negatively impacted. However, in case of a link flap/reset of the SDN controller connection with the OVS, the SDN controller generally publishes all the flow rules to the OVS. In real-time deployments, the OvS will be provisioned with thousands or even millions of flow rules.

In case of an SDN controller or OvS reboot, all the flow rules are pushed from the SDN controller and resynced. Data traffic is impacted until all the flow rules are synced (syncing may take a few seconds or even minutes). In the current architecture, even for a connection flap between the OvS and the SDN controller, all flow rules are pushed from the SDN controller onto the OvS.

In one solution, when there is a known OVS reboot (triggered by a procedural upgrade/restart), scripts in the cloud environment back up the existing flow rules onto a storage medium. Once the OvS is rebooted, the script reloads all the backed-up flow rules from the storage onto the OvS. This method allows the restarting OvS to be up and forwarding faster than other solutions by reloading the backed-up flow rules from a local storage. It should be noted that some of the flow rules reloaded from the storage could have become obsolete during the OvS reboot. Thus, the OvS is not loaded with 100% of the updated flow rules. Once restoration of the flow rules is complete, the OVS is allowed to connect to the SDN controller to synchronize ("sync") any changes in the flow rules that occurred during the reboot. On connection to the SDN controller, a new bundle (new memory/new flow tables) is created. All the flow rules (old backed-up flow rules and new flow rules) pushed from the SDN controller are stored in the newly created bundle. A bundle swap takes place once the flow rules are provisioned. Even though this method may improve the resyncing time by bringing the OvS online through loading the backed-up flow rules from the local storage, it still requires that all the flow rules be pushed from the SDN controller to the OvS so that the OvS is 100% synced with the latest forwarding flow rules.
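A minimal sketch of such a backup/reload script is given below. It is an assumption-laden illustration, not the script of any particular deployment: the bridge name "br0", the backup path, and the exact set of statistics fields stripped from the ovs-ofctl dump-flows output (stats such as duration and n_packets are not valid input to ovs-ofctl add-flows) are all illustrative and may need adjustment for a given OVS version.

```python
import re
import subprocess

BRIDGE = "br0"                                 # assumed bridge name
BACKUP_FILE = "/var/lib/ovs-backup/flows.txt"  # assumed storage path

# Volatile statistics fields printed by dump-flows; they must be removed
# before the rules can be replayed with add-flows.
STATS = re.compile(r"\s*(duration|n_packets|n_bytes|idle_age|hard_age)=[^,]*,\s*")

def backup_flows() -> None:
    """Dump the current flow table and persist it to local storage."""
    out = subprocess.run(["ovs-ofctl", "dump-flows", BRIDGE],
                         check=True, capture_output=True, text=True).stdout
    with open(BACKUP_FILE, "w") as f:
        for line in out.splitlines():
            line = line.strip()
            if line and "reply" not in line:   # skip the protocol reply header
                f.write(STATS.sub("", line) + "\n")

def restore_flows() -> None:
    """After reboot, reload the backed-up rules into the empty flow table."""
    subprocess.run(["ovs-ofctl", "add-flows", BRIDGE, BACKUP_FILE], check=True)
```

As noted above, rules restored this way may be stale; the switch still has to reconcile with the controller afterwards.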

Currently, this approach is used only for pre-planned restarts; it does not address problems such as an unplanned OvS crash or uncontrolled reboots.

Some embodiments of the present disclosure include one or more of the following:

• A Sync daemon is deployed in all the compute nodes that include an OvS (i.e., DP nodes), and a Sync Plugin is added to the SDN controller.

• As the SDN controller pushes new flow rules to a corresponding OvS in a DP node, the Sync Plugin in the SDN controller calculates a hash (covering the new flow rules) and stores the new hash value for the corresponding OvS.

• On the other end, the Sync daemon on the DP node periodically, or on a trigger, reads the flow rules from the OvS and dumps/persists them onto a local storage.

• Also, the Sync daemon periodically reads the persisted flow rules from the storage, calculates a hash over the persisted flow rules, and sends the hash to the SDN controller (a sketch of such a daemon is given after this list).

• On reception of the hash, the SDN controller compares the received hash with the hash computed by the SDN controller for the particular OvS.

• If both hashes match, both the Sync Plugin and the Sync daemon mark/set this state as a sync point/committed point.

• As new flow rules are pushed (e.g., added/deleted/modified flow rules), new hashes are computed and a new sync point is identified. When all the new flow rules are persisted, the Sync daemon may reach such a new sync point. The new sync point will be used as a restoring point during OvS reboots.

• Until a sync point is reached, the SDN controller records and maintains the delta/difference in the flow rules (e.g., those that are added/deleted/modified) between the previous sync point and the current expected state.

• For each OvS, the SDN controller maintains the sync points and the delta/difference in the new flow rules that are specific to that OvS.

• In case of an OvS reboot, the flow rules persisted in the local storage are loaded into the OvS flow tables before connecting to the SDN controller. Then, a hash value is computed by the Sync daemon at the DP node from the persisted flow rules and forwarded to the Sync Plugin in the SDN controller; this hash value computed by the DP node may be equal to the previous sync point's hash value at the SDN controller.

• During the OvS reboot, if new flow rules were added/deleted/modified, the SDN controller pushes only the difference in the flow rules starting from the previous sync point to the current state.

• If no new flow rules were added/deleted/modified, the SDN controller will not resend the flow rules, since nothing has changed during the reboot.
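The DP-node side of this procedure can be pictured with the following minimal sketch of a Sync daemon, referenced in the list above. Everything concrete in it is an assumption rather than part of the disclosure: the bridge name "br0", the persistence path, the controller address and UDP port, SHA-256 as the hash function, and the polling interval. The daemon canonicalizes the rules (stripping volatile statistics) before hashing, since the Sync Plugin must hash the same representation for the values ever to match.

```python
import hashlib
import re
import socket
import subprocess
import time

BRIDGE = "br0"                                        # assumed bridge name
PERSIST_FILE = "/var/lib/sync-daemon/flow-rules.txt"  # assumed local storage
CONTROLLER_ADDR = ("192.0.2.10", 6655)                # hypothetical Sync Plugin endpoint
POLL_INTERVAL_S = 5                                   # assumed polling period

# Strip volatile statistics so the hash depends only on the rules themselves.
VOLATILE = re.compile(r"\s*(duration|n_packets|n_bytes|idle_age|hard_age)=[^,]*,\s*")

def read_flow_rules() -> list[str]:
    """Read and canonicalize the current flow rules from the local OvS."""
    out = subprocess.run(["ovs-ofctl", "dump-flows", BRIDGE],
                         check=True, capture_output=True, text=True).stdout
    return [VOLATILE.sub("", line.strip())
            for line in out.splitlines()
            if line.strip() and "reply" not in line]  # drop the reply header

def persist(rules: list[str]) -> None:
    """Dump/persist the rules onto persistent local storage."""
    with open(PERSIST_FILE, "w") as f:
        f.write("\n".join(rules))

def hash_persisted() -> str:
    """Hash the persisted rules; this is the value compared at the controller."""
    with open(PERSIST_FILE, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def send_hash(digest: str) -> None:
    """Send the hash to the Sync Plugin over a simple out-of-band UDP channel."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(digest.encode(), CONTROLLER_ADDR)

if __name__ == "__main__":
    while True:
        persist(read_flow_rules())
        send_hash(hash_persisted())
        time.sleep(POLL_INTERVAL_S)
```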

Some embodiments of the present disclosure may provide one or more of the following performance-wise advantages:

• Loading flow rules from local persistent storage is faster than sending a large number of flow rules across the network from the SDN controller.

• During an OvS reboot, the flow tables are quickly populated from the local storage and transitioned into the correct forwarding state. This may be faster when compared to the native implementation of starting with empty flow tables and then exchanging all the flow rules.

• Provides faster resync of flow rules between the OvS and the SDN controller during an OvS reboot. Only the required (difference in) flow rules between the OvS and the SDN controller are pushed to the OvS, which may be faster when compared to a bundle-swap mechanism.

• Reduced load on the SDN controller: minimal flow rules are pushed and processed instead of sending all the flow rules from the SDN controller.

Some embodiments of the present disclosure may provide one or more of the following architecture-wise advantages:

• No changes required in the OvS or in the existing communication between the OvS and the SDN controller, or in OpenFlow.

• The Sync daemon and the Sync Plugin may be considered independent entities, e.g., independent from the main functionality of the OvS and the SDN controller. The main functionality of the OvS and the SDN controller may not be negatively affected even in case of a plugin or daemon failure.

• Simple out-of-band communication (using, e.g., the user datagram protocol (UDP)) can be used to exchange the calculated hash from the Sync daemon on the DP node to the SDN controller.

• No additional memory may be required - unlike the bundle-swap method.

Some embodiments of the present disclosure may provide one or more of the following:

• resynchronizing arrangements in which all the flow rules are not sent again, i.e., only the flow rules which are not synced between the SDN controller and the OvS are sent (see the controller-side sketch after this list);

• when the OvS and the SDN controller are in sync, all the flow rules are persisted in the corresponding DP node;

• mark a current state as a “sync point;”

• in case of an OvS crash or DP node reboot: during the reboot, if the flow rules have been modified:

- OvS loads the persisted rules from the local node storage;

- SDN controller sends only the flow rules that are modified/new; and

• during the reboot, if the flow rules have not been changed:

- OvS loads the persisted rules from the local node storage; and

- no flow rules are pushed from the SDN controller.
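The controller-side bookkeeping behind this behavior can be illustrated with the following sketch of a Sync Plugin that keeps, per OvS, the full rule set, the expected hash, and the delta accumulated since the last sync point. The class shape, method names and SHA-256 hashing are illustrative assumptions; the point is the decision logic: a matching hash commits a new sync point (the delta is cleared and nothing is sent), while a mismatching hash causes only the delta to be resent.

```python
import hashlib

class SyncPlugin:
    """Per-OvS sync-point and delta bookkeeping at the SDN controller (sketch)."""

    def __init__(self) -> None:
        self.full_rules: dict[str, list[str]] = {}  # full rule set per OvS
        self.delta: dict[str, list[str]] = {}       # rules since the last sync point
        self.expected_hash: dict[str, str] = {}     # hash of the current full set

    def push_rule(self, ovs_id: str, rule: str) -> None:
        """Record a rule pushed to an OvS; it stays in the delta until committed."""
        self.full_rules.setdefault(ovs_id, []).append(rule)
        self.delta.setdefault(ovs_id, []).append(rule)
        canonical = "\n".join(self.full_rules[ovs_id]).encode()
        self.expected_hash[ovs_id] = hashlib.sha256(canonical).hexdigest()

    def on_hash_received(self, ovs_id: str, received: str) -> list[str]:
        """Handle an out-of-band hash from the Sync daemon.

        Returns the rules that still need to be sent: an empty list when the
        hashes match (a new sync point is committed), otherwise only the delta
        accumulated since the previous sync point."""
        if received == self.expected_hash.get(ovs_id):
            self.delta[ovs_id] = []  # commit: mark a new sync point
            return []
        return list(self.delta.get(ovs_id, []))
```

In this sketch, deletions and modifications would be recorded in the delta analogously; only additions are shown to keep the example short.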

Before describing in detail exemplary embodiments, it is noted that the embodiments reside primarily in combinations of apparatus components and processing steps related to resyncing flow rules. Accordingly, components have been represented where appropriate by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.

As used herein, relational terms, such as “first” and “second,” “top” and “bottom,” and the like, may be used solely to distinguish one entity or element from another entity or element without necessarily requiring or implying any physical or logical relationship or order between such entities or elements. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the concepts described herein. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes” and/or “including” when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

In embodiments described herein, the joining term, “in communication with” and the like, may be used to indicate electrical or data communication, which may be accomplished by physical contact, induction, electromagnetic radiation, radio signaling, infrared signaling or optical signaling, for example. One having ordinary skill in the art will appreciate that multiple components may interoperate and modifications and variations are possible of achieving the electrical and data communication.

In some embodiments described herein, the term “coupled,” “connected,” and the like, may be used herein to indicate a connection, although not necessarily directly, and may include wired and/or wireless connections.

In some embodiments, the term “node” is used herein and can be any kind of network node, such as, a SDN controller node, a data path node (e.g., a compute node including an open virtual switch), etc.

In some embodiments, the term "in-band" is used and may indicate the existing (standardized, e.g., per SDN standards such as, for example, OpenFlow) method of communication between the SDN controller and the OVS (data path nodes) through which the flow rules are exchanged.

In some embodiments, the term "out-of-band" is used and may indicate any form of communication, other than the in-band communication between the SDN controller and the DP nodes, that is used for exchanging hash values according to the synchronization techniques disclosed herein.

In some embodiments, the term "connection flap" is used and may indicate a condition in which a communications link alternates between up and down states. The connection between two communicating entities may become disconnected for a brief period and then be restored. This could be because of, e.g., disconnectivity in physical links or a disconnect at protocols (such as the transmission control protocol (TCP)) running between the communicating entities.

In some embodiments, the term "persist/persisting/persistence" is used. As known in the art, persistence refers to a characteristic of a state and/or data that outlives the process that created it. This may be generally achieved in practice by storing the state and/or data in computer data storage (rather than, for example, transitory memory like random access memory/RAM). Persisting data may involve a larger processing overhead than merely storing and using data in transitory memory such as RAM. For example, a software process/thread or program persisting data may be required to transfer data to and from the storage device and to provide mappings from native programming-language data structures to the storage device data structures, which may be costly in terms of time and processing resources. Because of the dynamic nature of flow rules (i.e., flow rules typically change rapidly) in an SDN environment, persisting flow rules at the OVS has not been considered by standardization bodies.

A node may include physical components, such as processors, allocated processing elements, or other computing hardware, computer memory, communication interfaces, and other supporting computing hardware. The node may use dedicated physical components, or the node may be allocated use of the physical components of another device, such as a computing device or resources of a datacenter, in which case the node is said to be virtualized. A node may be associated with multiple physical components that may be located either in one location, or may be distributed across multiple locations.

Note that although terminology from one particular system, such as, for example, a software-defined networking system, may be used in this disclosure, the techniques disclosed herein may be applicable and beneficial for other types of systems to e.g., synchronize different elements within such systems. Other systems may also benefit from exploiting the ideas covered within this disclosure.

Note further, that functions described herein as being performed by a SDN controller and/or a DP node may be distributed over a plurality of such nodes. In other words, it is contemplated that the functions of the SDN controller and/or DP node described herein are not limited to performance by a single physical device and, in fact, can be distributed among several physical devices.

Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs. It will be further understood that terms used herein should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.

Referring now to the drawing figures, in which like elements are referred to by like reference numerals, there is shown in FIG. 1 a schematic diagram of a communication system 10, according to an embodiment, constructed in accordance with the principles of the present disclosure. The communication system 10 in FIG. 1 is a non-limiting example and other embodiments of the present disclosure may be implemented by one or more other systems and/or networks. Referring to FIG. 1, the system 10, which may be a software-defined network (SDN) system, includes one or more DP nodes, such as DP node 12a and DP node 12b (collectively, DP nodes 12). The system 10 includes an SDN controller 14. The SDN controller 14 may be configured to program and/or instruct the DP nodes 12 using standard southbound protocol rules such as, for example, OpenFlow. The DP nodes 12 may be considered forwarding engines programmed/instructed according to the SDN controller 14. It should be understood that the system 10 may include numerous instances of the nodes shown in FIG. 1, as well as additional nodes not shown in FIG. 1. In addition, the system 10 may include many more connections/interfaces than those shown in FIG. 1.

The DP node 12 may include a DP synchronizer 16. In some embodiments, the DP synchronizer 16 may be configured to cause the DP node 12 to obtain at least one flow rule from a software-defined network, SDN, controller and store the at least one flow rule in at least one non-persistent storage device; store the at least one flow rule in at least one persistent local storage device associated with the data path node; and perform at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node at the SDN controller.

The SDN controller 14 may include a SDN synchronizer 18. The SDN synchronizer may be configured to cause the SDN controller 14 to maintain a subset of flow rules for the data path node, the subset representing a difference between the flow rules stored at the SDN controller for the data path node and flow rules stored in at least one persistent local storage device associated with the data path node and the subset being associated with a first hash value; receive a second hash value from the data path node; compare the second hash value to the first hash value associated with the subset; and based on the comparison, determine whether the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node.

Example implementations, in accordance with some embodiments, of DP node 12 and SDN controller 14 discussed in the preceding paragraphs will now be described with reference to FIG. 2.

The DP node 12 includes a communication interface 20, processing circuitry 22, and memory 24. The communication interface 20 may be configured to communicate with any of the nodes in the system 10, e.g., to receive flow rules and to send hash values to support synchronization of flow rules according to some embodiments of the present disclosure. In some embodiments, the communication interface 20 may be formed as or may include, for example, one or more radio frequency (RF) transmitters, one or more RF receivers, and/or one or more RF transceivers, and/or may be considered a radio interface. In some embodiments, the communication interface 20 may also include a wired interface.

The processing circuitry 22 may include one or more processors 26 and memory, such as, the memory 24. In particular, in addition to a traditional processor and memory, the processing circuitry 22 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Array) and/or ASICs (Application Specific Integrated Circuitry) adapted to execute instructions. The processor 26 may be configured to access (e.g., write to and/or read from) the memory 24, which may comprise any kind of volatile and/or nonvolatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory).

Thus, the DP node 12 may further include software stored internally in, for example, memory 24, or stored in external memory (e.g., a database) accessible by the DP node 12 via an external connection. The software may be executable by the processing circuitry 22. The processing circuitry 22 may be configured to control any of the methods and/or processes described herein and/or to cause such methods and/or processes to be performed, e.g., by the DP node 12. The memory 24 is configured to store data, programmatic software code and/or other information described herein. In some embodiments, the software may include instructions stored in memory 24 that, when executed by the processor 26, cause the processing circuitry 22 and/or configure the DP node 12 to perform the processes described herein with respect to the DP node 12, such as described with respect to, e.g., FIG. 3 and the other figures.

The SDN controller 14 includes a communication interface 28, processing circuitry 30, and memory 32. The communication interface 28 may be configured to communicate with any of the nodes in the system 10, e.g., to send flow rules and to receive hash values to support synchronization of flow rules according to some embodiments of the present disclosure. In some embodiments, the communication interface 28 may be formed as or may include, for example, one or more radio frequency (RF) transmitters, one or more RF receivers, and/or one or more RF transceivers, and/or may be considered a radio interface. In some embodiments, the communication interface 28 may also include a wired interface.

The processing circuitry 30 may include one or more processors 34 and memory, such as, the memory 32. In particular, in addition to a traditional processor and memory, the processing circuitry 30 may comprise integrated circuitry for processing and/or control, e.g., one or more processors and/or processor cores and/or FPGAs (Field Programmable Gate Array) and/or ASICs (Application Specific Integrated Circuitry) adapted to execute instructions. The processors 34 may be configured to access (e.g., write to and/or read from) the memory 32, which may comprise any kind of volatile and/or nonvolatile memory, e.g., cache and/or buffer memory and/or RAM (Random Access Memory) and/or ROM (Read-Only Memory) and/or optical memory and/or EPROM (Erasable Programmable Read-Only Memory).

Thus, the SDN controller 14 may further include software stored internally in, for example, memory 32, or stored in external memory (e.g., database) accessible by the SDN controller 14 via an external connection. The software may be executable by the processing circuitry 30. The processing circuitry 30 may be configured to control any of the methods and/or processes described herein and/or to cause such methods, and/or processes to be performed, e.g., by the SDN controller 14. The memory 32 is configured to store data, programmatic software code and/or other information described herein. In some embodiments, the software may include instructions stored in memory 32 that, when executed by the processors 34, causes the processing circuitry 30 and/or configures the SDN controller 14 to perform the processes described herein with respect to the SDN controller 14, such as with respect to FIG. 4 and/or the other figures. In FIG. 2, the connection between the DP node 12 and the SDN controller 14 is shown without explicit reference to any intermediary devices or connections. However, it should be understood that intermediary devices and/or connections may exist between these devices, although not explicitly shown.

Although FIG. 2 shows DP synchronizer 16 and SDN synchronizer 18, as being within a respective processor, it is contemplated that these elements may be implemented such that a portion of the elements is stored in a corresponding memory within the processing circuitry. In other words, the elements may be implemented in hardware or in a combination of hardware and software within the processing circuitry.

FIG. 3 is a flowchart of an exemplary process in a DP node 12 according to one or more of the techniques in the present disclosure. One or more Blocks and/or functions and/or methods performed by the DP node 12 may be performed by one or more elements of DP node 12 such as by DP synchronizer 16 in processing circuitry 22, memory 24, processor 26, communication interface 20, etc. according to the example method. The method includes obtaining (Block S100), such as via DP synchronizer 16, processing circuitry 22, memory 24, processor 26 and/or communication interface 20, at least one flow rule from a software-defined network, SDN, controller and storing the at least one flow rule in at least one non-persistent storage device. The method includes storing (Block S102), such as via DP synchronizer 16, processing circuitry 22, memory 24, processor 26 and/or communication interface 20, the at least one flow rule in at least one persistent local storage device associated with the data path node. The method includes performing (Block S104), such as via DP synchronizer 16, processing circuitry 22, memory 24, processor 26 and/or communication interface 20, at least one synchronization action to synchronize flow rules stored in the at least one persistent local storage device with flow rules for the data path node at the SDN controller.

In some embodiments, the at least one synchronization action is performed by a synchronization daemon at the data path node and the at least one synchronization action includes: obtaining the flow rules stored in the at least one persistent local storage device; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; and sending the generated hash value to the SDN controller. In some embodiments, the generated hash value indicates to the SDN controller whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller. In some embodiments, obtaining the at least one flow rule comprises at least one of: receiving, by a virtual switch at the data path node, the at least one flow rule from the SDN controller via an in-band communication; and reading, by a daemon at the data path node, the at least one flow rule from the virtual switch; and sending the generated hash value to the SDN controller comprises sending the generated hash value to the SDN controller via an out-of-band communication.
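By way of non-limiting illustration only, a minimal sketch of such a synchronization action is shown below in Python. The storage location, the JSON encoding of the persisted rules and the use of SHA-256 over a canonically ordered encoding are assumptions of this sketch, not requirements of the embodiments:

    import hashlib
    import json

    def synchronization_action(rules_path):
        # Read the flow rules persisted in the local storage device
        # (rules_path is a hypothetical location of the persisted rules).
        with open(rules_path) as f:
            rules = json.load(f)
        # Hash a canonical, order-independent encoding of the rules so the
        # value can match the controller's hash regardless of arrival order.
        canonical = "".join(sorted(json.dumps(r, sort_keys=True) for r in rules))
        return hashlib.sha256(canonical.encode()).hexdigest()

The generated hash value would then be sent to the SDN controller over the out-of-band channel described below.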

In some embodiments, the method further includes determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, the determining includes: a daemon periodically polling a virtual switch associated with the data path node; and when the flow rules at the virtual switch are the same as the flow rules in the at least one persistent local storage device for a predetermined number of consecutive polling instances, determining that the flow rules in the at least one persistent local storage device are synchronized with the flow rules at the SDN controller.

In some embodiments, the method further includes determining whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, the determining includes: receiving, from the SDN controller, a message indicating whether the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller.

In some embodiments, the method further includes when the flow rules stored in the at least one persistent local storage device are synchronized with the flow rules for the data path node at the SDN controller, setting a current state of the flow rules in the at least one persistent local storage device as a synchronized state, the current state being associated with a hash value sent to the SDN controller. In some embodiments, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: determining that a flow rules table at the virtual switch is empty; performing a hash function on the flow rules stored in the at least one persistent local storage device to generate a hash value; loading the flow rules from the at least one persistent local storage device to the flow rules table at the virtual switch; and sending the hash value to the SDN controller.

In some embodiments, the method further includes as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving a subset of the flow rules for the data path node stored at the SDN controller, the subset corresponding to a difference between the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node; and loading the flow rules from the at least one persistent local storage device associated with the data path node to a flow rules table at the virtual switch. In some embodiments, the at least one synchronization action is performed periodically.

FIG. 4 is a flowchart of an exemplary process in a SDN controller 14 according to one or more of the techniques in the present disclosure. One or more Blocks and/or functions and/or methods performed by the SDN controller 14 may be performed by one or more elements of SDN controller 14 such as by SDN synchronizer 18 in processing circuitry 30, memory 32, processors 34, communication interface 28, etc. according to the example method. The example method includes maintaining (Block S106), such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, a subset of flow rules for the data path node, the subset representing a difference between the flow rules stored at the SDN controller for the data path node and flow rules stored in at least one persistent local storage device associated with the data path node and the subset being associated with a first hash value. The method includes receiving (Block S108), such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, a second hash value from the data path node. The method includes comparing (Block S110), such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, the second hash value to the first hash value associated with the subset. The method includes, based on the comparison, determining (Block S112), such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, whether the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node.

In some embodiments, maintaining and comparing comprises maintaining, via a plug-in at the SDN controller, the subset of flow rules and comparing, via the plug-in, the second hash value to the first hash value. In some embodiments, the subset represents the difference between the flow rules stored at the SDN controller and the flow rules stored in the at least one persistent local storage device from a previous synchronization state to a current expected synchronization state. In some embodiments, the subset of flow rules representing the difference is stored in a separate storage location from a storage location at which a full set of the flow rules for the data path node is stored.

In some embodiments, determining includes: when the first hash value is the same as the second hash value, determining, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, that the flow rules stored at the SDN controller for the data path node are synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node; and setting, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, a current state of the flow rules for the data path node as a first synchronized state associated with the first hash value and including any subsequent flow rules and modifications to existing flow rules for the data path node within the subset until a next synchronized state is determined. In some embodiments, the next synchronized state is associated with a third hash value, the third hash value being different from the first hash value that is associated with the first synchronized state.

In some embodiments, determining includes: when the first hash value is different from the second hash value, determining, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, that the flow rules stored at the SDN controller for the data path node are not yet synchronized with the flow rules stored in the at least one persistent local storage device associated with the data path node. In some embodiments, the method further includes sending, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, at least one flow rule to a virtual switch at the data path node via an in-band communication; and wherein receiving the second hash value comprises receiving, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, the second hash value from a daemon at the data path node via an out-of-band communication.

In some embodiments, the method further includes, as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, a fourth hash value from the data path node; and when the fourth hash value is different from an expected hash value, sending only the subset of flow rules that are one of sent to the data path node and modified between a most recent synchronization state and a current state. In some embodiments, the method further includes, as a result of one of a reboot of a virtual switch associated with the data path node and a connection flap associated with the data path node: receiving, such as via SDN synchronizer 18, processing circuitry 30, memory 32, processors 34 and/or communication interface 28, a fourth hash value from the data path node; and when the fourth hash value is the same as an expected hash value, determining that the flow rules stored at the SDN controller for the data path node and the flow rules stored in the at least one persistent local storage device associated with the data path node are the same and forgoing sending any flow rules to the data path node until at least one of at least one new flow rule and at least one modification to at least one existing flow rule is determined for the data path node.

Having generally described arrangements for resynchronizing flow rules, a more detailed description of some of the embodiments is provided below with reference to FIGS. 5 and 6, which may be implemented by DP node 12 and/or SDN controller 14.

FIG. 5 shows an embodiment of the proposed system and method. The DP synchronizer 16 (which may be implemented as a sync daemon on compute nodes) and the SDN synchronizer 18 (which may be implemented as a sync plugin on the SDN controller 14) may be considered the additional system components proposed in some embodiments of the present disclosure to implement the resync mechanism according to the techniques disclosed herein.

Example System Components

The cloud administrator may deploy the DP synchronizer 16 (e.g., sync daemon) on the DP nodes 12 (i.e., compute nodes including an OvS). In some embodiments, the DP synchronizer 16 (e.g., sync daemon) may be considered an independent software entity operating on, e.g., processing circuitry 22, with access to the OvS present in the same DP node 12 and to the persistent storage medium proposed herein, preferably local to the node.

The SDN synchronizer 18 (e.g., sync plugin) may be implemented as part of the SDN controller 14 as a helper component. Detailed information about the components, which include the SDN synchronizer 18 (e.g., sync plugin) and the DP synchronizer 16 (e.g., sync daemon), is as follows:

• Persistence Module:

The persistence module 36 may be configured to persist the flow rules present in the OvS 38 onto a storage medium. During an OvS reboot, flow rules are loaded directly from the local persistent storage 40, which may avoid the SDN controller 14 pushing thousands of flow rules via the OpenFlow protocol in in-band communication over the network. The persistence module 36 may periodically scan the OvS 38 for new flow rules via a Representational State Transfer (REST) Application Programming Interface (API) or through a command line interface (CLI) and persist them onto the storage medium. The functionality of the persistence module 36 may include persisting the flow rules and reloading the persisted flow rules during a reboot.
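By way of non-limiting illustration only, a minimal sketch of such a persistence loop is shown below, using the existing ovs-ofctl dump-flows CLI command to read the flow tables; the bridge name, file path and polling interval are hypothetical:

    import json
    import subprocess
    import time

    def persist_loop(bridge, rules_path, interval_s=5):
        # Periodically scan the OvS for flow rules via the CLI and persist
        # them onto the local storage medium (a REST API could equally be
        # used, as described above).
        while True:
            out = subprocess.run(
                ["ovs-ofctl", "dump-flows", bridge],
                capture_output=True, text=True, check=True,
            ).stdout
            rules = sorted(line.strip() for line in out.splitlines() if line.strip())
            with open(rules_path, "w") as f:
                json.dump(rules, f)
            time.sleep(interval_s)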

• Hash Calculator Module:

The hash calculator module 42 may be included in the SDN synchronizer 18 (e.g., sync plugin) and in the DP synchronizer 16 (e.g., sync daemon). On the SDN controller 14, the hash calculator module 42 may be configured to calculate the hash for the flow rules, e.g., for each 'FlowMod' message (e.g., add/delete/modify flow rules) that the SDN controller 14 has sent to an OvS 38. The computed hash may be considered unique for every OvS 38, and when new FlowMod messages are sent out for a particular OvS 38, new hashes may be computed for that OvS 38.

On the DP node 12, the hash calculator module 44 may be configured to read the flow rules information persisted in the storage medium and calculate the hash for the persisted flow rules. To obtain matching results, the mathematical hash function, hashing techniques and the input parameters provided to the hash calculator module 42 on the SDN synchronizer 18 (e.g., sync plugin) and to the hash calculator module 44 on the DP synchronizer 16 (e.g., sync daemon) may be the same.
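By way of non-limiting illustration only, one possible order-insensitive technique is to combine per-rule digests with XOR, as sketched below; this construction is an assumption of the sketch, and any strong hash construction meeting the matching requirement above may be used:

    import hashlib

    def order_independent_hash(rule_strings):
        # XOR per-rule SHA-256 digests so the combined value does not
        # depend on the order in which the rules were installed/persisted.
        # The encoded input fields (e.g., flow-id, in-port, out-port) must
        # be identical on the SDN controller and on the DP node.
        acc = 0
        for rule in rule_strings:
            acc ^= int.from_bytes(hashlib.sha256(rule.encode()).digest(), "big")
        return acc.to_bytes(32, "big").hex()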

• Flow Rules Delta Module:

The flow rules delta module 46 maintains the delta or the difference in flow rules that are yet to be persisted and acknowledged by the DP synchronizer 16 (e.g., sync daemon) in the DP node 12. The flow rules delta module 46 may be configured to maintain a separate list for each OvS. This list/data structure may be separate from the DP node’s 12 Openflow datastore maintained by the SDN controller 14. This list/data structure may be specific to/associated with the SDN synchronizer 18 (e.g., sync plugin) in the SDN controller 14.

• Communication Module:

The communication modules 48, 50 may be used for exchanging the hash and other OvS related information between the DP synchronizer 16 (e.g., sync daemon) in the DP node 12 and the SDN synchronizer 18 (e.g., sync plugin) in the SDN controller 14, respectively.

Additional Features of an Example System

o Mathematical Hash Function, Hash Technique and Input Parameters:

Some embodiments may use a hash function. Any existing strong hash-producing mathematical function may be used to compute the hash values described herein. In some embodiments, a rolling hash may not be suitable for the example system since the sequence of transmission of flow rules by the SDN controller 14 and the sequence of arrival of those flow rules at the DP node 12 may be different. For computing the hash value, in some embodiments, all flow rules present in the OvS 38 may be used; in other embodiments, a sliding window of the last 'N' installed flow rules may be used to compute the hash value. In some embodiments, for the input parameters, a flow identifier (flow-id), in-port, out-port and IP-port tuple in the flow rules may be considered as inputs.

o Data Structure (e.g., in flow rules delta module 46) for maintaining a difference in flow rules between the sync point and the current state:

In some embodiments, any well-known method that may be used to maintain and produce the delta/difference between objects may be employed, for example, as in Git, where tree objects are maintained in blob and pack files. Another method may be to maintain an in-memory list/queue/data structure or a file for each OvS 38 and keep appending the new FlowMod rules into the list/queue/data structure until the sync point is reached. Once the sync point is reached, the same list or file could be overwritten with the new flow rules, e.g., from this sync point to the current state.
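By way of non-limiting illustration only, the in-memory variant might be sketched as follows, with one list per OvS identifier (the class and method names are hypothetical):

    from collections import defaultdict

    class FlowRulesDelta:
        # One list per OvS 38 holding the FlowMod rules sent since the
        # last sync point.
        def __init__(self):
            self._delta = defaultdict(list)

        def record(self, ovs_id, flow_mod):
            # Keep appending new FlowMod rules until a sync point is reached.
            self._delta[ovs_id].append(flow_mod)

        def pending(self, ovs_id):
            # The subset of rules to resend to a rebooted OvS that has
            # restored up to the last sync point.
            return list(self._delta[ovs_id])

        def on_sync_point(self, ovs_id):
            # Once the sync point is reached, the list is freed/overwritten.
            self._delta[ovs_id] = []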

o Type of storage medium and the format in which the data is persisted in the storage medium:

The DP node 12 and/or the persistence module 36 may persist the flow rules in, e.g., JavaScript Object Notation (JSON) format or in another format that would be suitable to pass over the OVS REST API or through its CLI.
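By way of non-limiting illustration only, a persisted entry might look as follows; all field names are illustrative, and the sync point hash value is elided:

    {
      "ovs_id": "1",
      "flow_rules": [
        {
          "flow_id": "f-100",
          "in_port": "1",
          "out_port": "2",
          "match": {"nw_dst": "10.0.0.5"},
          "actions": ["output:2"]
        }
      ],
      "sync_point_hash": "..."
    }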

It is to be noted that the format in which the flow rules are persisted may provide all the inputs used to compute the hash by the hash calculator modules 42, 44.

o Communication between the SDN synchronizer 18 (e.g., sync plugin) and the DP synchronizer 16 (e.g., sync daemon):

Any communication protocol carrying the payload of the out-of-band communication, e.g., the details of the OVS identification (ID) and the computed hash value, from the DP synchronizer 16 (e.g., sync daemon) to the SDN synchronizer 18 (e.g., sync plugin) may be used. One such non-limiting example may be using the user datagram protocol (UDP). In another embodiment, TCP may be used as the communication protocol for the out-of-band communications.
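By way of non-limiting illustration only, the daemon-side UDP transmission might be sketched as follows; the plugin address and the JSON payload layout are assumptions of the sketch. The datagram is repeated a few times, consistent with the loss-reduction technique described later in this section:

    import json
    import socket

    def send_hash(ovs_id, hash_value, plugin_addr, repeats=3):
        # Push the OVS identification (ID) and the computed hash value to
        # the sync plugin's listening UDP port; repeat the datagram a few
        # times since UDP provides no delivery guarantee.
        payload = json.dumps({"ovs_id": ovs_id, "hash": hash_value}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        try:
            for _ in range(repeats):
                sock.sendto(payload, plugin_addr)
        finally:
            sock.close()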

In some embodiments, when the SDN controller 14 pushes new flow rules to a corresponding OVS 38, the SDN synchronizer 18 (e.g., sync plugin) in the SDN controller 14 calculates a hash value (including the new flows) and stores the new hash value for the corresponding OVS 38. On the other end, the DP synchronizer 16 (e.g., sync daemon) on the DP node 12 periodically (or, as triggered) reads the flow rules from the OVS 38 and dumps/persists them onto the local storage 40. In some embodiments, the DP synchronizer 16 (e.g., sync daemon) periodically reads the persisted flow rules, calculates the hash value for the persisted flow rules and sends the hash value to the SDN controller 14. In some embodiments, if both the hash values match, both the SDN synchronizer 18 (e.g., sync plugin) and the DP synchronizer 16 (e.g., sync daemon) mark/set this state as a sync point/committed point. As new flow rules are pushed (added/deleted/modified), new hash values are computed, and new sync points are identified. When all the new flows are persisted, the DP synchronizer 16 (e.g., sync daemon) reaches the new sync point. Until a sync point is reached, the SDN controller 14 records and maintains the delta/difference in the flow rules (that are added/deleted/modified) between the previous sync point and the current expected state. In case of an OvS reboot, the OVS 38 loads the persisted flow rules from the DP node 12 local storage 40 until, e.g., the persisted flow rules associated with the last sync point have been loaded to the OVS 38, and sends the last sync point hash value to the SDN controller 14.

In some embodiments, during an OVS 38 reboot, if new flow rules are added/deleted/modified, the SDN controller 14 pushes only the difference in the flow rules starting from the previous sync point to the current state. If there is no change, then no flow rules are pushed by the SDN controller 14 to the synced OVS 38.

FIG. 6 provides an example illustration of states maintained in the SDN synchronizer 18 (e.g., sync plugin), the OVS 38 and the DP synchronizer 16 (e.g., sync daemon) at a given time instance. The example shows the table (at the top left) maintained by the SDN synchronizer 18 (e.g., sync plugin) related to the hash values. The right-hand side is an illustration showing the state of various flow rules for the DP node 12, including flow rules pushed by the SDN controller 14, a subset of flow rules not yet received by the OvS 38 and in transit, and a subset of flow rules yet to be persisted in the persistent storage 40.

FIG. 6 shows an example in which the latest expected hash at the SDN controller 14 is hash 19 (h19) (e.g., since the SDN controller 14 has sent the flow rules that it used to calculate h19); however, the flow rules associated with h19 have not yet been received by the OVS (e.g., the OVS having ID 1 in the DP node 12). Further, FIG. 6 illustrates a subset of flow rules (flow rules associated with h17, h18 and h19) that have not yet been persisted at the persistent storage 40. The DP synchronizer's 16 current calculated hash sent to the SDN controller 14 corresponds to h16, since only the flow rules up to h16 have been used to calculate the current hash value (e.g., even though the OVS 38 has received flow rules via the in-band communication up to h18 from the SDN controller 14, the DP synchronizer 16 has yet to poll the OVS 38 to read the flow rules associated with h17 and h18). The latest hash sync point at the DP synchronizer 16 and SDN synchronizer 18 is for hash h14. In this example, the flow rules delta module 46 may include in its list/data structure the flow rules between its latest sync point associated with h14 and the current state associated with h19. Thus, if there was an unexpected reboot at this time instance, most of the flow rules for the OVS 38 may be obtained from the persistent storage 40 and the SDN synchronizer 18 may only send the flow rules in the list/data structure at the flow rules delta module 46, which may be a subset of the flow rules for the OVS 38 that is much less than all the flow rules for the OVS 38. Such embodiments may advantageously be more efficient in terms of time and network resource usage, as compared to existing arrangements that include sending all of the flow rules to the OVS 38 upon a reboot.

Some embodiments of the present disclosure may be realized using one-way UDP communication or bidirectional UDP communication between the DP synchronizer 16 (e.g., sync daemon) and the SDN synchronizer 18 (e.g., sync plugin) based on a cloud environment.

Uni-directional Communication: using one-way UDP communication - From DP synchronizer 16 (e.g., sync daemon) to SDN synchronizer 18 (e.g., sync plugin):

• This method may use only one UDP listening port on SDN synchronizer 18 (e.g., sync plugin) for the incoming messages from all the DP synchronizers 16 (e.g., sync daemons). It may also not use any additional UDP ports on the DP synchronizers 16 (e.g., sync daemons).

• Communication module 50 in the SDN synchronizer 18 (e.g., sync plugin) may start to listen on a specified UDP Port and act as a receiver for the incoming UDP packet with the computed hash value from the DP synchronizers 16 (e.g., sync daemons) on DP nodes 12.

• The SDN controller 14 uses the existing OpenFlow protocol running over TCP to push FlowMod (flow rules) to the OVS 38.

• When the SDN controller 14 pushes a new set of flow rules to an OvS 38 (e.g., to add/delete/modify flow rules), the hash calculator module 42 in the SDN synchronizer 18 (e.g., sync plugin) may compute a new hash using the new set of flow rules that are sent and then store these (e.g., the new hash value and/or the new set of flow rules) in a field named, for example, 'New Expected Hash' as shown in FIG. 6.

• In some embodiments, a hash mechanism on the SDN synchronizer 18 (e.g., sync plugin) and the DP synchronizer 16 (e.g., sync daemon) may be preferred over using sequence numbers or flow-IDs because the order in which the flow rules are inserted into the OvS flow table may be different from the order/sequence in the FlowMod messages originally sent by the SDN controller 14, unless dedicated barrier messages are sent from the SDN controller 14 to maintain the order for dependent messages.

In some embodiments, the expected order from the SDN controller 14 may be different from the order in which the flow rules are installed. However, once all the flow rules are inserted into the OvS 38 and persisted in the persistent storage 40, the hash computed on the flow rules may be the same irrespective of the order in which they were inserted.

• For each OvS 38, the flow rules delta module 46 in the SDN synchronizer 18 (e.g., sync plugin) maintains a list of the new flow rules that are yet to be persisted and acknowledged by the DP synchronizer 16 (e.g., sync daemon) from the corresponding DP node 12.

• The persistence module 36 in the DP synchronizer 16 (e.g., sync daemon) periodically polls the OVS 38 to search for and/or identify a change in the flow rules in the flow tables at the OVS 38. The new flow rules/changes may then be stored onto the local persistent storage 40. All the flow rules in the OVS 38 may be persisted in the storage 40.

• The hash calculator module 44 in the DP synchronizer 16 (e.g., sync daemon) periodically scans for the new flow rules appended in the storage 40. If new flow rules are found, then the hash calculator module 44 computes the new hash and sends the computed hash value to the communication module 48.

• Communication module 48 on the DP synchronizer 16 (e.g., sync daemon) pushes/sends the details of the OVS 38 (e.g., OVS ID) along with the newly computed hash to the UDP port on which the receiver (e.g., listening UDP Port at communication module 50 at SDN synchronizer 18) is listening.

• As a result of receiving the newly computed hash value, the SDN synchronizer 18 (e.g., sync plugin) updates the field 'Latest Hash Value Received' and also updates the timestamp at which the latest hash value was received by the SDN controller 14 in a field named 'Latest Hash Value Received Timestamp'.

• The SDN synchronizer 18 (e.g., sync plugin) also compares the received hash value with the expected hash value for the specific DP node 12. If the hash matches, then a ‘Sync Point’ has been determined. This hash value is updated under the field ‘Previous Committed Hash’ and will be used as a restoring point during a corresponding OvS 38 reboot.
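By way of non-limiting illustration only, the sync plugin's handling of a received hash might be sketched as follows; the dictionary keys mirror the fields illustrated in FIG. 6 and are assumptions of the sketch:

    import time

    def on_hash_received(hash_table, ovs_id, received_hash):
        # Update the per-OvS bookkeeping fields maintained by the sync plugin.
        entry = hash_table[ovs_id]
        entry["latest_hash_value_received"] = received_hash
        entry["latest_hash_value_received_timestamp"] = time.time()
        if received_hash == entry["new_expected_hash"]:
            # Sync point reached: record the restoring point used during a
            # corresponding OvS reboot, and free the delta list of rules.
            entry["previous_committed_hash"] = received_hash
            entry["delta_flow_rules"] = []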

• Periodically, the DP synchronizer 16 (e.g., sync daemon) may send out the hash value to the SDN controller 14 via the out-of-band communication channel. In some embodiments, on the receiver end (SDN synchronizer 18), the latest timestamp field may be updated with the latest timestamp value. This field may serve as a heartbeat or as an indication to the SDN synchronizer 18 that the DP synchronizer 16 (e.g., sync daemon) is up and running.

• Hash values computed by the SDN synchronizer 18 (e.g., sync plugin) and DP synchronizer 16 (e.g., sync daemon) are asynchronously evolving states; i.e., the current set of flow rules visible to the DP synchronizer 16 (e.g., sync daemon) and the order in which they are persisted do not affect the end result. Eventually, when all the flow rules are persisted, the hash values may match.

• As the new flow rules are pushed, new hashes are computed, and new sync points are identified. When all the new flow rules are persisted, then the DP synchronizer 16 (e.g., sync daemon) may reach the new sync point. The new sync point will be used as a restoring point during OVS 38 reboots.

• It should be noted that, in some embodiments, in order for the DP synchronizer 16 (e.g., sync daemon) to realize that it has reached a sync point along with the SDN synchronizer 18 (e.g., sync plugin), a self-inferred method may be used. For example, during the periodic polling of OVS 38, if the DP synchronizer 16 (e.g., sync daemon) identifies that no new flow rules have been added/modified/deleted from the OVS 38 for ‘N’ consecutive times, and all the existing flow rules are already persisted, then the DP synchronizer 16 (e.g., sync daemon) may determine and/or deduce/infer that it is in sync with the SDN synchronizer 18 (e.g., sync plugin) and may mark/set this point as a sync point. Also, the sync point hash may be persisted along with the flow rules.
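By way of non-limiting illustration only, the self-inferred sync point check might be sketched as follows; the two callables and the value of 'N' are placeholders for the polling and persistence operations described above:

    def check_sync_point(read_ovs_rules, read_persisted_rules, counter, n=3):
        # If the OvS flow rules match the persisted rules for 'N' consecutive
        # polls, infer that the daemon is in sync with the sync plugin.
        counter = counter + 1 if read_ovs_rules() == read_persisted_rules() else 0
        return counter, counter >= n  # (updated counter, sync point reached?)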

• Since UDP is used in some embodiments, the DP synchronizer 16 (e.g., sync daemon) may push the same UDP message a few times so that the chances of the UDP messages getting lost may be reduced.

Bi-directional Communication: using bi-directional communication between DP synchronizer 16 (e.g., sync daemon) and SDN synchronizer 18 (e.g., sync plugin):

• The basic functionality of the overall system may be the same and/or similar to the UDP embodiment described above, except that the DP synchronizer 16 (e.g., sync daemon) on the DP node 12 listens on a predefined UDP port for bidirectional communication.

• The SDN synchronizer 18 (e.g., sync plugin) on the SDN controller 14 sends the 'Expected Hash' to the DP synchronizer 16 (e.g., sync daemon) when the new flow rules are pushed into the OVS 38, since in volatile cloud environments a long silence period with no changes in the OvS flow rules is highly unlikely.

• If appropriate, the DP synchronizer 16 (e.g., sync daemon) may be put to sleep and the 'Expected Hash' message from the SDN synchronizer 18 (e.g., sync plugin) may be used as a wake-up message to persist the new flow rules and to compute the new hash, as opposed to the periodic polling of the OvS 38 by the DP synchronizer 16 described herein.

• Also, the DP synchronizer 16 (e.g., sync daemon) may respond to the SDN synchronizer 18 (e.g., sync plugin) once it reaches ‘Expected Hash’ or the new sync state. This may reduce the number of intermittent hash messages sent from the DP synchronizer 16 (e.g., sync daemon) to the SDN synchronizer 18 (e.g., sync plugin).

OVS Reboot Scenario

1. OVS reboot may occur when a subset of flow rules are yet to be persisted, e.g., a sync point has not yet been reached:

This can happen when, for example, the SDN controller 14 has pushed flow rules but the OVS 38 crashed and rebooted before the flow rules were persisted. This may also occur when, for example, during the OVS 38 reboot, the SDN controller 14 determines to push new flow rules to the OVS 38. In case the OVS 38 reboots and the SDN controller 14 determines to push new flow rules, the flow rules delta module 46 appends the new flow rules to the delta/difference list. The hash calculator module 42 on the SDN synchronizer 18 (e.g., sync plugin) updates the 'New Expected Hash'. When the OVS 38 comes back online, the persistence module 36 polls the OVS 38 for the flow rules (which will be empty).

Thus, the DP synchronizer 16 (e.g., sync daemon) computes the hash of the persisted flow rules and verifies it against the previously stored hash to check the integrity of the persisted flow rules. The DP synchronizer 16 (e.g., sync daemon) may then load the persisted flow rules onto the OVS 38 flow tables until, e.g., the last sync point marked in the persistent storage 40. The DP synchronizer 16 (e.g., sync daemon) may send the hash value to the SDN synchronizer 18 (e.g., sync plugin) in the SDN controller 14. The SDN controller 14 compares the received hash value with the previous sync point/committed hash value. If it matches, only the new flow rules available at the flow rules delta module 46 for the corresponding OVS 38 are pushed to the OVS 38. The DP synchronizer 16 (e.g., sync daemon) may later persist these new flow rules and may eventually reach the expected hash value, i.e., a new sync point will be identified/reached, and the flow rules in the flow rules delta module's 46 list for the OVS 38 are freed (e.g., may be overwritten or deleted). Accordingly, the rebooted OVS 38 may be in a correct forwarding state and in sync with the SDN controller 14 more quickly and/or efficiently, as compared to existing arrangements.
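By way of non-limiting illustration only, the recovery path described above might be sketched as follows; the four callables are placeholders for the operations performed by the persistence module 36, hash calculator module 44 and communication module 48:

    def on_ovs_reboot(load_persisted, verify_hash, load_into_ovs, send_hash):
        # Recovery after an OvS reboot: reload the rules persisted up to the
        # last sync point, verify their integrity against the stored sync
        # point hash, then report the hash so the SDN controller can push
        # only the delta flow rules.
        rules, sync_point_hash = load_persisted()
        if not verify_hash(rules, sync_point_hash):
            return False  # corrupted storage: controller falls back to
                          # pushing all flow rules in-band
        load_into_ovs(rules)
        send_hash(sync_point_hash)
        return True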

2. OVS 38 reboot may occur when all flow rules are already persisted, i.e., a sync point is reached (e.g., during a state in which SDN controller 14 and DP node 12 are synchronized):

During the OVS 38 reboot, there are no changes to the existing flow rules; when the OVS 38 reboots, the persistence module 36 will load all the flow rules from the local persistent storage 40. The hash value computed at the DP synchronizer 16 should be equal to the previous sync point hash at the SDN synchronizer 18. Thus, the SDN controller 14 may not push any flow rules to the OVS 38, since they are already in sync.

Controller Reboot Scenario

1. The SDN controller 14 reboot may occur when, for example, all flow rules are already persisted by the DP synchronizer 16 (e.g., sync daemon) - sync point is reached:

During the SDN controller 14 reboot, the SDN synchronizer 18 (e.g., sync plugin) repopulates the hash table for all the OVSs 38 that it controls and reopens the UDP port for communication with the DP synchronizers 16 (e.g., sync daemons). Meanwhile, the DP synchronizers 16 (e.g., sync daemons) on the DP nodes 12 may be sending the hash values to the SDN synchronizer 18 (e.g., sync plugin). If the hash value matches, the SDN controller 14 will mark/set a sync point for the corresponding OVS 38 and the SDN controller 14 may not push any flow rules to the OVS 38, since they are already in a sync state.

2. The SDN controller 14 reboot may also occur when, for example, a subset of the flow rules is yet to be persisted by the DP synchronizer 16 (e.g., sync daemon) - sync point not reached:

During the SDN controller 14 reboot, the SDN synchronizer 18 (e.g., sync plugin) repopulates the hash table for all the OVSs 38 and reopens the UDP port for communication with the DP synchronizers 16 (e.g., sync daemons). Meanwhile, the DP synchronizers 16 (e.g., sync daemons) on the DP nodes 12 will be sending the hash values to the SDN synchronizer 18 (e.g., sync plugin). If the hash value does not match, the SDN controller 14 will resend all flow rules from the previous sync point to the corresponding OVS 38. The new expected hash will then be updated, and the system may return to normal operation.

3. The SDN controller 14 reboot may also occur when, for example, some flow rules are yet to be persisted - sync point not reached (SDN controller 14 NOT storing the delta flow rules in a node-flow rules-inventory):

The SDN controller 14 not storing the delta flow rules in a node-flow rules-inventory is an unusual scenario (a remote corner case). In case this scenario occurs, the SDN controller 14 may be configured to fall back to the native method of exchanging flow rules with the OVS 38 (e.g., sending all the flow rules for the OVS 38 over the network, e.g., in-band communication).

Some embodiments of the present disclosure may include arrangements used to resync a rebooting OVS 38 to reach a correct forwarding state i.e., using a sync point as a restoring point and pushing only the delta flow rules from the SDN controller 14. In some embodiments, a consensus protocol or other known algorithm may be used to arrive at a sync point.

Fault Tolerance

1. DP synchronizer 16 (e.g., sync daemon) failure and OVS 38 is stable:

Failure during runtime: the DP synchronizer 16 (e.g., sync daemon) may crash/fail during runtime. In some embodiments, a software utility cron, i.e., a cron job, may be set up on the DP node 12 to check the health of the DP synchronizer 16 (e.g., sync daemon). In case of failure, the cron job may restart the daemon, which can then start to function normally since the functionality of the DP synchronizer 16 (e.g., sync daemon) is stateless. In case the cron job is unable to bring the daemon up and running, the SDN controller 14 may come to know that the daemon has failed based on the 'Latest Hash Value Received Timestamp' field in the SDN synchronizer 18 (e.g., sync plugin) for the corresponding OVS 38. In case of failure, the timestamp field would not have been updated for a long time. Based on a timeout mechanism, the SDN synchronizer 18 (e.g., sync plugin) may declare the DP synchronizer 16 (e.g., sync daemon) corresponding to a particular OVS 38 to be 'Inactive'. In case of an OVS 38 reboot, the SDN controller 14 may fall back to the native implementation of pushing all the flow rules to the OVS 38.

Failure during DP node 12 reboot: this is a scenario where the entire DP node 12 reboots and the DP synchronizer 16 (e.g., sync daemon) has failed to boot up. In this case, the SDN controller 14 may come to know about the OVS 38 reboot, since the OVS 38 would have re-established the TCP session with the SDN controller 14. The SDN controller 14 may wait a predefined time for the DP synchronizer 16 (e.g., sync daemon) to load the persisted flow rules onto the OVS 38 and send the hash value. If the SDN controller 14 does not receive a hash value from the OVS 38 during the predefined time, the SDN controller 14 may push all the flow rules to the OVS 38.

Unable to load/store from the persistent storage medium: if the DP synchronizer 16 (e.g., sync daemon) is unable to load from or store to the persistent storage medium, then it can notify the SDN synchronizer 18 (e.g., sync plugin) by setting, e.g., an error value in the UDP message. Corrupted flow rules read from the persistent storage 40 may lead to a different hash (other than the previous sync point/last committed hash). The SDN controller 14 may push all the flow rules to the OVS 38 in such a scenario.

2. SDN synchronizer 18 (e.g., sync plugin) on the SDN controller 14 fails:

This may be considered a single-point failure scenario and may impact all rebooting OVSs 38. The SDN controller 14 may try to reboot the SDN synchronizer 18. If successfully rebooted, the SDN synchronizer 18 may quickly rebuild its previous state by looking into the flow rules of each OVS 38 in the SDN controller 14 and computing the expected hash. If the SDN controller 14 is unable to reboot the SDN synchronizer 18 (e.g., sync plugin), the SDN controller 14 may fall back to the native implementation of sending all the flow rules to the OVS 38.

Some embodiments may include one or more of the following alternative embodiments:

• The DP synchronizer 16 (e.g., sync daemon) may be integrated and implemented into the OVS 38 through OvS plugins.

• The communication between the DP synchronizer 16 (e.g., sync daemon) and SDN synchronizer 18 (e.g., sync plugin) may be implemented over TCP or any custom defined protocols.

• The persistent storage 40 may be any persistent storage medium, such as a solid state drive (SSD), a hard disk drive (HDD) or persistent memory (PMEM), or may be over a storage area network (SAN).

• Instead of only persisting the flow rules, if possible, the entire OVS 38 flow tables may be snapshotted at periodic intervals. Some embodiments of the present disclosure may use the proposed arrangements to identify a difference in the flow rules between the SDN controller 14 and the OVS 38 and send (out-of-band) only the difference in flow rules.

Some embodiments of the present disclosure may include periodically persisting the flow rules and marking/setting sync points. Some embodiments may provide for a reset/reboot to restore from a sync point to a stable state quickly.

Some embodiments of the present disclosure may include one or more of:

• a system and a method to resync the flow rules between the SDN controller 14 and OVS 38 during reboot or an OVS 38 connection flap scenario;

• a DP synchronizer 16 (e.g., sync daemon) on the DP node 12 that persists the new flow rules from the OVS 38 periodically;

• a SDN synchronizer 18 (e.g., sync plugin) on the SDN controller 14 that monitors and/or maintains a list of flow rules that are yet to be persisted and acknowledged from the DP node 12;

• a hash-based mechanism to identify sync points between the SDN controller 14 and OVS 38; and

• during reboot, only the delta/difference in the flow rules between the SDN controller 14 and OVS 38 is pushed, as opposed to sending the entire set of flow rules for the OVS 38 over the network.

As will be appreciated by one of skill in the art, the concepts described herein may be embodied as a method, data processing system, and/or computer program product. Accordingly, the concepts described herein may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects all generally referred to herein as a “circuit” or “module.” Furthermore, the disclosure may take the form of a computer program product on a tangible computer usable storage medium having computer program code embodied in the medium that can be executed by a computer. Any suitable tangible computer readable medium may be utilized including hard disks, CD-ROMs, electronic storage devices, optical storage devices, or magnetic storage devices.

Some embodiments are described herein with reference to flowchart illustrations and/or block diagrams of methods, systems and computer program products. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable memory or storage medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

It is to be understood that the functions/acts noted in the blocks may occur out of the order noted in the operational illustrations. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved. Although some of the diagrams include arrows on communication paths to show a primary direction of communication, it is to be understood that communication may occur in the opposite direction to the depicted arrows.

Computer program code for carrying out operations of the concepts described herein may be written in an object oriented programming language such as Java® or C++. However, the computer program code for carrying out operations of the disclosure may also be written in conventional procedural programming languages, such as the "C" programming language. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Many different embodiments have been disclosed herein, in connection with the above description and the drawings. It will be understood that it would be unduly repetitious and obfuscating to literally describe and illustrate every combination and subcombination of these embodiments. Accordingly, all embodiments can be combined in any way and/or combination, and the present specification, including the drawings, shall be construed to constitute a complete written description of all combinations and subcombinations of the embodiments described herein, and of the manner and process of making and using them, and shall support claims to any such combination or subcombination. It will be appreciated by persons skilled in the art that the embodiments described herein are not limited to what has been particularly shown and described herein above. In addition, unless mention was made above to the contrary, it should be noted that all of the accompanying drawings are not to scale. A variety of modifications and variations are possible in light of the above teachings without departing from the scope of the following claims.