


Title:
METHOD, COMPUTER READABLE STORAGE MEDIUM AND DEVICE FOR PACKET TRANSMISSION IN CONVERGECAST NETWORK
Document Type and Number:
WIPO Patent Application WO/2018/207391
Kind Code:
A1
Abstract:
The present invention relates to a method for determining scheduling for packet transmission in a convergecast network. The method includes receiving a request to perform an operation in the network and initializing a query from the sink-node; the query is transmitted to a plurality of nodes, and in response to the query, a receiver of the sink-node receives information over a period of time. The nodes are sorted by the sink-node based on the received information, using a sorting function that prioritizes each node to obtain a prioritized order of the nodes. An end-to-end delay is calculated for each node from the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node. Packet transmission is scheduled for each of the nodes based on a scheduling function having a set of predetermined scheduling criteria, and packet transmissions are performed in the tree using the scheduled order.

Inventors:
KIM KYEONGJIN (US)
BURGHAL DAOUD (US)
GUO JIANLIN (US)
ORLIK PHILIP (US)
Application Number:
PCT/JP2017/041576
Publication Date:
November 15, 2018
Filing Date:
November 13, 2017
Assignee:
MITSUBISHI ELECTRIC CORP (JP)
International Classes:
H04L45/02; H04L45/122; H04W84/18
Foreign References:
US8005002B2 (2011-08-23)
US20150036570A1 (2015-02-05)
Other References:
"Theoretical aspects of Distributed Computing in Sensor Networks", 1 January 2011, SPRINGER VERLAG, Berlin, Heidelberg, ISBN: 978-3-642-14849-1, article OZLEM DURMAZ INCEL ET AL: "Scheduling Algorithms for Tree-Based Data Collection in Wireless Sensor Networks", pages: 407 - 445, XP055452598, DOI: 10.1007/978-3-642-14849-1_14
Attorney, Agent or Firm:
SOGA, Michiharu et al. (JP)
Claims:
[CLAIMS]

[Claim 1]

A method for determining scheduling for packet transmission in a convergecast network, the convergecast network including a sink-node and a plurality of nodes, wherein during an operation in the convergecast network, each node generates packets to transmit to the sink-node, and a receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, and each node in the tree is associated with a hop count for a route to the sink-node through a specific branch, the method comprising:

receiving a request to perform an operation in the convergecast network;

initializing a query from the sink-node, wherein the query is transmitted to the plurality of nodes through the branches, and in response to the query, the receiver receives information over a period of time, such that the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node;

sorting the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes;

calculating an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node;

scheduling packet transmission for each of the plurality of nodes by the sink-node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and

performing packet transmissions in the tree using the scheduled order, wherein performing the operation using the scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

[Claim 2]

The method of claim 1, wherein the sorting function and the end-to-end delay calculation are performed simultaneously or jointly.

[Claim 3]

The method of claim 1, wherein the sorting function prioritizes each node to obtain a prioritized order of the plurality of nodes based on an amount of total load, an amount of load to be forwarded, an amount of load remaining after forwarding, a prioritization of a node or some combination thereof.

[Claim 4]

The method of claim 1, wherein the sorting function prioritizes the plurality of nodes based on a total load or a remaining load,

wherein the total load includes a number of packets to be transmitted by each node, a number of packets to be forwarded by each node, or both, in a period of time, along with any previously transmitted packets by each node, any previously forwarded packets by each node, or both, in the period of time,

wherein the remaining load is a number of packets each node transmits, forwards, or both, in a remainder of the period of time.

[Claim 5]

The method of claim 4, wherein the sorting function includes a variant implementation function, such that the variant implementation function includes:

dividing the plurality of nodes into a first group including child nodes of the sink node, and a second group that includes other nodes except the child nodes of the sink node; and

sorting the first group and the second group based on a predetermined metric, such that the first group is prioritized over the second group.

[Claim 6]

The method of claim 1, further comprising:

a scheduling matrix

S ∈ {−L, −L+1, ..., L−1, L}^(M×(N+1)),

wherein L is a number of frequency channels, M is a schedule size, N is a number of nodes, + denotes transmit and − denotes receive,

wherein child nodes of the sink-node have a higher priority over other nodes of the plurality of nodes regarding transmitting or receiving, such that each child node of the sink-node is sorted by a remaining load and each other node of the other nodes is sorted by a total load,

wherein the total load includes a number of packets to be transmitted by each node, a number of packets to be forwarded by each node, or both, in a period of time, along with any previously transmitted packets by each node, any previously forwarded packets by each node, or both, in the period of time,

wherein the remaining load is a number of packets each node transmits, forwards, or both, in a remainder of the period of time.

[Claim 7]

The method of claim 1, wherein the operation starts when a packet is generated from any node of the plurality of the nodes.

[Claim 8]

The method of claim 1, further comprising:

obtaining a hop-count to the sink-node for each node in the plurality of the nodes prior to scheduling the packet transmissions.

[Claim 9]

The method of claim 1, wherein each node in the plurality of nodes can be in one of the following states during each timeslot of the operation:

a receiving state, during which the node may receive a packet from a neighboring node;

a transmitting state, during which the node may transmit a packet to a neighboring node; and

an idle state, during which the node neither transmits nor receives.

[Claim 10]

The method of claim 1, wherein the convergecast network is a wireless sensor network.

[Claim 11]

The method of claim 1, wherein the set of predetermined scheduling criteria of the scheduling function includes at least one of a generating time of the packet, a buffer status of a node, a buffer status of a child node and a buffer status of a parent of the node.

[Claim 12]

The method of claim 1, wherein the scheduling function includes:

computing, for each node from the time data, a generating time difference between a holding time of an oldest packet of the node and a holding time of an oldest packet held by the node's associated child nodes, such that if the generating time difference is above a predetermined time threshold, then the oldest packet held by the node's associated child node is scheduled by the sink node for transmission from the child node.

[Claim 13]

The method of claim 1, wherein the scheduling function includes:

computing a systematic threshold considering a buffer status of associated child nodes and a buffer status of associated parent nodes, such that the node receives packets from an associated child node or transmits packets to an associated parent node, wherein, as the buffer of the parent node is filled with more packets, the threshold is decreased, so that a node is scheduled by the sink node to receive packets from its associated child node.

[Claim 14]

The method of claim 13, wherein, as the buffer of the node is filled with more packets, the threshold is increased, so that the node is scheduled by the sink node to transmit the packets inside of its buffer to its associated parent node,

wherein, as buffers of the node and its associated parent node are simultaneously filled with more packets, the sink node prevents the node from performing a transmitting or receiving operation.

[Claim 15]

The method of claim 1, wherein the tree for the convergecast network is obtained by:

broadcasting a message from the sink-node to all one-hop neighbors or child nodes of the sink-node;

propagating the message to each node in the plurality of the nodes through forwarding a received copy of the message with a smaller hop-count; and

obtaining a shortest-hop-count tree to form the tree.

[Claim 16]

A computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for optimizing packet transmission during an operation in a convergecast network, the convergecast network including a sink-node and a plurality of nodes, wherein during an operation in the convergecast network, each node generates packets to transmit to the sink-node, and a receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, and each node in the tree is associated with a hop count for a route to the sink-node through a specific branch, the method comprising:

receiving a request to perform an operation in the convergecast network;

initializing a query from the sink-node, wherein the query is transmitted to the plurality of nodes through the branches, and in response to the query, the receiver receives information over a period of time, such that the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node;

sorting the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes;

calculating an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node;

scheduling packet transmission for each of the plurality of nodes by the sink-node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and

performing packet transmissions in the tree using the scheduled order, wherein the scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network, wherein a timeslot specifies an amount of time.

[Claim 17]

The method of claim 16, wherein the sorting function prioritizes the plurality of nodes based on a total load or a remaining load,

wherein the total load includes a number of packets to be transmitted by each node, a number of packets to be forwarded by each node, or both, in a period of time, along with any previously transmitted packets by each node, any previously forwarded packets by each node, or both, in the period of time,

wherein the remaining load is a number of packets each node transmits, forwards, or both, in a remainder of the period of time.

[Claim 18]

The method of claim 16, wherein the sorting function includes a variant implementation function, such that the variant implementation function includes:

dividing the plurality of nodes into a first group including child nodes of the sink node, and a second group that includes other nodes except the child nodes of the sink node; and

sorting the first group and the second group based on a predetermined metric, such that the first group is prioritized over the second group.

[Claim 19]

The method of claim 16, wherein the scheduling function includes:

computing, for each node from the time data, a generating time difference between a holding time of an oldest packet of the node and holding times of oldest packets held by the node's associated child nodes, such that if the generating time differences are above a predetermined time threshold, then the oldest packets held by the node's associated child nodes are scheduled by the sink node for transmission from the child nodes.

[Claim 20]

A device that optimizes packet transmission during an operation in a convergecast network, the convergecast network including a sink-node and a plurality of nodes, wherein during the operation in the convergecast network, each node generates packets to transmit to the sink-node, and a receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, and each node in the tree is associated with a hop count for a route to the sink-node through a specific branch, the device comprising:

a receiver configured to receive a request to perform the operation in the convergecast network;

a processor of the sink-node configured to:

initialize a query to broadcast to the plurality of nodes through the branches, and in response to the query, the receiver receives information over a period of time, wherein the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node;

sort the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes;

calculate an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node;

schedule packet transmission for each of the plurality of nodes by the sink- node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and

perform packet transmissions in the tree using the scheduled order, wherein the scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

Description:
[DESCRIPTION]

[Title of Invention]

METHOD, COMPUTER READABLE STORAGE MEDIUM AND DEVICE FOR PACKET TRANSMISSION IN CONVERGECAST NETWORK

[Technical Field]

[0001] The present disclosure relates generally to delay-sensitive scheduling in wireless multi-hop convergecast networks, and more particularly to scheduling schemes that jointly minimize schedule size and end-to-end delay in non-uniform release time environments.

[Background Art]

[0002] A wireless sensor network (WSN) is an important component of the Internet of Things (IoT) effort. It provides seamless connectivity and control capabilities to surrounding infrastructure and facilities. It also provides access to real-world data through measuring and monitoring the environment. However, applications with stringent reliability constraints have not fully adopted WSNs. For instance, delay is a major concern in many manufacturing and other sensitive facilities. In such environments, sensor nodes must convey the data to the sink node in the monitoring center as soon as it becomes available. Such data collection can be a periodic or aperiodic process. Once the data is available from all or a subset of the nodes, the control center may choose the appropriate action.

[0003] Thus, it is necessary to develop specialized networks that are capable of accommodating applications with strict delay and reliability requirements. Recently, several standards have been developed to address issues that affect the reliability of WSNs, such as interference, delay, and integration with current and future standards. For instance, WirelessHART and 802.15.4e provide a MAC layer that uses TDMA, where the scheduling time is divided into periodic intervals called time-slots, and frequency hopping (FH) to combat delay and interference. All nodes are assumed to have a synchronized time clock so that they know the starting and finishing times of each slot.

[Summary of Invention]

[Technical Problem]

[0004] US 8,005,002 B2 assumes that the data is available at all the nodes at the beginning of the scheduling. However, this assumption is not satisfied in industrial and other delay-sensitive applications. Furthermore, the prior work of US 8,005,002 B2 focuses on the schedule size. However, minimizing the schedule size under the constraint of non-uniform release times does not minimize the end-to-end delay.

[0005] Therefore, there is a need for scheduling schemes that address the delay-sensitive scheduling problem in wireless multi-hop convergecast networks and many-to-one networks.

[Solution to Problem]

[0006] The present disclosure relates generally to delay-sensitive scheduling in wireless multi-hop many-to-one networks, which are also referred to as convergecast networks, and more particularly to scheduling schemes that jointly minimize schedule size and end-to-end delay in non-uniform release time environments.

[0007] The delay-sensitive scheduling in a wireless multi-hop many-to-one network involves a sink node, which is also known as a base-station or a gateway, and a plurality of data nodes. Each data node collects or generates data. At least one aspect of each data node is to transmit the data to the sink node, possibly through other data nodes, i.e. nodes, with a minimum delay. The time at which data is collected or generated can be different for different nodes depending upon the specific embodiment of the present disclosure.

[0008] The present disclosure is based on several realizations that include minimizing the schedule size and end-to-end delay simultaneously. In particular, we understand that delay is a major concern in many manufacturing and other sensitive facilities. We looked to overcome the problems in a convergecast network to accommodate applications requiring a strict delay and a certain level of reliability. At least one important feature of a scheduling scheme for packet transmission according to the present disclosure is to collect all data with a small number of time slots, i.e., a small schedule size. However, the schedule size may not reflect the true delay.

[0009] For instance, we discovered that in an application where data is available at different nodes at different time instances, the schedule size does not represent the actual delay. For example, a network in an industrial setting can include a sink node and a set of wireless nodes, wherein each node is attached to a process that can be attached to another process. Each process can release data at a given time, such that the release times for each node could be different for each process. In other words, one process can release its data at a later time than another process. According to embodiments of the present disclosure, we focus on cases when release times are not equal, which is also termed non-uniform release times.

[0010] In such cases, where release times for multiple processes are not equal, the difference between the time at which data is released and the time at which it is delivered is a meaningful metric according to the present disclosure. This quantity is referred to as an end-to-end delay, or a relative delay.

[0011] We discovered that minimizing the end-to-end delay for all traffic is a challenging task. Thus, we determined that minimizing a maximum or an average end-to-end delay along with the schedule size can demonstrate good results according to the present disclosure. In addition, a small schedule size is also an important feature of a scheduling scheme in many scenarios according to the present disclosure, and can provide a duration for periodic procedures in an industrial setting. We define the schedule size as the number of time slots needed to collect all data.
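
As an illustrative, non-limiting sketch of these two metrics (written in Python; the per-node dictionaries of release and delivery timeslots, the function names, and the toy values are assumptions and not part of the disclosure), the end-to-end delay and the schedule size can be computed as follows:

def end_to_end_delays(release_time, delivery_time):
    """Per-node end-to-end (relative) delay: delivery time minus release time."""
    return {node: delivery_time[node] - release_time[node] for node in release_time}

def schedule_size(delivery_time, start_slot=0):
    """Schedule size: number of timeslots until the last packet is delivered."""
    return max(delivery_time.values()) - start_slot

# Example with assumed timeslot values
release = {"N1": 2, "N5": 1, "N8": 1}
deliver = {"N1": 4, "N5": 6, "N8": 9}
delays = end_to_end_delays(release, deliver)   # {'N1': 2, 'N5': 5, 'N8': 8}
worst = max(delays.values())                   # maximum end-to-end delay
size = schedule_size(deliver)                  # 9 timeslots in this toy example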

[0012] In these types of networks, we realized through experimentation that the scheduling scheme should jointly minimize the schedule size and end-to-end delay in a non-uniform release time environment. To this end, a joint sorting and scheduling can be developed. The sink node can sort the nodes based on a given criterion, and then can schedule the nodes to achieve at least two objectives: first, a minimum schedule size and, second, a minimum end-to-end delay.

[0013] Another realization in designing scheduling schemes is that the load of nodes is an important metric when prioritizing the transmission of nodes. For example, according to an embodiment of the present disclosure, the sink node sorts the nodes based on a particular metric. In a particular implementation, the nodes can be sorted based on the total load or the remaining load. In another implementation, the nodes can be divided into two groups: (i) child nodes of the sink node and (ii) other nodes except the child nodes of the sink node. Then, the nodes in each of the two groups can be sorted based on a particular metric, wherein the first group is prioritized over the second group.

[0014] The sink node schedules the nodes with high priority to transmit or receive depending on some condition. In a particular implementation, if the node has at least one packet in its buffer, then the node is scheduled to transmit. If the node's buffer is empty, the node schedules one of its child nodes depending on the scheduling rule. In one implementation of this rule according to the present disclosure, a child node i with the oldest packet is scheduled to transmit, unless there is another child node j that has a larger load and at least one packet within d timeslots of the oldest packet in the buffer of node i.
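
The child-selection rule of the preceding paragraph can be sketched as follows. This is a hypothetical Python illustration only; the per-child record layout with "load" and "packets" fields and the tolerance d are assumptions rather than the disclosed data structures:

def select_child_to_transmit(children, d):
    """Pick the child holding the oldest packet, unless a more heavily loaded child
    holds a packet released within d timeslots of that oldest packet."""
    candidates = [c for c in children if c["packets"]]
    if not candidates:
        return None
    # Child i: the one holding the globally oldest packet (smallest release time).
    child_i = min(candidates, key=lambda c: min(c["packets"]))
    oldest = min(child_i["packets"])
    # Child j: larger load and at least one packet within d slots of the oldest packet.
    for child_j in candidates:
        if child_j is child_i:
            continue
        if child_j["load"] > child_i["load"] and any(
            abs(t - oldest) <= d for t in child_j["packets"]
        ):
            return child_j
    return child_i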

[0015] In another implementation according to the present disclosure, if node k has a non-empty buffer occupied by b_k packets, then the node can be scheduled to receive only if one of its child nodes has an older packet according to the scheduling rule F(b_k), where F(b_k) is a function that depends on the buffer size b_k of node k.

[0016] In another implementation according to an embodiment of the present disclosure, the link to node k or from node k can be scheduled on a current time-frequency resource if no other conflicting links are scheduled on the same block. Otherwise, the link to node k or from node k is scheduled on a different frequency channel if available. If not, scheduling the link is deferred. In another implementation, sorting of the nodes can be local, such that each node knows the scheduling orders of itself and its neighbor nodes. In such a case, the scheduling of a node with high priority can be done in a similar fashion as earlier. In other words, a node with high priority decides either to transmit or receive according to the scheduling rule.
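
A minimal sketch of this time-frequency assignment rule is given below (Python; the helper predicates shares_node and interferes, and the dictionary layout of the per-slot assignment, are assumptions, as the disclosure does not specify these interfaces):

def assign_link(link, slot_links, num_channels, shares_node, interferes):
    """slot_links: {channel: [links already scheduled in this timeslot on that channel]}.
    shares_node(a, b): True if a and b touch a common node (blocks every channel).
    interferes(a, b): True if a and b would collide when placed on the same channel."""
    everything = [l for links in slot_links.values() for l in links]
    if any(shares_node(link, other) for other in everything):
        return None  # a node of this link is already busy in the timeslot: defer
    for channel in range(num_channels):
        scheduled = slot_links.setdefault(channel, [])
        if all(not interferes(link, other) for other in scheduled):
            scheduled.append(link)
            return channel
    return None  # all channels conflict: defer the link to a later timeslot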

[0017] Further still, another implementation of the present disclosure may use, as the load of a node, an effective load of that node. The effective load can be the maximum number of packets that is expected to pass through the node in a given time window W.
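
A hypothetical sketch of the effective-load computation, assuming each node knows its descendants and their release times (an assumed representation, not the disclosed one):

def effective_load(node, start, W, descendants, release_times):
    """descendants[node]: nodes whose route to the sink passes through `node`
    (including `node` itself); release_times[n]: list of release slots of node n.
    Counts the packets expected to traverse `node` within W timeslots from `start`."""
    window = range(start, start + W)
    return sum(
        1
        for n in descendants[node]
        for t in release_times[n]
        if t in window
    )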

[0018] Other embodiments of the present disclosure can be structured differently. For example, one embodiment can achieve a small schedule size: prioritizing the nodes with a large load reduces the schedule size. Another embodiment can achieve a small end-to-end delay: choosing a child node with an old packet reduces the end-to-end delay without increasing the schedule size. Further still, another embodiment can be structured to reduce buffer overflow: the scheduling rule F(.) provides a tradeoff between the schedule size and end-to-end delay, and constitutes a method to avoid overflow in the buffer. Further, an embodiment may be structured for a distributed implementation, wherein a central controller is not required since no global knowledge is required. It is noted that the sink node can act as a central controller, so that it controls when each node receives a data packet from its associated child node or transmits a data packet to its associated parent node.

[0019] Examples of some methods and systems of the present disclosure can include determining scheduling for packet transmission in a convergecast network, wherein during an operation each node generates packets to transmit to the sink-node. A receiver of the sink-node can receive information indicative of a topology of the convergecast network that includes a tree having one or more branches. Further, each node in the tree can be associated with a hop count for a route to the sink-node through a specific branch. An initial step can include receiving a request to perform an operation in the convergecast network, then initializing a query from the sink-node. The query can be transmitted to the plurality of nodes through the branches, and in response to the query, the receiver receives information over a period of time. Specifically, the information can include a topology of the convergecast network, properties of each node and time data generated for each node, wherein the time data is indicative of a data generation release time from each node and a data delivery time at the sink-node for each node. The plurality of nodes is then sorted by the sink-node based on the received information, using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes. This is followed by calculating an end-to-end delay for each node based on the received time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node. It is then possible to begin scheduling packet transmission for each of the plurality of nodes by the sink-node, based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes. Performing packet transmissions in the tree can then be accomplished using the scheduled order, which optimizes the operation by reducing a total number of timeslots required to complete the operation, along with reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

[0020] According to an embodiment of the disclosure, a method for determining scheduling for packet transmission in a convergecast network is provided. The convergecast network includes a sink-node and a plurality of nodes. During an operation in the convergecast network, each node generates packets to transmit to the sink-node, and a receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, such that each node in the tree is associated with a hop count for a route to the sink-node through a specific branch. The method includes receiving a request to perform an operation in the convergecast network and initializing a query from the sink-node. The query is transmitted to the plurality of nodes through the branches, and in response to the query, the receiver receives information over a period of time, such that the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node. The method further includes sorting the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes; calculating an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node; scheduling packet transmission for each of the plurality of nodes by the sink-node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and performing packet transmissions in the tree using the scheduled order. Performing the operation using the scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

[0021] According to another embodiment of the disclosure, a computer-readable storage medium stores instructions that when executed by a computer cause the computer to perform a method for optimizing packet transmission during an operation in a convergecast network. The convergecast network includes a sink-node and a plurality of nodes. During an operation in the convergecast network, each node generates packets to transmit to the sink-node. A receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, such that each node in the tree is associated with a hop count for a route to the sink-node through a specific branch. The method includes receiving a request to perform an operation in the convergecast network and initializing a query from the sink-node, wherein the query is transmitted to the plurality of nodes through the branches. In response to the query, the receiver receives information over a period of time, such that the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node. The method further includes sorting the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes; calculating an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node; scheduling packet transmission for each of the plurality of nodes by the sink-node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and performing packet transmissions in the tree using the scheduled order. The scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network, wherein a timeslot specifies an amount of time.

[0022] According to another embodiment of the disclosure, a device optimizes packet transmission during an operation in a convergecast network. The convergecast network includes a sink-node and a plurality of nodes. During the operation in the convergecast network, each node generates packets to transmit to the sink-node. A receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches. Each node in the tree is associated with a hop count for a route to the sink-node through a specific branch. The device includes a receiver configured to receive a request to perform the operation in the convergecast network. A processor of the sink-node is configured to initialize a query to broadcast to the plurality of nodes through the branches. In response to the query, the receiver receives information over a period of time, wherein the information includes a topology of the convergecast network, properties of each node and time data generated for each node indicative of a data generation release time from each node and a data delivery time at the sink-node for each node. The processor is further configured to sort the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes; calculate an end-to-end delay for each node based on the time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node; schedule packet transmission for each of the plurality of nodes by the sink-node based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes; and perform packet transmissions in the tree using the scheduled order, wherein the scheduled order substantially optimizes the operation by reducing a total number of timeslots required to complete the operation and by reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

[0023] Further features and advantages of the present disclosure will become more readily apparent from the following detailed description when taken in conjunction with the accompanying Drawing.

[0024] The present disclosure is further described in the detailed description which follows, in reference to the noted plurality of drawings by way of non-limiting examples of exemplary embodiments of the present disclosure, in which like reference numerals represent similar parts throughout the several views of the drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.

[Brief Description of the Drawings]

[0025]

[Fig.1A]

FIG. 1A is a block diagram illustrating method steps for determining scheduling for packet transmission in a convergecast network, according to some embodiments of the present disclosure.

[Fig.1B]

FIG. 1B is a schematic illustrating a convergecast network in an industrial setting, according to some embodiments of the present disclosure.

[Fig.1C]

FIG. 1C is a block diagram illustrating function blocks of a node from the set of wireless nodes shown in FIG. 1B, according to some embodiments of the present disclosure.

[Fig.2A]

FIG. 2A is a schematic illustrating an example of a network with nodes with different release times and an illustration of a local buffer, according to embodiments of the present disclosure.

[Fig.2B]

FIG. 2B is a schematic illustrating the network and buffer states at a future time instance of the network in FIG. 2A, according to embodiments of the present disclosure.

[Fig.3]

FIG. 3 is a table illustrating time frequency allocation to communication links, according to embodiments of the present disclosure.

[Fig.4A]

FIG. 4A is a schematic illustrating an example of a group of nodes to demonstrate the scheduling rule of the child nodes, according to embodiments of the present disclosure.

[Fig.4B]

FIG. 4B is a block diagram illustrating function blocks of a demonstration of the scheduling threshold of the parent node, according to embodiments of the present disclosure.

[Fig.4C]

FIG. 4C is a block diagram illustrating function blocks of a demonstration of the scheduling threshold of the parent node, according to embodiments of the present disclosure.

[Fig.5]

FIG. 5 is a schematic illustrating a group of nodes to demonstrate an example of the effective load calculation, according to embodiments of the present disclosure.

[Fig.6A]

FIG. 6A is a schematic illustrating an example of a network with nodes with different release times and an example of a packet accumulation problem, according to embodiments of the present disclosure.

[Fig.6B]

FIG. 6B is a schematic illustrating an example of a network with nodes with different release times and an example of a packet accumulation problem, according to embodiments of the present disclosure.

[Fig.6C]

FIG. 6C is a schematic illustrating an example of a network with nodes with different release times and an example of a packet accumulation problem, according to embodiments of the present disclosure.

[Fig.7]

FIG. 7 is a schematic describing transmit or receive regimes when the buffer status of the parent node is also used, according to embodiments of the present disclosure.

[Fig.8A]

FIG. 8A is a block diagram illustrating function blocks describing a centralized implementation of the scheduling scheme, according to embodiments of the present disclosure.

[Fig.8B]

FIG. 8B is a block diagram illustrating function blocks describing a centralized implementation of the scheduling scheme, according to embodiments of the present disclosure.

[Fig.9A]

FIG. 9A is a block diagram illustrating function blocks describing the scheduling of the nodes with the buffer, according to embodiments of the present disclosure.

[Fig.9B]

FIG. 9B is a block diagram illustrating function blocks describing the scheduling of the nodes with the buffer, according to embodiments of the present disclosure.

[Fig.10]

FIG. 10 is a block diagram illustrating the method of FIG. 1A, which can be implemented using an alternate computer or processor, according to embodiments of the present disclosure.

[0026] While the above-identified drawings set forth presently disclosed embodiments, other embodiments are also contemplated, as noted in the discussion. This disclosure presents illustrative embodiments by way of representation and not limitation. Numerous other modifications and embodiments can be devised by those skilled in the art which fall within the scope and spirit of the principles of the presently disclosed embodiments.

[Description of Embodiments]

[0027] The following description provides exemplary embodiments only, and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Contemplated are various changes that may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.

[0028] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, it will be understood by one of ordinary skill in the art that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.

[0029] Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be re-arranged. A process may be terminated when its operations are completed, but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function's termination can correspond to a return of the function to the calling function or the main function.

[0030] Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine readable medium. A processor(s) may perform the necessary tasks.

[0031] Overview

The present disclosure relates to delay-sensitive scheduling in wireless multi-hop many-to-one networks, also referred to as convergecast networks, and more particularly to scheduling schemes that jointly minimize schedule size and end-to-end delay in non-uniform release time environments.

[0032] The delay-sensitive scheduling in a wireless multi-hop many-to-one network involves a sink node, i.e. a base-station or a gateway, and a plurality of data nodes. Each data node collects or generates data, such that each node is to transmit the data to the sink node, possibly through other nodes, with a minimum delay. The time at which data is collected or generated can be different for different nodes depending upon the specific embodiment of the present disclosure.

[0033] The present disclosure is based on several realizations that include minimizing the schedule size and end-to-end delay simultaneously. In particular, we understand that delay is a major concern in many manufacturing and other sensitive facilities. We looked to overcome the problems in a convergecast network to accommodate applications requiring a strict delay and a certain level of reliability. At least one important feature of a scheduling scheme for packet transmission according to the present disclosure is to collect all data with a small number of time slots, i.e., a small schedule size.

[0034] However, the schedule size may not reflect the true delay. For instance, we discovered that in an application where data is available at different nodes at different time instances, the schedule size does not represent the actual delay. For example, a network in an industrial setting can include a sink node and a set of wireless nodes, wherein each node is attached to a process that can be attached to another process. Each process can release data at a given time, such that the release times for each node could be different for each process. In other words, one process can release its data at a later time than another process. According to embodiments of the present disclosure, we focus on cases when release times are not equal, which is also termed non-uniform release times.

[0035] In such cases, where release times for multiple processes are not equal, the difference between the time at which data is released and the time at which it is delivered is a meaningful metric according to the present disclosure. This quantity is referred to as an end-to-end delay, or a relative delay. We discovered that minimizing the end-to-end delay for all traffic is a challenging task. Thus, we determined that minimizing a maximum or an average end-to-end delay along with the schedule size can demonstrate good results according to the present disclosure. In addition, a small schedule size is also an important feature of a scheduling scheme in many scenarios according to the present disclosure, and can provide a duration for periodic procedures in an industrial setting.

[0036] In these types of networks, we realized through experimentation that the scheduling scheme should jointly minimize the schedule size and end-to-end delay in a non-uniform release time environment. To this end, a joint sorting and scheduling can be developed. The sink node can sort the nodes based on a given criterion, and then can schedule the nodes to achieve at least two objectives: first, a minimum schedule size and, second, a minimum end-to-end delay.

[0037] Another realization in designing scheduling schemes is that the load of nodes is an important metric when prioritizing the transmission of nodes. For example, according to an embodiment of the present disclosure, the sink node sorts the nodes based on a particular metric depending upon the prioritization condition. One particular implementation can include the nodes being sorted based on the total load or the remaining load. In another implementation, the nodes can be divided into two groups: (i) child nodes of the sink node and (ii) other nodes except the child nodes of the sink node. Then, the nodes in each of the two groups can be sorted based on the particular metric, wherein the first group is prioritized over the second group.

[0038] The sink node schedules the nodes with high priority to transmit or receive depending on some condition. For example, if the node has at least one packet in its buffer, then the node is scheduled to transmit. If the node's buffer is empty, the node schedules one of its child nodes depending on the scheduling rule. In one implementation of this rule according to the present disclosure, a child node i with the oldest packet is scheduled to transmit, unless there is another child node j that has a larger load and at least one packet within d timeslots of the oldest packet in the buffer of node i.

[0039] It is also possible that another implementation can include that, if node k has a non-empty buffer occupied by b_k packets, then the node can be scheduled to receive only if one of its child nodes has an older packet according to the scheduling rule F(b_k), where F(b_k) is a function that depends on the buffer size b_k of node k.

[0040] We also determined another implementation in which the link to node k or from node k can be scheduled on a current time-frequency resource if no other conflicting links are scheduled on the same block. Otherwise, the link to node k or from node k is scheduled on a different frequency channel if available. If not, scheduling the link is deferred. In another implementation, sorting of the nodes can be local, such that each node knows the scheduling orders of itself and its neighbor nodes. In such a case, the scheduling of a node with high priority can be done in a similar fashion as earlier. In other words, a node with high priority decides either to transmit or receive according to the scheduling rule. Further still, another implementation of the present disclosure may use, as the load of a node, an effective load of that node. The effective load can be the maximum number of packets that is expected to pass through the node in a given time window W.

[0041] Other embodiments of the present disclosure can be structured differently. For example, one embodiment can achieve a small schedule size: prioritizing the nodes with a large load reduces the schedule size. Another embodiment can achieve a small end-to-end delay: choosing a child node with an old packet reduces the end-to-end delay without increasing the schedule size. Further still, another embodiment can be structured to reduce buffer overflow: the scheduling rule F(.) provides a tradeoff between the schedule size and end-to-end delay, and constitutes a method to avoid overflow in the buffer. Further, an embodiment may be structured for a distributed implementation, wherein a central controller is not required since no global knowledge is required. Further, the sink node can act as a central controller, so that it controls when each node receives a data packet from its associated child node or transmits a data packet to its associated parent node.

[0042] As noted above, the convergecast network includes a sink-node and a plurality of nodes. During an operation in the convergecast network, each node generates packets to transmit to the sink-node, and a receiver of the sink-node receives information indicative of a topology of the convergecast network that includes a tree having one or more branches, such that each node in the tree is associated with a hop count for a route to the sink-node through a specific branch. Further, the hop-count to the sink-node for each node in the plurality of the nodes is obtained prior to scheduling the packet transmissions. For example, the tree for the convergecast network can be obtained by broadcasting a message from the sink-node to all one-hop neighbors or child nodes of the sink-node, propagating the message to each node in the plurality of the nodes through forwarding a received copy of the message with a smaller hop-count, and obtaining a shortest-hop-count tree to form the tree.
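
This tree construction can be sketched as a breadth-first flood from the sink (an illustrative Python sketch; the neighbor-table representation and the function names are assumptions, not the disclosed message format):

from collections import deque

def build_shortest_hop_tree(neighbors, sink):
    """neighbors: {node: iterable of one-hop neighbors}. Returns (parent, hop_count)."""
    hop_count = {sink: 0}
    parent = {sink: None}
    queue = deque([sink])
    while queue:
        node = queue.popleft()
        for nbr in neighbors[node]:
            # A copy of the message carrying a smaller hop count is forwarded; in a
            # breadth-first flood the first copy received already has the smallest count.
            if nbr not in hop_count:
                hop_count[nbr] = hop_count[node] + 1
                parent[nbr] = node
                queue.append(nbr)
    return parent, hop_count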

[0043] FIG. 1A is a block diagram of method steps for determining scheduling for packet transmission in a convergecast network, according to some embodiments of the present disclosure. The first step 101 of method 100 includes receiving a request to perform an operation in the convergecast network by a processor 104 of a sink-node. Note that the operation begins when a packet is generated from any node of the plurality of the nodes. Further, each node in the plurality of nodes can be in one of the following states during each timeslot of the operation: a receiving state, during which the node may receive a packet from a neighboring node; a transmitting state, during which the node may transmit a packet to a neighboring node; and an idle state, during which the node neither transmits nor receives.

[0044] Then, step 103 includes initializing a query from the sink-node, wherein the query can be transmitted to the plurality of nodes through the branches. In step 105, in response to the query, the receiver receives information over a period of time. Specifically, the information can include a topology of the convergecast network, properties of each node and time data generated for each node, wherein the time data is indicative of a data generation release time from each node and a data delivery time at the sink-node for each node.

[0045] Step 107 includes sorting the plurality of nodes by the sink-node based on the received information and using a sorting function that prioritizes each node to obtain a prioritized order of the plurality of nodes. Further, the sorting function can prioritize each node to obtain a prioritized order of the plurality of nodes based on an amount of total load, an amount of load to be forwarded, an amount of load remaining after forwarding, a prioritization of a node, or some combination thereof.

[0046] Further still, the sorting function can also prioritize the plurality of nodes based on a total load or a remaining load, wherein the total load includes a number of packets to be transmitted by each node, a number of packets to be forwarded by each node, or both, in a period of time, along with any previously transmitted packets by each node, any previously forwarded packets by each node, or both, in the period of time. The remaining load is the number of packets each node transmits, forwards, or both, in a remainder of the period of time. Further, it is possible for the sorting function to include a variant implementation function, such that the variant implementation function includes: dividing the plurality of nodes into a first group including child nodes of the sink node, and a second group that includes other nodes except the child nodes of the sink node; and sorting the first group and the second group based on a predetermined metric, such that the first group is prioritized over the second group.
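
An illustrative, non-limiting sketch of the sorting function and its two-group variant follows (Python; the node-record fields are assumptions, and the choice of remaining load for the first group and total load for the second group is one possible choice of the predetermined metric):

def sort_nodes(nodes, metric="total_load"):
    """nodes: list of dicts like {"id": "N2", "total_load": 6, "remaining_load": 5,
    "is_sink_child": True}. Higher load means higher priority."""
    return sorted(nodes, key=lambda n: n[metric], reverse=True)

def sort_nodes_two_groups(nodes):
    """Variant implementation: children of the sink first (by remaining load),
    then all other nodes (by total load)."""
    first = [n for n in nodes if n["is_sink_child"]]
    second = [n for n in nodes if not n["is_sink_child"]]
    return (sorted(first, key=lambda n: n["remaining_load"], reverse=True)
            + sorted(second, key=lambda n: n["total_load"], reverse=True))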

[0047] Step 109 includes calculating an end-to-end delay for each node based on the received time data by determining a difference between the data generation release time for each node and the data delivery time at the sink-node for each node. It is noted that the sorting function and the end-to-end delay can be simultaneously or jointly performed.

[0048] Step 111 is scheduling packet transmission for each of the plurality of nodes by the sink-node, based on a scheduling function having a set of predetermined scheduling criteria, so as to obtain a scheduled order for the plurality of nodes. Further, the set of predetermined scheduling criteria of the scheduling function includes at least one of a generating time of the packet, a buffer status of a node, a buffer status of a child node and a buffer status of a parent of the node.

[0049] Further still, the scheduling function can include computing, for each node from the time data, a generating time difference between a holding time of an oldest packet of the node and a holding time of an oldest packet held by the node's associated child nodes, such that if the generating time difference is above a predetermined time threshold, then the oldest packet held by the node's associated child node is scheduled by the sink node for transmission from the child node.
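
A hypothetical sketch of this generating-time-difference test (the representation of holding times in timeslots is an assumption made for illustration):

def should_child_transmit(node_oldest_holding, child_oldest_holding, threshold):
    """Holding times in timeslots (current slot minus release slot).
    If the child's oldest packet has been held sufficiently longer than the
    node's own oldest packet, the sink schedules the child to transmit first."""
    return (child_oldest_holding - node_oldest_holding) > threshold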

[0050] It is possible that the scheduling function can include computing a systematic threshold considering the buffer status of associated child nodes and the buffer status of associated parent nodes, such that a node receives packets from an associated child node or transmits packets to an associated parent node. As the buffer of the parent node is filled with more packets, the threshold is decreased, so that a node is scheduled by the sink node to receive packets from its associated child node.

[0051] In other words, as the buffer of the node is filled with more packets, the threshold is increased, so that the node is scheduled by the sink node to transmit the packets inside of its buffer to its associated parent node. As buffers of the node and its associated parent node are simultaneously filled with more packets, the sink node prevents the node from performing a transmitting or receiving operation.
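
A minimal sketch of such a buffer-driven transmit/receive decision is shown below; the exact threshold function F(.) is not specified in this text, so the linear threshold and the 0.8 fill fraction are assumptions for illustration only:

def decide_action(own_buffer, parent_buffer, buffer_capacity):
    """Return 'transmit', 'receive', or 'idle' for a node based on buffer fill."""
    high = 0.8 * buffer_capacity
    if own_buffer >= high and parent_buffer >= high:
        return "idle"        # both buffers nearly full: neither transmit nor receive
    # Threshold grows with the node's own fill and shrinks with the parent's fill.
    threshold = own_buffer - parent_buffer
    if threshold > 0:
        return "transmit"    # node's buffer fuller than parent's: push packets up
    return "receive"         # parent has room: accept packets from a child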

[0052] Finally, step 113 is performing packet transmissions in the tree, which is accomplished using the scheduled order. The result of method 100 optimizes the operation by reducing a total number of timeslots required to complete the operation, along with reducing an amount of end-to-end delay for the plurality of nodes in the convergecast network.

[0053] FIG. 1B is a schematic illustrating a convergecast network 102 in an industrial setting, according to some embodiments of the present disclosure. In particular, FIG. 1B shows an example of the network 102 in an industrial setting, including a sink node S-110 and a set of wireless nodes, N1-120, N2-140, N3-160, N4-170. Each node is attached to a process, e.g., N1-120 is attached to process P1-130. Each process releases data at a given time, e.g., P1-130 releases data at time T_0^1-135. Note that release times could be different for each node, N1-120, N2-140, N3-160, N4-170. For instance, T_0^1-135 of node 120 could have a larger value than the time T_0^2-155 at which process P2-150 of node 140 releases data. Specifically, we focus on the case when release times are not equal, which is termed non-uniform release times.

[0054] FIG. 1C is a schematic illustrating a block diagram of function blocks of a node from the set of wireless nodes, N1-120, N2-140, N3-160, N4-170, shown in FIG. 1B, according to some embodiments of the present disclosure. For example, the node includes a memory 122, at least one processor 124, a receiver/transmitter 126, other interfaces 128 and at least one machine interface/sensor 129. In particular, we refer to the amount of memory dedicated in 122 for receiving or collecting data as the buffer.

[0055] FIG. 2A is a schematic illustrating an example of a network with nodes with different release times and an illustration of a local buffer, according to embodiments of the present disclosure. In particular, FIG. 2A shows an example of a sensor network. Similar to FIG. 1B, nodes have different packet release times. In FIG. 2A, the processes of FIG. 1B, or machines or sensors etc., have been suppressed. The release time is attached to each node. For instance, N1 210 has a release time shown in 212, T_0 = 2. Similarly, N8 280 has a release time in 282 with T_0 = 1. Capturing the state of the network at timeslot 1, 201, the state of the buffers is also attached to the nodes. For instance, FIG. 2A illustrates N8 with buffer 281, and N1 with buffer 211. In this example, N8 has one packet in the buffer, which is the data generated locally at time instance one. N1 has an empty buffer. In one implementation according to the present disclosure, the packets in the buffer are identified by their release times. A basic block diagram of the function blocks of a node is shown in FIG. 1C. We refer to the amount of memory dedicated in 122 for receiving or collecting data as the buffer.

[0056] Furthermore, FIG. 2A shows the routes, e.g., N8 transmits its data packet to N5 250 through wireless link 283. Since wireless communication is broadcast in nature, nodes can interfere with one another if they transmit on the same time-frequency resource. In FIG. 2A, although the dedicated parent of node N7 270 is node N4 240, N8 interferes with the transmission of N7 through the wireless link represented by 274, unless the transmission on 273 happens on a different time-frequency block. Similarly, N5 interferes with N7. Thus, N5 and N7 can transmit simultaneously only on two different frequency channels. However, note that since nodes can either transmit or receive in a given timeslot, N5 cannot transmit if N5 is receiving. Additionally, with simple signal processing techniques, two nodes that share the same parent node cannot transmit simultaneously, even on different frequency channels. For instance, N7 and N6 cannot simultaneously transmit to N4 240.
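
As a rough illustration of these constraints, the following sketch checks whether a candidate link can be added to the transmissions already scheduled in a timeslot. The data structures (scheduled as a list of (transmitter, receiver, channel) tuples and interferers mapping a receiving node to the set of nodes that interfere with it) are hypothetical placeholders for the route and interference information described in the text, not structures defined in the source.

```python
def can_schedule_link(tx, rx, channel, scheduled, interferers):
    """Return True if link tx -> rx may use `channel` in the current timeslot.

    `scheduled` lists (tx, rx, channel) assignments already made in this
    timeslot; `interferers` maps a receiving node to the set of nodes whose
    transmissions would corrupt its reception (illustrative structures only).
    """
    for other_tx, other_rx, other_ch in scheduled:
        # Half-duplex and one link per node: a node cannot transmit and
        # receive in the same timeslot, and two children sharing a parent
        # cannot transmit to it simultaneously, even on different channels.
        if tx in (other_tx, other_rx) or rx in (other_tx, other_rx):
            return False
        # On the same channel, an interfering transmitter blocks reception.
        if channel == other_ch and other_tx in interferers.get(rx, set()):
            return False
    return True
```

In the FIG. 2A example, this is why N5 and N7 may transmit at the same time only when they are assigned different frequency channels.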

[0057] FIG. 2B is a schematic of the network and buffer states at a future time instance of the network in FIG. 2A, according to embodiments of the present disclosure. Given the constraints above, nodes N5 and N7 transmit one packet to their parents, N2 and N4, respectively. We further note the update in the status of the buffers. Additionally, we note that N8 was not able to transmit a packet since its parent node is transmitting, and N6 was not able to transmit a packet since its parent node N4 is busy receiving a packet from node N7.

[0058] FIG. 3 is a table illustrating the time-frequency allocation to communication links, according to embodiments of the present disclosure. In particular, FIG. 3 shows an example of the time-frequency resources and the schedule for the first timeslot. Further, FIG. 3 shows the availability of three frequency channels. In the following, we describe how to achieve the minimum schedule size and minimum end-to-end delay.

[0059] Small Schedule Size (Minimum schedule size)

In various embodiments of the present disclosure, we use sorting to identify the priority of the nodes. This has a major impact on the schedule size. To demonstrate sorting metrics, we consider three examples.

[0060] Sorting the nodes based on the total load refers to prioritizing the nodes by the total number of packets that need to be forwarded. In FIG. 2A, node N2 220 has to forward six packets, thus it has a total load of six; node N1 210 has two packets as total load; node N4 240 has three; node N7 has one; and node N5 has two.

[0061] Another sorting technique is based on the remaining load. For instance, in FIG. 2B, N7 has zero load after forwarding its packet; node N5 250 has one, while node N2 has five packets.

[0062] Another sorting technique prioritizes the child nodes of the sink node, since the sink node is the bottleneck of the scheduler. Prioritizing the child nodes of the sink node, nodes N1, N2, and N9 have the highest priority. Thus, in case of a transmission conflict between N1 and N4, N1 transmits first. Note that the child nodes of the sink node can be sorted based on a specific metric and the non-child nodes of the sink node based on a possibly different metric. This allows adjusting the complexity of the scheduler.
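
A minimal sketch of these sorting options follows, assuming the tree is given as a children mapping and per-node packet counts are known; the function and argument names are illustrative and do not come from the source.

```python
def total_load(node, children, own_packets):
    """Total number of packets a node must deliver: its own packets plus all
    packets generated in its subtree (illustrative recursive definition)."""
    return own_packets[node] + sum(
        total_load(c, children, own_packets) for c in children.get(node, []))


def prioritize(nodes, sink_children, load):
    """Sort nodes by a load metric `load`, giving the sink's children priority."""
    return sorted(nodes, key=lambda n: (n not in sink_children, -load(n)))
```

Using the remaining load instead of the total load only changes the load callable, which is how the centralized scheme described later mixes the two metrics.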

[0063] Furthermore, other sorting metrics can be used as well.

[0064] Low End-to-End Delay

To minimize the end-to-end delay without increasing the schedule size, various embodiments of the present disclosure allow scheduling at the parent node. In such a technique, a parent node, say k, may choose to schedule itself to transmit or receive based on some rules and conditions, such as the release time of the packet and the status of the buffer. This rule is specified by the scheduling rule F.

[0065] FIG. 4A, FIG. 4B and FIG. 4C are examples of the scheduling rule. For example, FIG. 4A is an example of a group of nodes N11, N12, N13, N15, N17 used to demonstrate the scheduling rule for the child nodes, according to embodiments of the present disclosure. FIG. 4B and FIG. 4C are a demonstration of the scheduling threshold of the parent node, according to embodiments of the present disclosure.

[0066] One implementation 450 in FIG. 4B includes node k 460 and its associated child nodes 450. Among the child nodes of node k, node k chooses a child node, say child node j*, 467 in FIG. 4C, to transmit only if it has a packet that is older, by more than a certain threshold γ_k, than the oldest packet in the local buffer of node k. The threshold could be a function of the status of the local buffer of node k (the number of packets in the local memory), the status of the parent node's (par_k) buffer, and the status of the child nodes' buffers.

[0067] As one case, the parameters of the scheduling rule F depend only on the load of the local buffer, b_k.

[0068] In 461 of FIG. 4B, node k computes the oldest release time, min T_k, from its local buffer b_k. From the routes, node k finds the set of child nodes C_k 462. For each of the J child nodes in the set C_k, node k computes the oldest release time T_j for a particular child node j ∈ C_k 463. After the computation of the J oldest release times from the J child nodes, node k computes the minimum oldest release time and its corresponding child node as 464:

j* = arg min_{j = 1, ..., J} T_j,    t_ch = T_{j*}

so that t_ch denotes the oldest packet release time stored in the child nodes' buffers. Also, j* denotes the index of the selected child node for scheduling. When a child node's buffer is empty, T_j can be set to ∞.
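
A small sketch of this selection step, assuming each child buffer is represented as a list of packet release times (a hypothetical data layout for illustration only):

```python
import math


def select_child(child_buffers):
    """Return (j_star, t_ch): the child holding the oldest packet and that
    packet's release time; an empty buffer counts as release time infinity.
    Assumes node k has at least one child in `child_buffers`."""
    oldest = {j: min(buf) if buf else math.inf
              for j, buf in child_buffers.items()}
    j_star = min(oldest, key=oldest.get)
    return j_star, oldest[j_star]
```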

[0069] The proposed scheduling rule F can be written as 465 of FIG. 4C:

F(b_k, t_ch) =  -1,  if min T_k - t_ch > γ_k(b_k)
                +1,  if min T_k - t_ch ≤ γ_k(b_k)          (1)
                 0,  otherwise

[0070] In equation (1), -1 indicates receiving from a child node, 467 of FIG. 4C, whereas +1 indicates transmitting from the local buffer, 466 of FIG. 4C. In addition, 0 indicates skipping node k for scheduling.
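
Equation (1) translates directly into a small decision function. The sketch below is illustrative only, with min_local standing for min T_k and gamma for γ_k(b_k):

```python
import math


def scheduling_rule(min_local, t_ch, gamma):
    """Equation (1): -1 = receive from the selected child, +1 = transmit the
    oldest local packet, 0 = skip node k.  `min_local` is min T_k (math.inf if
    the local buffer is empty) and `t_ch` is the oldest child release time
    (math.inf if all child buffers are empty)."""
    if min_local == math.inf and t_ch == math.inf:
        return 0                 # nothing to send or receive: skip node k
    if min_local - t_ch > gamma:
        return -1                # child's packet is older by more than gamma
    return +1                    # otherwise transmit the oldest local packet
```

For the numbers in paragraph [0074] (min T_N12 = 9, t_ch = 1), scheduling_rule(9, 1, 10) returns +1 (transmit) and scheduling_rule(9, 1, 5) returns -1 (receive from N15); degenerate empty-buffer cases are assumed to be screened out by the checks at 911-914 of Algorithm 2 described below.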

[0071] Although the scheduling rule F may decide to transmit, this transmission is possible only when par_k can receive a packet, 468 of FIG. 4C, and the transmission does not cause interference with previously scheduled links. Otherwise, node k defers its transmission, 469 of FIG. 4C. Choosing γ_k(b_k) = ∞, the scheduler always results in a transmission from the local buffer when the load is equal to b_k. The choice of γ_k(b_k) represents a trade-off between the schedule size and the end-to-end delay. For a small b_k, large values of γ_k(b_k) will reduce the schedule size, while the opposite reduces the end-to-end delay.

[0072] FIG. 5 shows an example of the effective load calculation, according to embodiments of the present disclosure, which is described in detail below. FIG. 5 includes nodes N1-N6 and a sink node S. In particular, FIG. 5 provides a numerical example of the effective load calculation at timeslot one for child nodes N2, N3, N4, N6 of the sink node S. In this example, we see that N1 has a packet in its buffer and the next arriving packet is at time nine from N2. On the other hand, N5 has a packet in its buffer and the next arriving packet is at time two from N6. Thus, the effective load of N1 is one, while the effective load of N5 is two. This is discussed in more detail below.

[0073] Referring to FIG. 4A, FIG. 4A demonstrates an example of this rule, where we assume that node N12 is permitted to transmit or receive, and γ_N12(b_N12) = ∞. Thus, N12 typically picks the oldest packet. In this example, it is the packet with release time nine, because the local buffer has two packets, released at nine and ten. Since the packet released at nine is older than the other packet, the packet released at time nine is picked for scheduling.

[0074] As an illustration, we can have t_ch = 1 and min T_N12 = 9. Then, the difference in equation (1) is 9 - 1 = 8. Thus, if we choose γ_N12(2) = 10, then min T_N12 - t_ch < γ_N12(b_N12), so that N12 transmits from its local buffer. However, if we choose γ_N12(2) = 5, then min T_N12 - t_ch > γ_N12(b_N12), so that N12 receives a packet from N15.

[0075] One problem that could arise from this scheduling rule is the repetitive reception of packets, i.e., when node k chooses to receive packets from its child nodes C_k over a number of consecutive timeslots while par_k (and possibly its parents) have empty buffers. This accumulation of packets in node k may increase the schedule size by missing transmission opportunities toward the sink. To demonstrate this, FIG. 6A, FIG. 6B and FIG. 6C show an example.

[0076] FIG. 6A, FIG. 6B and FIG. 6C are an example of the packet accumulation problem, according to embodiments of the present disclosure. For example, let the nodes in FIG. 6A, FIG. 6B and FIG. 6C have the priority {N2, N1, N3, N4}, determined by the number of conflicting links or the number of neighbors. Furthermore, we assume that the buffer size is large for all the nodes N1-N4, so that we do not expect an overflow of a buffer for the current load, and γ_i = 0 for all i ∈ {N1, N2, N3, N4}.

[0077] As we discussed above, in order to reduce the schedule size, the sink should be kept busy. However, due to the particular sorting in this example, node N2-620 of FIG. 6A has a higher priority than the node close to the sink, node N1-610 of FIG. 6A. Thus, using the scheduling of the parent node while considering only the local and child buffers will increase the schedule size. As shown in FIG. 6B, at time instant two, node N2 uses link 650 instead of link 640, despite the fact that N1 has an empty buffer. In FIG. 6C, link 660 can be used only at time instant three. This increases the schedule size by one timeslot.

[0078] FIG. 7 is a schematic describing the transmit and receive regimes when the scheduling of the parent node is used, according to embodiments of the present disclosure. For example, FIG. 7 includes the buffer status of the parent node, par_k, so that the problem above can be alleviated. In particular, FIG. 7 shows the desirable behavior of such a scheduling rule. This is a special case of a more general rule where the status of the parent's buffer, b_par_k, is considered as well (note that the sink node makes the schedule).

[0079] Still referring to FIG. 7, node k cannot receive in the region where the local buffer is full, 715, and cannot transmit in the region where the parent node's buffer is full, 710. If both the local and the parent node's buffers are full, 750, then node k should be skipped for scheduling. The reception of packets should be encouraged over transmission if the buffer of par_k has a large number of packets, 730, while transmission from the local buffer should be encouraged if the local buffer has a large number of packets, 740. In general, when the buffers of both node k and par_k have a large number of packets, this indicates possible congestion and increased delay on the path to the sink node, so that skipping node k for scheduling could be preferable.
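
Since the exact expressions of equations (2) and (3) are not reproduced in this text, the sketch below only encodes the qualitative regions of FIG. 7; the high watermark parameter and the region names are assumptions made for illustration, not the threshold function of the disclosure.

```python
def fig7_regime(b_k, b_parent, b_max, high):
    """Qualitative transmit/receive regimes of FIG. 7 for node k
    (illustrative reading only, not equations (2)-(3))."""
    if b_k >= b_max and b_parent >= b_max:
        return "skip"        # 750: both buffers full, skip node k
    if b_parent >= b_max:
        return "receive"     # 710: parent cannot accept a packet
    if b_k >= b_max:
        return "transmit"    # 715: local buffer cannot accept a packet
    if b_k >= high and b_parent >= high:
        return "skip"        # likely congestion on the path to the sink
    if b_parent >= high:
        return "receive"     # 730: heavy parent, pull from children instead
    if b_k >= high:
        return "transmit"    # 740: heavy local buffer, push packets upward
    return "either"          # let the threshold rule of equation (1) decide
```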

[0080] In one implementation of the present disclosure, one possible function for the threshold γ_k(b_k, b_par_k) that has most of the desirable properties described above is as follows:

where b_max is the maximum buffer size, and α and β are design parameters. Also, ceil(.) denotes the ceiling function. Note that the state of the parent node's buffer is one of the key parameters of the function f(b_k, b_par_k). In FIG. 4A, for par_N12 = N11, we take α = 0.5, β = 0.25, and b_max = 3; the result of equation (2) is then shown in Table 1.

[0081] Table 1

The values of γ_k(b_k, b_par_k) based on equation (2) and equation (3):

[0082] Note that the values of γ_k(b_k, b_par_k) vary as a function of b_k and b_par_k, and so does the scheduling rule F(b_k, b_par_k, t_ch).

[0083] Referring to FIG. 4A to FIG. 4C, which provide an illustration with b_max = 3 packets, node N12 is discouraged from transmitting as the number of packets in the buffer of N11 increases. On the other hand, N12 is encouraged to transmit when its local load is large and its parent node has a small number of packets in its buffer, since γ_k(b_k, b_par_k) keeps increasing. According to the table, with the obtained γ_N12(b_N12, b_N11) = 6, we see that N12 is scheduled to receive a packet from N15.

[0084] Finally, note that the function in equation (3) is more suitable for packet networks with a small number of packets, as it increases exponentially as a function of the local buffer load. Other functions with a slower increase (linear, for example) can be used as well.

[0085] Effective Load

The definition of load under non-uniform release times has an impact on the priority of the nodes and thus on their scheduling. The release time could be relatively large at some nodes, so that prioritizing the nodes based on such a load could be misleading. Thus, in one embodiment of the present disclosure, we consider the effective load, which is defined as the maximum number of packets passed through a node within a given time window W, where W = [t_1, t_2], t_1 < t_2. Note that if we define W = [0, ∞), then the effective load becomes the total load. Also, with another window, W = [t, ∞), the effective load becomes the remaining load at time t.

[0086] In general, the choice of t_1 and t_2 impacts the scheduling decisions and complexity. For instance, using the remaining load to re-sort all the nodes could be complicated if the number of nodes is large. In addition, in the non-uniform release time environment, the choice of t_2 is important.

[0087] Referring to FIG. 5, one example is to let t_2 increase gradually until the nodes have different loads or the maximum release time is reached. FIG. 5 provides a numerical example of the effective load calculation at timeslot one for the child nodes of the sink node. In this example, we see that N1 has a packet in its buffer and the next arriving packet is at time nine from N2. On the other hand, N5 has a packet in its buffer and the next arriving packet is at time two from N6. Thus, the effective load of N1 is one, while the effective load of N5 is two.
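
A compact sketch of the effective-load computation, assuming the packets expected to pass through a node are available as a list of their release times (a simplification of the per-node information in the figure):

```python
def effective_load(release_times, t1, t2=float("inf")):
    """Number of packets passing through a node whose release times fall in
    the window W = [t1, t2]; with t1 = 0 and t2 = inf this is the total load,
    and with t1 = t it is the remaining load at time t."""
    return sum(1 for r in release_times if t1 <= r <= t2)
```

Under assumed release times consistent with FIG. 5, effective_load([1, 9], 0, 2) gives one for N1 while effective_load([1, 2], 0, 2) gives two for N5, reproducing the comparison above.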

[0088] Scheduling Scheme

For many delay-sensitive applications, a central controller could be used to reduce the delay. As noted above, the sink node can act as the central controller, so that it controls when each node receives data packets from its associated child nodes or transmits data packets to its associated parent node.

[0089] In the following, we provide a centralized scheme that minimizes the schedule size and end-to-end delay. However, it is also possible to design a distributed scheme based on one or more embodiments of the present disclosure.

[0090] We first define new data sets that we use in the scheme below. Let I_k be the set of nodes interfering with node k. I_k depends on the topology of the network and on whether acknowledgement is implemented in the system. Further, let S be an M × (N + 1) matrix that represents the scheduling of the network, where N is the total number of sensor nodes in the network and M is the expected size of the schedule. S(t, k) takes values in {0, ±1, ±2, ..., ±L}, where L is the number of frequency channels; the sign of S(t, k) represents whether node k is set to transmit or receive at timeslot t over channel ch ∈ {1, ..., L}. A positive value indicates transmission, whereas a negative value indicates receiving. Otherwise, the node is in idle mode.

[0091] In other words, as noted above, the scheduling matrix represents the scheduling of the network:

S ∈ {-L, -L + 1, ..., L - 1, L}^(M × (N + 1)),

wherein L is the number of frequency channels, M is the schedule size, N is the number of nodes, a positive entry indicates transmitting and a negative entry indicates receiving. Further, the child nodes of the sink-node have a higher priority over the other nodes of the plurality of nodes regarding transmitting or receiving, such that each child node of the sink-node is sorted by a remaining load and each of the other nodes is sorted by a total load. The total load includes a number of packets to be transmitted by each node, a number of packets to be forwarded by each node, or both, in a period of time, along with any previously transmitted packets by each node, any previously forwarded packets by each node, or both, in that period of time. The remaining load is the number of packets each node transmits, forwards, or both, in the remainder of the period of time.
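
A purely illustrative representation of this scheduling matrix, with column 0 reserved for the sink node and signed channel indices as described above:

```python
def empty_schedule(M, N):
    """All-zeros M x (N + 1) schedule S; S[t][k] = 0 means node k is idle."""
    return [[0] * (N + 1) for _ in range(M)]


def assign_link(S, t, tx, rx, channel):
    """Record that node tx transmits to node rx on `channel` at timeslot t:
    a positive entry means transmit, a negative entry means receive."""
    S[t][tx] = channel
    S[t][rx] = -channel
```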

[0092] Let Q represent a sorted list of nodes as described above. In this scheme, we use a sorting in which the child nodes of the sink node have higher priority. In other words, we split Q as Q = {Q', Q''}, where Q' is the sorted list of child nodes of the sink node, i.e., Q' includes all the nodes in C_sink. On the other hand, Q'' is the list of sorted nodes that are not child nodes of the sink node, i.e., the nodes in C_i, i ∈ {1, ..., N}.

[0093] Centralized Scheduling Scheme

FIG. 8A and FIG. 8B show a centralized implementation of the present disclosure. In Algorithm 1 of FIG. 8A, we assume that the input of the algorithm, at 801, includes the convergecast route, the release times for all the packets {T^0}, the interference relations {I_k}, and the sorting metric of interest. In this implementation of the scheme, we use the total load for the nodes that are not child nodes of the sink node and the remaining load for the child nodes of the sink node. This is advantageous, as we shall describe later.

[0094] Still referring to FIG. 8A, the next step of the algorithm, 802, initializes the schedule S as an all-zeros M × (N + 1) matrix. It also initializes the timeslot t to zero. Next, we perform an initial sorting of all the nodes. In this implementation, we prioritize the child nodes of the sink node. The sorted non-child nodes of the sink node are listed in Q'' 803. The algorithm proceeds until all the packets are received 804. Then, the output of the algorithm is the schedule S, 890.

[0095] Still referring to FIG. 8A, in each iteration, at each timeslot, the algorithm sorts the child nodes of the sink node based on the remaining load, Q' in 805. Using this implementation, in each iteration, re-sorting is done for the child nodes of the sink node only, which has a small complexity compared to sorting all the nodes, Q = {Q', Q''} 806. The two sets are stored in the set Q. In this iteration, all busy nodes, i.e., nodes that have S(t, k) ≠ 0, are removed from Q, 807. Next, in 808, the iteration is initialized by incrementing the time indicator t.

[0096] Referring to FIG. 8A and FIG. 8B, if the set Q is not empty, 809, we traverse over Q. In each iteration, k is set to be the first node in Q 820, and then we take the first element out of Q 824. For node k, we apply Algorithm 2, at 826, to decide whether node k has to transmit or receive. After Algorithm 2, 850, we test whether Q is empty at 809. When Q is empty, the iteration over this timeslot is concluded; we then check whether all the packets have been received and proceed to 804 or 890.
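
The outer loop of Algorithm 1 can be sketched roughly as follows; remaining_load, total_load_of, schedule_node (standing for Algorithm 2) and all_packets_delivered are hypothetical callables representing the steps 801-890, and nodes are assumed to be integer indices 1..N so they can index the matrix S.

```python
def centralized_schedule(nodes, sink_children, total_load_of, remaining_load,
                         schedule_node, all_packets_delivered, M):
    """Sketch of Algorithm 1 (FIG. 8A); returns the schedule matrix S."""
    N = len(nodes)
    S = [[0] * (N + 1) for _ in range(M)]                       # 802
    q2 = sorted((n for n in nodes if n not in sink_children),
                key=total_load_of, reverse=True)                # 803
    t = 0
    while not all_packets_delivered() and t < M:                # 804
        q1 = sorted(sink_children, key=remaining_load, reverse=True)  # 805
        for k in q1 + q2:                                       # 806, 820, 824
            if S[t][k] != 0:                                    # 807: skip busy nodes
                continue
            schedule_node(S, t, k)                              # 826: Algorithm 2
        t += 1                                                  # 808
    return S                                                    # 890
```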

[0097] FIG. 9A and FIG. 9B are block diagrams illustrating function blocks describing the scheduling of node k considering its buffer status, associated parent node's buffer status, and associated child nodes' buffer status, according to embodiments of the present disclosure. Specifically, in FIG. 9A and FIG. 9B, we show an implementation of Algorithm 2.

[0098] Referring to FIG. 9A, starting at 911, after 826, we test whether node k is permitted to receive with the current buffer size b_k. This can be done by checking whether γ_k(b_k) < ∞. Next, we identify, i.e., update, 912 the set of feasible child nodes C'_k ⊆ C_k, where C'_k is the subset of child nodes of node k that are not busy and can transmit to node k at timeslot t, as in 462 of FIG. 4B. Note that a node j ∈ C_k can transmit to node k if neither node k nor node j is busy and there is a channel ch ∈ {1, ..., L} such that assigning ch to (j, k) at time t does not cause interference to any other links in the network.

[0099] Similarly, at 913 we verify whether node k may transmit over (k, par_k) and whether par_k is busy. If either of these conditions is not satisfied, i.e., 913 is negative, then node k cannot transmit to par_k. In 914, we apply the scheduling rule F(b_k, ∞) using the threshold γ_k(b_k, ∞) and test whether node k can receive.

[0100] Referring to FIG. 9B, based on the particular case, the algorithm updates the schedule S and the buffer states of the related nodes using one of the available channels, ch, as shown in 916 and 917. After the procedures of Algorithm 2 are concluded at 850, we continue with the remainder of Algorithm 1, as indicated above at 850 and 809.

[0101] When node k is scheduled to transmit, it identifies an empty channel to its parent node par_k 920 for transmission. The scheduling matrix is then updated 922, that is, node k transmits packets to node par_k via channel ch at timeslot t (S(t, k) = ch) and, working in pairs, par_k receives packets from node k via channel ch at timeslot t (S(t, par_k) = -ch). In 924, the buffers of node k and of its associated node par_k are updated due to transmitting and receiving, respectively. When node k is scheduled to receive, it identifies an empty channel to its selected child node j* 930 for receiving. The scheduling matrix is then updated 932, that is, node k receives packets from node j* via channel ch at timeslot t (S(t, k) = -ch) and, working in pairs, j* transmits packets to node k via channel ch at timeslot t (S(t, j*) = ch). In 924, the buffers of node k and of its associated node j* are updated due to receiving and transmitting, respectively.
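
Combining the helpers sketched earlier (select_child, scheduling_rule) with a hypothetical free_channel(tx, rx, t) that embodies the interference test, the transmit/receive update of 920-932 can be sketched as follows; buffers are lists of release times and parent maps each node to its parent, all of which are illustrative assumptions rather than structures defined in the source.

```python
import math


def schedule_node_k(S, t, k, parent, children, buffers, gamma, free_channel):
    """Sketch of Algorithm 2 (FIG. 9A and FIG. 9B) for a single node k."""
    child_bufs = {j: buffers[j] for j in children.get(k, [])}
    j_star, t_ch = select_child(child_bufs) if child_bufs else (None, math.inf)
    min_local = min(buffers[k]) if buffers[k] else math.inf
    action = scheduling_rule(min_local, t_ch, gamma(k))

    if action == +1:                               # 920-924: transmit upward
        ch = free_channel(k, parent[k], t)         # None if no channel is free
        if ch is not None:
            S[t][k], S[t][parent[k]] = ch, -ch     # 922: update schedule
            buffers[k].remove(min_local)           # 924: update both buffers
            buffers[parent[k]].append(min_local)
    elif action == -1 and t_ch != math.inf:        # 930-932: receive from j*
        ch = free_channel(j_star, k, t)
        if ch is not None:
            S[t][k], S[t][j_star] = -ch, ch        # 932: update schedule
            buffers[j_star].remove(t_ch)           # update both buffers
            buffers[k].append(t_ch)
```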

[0102] FIG. 10 is a block diagram illustrating the method of FIG. 1A, FIG. 1B and FIG. 1C that can be implemented using an alternate node, according to embodiments of the present disclosure. Each node of the plurality of nodes, along with the sink-node 1011, may include a processor 1040, computer readable memory 1012, storage 1058 and a user interface 1049 with display 1052 and keyboard 1051, which are connected through bus 1056. For example, the user interface 1049, in communication with the processor 1040 and the computer readable memory 1012, acquires and stores the data in the computer readable memory 1012 upon receiving an input from a surface, e.g., the keyboard surface, of the user interface 1057 by a user.

[0103] Contemplated is that the memory 1012 can store instructions that are executable by the processor, historical data, and any data that can be utilized by the methods and systems of the present disclosure. The processor 1040 can be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. The processor 1040 can be connected through a bus 1056 to one or more input and output devices. The memory 1012 can include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory system.

[0104] Still referring to FIG. 10, a storage device 1058 can be adapted to store supplementary data and/or software modules used by the processor. For example, the storage device 1058 can store historical data and other related data as mentioned above regarding the present disclosure. Additionally, or alternatively, the storage device 1058 can store historical network data for networks similar to the disclosed network of the present disclosure. The storage device 1058 can include a hard drive, an optical drive, a thumb-drive, an array of drives, or any combinations thereof.

[0105] The system can be linked through the bus 1056 optionally to a display interface (not shown) adapted to connect the system to a display device (not shown), wherein the display device can include a computer monitor, camera, television, projector, or mobile device, among others.

[0106] The node 1011 can include a power source 1054; depending upon the application, the power source 1054 may optionally be located outside of the node 1011. Linked through bus 1056 can be a user input interface 1057 adapted to connect to a display device 1048, wherein the display device 1048 can include a computer monitor, camera, television, projector, or mobile device, among others. A printer interface 1059 can also be connected through bus 1056 and adapted to connect to a printing device 1032, wherein the printing device 1032 can include a liquid inkjet printer, solid ink printer, large-scale commercial printer, thermal printer, UV printer, or dye-sublimation printer, among others. A network interface controller (NIC) 1034 is adapted to connect through the bus 1056 to a network 1036, wherein data or other data, among other things, can be rendered on a third party display device, third party imaging device, and/or third party printing device outside of the node 1011.

[0107] Still referring to FIG. 10, the data or other data, among other things, can be transmitted over a communication channel of the network 1036, and/or stored within the storage system 1058 for storage and/or further processing. Further, the data or other data may be received wirelessly or by hard wire from a receiver 1046 (or external receiver 1038), or transmitted wirelessly or by hard wire via a transmitter 1047 (or external transmitter 1039); the receiver 1046 and transmitter 1047 are both connected through the bus 1056. Further, a GPS 1001 may be connected via bus 1056 to the node 1011. The node 1011 may be connected via an input interface 1008 to external sensing devices 1044 and external input/output devices 1041. The node 1011 may be connected to other external computers 1042. An output interface 1009 may be used to output the processed data from the processor 1040.

[0108] The above-described embodiments of the present disclosure can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. Use of ordinal terms such as "first" and "second" in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another, or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).