

Title:
METHODS, SYSTEM AND APPARATUS FOR PACKET ROUTING USING A HOP-BY-HOP PROTOCOL IN MULTI-HOMED ENVIRONMENTS
Document Type and Number:
WIPO Patent Application WO/2013/036453
Kind Code:
A1
Abstract:
A method of routing data packets associated with a communication session between a sending node and a receiving node using an intermediate node is disclosed. The method includes the intermediate node receiving a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; determining whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is to be provided by the cached packets, sending one or more of the cached data packets.

Inventors:
ZHANG FEIXIONG (US)
REZNIK ALEXANDER (US)
LIU HANG (US)
Application Number:
PCT/US2012/053418
Publication Date:
March 14, 2013
Filing Date:
August 31, 2012
Assignee:
INTERDIGITAL PATENT HOLDINGS (US)
ZHANG FEIXIONG (US)
REZNIK ALEXANDER (US)
LIU HANG (US)
International Classes:
H04L29/08
Domestic Patent References:
WO1999048003A21999-09-23
Foreign References:
US20110055386A12011-03-03
US20110153937A12011-06-23
US7864764B12011-01-04
Other References:
"A transmission control scheme for media access in sensor networks", PROCEEDINGS OF ACM MOBICOM'01, 16 July 2004 (2004-07-16)
"Reliable and Efficient Hop-by-Hop Flow Control", ACM SIGCOMM, 1994
"HxH: A Hop-by-Hop Transport Protocol for Multi-Hop Wireless Networks", WICON, 2008
"On Hop-by-Hop Rate-Based Congestion Control", IEEE/ACM TRANSACTIONS ON NETWORKING, vol. 4, no. 2, April 1996 (1996-04-01)
"Hop-by-Hop Congestion Control over a Wireless Multi-Hop Network", IEEE/ACM TRANSACTIONS ON NETWORKING, vol. 15, no. 1, 2007
"The transport layer revisited", THE 2ND INTERNATIONAL CONFERENCE ON COMMUNICATION SYSTEMS SOFTWARE AND MIDDLEWARE, 2007. COMSWARE 2007, January 2007 (2007-01-01)
"A receiver-centric transport protocol for mobile hosts with heterogeneous wireless interfaces", ACM MOBICOM, 14 September 2003 (2003-09-14), pages 1,15
"WebTP: A Receiver-Driven Web Transport Protocol", PROCEEDINGS OF IEEE INFOCOM'99
"Multiple Sender Distributed Video Streaming", IEEE TRANSACTIONS ON MULTIMEDIA, vol. 6, no. 2, April 2004 (2004-04-01)
"Networking Named Content", CONEXT '09, 2009, pages 1 - 12
"Rate control for communication networks: Shadow prices, proportional fairness and stability", JOURNAL OF OPERATIONS RESEARCH SOCIETY, vol. 49, no. 3, March 1998 (1998-03-01), pages 237 - 252
"Charging and rate control for elastic traffic", EUR TRANS ON TELECOMMUN, vol. 8, 1997, pages 33 - 37
"A Duality Model of TCP and Queue Management Algorithms", IEEE/ACM TRANS. ON NETWORKING, October 2003 (2003-10-01)
Attorney, Agent or Firm:
NACCARELLA, Theodore (LLC, 200 Bellevue Parkway, Suite 30, Wilmington, Delaware, US)
Claims:

What is claimed is:

1. A method of routing data packets associated with a communication session between a sending node and a receiving node through at least a first intermediate node, comprising: receiving, by the first intermediate node, a signal indicating an allocation of data packets to be sent from the sending node to the receiving node; determining, by the first intermediate node, whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the first intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is to be provided by the cached data packets, sending, by the first intermediate node, one or more of the cached data packets.

2. The method of claim 1, wherein the signal received by the intermediate node is received from one of the receiving node and a second intermediate node disposed between the first intermediate node and the receiving node.

3. The method of claim 1 or claim 2, wherein: the determining of whether one or more data packets of the allocation of the data packets is to be provided by cached data packets includes determining whether data packets of the communication session pending reception by the receiving node are cached by the first intermediate node; and the sending of one or more of the cached data packets includes sending one or more of the cached data packets that are pending reception toward the receiving node.

4. The method of claim 1, wherein the signal received by the first intermediate node indicates at least a quantity of data packets of the communication session to be sent to the receiving node.

5. The method of claim 1 or claim 4, wherein the signal received by the first intermediate node further indicates pending data packets of the communication session that are pending reception by the receiving node.

6. The method of claim 5, further comprising: sending, by the first intermediate node, data packets that are pending reception by the receiving node, including one or more of the cached data packets, based on the received signal by: selecting respective data packets that are pending reception by the receiving node, including at least one cached data packet, based on the indicated allocation in the received signal, and sending the selected, respective, data packets toward the receiving node.

7. The method of claim 1 or claim 2, further comprising: caching, by the first intermediate node, one or more of the data packets of the communication session destined for the receiving node responsive to available cache space at the intermediate node exceeding a threshold amount.

8. The method of claim 1 or claim 7, further comprising: dropping, by the first intermediate node, one or more of the data packets of the communication session destined for the receiving node responsive to the available cache space being below a threshold amount.

9. The method of claim 8, further comprising: responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, sending, by the first intermediate node, an allocation signal toward the sending node and receiving one or more data packets of the communication session.

10. The method of claim 1 or claim 4, further comprising: responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, sending, by the first intermediate node, an allocation signal toward the sending node and receiving one or more data packets of the communication session.

11. The method of claim 1 or claim 2, wherein, responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, sending, by the first intermediate node, an allocation signal toward the sending node and receiving one or more data packets of the communication session.

12. A method of routing data packets associated with a communication session between a sending node and a receiving node using at least one intermediate node, comprising: receiving, by a first intermediate node, a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; determining, by the first intermediate node, whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, sending, by the first intermediate node, an allocation signal toward one of the sending node and a second intermediate node disposed between the sending node and the first intermediate node in the communication session and receiving one or more data packets of the communication session.

13. The method of claim 12, further comprising: forwarding the one or more data packets of the communication session received from the one of the sending node and the second intermediate node to the receiving node.

14. A method of routing data packets associated with a communication session between a sending node and a receiving node through an intermediate node, comprising: sending upstream, by the intermediate node, a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node; receiving, by the intermediate node, at least one data packet of the communication session; selecting, by the intermediate node, one of a plurality of operating modes including at least first and second operating modes; and responsive to selection of the first operating mode, caching, by the intermediate node, the received at least one data packet for forwarding toward the receiving node.

15. The method of claim 14, wherein the selecting is based on a received signal from at least one of the receiving node and a downstream node and wherein, (1) the first operating mode includes caching the received one or more data packets by the intermediate node and forwarding them toward the receiving node; and (2) the second operating mode includes routing the received one or more data packets toward the receiving node without being cached.

16. The method of claim 14 or claim 15, further comprising: receiving, by the intermediate node, a further signal sent by at least one of a downstream node in the communication session and the receiving node indicating another allocation of data packets to be sent toward the receiving node; and sending, by the intermediate node, cached data packets cached by the intermediate node, in accordance with the further signal toward the receiving node.

17. The method of claim 14 or claim 15, wherein the sending of the signal upstream by the intermediate node is to at least one of an upstream node in the communication session and the sending node.

18. The method of claim 14 or claim 15, wherein the selecting of the operating mode is based on at least one of: (1) congestion of a transmission path downstream of the intermediate node; and (2) flow control in accordance with information sent by the downstream node or the receiving node.

19. The method of claim 14 or claim 15, further comprising: storing policies for processing data packets; inspecting, by the intermediate node, from the one or more data packets received, packet information used for packet processing; and packet processing, by the intermediate node, the inspected packets based on the packet information and the stored policies.

20. The method of claim 14 or claim 15, wherein the caching of the at least one received data packet includes caching the at least one received data packet responsive to available cache space at the intermediate node exceeding a threshold amount.

21. The method of claim 14 or claim 15, further comprising: dropping one or more of the received data packets, responsive to available cache space being below a threshold amount.

22. A method of routing data packets associated with first and second communication sessions between one or more sending nodes and a receiving node using first and second intermediate nodes, comprising: determining, by the receiving node, a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; sending, by the receiving node to the first intermediate node, a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; sending, by the receiving node to the second intermediate node, a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and receiving, by the receiving node, the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.

23. The method of claim 22, wherein the receiving node is multi-homed on a plurality of networks and the first transmission path and the second transmission path comprise different networks.

24. The method of claim 22 or claim 23, further comprising: determining an estimated time of arrival at the receiving node of a respective data packet requested; and scheduling the respective data packet for the first or second allocations or a subsequent allocation based on the estimated time of arrival.

25. The method of claim 22 or claim 23, wherein the receiving of the first and second allocations of data packets includes receiving at least one data packet of the first and second allocations that has been cached at one of the first intermediate node and the second intermediate node.

26. A method of routing data packets associated with a communication session between a sending node and a receiving node through an intermediate node, comprising: receiving, by the intermediate node, a signal indicating an allocation of data packets to be sent from the sending node to the receiving node; determining, by the intermediate node, whether one or more data packets of the allocation of the data packets is to be provided to the receiving node subject to a policy preconfigured in the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is not to be provided, ignoring the signal indicating an allocation of data packets to be sent from the sending node to the receiving node.

27. Apparatus for routing data packets associated with a communication session between a sending node and a receiving node, comprising: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit, wherein, responsive to the processor determining that one or more data packets of the allocation of data packets is to be provided by the cached data packets, the transmitter/receiver unit sends one or more of the cached data packets.

28. Apparatus for routing data packets associated with a communication session between a sending node and a receiving node, comprising: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit, wherein responsive to the processor determining that one or more data packets of the allocation of data packets is not to be provided by the cached data packets, the transmitter/receiver unit sends one or more of the data packets forwarded from upstream nodes.

29. Apparatus for routing data packets associated with a communication session between a sending node and a receiving node, comprising: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to send upstream in the communication session a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node and to receive one or more data packets of the communication session; and a processor configured to determine whether to forward the received data packets toward the receiving node or to cache the one or more received data packets based on a received signal from the receiving node or a downstream node; wherein responsive to the processor determining that one or more data packets are to be cached by the storage unit, the storage unit caches the received one or more data packets subsequent to forwarding toward the receiving node.

30. A receiving node for managing data packets associated with first and second communication sessions with one or more sending nodes using first and second intermediate nodes, comprising: a processor configured to determine a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; and a transmitter/receiver unit configured to: (1) send to the first intermediate node a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; (2) send to the second intermediate node a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and (3) receive the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.

31. Apparatus for routing data packets associated with a communication session between a sending node and a receiving node, comprising: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided to the receiver subject to a policy preconfigured in the intermediate node; wherein, responsive to the processor determining that one or more data packets of the allocation of data packets is not to be provided, the transmitter/receiver unit ignores the signal indicating an allocation of data packets of the communication session to be sent to the receiving node.

Description:
METHODS, SYSTEM AND APPARATUS FOR PACKET ROUTING USING A HOP-BY-HOP PROTOCOL IN MULTI-HOMED ENVIRONMENTS

FIELD

[0001] The present disclosure relates to the field of wireless communications and, more particularly, to methods and apparatus for packet routing.

BACKGROUND

[0002] The mobile Internet now connects billions of wireless devices, including laptops and cell phones, to the Internet. Research relating to wireless access technologies and mobile data delivery is ongoing.

SUMMARY

[0003] Embodiments of the disclosure are directed to methods, systems and apparatus for routing data packets associated with a communication session between a sending node and a receiving node using an intermediate node. One representative method may include the intermediate node receiving a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; determining whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is to be provided by the cached data packets, sending one or more of the cached data packets.

[0004] In certain representative embodiments, the signal received by the intermediate node may be sent from the receiving node or a downstream node.

[0005] In certain representative embodiments, the communication session may be an internet protocol (IP) communication session.

[0006] In certain representative embodiments, the determination of whether one or more data packets of the allocation of the data packets is to be provided by cached data packets may include determining whether data packets of the communication session pending reception by the receiving node are cached by the intermediate node.

[0007] In certain representative embodiments, the sending of one or more of the cached data packets may include sending one or more of the cached data packets that are pending reception toward the receiving node.

[0008] In certain representative embodiments, the signal received by the intermediate node may indicate at least a quantity of data packets of the communication session to be sent to the receiving node.

[0009] In certain representative embodiments, the signal received by the intermediate node may further indicate pending data packets of the communication session that are pending reception by the receiving node.

[0010] In certain representative embodiments, data packets may be sent that are pending reception by the receiving node including one or more of the cached data packets based on the received signal by: (1) selecting respective data packets that are pending reception by the receiving node including at least one cached data packet based on the indicated allocation in the received signal, and (2) sending the selected, respective data packets toward the receiving node.

[0011] In certain representative embodiments, the intermediate node may cache one or more of the data packets of the communication session destined for the receiving node, responsive to available cache space at the intermediate node exceeding a threshold amount.

[0012] In certain representative embodiments, one or more of the data packets of the communication session may be dropped that are destined for the receiving node, responsive to the available cache space being at or below the threshold amount.
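The following sketch (not part of the original disclosure) illustrates the threshold-based caching and dropping described in the two preceding paragraphs: a packet is admitted to the intermediate node's cache only while available cache space exceeds a threshold, and is otherwise dropped rather than cached. The byte-counting cache, its capacity, and the (session, sequence number) key are illustrative assumptions.

# Python sketch of threshold-based cache admission at an intermediate node.
# Capacity, threshold, and the (session_id, seq) key are assumptions for illustration.
class PacketCache:
    def __init__(self, capacity_bytes, threshold_bytes):
        self.capacity = capacity_bytes
        self.threshold = threshold_bytes
        self.used = 0
        self.packets = {}  # (session_id, seq) -> payload bytes

    def available(self):
        return self.capacity - self.used

    def admit(self, session_id, seq, payload):
        # Cache only while free space exceeds the threshold; otherwise drop (do not cache).
        if self.available() > self.threshold:
            self.packets[(session_id, seq)] = payload
            self.used += len(payload)
            return True
        return False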

[0013] In certain representative embodiments, responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, the intermediate node may send an allocation signal toward the sending node and may receive one or more data packets of the communication session.

[0014] In certain representative embodiments, the intermediate node may implement a protocol herein termed Hop Pull Control Protocol (HoPCoP).

[0015] Another representative method may include the intermediate node: (1) receiving a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; (2) determining whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the intermediate node; and (3) responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, (i) sending an allocation signal towards the sending node and (ii) receiving one or more data packets of the communication session.
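Taken together, the representative methods above suggest the following handling of an allocation signal at an intermediate node: packets of the allocation that are already cached are sent toward the receiving node, and any remaining packets are pulled by sending an allocation signal upstream. The sketch below is a hypothetical illustration of that logic; the message fields (session identifier, requested sequence numbers) and the send helpers are assumptions, not a definition of the protocol.

# Hypothetical allocation-signal handler at an intermediate node (illustration only).
def handle_allocation_signal(cache, session_id, requested_seqs,
                             send_downstream, send_allocation_upstream):
    # cache: any mapping of (session_id, seq) -> payload, e.g. PacketCache.packets above.
    missing = []
    for seq in requested_seqs:
        payload = cache.get((session_id, seq))
        if payload is not None:
            # Serve this part of the allocation from the local cache.
            send_downstream(session_id, seq, payload)
        else:
            missing.append(seq)
    if missing:
        # Not available locally: request the remaining packets from the upstream node
        # (the sending node or a further intermediate node) with a new allocation signal.
        send_allocation_upstream(session_id, missing)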

[0016] A further representative method may include the intermediate node: (1) sending upstream a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node; (2) receiving one or more data packets of the communication session; (3) selecting one of a plurality of operating modes including at least first and second operating modes; and (4) responsive to selection of the first operating mode, caching the received one or more data packets for forwarding toward the receiving node.

[0017] In certain representative embodiments, a determination may be made, based on a received signal from the receiving node or a downstream node, whether to cache the one or more data packets, as a first determination result, or to route the received data packets towards the receiving node, as a second determination result.

[0018] In certain representative embodiments, the selecting of the one operating mode may include selecting one of: (1) the first operating mode in which the received one or more data packets are to be cached by the intermediate node and then to be forwarded toward the receiving node in accordance with the first determination result; or (2) the second operating mode in which the received one or more data packets are to be routed toward the receiving node in accordance with the second determination result.

[0019] In certain representative embodiments, the intermediate node may receive a further signal sent by a downstream node or the receiving node indicating another allocation of data packets to be sent toward the receiving node; and may send cached data packets, cached by the intermediate node, toward the receiving node in accordance with the further signal.

[0020] In certain representative embodiments, the sending of the signal upstream by the intermediate node may be to an upstream node or the sending node.

[0021] In certain representative embodiments, the caching of the one or more data packets may be based on at least one of: (1) congestion of a transmission path downstream of the intermediate node; or (2) flow control in accordance with information sent by the downstream node or the receiving node.
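A minimal sketch of the two operating modes and the selection criteria described above follows. Whether the downstream path is congested and whether the downstream node or receiving node has signaled flow control are represented here as simple boolean inputs; how those conditions are measured is not specified by the disclosure.

# Illustrative operating-mode selection at an intermediate node (assumed inputs).
CACHE_AND_FORWARD = 1  # first operating mode: cache, then forward on a later allocation signal
ROUTE_THROUGH = 2      # second operating mode: route toward the receiving node without caching

def select_mode(downstream_congested, flow_control_hold):
    if downstream_congested or flow_control_hold:
        return CACHE_AND_FORWARD
    return ROUTE_THROUGH

def on_received_packet(mode, session_id, seq, payload, cache_admit, send_downstream):
    if mode == CACHE_AND_FORWARD:
        cache_admit(session_id, seq, payload)      # e.g. PacketCache.admit sketched earlier
    else:
        send_downstream(session_id, seq, payload)  # cut-through routing, no caching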

[0022] In certain representative embodiments, policies may be stored for processing data packets; the intermediate node may inspect the one or more received data packets for packet information used for packet processing; and may process the inspected packets based on the packet information and the stored policies.
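The policy-driven processing in the preceding paragraph could, for example, take the form of a lookup from inspected packet information to an action. The policy keys and actions below are purely hypothetical; the disclosure only states that stored policies and inspected packet information govern the processing.

# Hypothetical policy table and lookup for packet processing at an intermediate node.
POLICIES = {
    "video": "cache_and_forward",   # assumed example policy
    "default": "route_through",
}

def process_packet(packet_info):
    # packet_info: dict of fields inspected from the received packet (assumed).
    content_type = packet_info.get("content_type", "default")
    return POLICIES.get(content_type, POLICIES["default"])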

[0023] In certain representative embodiments, the caching of the one or more received data packets may include caching the one or more of the received data packets, responsive to available cache space at the intermediate node exceeding a threshold amount.

[0024] In certain representative embodiments, one or more of the received data packets may be dropped, responsive to available cache space being below a threshold amount.

[0025] Embodiments of the disclosure are directed to methods, systems and apparatus for routing data packets associated with first and second communication sessions between one or more sending nodes and a receiving node using first and second intermediate nodes. An additional representative method may include a receiving node: (1) determining a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; (2) sending to the first intermediate node a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; (3) sending to the second intermediate node a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and (4) receiving the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.
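For the multi-homed method just described, the receiving node must decide how to divide the outstanding data packets between the two communication sessions before sending the first and second allocation signals. The sketch below splits pending sequence numbers in proportion to an estimated per-path rate; the rate-proportional split and the path object's interface are assumptions used only to illustrate the receiver-driven step.

# Hypothetical receiver-side split of pending packets into two allocations (illustration only).
def split_allocation(pending_seqs, rate1, rate2):
    total = rate1 + rate2
    n1 = round(len(pending_seqs) * rate1 / total) if total > 0 else len(pending_seqs) // 2
    return pending_seqs[:n1], pending_seqs[n1:]

def request_allocations(pending_seqs, path1, path2):
    # path1/path2: objects with an estimated rate and a way to send an allocation
    # signal to their first-hop intermediate node (assumed interface).
    alloc1, alloc2 = split_allocation(pending_seqs, path1.est_rate, path2.est_rate)
    path1.send_allocation_signal(alloc1)  # first signal, toward the first intermediate node
    path2.send_allocation_signal(alloc2)  # second signal, toward the second intermediate node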

[0026] In certain representative embodiments, the receiving of the first allocation of data packets may be independent of the receiving of the second allocation of data packets.

[0027] In certain representative embodiments, a first transmission path for the first allocation of data packets to the receiving node may be independent of a second transmission path for the second allocation of data packets to the receiving node.

[0028] In certain representative embodiments, an estimated time of arrival at the receiving node of a respective data packet or data segment requested may be determined; and the respective data packet or the data segment for the first or second allocations or a subsequent allocation may be scheduled based on the estimated time of arrival.

[0029] In certain representative embodiments, the receiving of the first and second allocations of data packets may include receiving at least one data packet of the first and second allocations that has been cached at the first or second intermediate node.
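The estimated-time-of-arrival scheduling mentioned in paragraph [0028] above could be sketched as follows; the ETA model (queueing delay plus one-way delay) and the deadline comparison are assumptions for illustration only.

# Illustrative estimated-time-of-arrival (ETA) scheduling at the receiving node.
def estimated_arrival(now, queued_packets, rate_pps, one_way_delay):
    # Simple assumed model: queueing delay plus one-way propagation delay.
    return now + queued_packets / max(rate_pps, 1e-9) + one_way_delay

def schedule_packet(seq, now, path, deadline, current_allocation, next_allocation):
    eta = estimated_arrival(now, path.queued_packets, path.rate_pps, path.delay)
    if eta <= deadline:
        current_allocation.append(seq)   # request in the current (first or second) allocation
    else:
        next_allocation.append(seq)      # defer to a subsequent allocation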

[0030] In certain representative embodiments, the receiving node may implement one of Hop Pull Control Protocol (HoPCoP) or parallel HoPCoP (pHoPCoP).

[0031] One representative apparatus may include: (1) a storage unit configured to cache received data packets; (2) a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and (3) a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit. Responsive to the processor determining that one or more data packets of the allocation of data packets is to be provided by the cached packets, the transmitter/receiver unit may send one or more of the cached data packets.

[0032] Another representative apparatus may include: (1) a storage unit configured to cache received data packets; (2) a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and (3) a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit. Responsive to the processor determining that one or more data packets of the allocation of data packets is not to be provided by the cached packets, the transmitter/receiver unit may send one or more of the data packets forwarded from upstream nodes.

[0033] A further representative apparatus may include: (1) a storage unit configured to cache received data packets; (2) a transmitter/receiver unit configured to send upstream a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node and receive one or more data packets of the communication session; (3) a processor configured to determine whether to forward the received data packets towards the receiving node or to cache the one or more received data packets based on a received signal from the receiving node or a downstream node. Responsive to the processor determining that one or more data packets are to be cached by the storage unit, the storage unit caches the received one or more data packets subsequent to forwarding toward the receiving node.

[0034] One representative receiving node may include: (1) a processor configured to determine a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; and (2) a transmitter/receiver unit configured to: (i) send to the first intermediate node a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; (ii) send to the second intermediate node a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and (iii) receive the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.

BRIEF DESCRIPTION OF THE DRAWINGS

[0035] A more detailed understanding may be had from the following description, given by way of example in conjunction with the accompanying drawings. Figures in such drawings, like the detailed description, are examples. As such, the Figures and the detailed description are not to be considered limiting, and other equally effective examples are possible and likely. Furthermore, like reference numerals in the Figures indicate like elements, and wherein:

[0036] FIG. 1 A is a system diagram of a representative communication system in which one or more disclosed embodiments may be implemented;

[0037] FIG. 1B is a system diagram of a representative wireless transmit/receive unit (WTRU) that may be used within the communication system illustrated in FIG. 1A;

[0038] FIGS. 1C, 1D, and 1E are system diagrams of representative radio access networks and representative core networks that may be used within the communication system illustrated in FIGS. 1A, 2A and/or 2B;

[0039] FIGS. 2A and 2B are diagrams of representative communication systems in which one or more disclosed embodiments may be implemented;

[0040] FIGS. 3A and 3B are diagrams illustrating representative routing operations using an intermediate node;

[0041] FIG. 4 is a diagram illustrating another representative routing operation using a router;

[0042] FIG. 5 is a diagram illustrating a communication system in which one or more disclosed embodiments may be implemented with a multi-homed mobile host;

[0043] FIG. 6 is a diagram illustrating a further representative routing operation using the communication system of FIG. 5;

[0044] FIG. 7 is a timing diagram illustrating transmission timing of data packets;

[0045] FIG. 8 is a state diagram of the connection management for pHoPCoP;

[0046] FIG. 9 is a block diagram illustrating a representative router for implementing routing operations in accordance with one or more disclosed embodiments; and

[0047] FIG. 10 is a block diagram illustrating input and output parameters of an intermediate node.

DETAILED DESCRIPTION

[0048] Although the representative embodiments are generally shown hereafter using wireless network architectures, any number of different network architectures may be used including networks with wired components and/or wireless components, for example.

[0049] FIG. 1A is a diagram of a representative communications system 100 in which one or more disclosed embodiments may be implemented. The communications system 100 may be a multiple access system that provides content, such as data, video, messaging, broadcast, etc., to multiple wireless users. The communications system 100 may enable multiple users to access such content through the sharing of system resources, including wireless bandwidth. For example, the communications system 100 may employ one or more channel access methods, such as code division multiple access (CDMA), time division multiple access (TDMA), frequency division multiple access (FDMA), orthogonal FDMA (OFDMA), single-carrier FDMA (SC-FDMA), and the like.

[0050] As shown in FIG. 1A, the communication system 100 may include wireless transmit/receive units (WTRUs) 102a, 102b, 102c, 102d, a radio access network (RAN) 104, a core network 106, a public switched telephone network (PSTN) 108, the Internet 110, and other networks 112, though it will be appreciated that the disclosed embodiments contemplate any number of WTRUs, base stations, networks, and/or network elements. Each of the WTRUs 102a, 102b, 102c, 102d may be any type of device configured to operate and/or communicate in a wireless environment. By way of example, the WTRUs 102a, 102b, 102c, 102d may be configured to transmit and/or receive wireless signals and may include user equipment (UE), a mobile station, a fixed or mobile subscriber unit, a pager, a cellular telephone, a personal digital assistant (PDA), a smartphone, a laptop, a netbook, a personal computer, a wireless sensor, consumer electronics, and the like.

[0051] The communication system 100 may also include a base station 114a and a base station 114b. Each of the base stations 114a, 114b may be any type of device configured to wirelessly interface with at least one of the WTRUs 102a, 102b, 102c, 102d to facilitate access to one or more communication networks, such as the core network 106, the Internet 110, and/or the networks 112. By way of example, the base stations 114a, 114b may be a base transceiver station (BTS), a Node-B, an eNode B, a Home Node B, a Home eNode B, a site controller, an access point (AP), a wireless router, and the like. While the base stations 114a, 114b are each depicted as a single element, it will be appreciated that the base stations 114a, 114b may include any number of interconnected base stations and/or network elements.

[0052] The base station 114a may be part of the RAN 104, which may also include other base stations and/or network elements (not shown), such as a base station controller (BSC), a radio network controller (RNC), relay nodes, etc. The base station 114a and/or the base station 114b may be configured to transmit and/or receive wireless signals within a particular geographic region, which may be referred to as a cell (not shown). The cell may further be divided into cell sectors. For example, the cell associated with the base station 114a may be divided into three sectors. Thus, in one embodiment, the base station 114a may include three transceivers, i.e., one for each sector of the cell. In another embodiment, the base station 114a may employ multiple-input multiple output (MIMO) technology and, therefore, may utilize multiple transceivers for each sector of the cell.

[0053] The base stations 114a, 114b may communicate with one or more of the WTRUs 102a, 102b, 102c, 102d over an air interface 116, which may be any suitable wireless communication link (e.g., radio frequency (RF), microwave, infrared (IR), ultraviolet (UV), visible light, etc.). The air interface 116 may be established using any suitable radio access technology (RAT).

[0054] More specifically, as noted above, the communication system 100 may be a multiple access system and may employ one or more channel access schemes, such as CDMA, TDMA, FDMA, OFDMA, SC-FDMA, and the like. For example, the base station 114a in the RAN 104 and the WTRUs 102a, 102b, 102c may implement a radio technology such as Universal Mobile Telecommunication System (UMTS) Terrestrial Radio Access (UTRA), which may establish the air interface 116 using wideband CDMA (WCDMA). WCDMA may include communication protocols such as High-Speed Packet Access (HSPA) and/or Evolved HSPA (HSPA+). HSPA may include High-Speed Downlink Packet Access (HSDPA) and/or High-Speed Uplink Packet Access (HSUPA).

[0055] In another embodiment, the base station 114a and the WTRUs 102a, 102b, 102c may implement a radio technology such as Evolved UMTS Terrestrial Radio Access (E-UTRA), which may establish the air interface 116 using Long Term Evolution (LTE) and/or LTE-Advanced (LTE-A).

[0056] In other embodiments, the base station 114a and the WTRUs 102a, 102b, 102c may implement radio technologies such as IEEE 802.16 (i.e., Worldwide Interoperability for Microwave Access (WiMAX)), CDMA2000, CDMA2000 1X, CDMA2000 EV-DO, Interim Standard 2000 (IS-2000), Interim Standard 95 (IS-95), Interim Standard 856 (IS-856), Global System for Mobile communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), GSM EDGE (GERAN), and the like.

[0057] The base station 114b in FIG. 1A may be a wireless router, Home Node B, Home eNode B, or access point, for example, and may utilize any suitable RAT for facilitating wireless connectivity in a localized area, such as a place of business, a home, a vehicle, a campus, and the like. In one embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.11 to establish a wireless local area network (WLAN). In another embodiment, the base station 114b and the WTRUs 102c, 102d may implement a radio technology such as IEEE 802.15 to establish a wireless personal area network (WPAN). In yet another embodiment, the base station 114b and the WTRUs 102c, 102d may utilize a cellular-based RAT (e.g., WCDMA, CDMA2000, GSM, LTE, LTE-A, etc.) to establish a picocell or femtocell. As shown in FIG. 1A, the base station 114b may have a direct connection to the Internet 110. Thus, the base station 114b may not be required to access the Internet 110 via the core network 106.

[0058] The RAN 104 may be in communication with the core network 106, which may be any type of network configured to provide voice, data, applications, and/or voice over internet protocol (VoIP) services to one or more of the WTRUs 102a, 102b, 102c, 102d. For example, the core network 106 may provide call control, billing services, mobile location-based services, pre-paid calling, Internet connectivity, video distribution, etc., and/or perform high-level security functions, such as user authentication. Although not shown in FIG. 1A, it will be appreciated that the RAN 104 and/or the core network 106 may be in direct or indirect communication with other RANs that employ the same RAT as the RAN 104 or a different RAT. For example, in addition to being connected to the RAN 104, which may be utilizing an E-UTRA radio technology, the core network 106 may also be in communication with another RAN (not shown) employing a GSM radio technology.

[0059] The core network 106 may also serve as a gateway for the WTRUs 102a, 102b, 102c, 102d to access the PSTN 108, the Internet 110, and/or other networks 112. The PSTN 108 may include circuit-switched telephone networks that provide plain old telephone service (POTS). The Internet 110 may include a global system of interconnected computer networks and devices that use common communication protocols, such as the transmission control protocol (TCP), user datagram protocol (UDP) and the internet protocol (IP) in the TCP/IP internet protocol suite. The networks 112 may include wired or wireless communications networks owned and/or operated by other service providers. For example, the networks 112 may include another core network connected to one or more RANs, which may employ the same RAT as the RAN 104 or a different RAT.

[0060] Some or all of the WTRUs 102a, 102b, 102c, 102d in the communication system 100 may include multi-mode capabilities, i.e., the WTRUs 102a, 102b, 102c, 102d may include multiple transceivers for communicating with different wireless networks over different wireless links. For example, the WTRU 102c shown in FIG. 1A may be configured to communicate with the base station 114a, which may employ a cellular-based radio technology, and with the base station 114b, which may employ an IEEE 802 radio technology.

[0061] FIG. 1B is a system diagram of a representative WTRU 102. As shown in FIG. 1B, the WTRU 102 may include a processor 118, a transceiver 120, a transmit/receive element 122, a speaker/microphone 124, a keypad 126, a display/touchpad 128, non-removable memory 106, removable memory 132, a power source 134, a global positioning system (GPS) chipset 136, and other peripherals 138. It will be appreciated that the WTRU 102 may include any sub-combination of the foregoing elements while remaining consistent with an embodiment.

[0062] The processor 118 may be a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Field Programmable Gate Array (FPGAs) circuits, any other type of integrated circuit (IC), a state machine, and the like. The processor 118 may perform signal coding, data processing, power control, input/output processing, and/or any other functionality that enables the WTRU 102 to operate in a wireless environment. The processor 118 may be coupled to the transceiver 120, which may be coupled to the transmit/receive element 122. While FIG. 1B depicts the processor 118 and the transceiver 120 as separate components, it will be appreciated that the processor 118 and the transceiver 120 may be integrated together in an electronic package or chip.

[0063] The transmit/receive element 122 may be configured to transmit signals to, or receive signals from, a base station (e.g., the base station 114a) over the air interface 116. For example, in one embodiment, the transmit/receive element 122 may be an antenna configured to transmit and/or receive RF signals. In another embodiment, the transmit/receive element 122 may be an emitter/detector configured to transmit and/or receive IR, UV, or visible light signals, for example. In yet another embodiment, the transmit/receive element 122 may be configured to transmit and receive both RF and light signals. It will be appreciated that the transmit/receive element 122 may be configured to transmit and/or receive any combination of wireless signals.

[0064] In addition, although the transmit/receive element 122 is depicted in FIG. 1B as a single element, the WTRU 102 may include any number of transmit/receive elements 122. More specifically, the WTRU 102 may employ MIMO technology. Thus, in one embodiment, the WTRU 102 may include two or more transmit/receive elements 122 (e.g., multiple antennas) for transmitting and receiving wireless signals over the air interface 116.

[0065] The transceiver 120 may be configured to modulate the signals that are to be transmitted by the transmit/receive element 122 and to demodulate the signals that are received by the transmit/receive element 122. As noted above, the WTRU 102 may have multi-mode capabilities. Thus, the transceiver 120 may include multiple transceivers for enabling the WTRU 102 to communicate via multiple RATs, such as UTRA and IEEE 802.11, for example.

[0066] The processor 118 of the WTRU 102 may be coupled to, and may receive user input data from, the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128 (e.g., a liquid crystal display (LCD) display unit or organic light-emitting diode (OLED) display unit). The processor 118 may also output user data to the speaker/microphone 124, the keypad 126, and/or the display/touchpad 128. In addition, the processor 118 may access information from, and store data in, any type of suitable memory, such as the non-removable memory 106 and/or the removable memory 132. The non-removable memory 106 may include random-access memory (RAM), read-only memory (ROM), a hard disk, or any other type of memory storage device. The removable memory 132 may include a subscriber identity module (SIM) card, a memory stick, a secure digital (SD) memory card, and the like. In other embodiments, the processor 118 may access information from, and store data in, memory that is not physically located on the WTRU 102, such as on a server or a home computer (not shown).

[0067] The processor 118 may receive power from the power source 134, and may be configured to distribute and/or control the power to the other components in the WTRU 102. The power source 134 may be any suitable device for powering the WTRU 102. For example, the power source 134 may include one or more dry cell batteries (e.g., nickel-cadmium (NiCad), nickel-zinc (NiZn), nickel metal hydride (NiMH), lithium-ion (Li-ion), etc.), solar cells, fuel cells, and the like.

[0068] The processor 118 may also be coupled to the GPS chipset 136, which may be configured to provide location information (e.g., longitude and latitude) regarding the current location of the WTRU 102. In addition to, or in lieu of, the information from the GPS chipset 136, the WTRU 102 may receive location information over the air interface 116 from a base station (e.g., base stations 114a, 114b) and/or determine its location based on the timing of the signals being received from two or more nearby base stations. It will be appreciated that the WTRU 102 may acquire location information by way of any suitable location-determination method while remaining consistent with an embodiment.

[0069] The processor 118 may further be coupled to other peripherals 138, which may include one or more software and/or hardware modules that provide additional features, functionality and/or wired or wireless connectivity. For example, the peripherals 138 may include an accelerometer, an e-compass, a satellite transceiver, a digital camera (for photographs or video), a universal serial bus (USB) port, a vibration device, a television transceiver, a hands free headset, a Bluetooth® module, a frequency modulated (FM) radio unit, a digital music player, a media player, a video game player module, an Internet browser, and the like.

[0070] FIG. 1C is a system diagram of the RAN 104 and the core network 106 according to an embodiment. As noted above, the RAN 104 may employ a UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the core network 106. As shown in FIG. 1C, the RAN 104 may include Node-Bs 140a, 140b, 140c, which may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. The Node-Bs 140a, 140b, 140c may each be associated with a particular cell (not shown) within the RAN 104. The RAN 104 may also include RNCs 142a, 142b. It will be appreciated that the RAN 104 may include any number of Node-Bs and RNCs while remaining consistent with an embodiment.

[0071] As shown in FIG. 1C, the Node-Bs 140a, 140b may be in communication with the RNC 142a. Additionally, the Node-B 140c may be in communication with the RNC 142b. The Node-Bs 140a, 140b, 140c may communicate with the respective RNCs 142a, 142b via an Iub interface. The RNCs 142a, 142b may be in communication with one another via an Iur interface. Each of the RNCs 142a, 142b may be configured to control the respective Node-Bs 140a, 140b, 140c to which it is connected. In addition, each of the RNCs 142a, 142b may be configured to carry out or support other functionality, such as outer loop power control, load control, admission control, packet scheduling, handover control, macrodiversity, security functions, data encryption, and the like.

[0072] The core network 106 shown in FIG. 1C may include a media gateway (MGW) 144, a mobile switching center (MSC) 146, a serving GPRS support node (SGSN) 148, and/or a gateway GPRS support node (GGSN) 150. While each of the foregoing elements are depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0073] The RNC 142a in the RAN 104 may be connected to the MSC 146 in the core network 106 via an IuCS interface. The MSC 146 may be connected to the MGW 144. The MSC 146 and the MGW 144 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices.

[0074] The RNC 142a in the RAN 104 may also be connected to the SGSN 148 in the core network 106 via an IuPS interface. The SGSN 148 may be connected to the GGSN 150. The SGSN 148 and the GGSN 150 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.

[0075] As noted above, the core network 106 may also be connected to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0076] FIG. 1D is a system diagram of the RAN 104 and the core network 106 according to another embodiment. As noted above, the RAN 104 may employ an E-UTRA radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. The RAN 104 may also be in communication with the core network 106.

[0077] The RAN 104 may include eNode-Bs 160a, 160b, 160c, though it will be appreciated that the RAN 104 may include any number of eNode-Bs while remaining consistent with an embodiment. The eNode-Bs 160a, 160b, 160c may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the eNode-Bs 160a, 160b, 160c may implement MIMO technology. Thus, the eNode-B 160a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a.

[0078] Each of the eNode-Bs 160a, 160b, 160c may be associated with a particular cell (not shown) and may be configured to handle radio resource management decisions, handover decisions, scheduling of users in the uplink and/or downlink, and the like. As shown in FIG. 1D, the eNode-Bs 160a, 160b, 160c may communicate with one another over an X2 interface.

[0079] The core network 106 shown in FIG. 1D may include a mobility management gateway (MME) 162, a serving gateway 164, and a packet data network (PDN) gateway 166. While each of the foregoing elements are depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0080] The MME 162 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via an S1 interface and may serve as a control node. For example, the MME 162 may be responsible for authenticating users of the WTRUs 102a, 102b, 102c, bearer activation/deactivation, selecting a particular serving gateway during an initial attach of the WTRUs 102a, 102b, 102c, and the like. The MME 162 may also provide a control plane function for switching between the RAN 104 and other RANs (not shown) that employ other radio technologies, such as GSM or WCDMA.

[0081] The serving gateway 164 may be connected to each of the eNode-Bs 160a, 160b, 160c in the RAN 104 via the S1 interface. The serving gateway 164 may generally route and forward user data packets to/from the WTRUs 102a, 102b, 102c. The serving gateway 164 may also perform other functions, such as anchoring user planes during inter-eNode B handovers, triggering paging when downlink data is available for the WTRUs 102a, 102b, 102c, managing and storing contexts of the WTRUs 102a, 102b, 102c, and the like.

[0082] The serving gateway 164 may also be connected to the PDN gateway 166, which may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices.

[0083] The core network 106 may facilitate communications with other networks. For example, the core network 106 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. For example, the core network 106 may include, or may communicate with, an IP gateway (e.g., an IP multimedia subsystem (IMS) server) that serves as an interface between the core network 106 and the PSTN 108. In addition, the core network 106 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0084] FIG. 1E is a system diagram of the RAN 104 and the core network 106 according to another embodiment. The RAN 104 may be an access service network (ASN) that employs IEEE 802.16 radio technology to communicate with the WTRUs 102a, 102b, 102c over the air interface 116. As will be further discussed below, the communication links between the different functional entities of the WTRUs 102a, 102b, 102c, the RAN 104, and the core network 106 may be defined as reference points.

[0085] As shown in FIG. 1E, the RAN 104 may include base stations 170a, 170b, 170c, and an ASN gateway 172, though it will be appreciated that the RAN 104 may include any number of base stations and ASN gateways while remaining consistent with an embodiment. The base stations 170a, 170b, 170c may each be associated with a particular cell (not shown) in the RAN 104 and may each include one or more transceivers for communicating with the WTRUs 102a, 102b, 102c over the air interface 116. In one embodiment, the base stations 170a, 170b, 170c may implement MIMO technology. Thus, the base station 170a, for example, may use multiple antennas to transmit wireless signals to, and receive wireless signals from, the WTRU 102a. The base stations 170a, 170b, 170c may also provide mobility management functions, such as handoff triggering, tunnel establishment, radio resource management, traffic classification, quality of service (QoS) policy enforcement, and the like. The ASN gateway 172 may serve as a traffic aggregation point and may be responsible for paging, caching of subscriber profiles, routing to the core network 106, and the like.

[0086] The air interface 116 between the WTRUs 102a, 102b, 102c and the RAN 104 may be defined as an R1 reference point that implements the IEEE 802.16 specification. In addition, each of the WTRUs 102a, 102b, 102c may establish a logical interface (not shown) with the core network 106. The logical interface between the WTRUs 102a, 102b, 102c and the core network 106 may be defined as an R2 reference point, which may be used for authentication, authorization, IP host configuration management, and/or mobility management.

[0087] The communication link between each of the base stations 170a, 170b, 170c may be defined as an R8 reference point that includes protocols for facilitating WTRU handovers and the transfer of data between base stations. The communication link between the base stations 170a, 170b, 170c and the ASN gateway 172 may be defined as an R6 reference point. The R6 reference point may include protocols for facilitating mobility management based on mobility events associated with each of the WTRUs 102a, 102b, 102c.

[0088] As shown in FIG. 1E, the RAN 104 may be connected to the core network 106. The communication link between the RAN 104 and the core network 106 may be defined as an R3 reference point that includes protocols for facilitating data transfer and mobility management capabilities, for example. The core network 106 may include a mobile IP home agent (MIP-HA) 174, an authentication, authorization, accounting (AAA) server 176, and a gateway 178. While each of the foregoing elements is depicted as part of the core network 106, it will be appreciated that any one of these elements may be owned and/or operated by an entity other than the core network operator.

[0089] The MIP-HA 174 may be responsible for IP address management, and may enable the WTRUs 102a, 102b, 102c to roam between different ASNs and/or different core networks. The MIP-HA 174 may provide the WTRUs 102a, 102b, 102c with access to packet-switched networks, such as the Internet 110, to facilitate communications between the WTRUs 102a, 102b, 102c and IP-enabled devices. The AAA server 176 may be responsible for user authentication and for supporting user services. The gateway 178 may facilitate interworking with other networks. For example, the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to circuit-switched networks, such as the PSTN 108, to facilitate communications between the WTRUs 102a, 102b, 102c and traditional land-line communications devices. In addition, the gateway 178 may provide the WTRUs 102a, 102b, 102c with access to the networks 112, which may include other wired or wireless networks that are owned and/or operated by other service providers.

[0090] Although not shown in FIG. 1E, it will be appreciated that the RAN 104 may be connected to other ASNs and the core network 106 may be connected to other core networks. The communication link between the RAN 104 and the other ASNs may be defined as an R4 reference point, which may include protocols for coordinating the mobility of the WTRUs 102a, 102b, 102c between the RAN 104 and the other ASNs. The communication link between the core network 106 and the other core networks may be defined as an R5 reference point, which may include protocols for facilitating interworking between home core networks and visited core networks.

[0091] Although the receiver is described in FIGS. 1A-1E as a wireless terminal, it is contemplated that, in certain representative embodiments, such a terminal may use wired communication interfaces with the communication network.

[0092] A mobile user may choose from a wide range of technologies to access networks, such as GPRS, EDGE, 3G and/or 4G for wide area access, and/or WiFi for local area access. Mobile hosts are increasingly becoming multi-homed (e.g., connected via multiple access technologies and/or multiple access points) and may possess two or more heterogeneous interfaces. Internet content is being increasingly distributed (e.g., over a "cloud") such that content delivery is becoming more complex (e.g., to get the right content from the right location).

[0093] In certain representative embodiments, content may be provided to a receiver node from a plurality of transmitter nodes so that the receiver node may receive the desired content from multiple sources and choose which copy of the content (or portions thereof) it wishes to use in order to obtain the content in as efficient a manner as possible.

[0094] Consider a consumer desiring to view a motion picture via an on demand viewing option. The consumer commonly will not care about the source of the desired content (the motion picture), but merely about obtaining it as quickly as possible. In a conventional TCP type scenario, the consumer's device (e.g., a mobile host) will request the content by identifying a specific source node of the content, e.g., the server node of a cable television network. This is true even if an intermediate router in the network positioned between the server node and the mobile host contains a cached copy of the same content (which may be the case, if, for example, the consumer's neighbor just ordered the same motion picture on demand so that the local router serving that particular neighborhood has a cached copy of the movie). In such a case, it would be a more efficient use of network resources for the mobile host to obtain the content from the local router rather than from the network server node, which will transmit the same content to the mobile host through the very same router that already contains a cached copy of the content. In other scenarios, it may be efficient to obtain content from multiple source nodes on the network, either by obtaining different portions of the overall content from different source nodes on the network or requesting the same content from multiple source nodes and using the copy of the content that arrives at the mobile host first or with the highest QoS.

[0095] In certain representative embodiments, a multi-homed mobile host fully utilizes all available interfaces (wireless and wired) to efficiently receive content. To do so, it may first query the content management service to find the locations of content providers, and then download the content from those content providers through its multiple interfaces.

[0096] In certain representative embodiments, a multi-homed wireless device (e.g., a mobile host, mobile device, netbook and/or UE, among others) may access or receive (e.g., efficiently access or receive) content (e.g., internet-based content).

[0097] In certain representative embodiments, a multi-homed mobile host may use (e.g., may fully utilize) a subset or all of the available interfaces (e.g., wireless and/or wired) to send content or to receive content (e.g., efficiently receive content).

[0098] FIG. 2A is a diagram of a representative communication system communicating with a mobile host via multiple interfaces.

[0099] Referring to FIG. 2A, the communication system may include a content provider 202, one or more routers 204 (e.g., with a subset or all of the routers having in-network cache capabilities), a content management service/server 206 and/or a mobile host 208. The mobile host 208 may desire to download content from the content provider 202. The mobile host 208 may query the content management server 206 to find or determine locations (or addresses such as IP addresses) of the content providers having the desired content. The query may include a content identifier (e.g., a unique identifier) that may identify the content desired by the mobile host 208. The content management service or server 206 may store a table (not shown) of records with locations (e.g., a pointer to the content) that are associated with respective content identifiers and the name of the content stored at the respective location. The content management server 206 may solicit (e.g., pull) the information for the stored records from the content providers and other network resources (e.g., cache and/or packet processing capable routers) on the communication system or may receive the information pushed to the content management server from the content providers and/or the other network resources on the communication system.
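
By way of illustration only, and not as part of the claimed subject matter, the following minimal sketch (in Python) models the table of records described above. The class and method names (ContentRecord, ContentManagementService, register, lookup) and the example identifiers and addresses are hypothetical and are used solely to illustrate the mapping from a content identifier to candidate locations.

# Minimal sketch of a content management service that maps content
# identifiers to the locations (e.g., IP addresses) holding the content.
# All class, method and example names here are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContentRecord:
    content_id: str                               # unique identifier for the content
    name: str                                     # human-readable content name
    locations: set = field(default_factory=set)   # addresses of providers/caches

class ContentManagementService:
    def __init__(self):
        self._table = {}                          # content_id -> ContentRecord

    def register(self, content_id, name, location):
        # Information may be pushed by providers/caches or pulled by the service.
        record = self._table.setdefault(content_id, ContentRecord(content_id, name))
        record.locations.add(location)

    def lookup(self, content_id):
        # Query issued by the mobile host: returns candidate locations, if any.
        record = self._table.get(content_id)
        return sorted(record.locations) if record else []

# Example query by a mobile host (hypothetical identifiers and addresses):
cms = ContentManagementService()
cms.register("content-42", "Example Movie", "203.0.113.10")   # origin server
cms.register("content-42", "Example Movie", "198.51.100.7")   # in-network cache
print(cms.lookup("content-42"))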

[0100] Although the content management service/server 206 is shown as a single device, it is contemplated that the service may be distributed and use a plurality of devices/servers (e.g., each associated with a different portion of the communication network, for example, based on the location and/or network domain of the content).

[0101] The communication system may include one or more different networks (e.g., a local area network and/or a wide area network) and/or access technologies (e.g., UMTS, LTE, WiMAX, GPRS, EDGE, 3G, 4G, Ethernet, and/or WiFi, among others). The mobile host 208 may connect to the communication system 200 via one or more access points (APs). The APs may be of the same or a different access technology. The mobile host 208 may download the desired content from the content provider 202 via one or multiple APs using corresponding interfaces. The mobile host 208 may request the content from the content provider 202 via one of the transmission paths between the content provider and the mobile host. The transmission paths may be based on or in accordance with established interfaces between the mobile host and the access networks. For example, the mobile host 208 may be multi-homed. Multi-homing generally refers to a host or mobile host having more than one globally routable address (e.g., IP address) at the same time, which may be linked together. In certain representative embodiments, the host (e.g., mobile host) may have multiple interfaces and each interface may have one or more IP addresses. If one of the interfaces (e.g., communication links) fails, its IP address or addresses may become unreachable and the other IP addresses associated with the other interfaces may still operate.

[0102] In certain representative embodiments, the mobile host 208 may announce the IP address space on its upstream links to its upstream nodes. When one of the links fails, the transport protocol (e.g., the Hop Pull Control Protocol (HoPCoP) and/or the parallel HoPCoP (pHoPCoP) protocol) may notice the failure on both sides and the traffic may not be sent over the failed link.

[0103] In certain representative embodiments, the mobile host may determine, using a content management service, one or more content providers and may map the desired content to one or more of the determined content providers. Paths (e.g., via multiple interfaces, such as a plurality of heterogeneous interfaces) from the determined content provider or providers to the mobile host may be independent from each other. Each determined content provider may have the complete content desired, or may have a subset (e.g., only a part) of the desired content.

[0104] In certain representative embodiments, a subset or all of the content may have been cached at intermediate nodes or routers. In certain representative embodiments, transmission may be real-time or near real-time (e.g., for interactive content) or may be non-real-time.

[0105] In certain representative embodiments, a discovery mechanism may be provided to identify each segment of the desired content to the network. For example, the desired content may include video, audio, and/or text. The discovery mechanism may separately identify the video, audio and/or text with sub-content identifiers in addition to or in lieu of the content identifier for the entirety of the desired content. The content or sub-content identifiers may be used by the cache-capable nodes or routers (which may also have packet processing capabilities) to selectively cache the content along the transmission path or paths from the content provider to the mobile host in accordance with cache rules or policies.

[0106] In certain representative embodiments, the transmission path or paths may include a plurality of hops using, for example, intermediate nodes or routers having routing functionality. The intermediate nodes may include other functions or modules in addition to the routing function (or module). For example, the intermediate nodes may have a storage module (for caching data) and may be able to perform other data operations such as flow detection, flow control, data security operations, and/or intrusion monitoring, among others.

[0107] By way of example, the mobile host 208 may have multiple heterogeneous interfaces 210, 212 and may be connected to multiple networks (e.g., heterogeneous access networks). The conventional transport layer protocol for the Internet, the Transmission Control Protocol (TCP), is an end-to-end, sender-driven protocol: the data sender performs control tasks including flow control, congestion control, and reliability; the receiver participates in the operation by sending feedback in the form of acknowledgements; and the intermediate nodes only forward data packets based on an internal routing table.

[0108] In certain representative embodiments, a hop-by-hop (HBH) receiver-driven transport protocol for mobile hosts with heterogeneous wireless interfaces may be used. For example, the last hop to the mobile host may or may not be a wireless link. For example, the HBH transport scheme may be used to enhance the operation of intermediate nodes. In the conventional Internet, end-to-end control principles are used such that the intermediate nodes use their best efforts while end points of the network are responsible for controlling functionality such as error control, congestion control, and flow control. In certain representative embodiments, the intermediate nodes may include some control functions in addition to packet routing functions or operations (e.g., that route data packets traversing a particular intermediate node). HBH routing may provide an Internet Service Provider (ISP) more control (e.g., significant control) over traffic traversing its network to allow the ISP (e.g., mobile network operators) the ability to influence network usage. For HBH transmission, two or more hops may form a segment. For example, using HBH transmission, selected ones of the nodes or routers on the communication network may be HBH-capable while the remaining nodes or routers may be conventional such that two or more hops may be considered a path segment and may include at least one HBH-capable node or router. By aggregating nodes in path segments, higher-level operations (such as caching and/or flow control) of the nodes or routers may be implemented in a minimum or optimum number of nodes or routers without changing the TCP for each of the other nodes or routers. The HBH-capable nodes may be nodes of a first type with a first capability and non-HBH nodes may be nodes of a second type with a second capability.

[0109] Although the HBH-capable nodes are described as having one capability, it is contemplated that the HBH-capable nodes may be any number of different types with differing capabilities to perform selected operations as intermediate nodes in the transmission path. For example, a particular HBH-capable node may include one or more capabilities such as: (1) routing capabilities; (2) cache capabilities; (3) flow detection capabilities; (4) flow routing capabilities; (5) congestion control capabilities; and/or (6) security control capabilities, among others.

[0110] The receiver-driven transport scheme may enable a receiver to have more knowledge and/or more timely knowledge about the receiver's operations and/or the wireless environment. The receiver-driven transport scheme may provide control of data transmission at the receiver. Since a multi-homed mobile host may be connected to multiple access networks and different access networks may have different (e.g., completely different) characteristics, a receiver-driven transport scheme may be used to deal with the receiver side heterogeneity to aggregate bandwidth more efficiently than a sender-driven transport scheme. The receiver-driven transport scheme, in which the receiver may control how much and which data to receive through its multiple interfaces, may improve receiver side heterogeneity operations.

[01 1 1 ] For example, by providing a multi-homed mobile host with control of the transport of data packets, the mobile host may send an allocation signal or message indicating a particular quantity of traffic (e.g., data packets) over respective interfaces. In turn, the next upstream node may receive the allocation signal and may determine based on the traffic with its next upstream node which portions of the traffic (e.g., data packets) to send to the mobile host and which traffic, if any, to cache. The caching of traffic may be based on congestion or other policies or rules that are pre-established or provided by the network operator. Each HBH node may independently determine, based on localized congestion or other policies (e.g., localized or global), whether to cache the traffic or data packets and/or provide higher level functions to the traffic or data packets. In certain representative embodiments, the HBH nodes may provide flow control based on flow indicators in the data packet headers.

[0112] By providing in-network caching at the HBH-capable nodes, when the connection of the mobile host is transient, data packets or traffic cached along a transmission path may be used by an intermediate HBH node instead of being retransmitted from the sender. For example, the HBH node may receive with each allocation signal or message the pending data packets to be received by the mobile host and may determine whether to request a respective pending data packet from the next upstream node or whether the respective pending data packet has been cached locally in the HBH node.
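
By way of a non-limiting illustration, the following minimal sketch (in Python) shows the decision described above: for each pending packet identified in an allocation signal, an HBH-capable node either serves the packet from its local cache or requests it from the next upstream node. The names HbhNode, serve_allocation, send_downstream and request_upstream are hypothetical and are not defined by this disclosure.

# Illustrative sketch of an HBH-capable intermediate node handling an
# allocation signal from a downstream receiver. All names are assumptions.
class HbhNode:
    def __init__(self):
        self.cache = {}          # sequence number -> cached data packet

    def serve_allocation(self, allocation, send_downstream, request_upstream):
        # allocation: iterable of sequence numbers the receiver expects;
        # send_downstream / request_upstream: transmit primitives supplied
        # by the caller (e.g., per-interface send functions).
        for seq in allocation:
            if seq in self.cache:
                # Serve the locally cached copy instead of pulling it again.
                send_downstream(seq, self.cache[seq])
            else:
                # Not cached; pull the packet from the next upstream node.
                request_upstream(seq)

# Example use with print-based stand-ins for the transmit primitives:
node = HbhNode()
node.cache[3] = b"cached-segment-3"
node.serve_allocation(
    allocation=[1, 2, 3],
    send_downstream=lambda seq, data: print("send cached segment", seq),
    request_upstream=lambda seq: print("request segment", seq, "upstream"),
)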

[01 13] FIG. 2B is a diagram of another representative communication system communicating with a mobile host via multiple interfaces to download content from a plurality of content providers. The representative communication system 250 of FIG. 2B is similar to that of FIG. 2A with the exception that it includes the plurality of content providers.

[0114] Referring to FIG. 2B, the communication system 250 may include a plurality of content providers 202a, 202b, a plurality of routers 204 (e.g., with a subset or all of the routers having in-network cache capabilities), a content management service/server 206 and/or a mobile host 208. The mobile host 208 may desire to download content that may be distributed among a plurality of the content providers. For example, the mobile host may query the content management server to find or determine locations (or addresses such as IP addresses) of the content providers 202a, 202b having the desired content. The query may include a content identifier (e.g., a unique identifier) that may identify the content desired by the mobile host 208. The content management service 206 may store a table (not shown) of records with locations that are associated with respective content identifiers and the name of the content stored at the respective location. The content management server 206 may solicit (e.g., pull) the information for the stored records from the content providers 202a, 202b and other network resources or may receive the information pushed to the content management server from the content providers and/or the other network resources.

[0115] The mobile host 208 may download the desired content from the content providers 202a, 202b via one or multiple APs using corresponding interfaces, connections and/or sessions. In one example, the mobile host 208 may request a first portion of the content from a first content provider 202a via a first one of the transmission paths (connections and/or sessions) between the first content provider 202a and the mobile host 208 and may request a second portion of the content from a second content provider 202b via a second one of the transmission paths (connections and/or sessions) between the second content provider 202b and the mobile host 208. In another example, the mobile host 208 may request a portion of the content from a first content provider via a first one of the transmission paths (connections and/or sessions) between the first content provider 202a and the mobile host 208 and also request the same portion of the content from a second content provider 202b via a second one of the transmission paths (connections and/or sessions) between the second content provider 202b and the mobile host 208.

[0116] In certain representative embodiments, the host or mobile host may set up or establish a first IP session with the first content provider using a first IP address and may set up or establish a second IP session with the second content provider using a second IP address. For example, the mobile host may be multi-homed.

[01 17] Each content provider may have the complete content desired, or may have a subset (e.g., only a part) of the desired content. In certain representative embodiments, a subset or all of the content may have been cached at intermediate nodes or routers and may be retrieved from the intermediate nodes instead of a content provider.

[01 18] The discovery mechanism may identify each segment of the desired content such that the video may be stored at the first content provider and the audio may be stored at the second content provider. Sub-content identifiers may be used to retrieve the video content via a first transmission path from the first content provider and the audio content via a second transmission path from the second content provider.

[01 19] It is contemplated that content delivery may be improved using advanced routers (e.g., cache-capable or packet processing capable routers). Each router using a HBH protocol may act as an independent agent that may accept input from "downstream" routers and data from "upstream" routers. In response to and/or based on the upstream data and/or the downstream input, an intermediate router may make decisions regarding which data to send, cache, discard and/or otherwise process, among others. In certain representative embodiments, one or more different types of routers or routing operations may be used and may enable effective network operation, for example, for content delivery.

[0120] For example, the end-to-end principle may treat the transmission path as a data pipeline and may contemplate the use of a continuous source-to-destination path. End-to-end protocols such as TCP may keep in-network functions to a minimum and may push service-specific complexity to the end-points at the edge of the network. However, loss of a data packet during transmission, for example using TCP/IP, due to a discontinuous path (or intermittent session/connection) may cause (e.g., may always cause) retransmission of the data packet from the sender. By using HBH transport, data may be pulled closer to the receiver such that retransmission after a packet loss may be provided from the last HBH intermediate router to have received the packet prior to the packet loss.

[0121 ] Because the capabilities (e.g., router memory and/or processing speed, among others) of routers continue to increase, the routers may provide an opportunity for data management (e.g., including routing) that may include, for example, the use of the HBH-capable routers for intermediate storage and to implement the HBH scheme (e.g., the HBH-capable routers may be able to decide whether to send data from a local cache or wait for it to be transmitted from an upstream node). In certain representative embodiments, when packet loss occurs, the receiver may fetch the packet from a closer location (e.g., an intermediate router that may have cached the data packet).

[0122] Because HBH schemes may provide control/processing functionality (e.g., beyond packet routing) to the intermediate routers of the ISP and/or network operator (NO), the ISPs and mobile NOs, which may control policies associated with the routers, may gain or improve control over their networks and network utilization (which may enable them to adopt their own control mechanisms and policies for the data traffic traversing their networks). For example, an enhanced router may, based on pre-established or dynamic policies, choose to: (1) forward some or all of its packets; (2) cache some or all of its packets; (3) drop some or all of its packets; (4) restrict its traffic throughput to certain receivers; (5) limit its data throughput rates; and/or (6) adapt other types of policies for packet processing, among others. The enhanced routers may: (1) enhance the ISP's traffic control functions; (2) provide flexibility, for example, for caching and retransmission operations; (3) provide faster reaction to changing link/NO conditions; and/or (4) provide higher network utilization.
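
As a purely illustrative sketch (not the claimed policy mechanism), the following Python fragment shows how the per-packet policy choices enumerated above might be expressed as a simple decision function. The policy table, its keys, the decide() helper and the example thresholds are hypothetical assumptions.

# Minimal sketch of policy-driven packet handling at an enhanced router.
POLICY = {
    "cacheable_flows": {"flow-A"},    # flows whose packets may be cached
    "blocked_flows": set(),           # flows whose packets are dropped
    "rate_limit_pps": 1000,           # per-flow throughput ceiling (packets/s)
}

def decide(flow_id, current_rate_pps, policy=POLICY):
    # Returns one of: 'drop', 'delay', 'cache_and_forward', 'forward'.
    if flow_id in policy["blocked_flows"]:
        return "drop"                       # drop some or all packets
    if current_rate_pps > policy["rate_limit_pps"]:
        return "delay"                      # restrict throughput to the receiver
    if flow_id in policy["cacheable_flows"]:
        return "cache_and_forward"          # cache and forward the packet
    return "forward"                        # default: forward only

print(decide("flow-A", current_rate_pps=200))    # -> cache_and_forward
print(decide("flow-B", current_rate_pps=5000))   # -> delay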

[0123] In a typical end-to-end controlling scheme, round trip delay may be large and may delay changes associated with congestion control window operations, for example in a dynamic environment, such that communication may be adversely affected. For example, the congestion control window operations may not change quickly enough, causing loss of data packets. For a HBH controlling scheme, feedback information (e.g., regarding the congestion control window) may be available faster than for the end-to-end controlling scheme because of the smaller distance between hops. An enhanced HBH router may react more quickly to network changes due to the reduced lag time (e.g., compared to the lag associated with the end-to-end controlling scheme). For such a HBH controlling scheme, packets may be stored: (1) at the bottlenecked node; (2) at upstream nodes (e.g., nodes prior to the bottlenecked node) along the path from the sending node to the receiving node; and (3) at later nodes (e.g., nodes subsequent to the bottlenecked node) along the path from the sending node to the receiving node. When the bottleneck capacity increases, the HBH scheme may utilize the increased capacity more quickly.

[0124] It is contemplated that a receiver-driven routing mechanism may be implemented for routing in which the receiver may drive the progress of data transmission, e.g., the receiver may control the allocation of traffic along the transmission paths and certain end-to-end operations with the sender (e.g., connection management and reliability), while the sender's role may be minimized (e.g., to that of responding to the receiver's directions and end-to-end operations and sending corresponding feedback to the receiver).

[0125] A receiver-driven routing mechanism may satisfy the receiver's desires and/or requirements more easily and may give the receiver full control of data transmission. As it is the content consumer, the receiver may be able to obtain firsthand knowledge regarding its operations and/or the receiver-side network environment. With this knowledge, the receiver may adapt its behavior in a more timely and accurate manner. In a multipoint-to-point scenario, as there are multiple content providers, the receiver (e.g., only the receiver) may know how much and which data to receive from each content provider.

[0126] A receiver-driven routing mechanism may support heterogeneous wireless interfaces. A mobile host may be equipped with multiple heterogeneous interfaces to get the performance tradeoffs that different access technologies (e.g., radio access technologies (RATs) and/or non-radio access technologies such as WiFi, LTE, WLAN and/or Ethernet, among others) exhibit regarding network capacity, mobility support, coverage area, and transmission power. The availability of heterogeneous interfaces may enable seamless handoffs and bandwidth aggregation, but may give rise to new challenges to existing transport protocols in terms of their functionalities. Different access networks may have different network characteristics and may use corresponding network specific transport control protocols. For a sender-driven routing mechanism, the sender may not (e.g., not easily) determine the receiver side network characteristics as the sender is typically remote (e.g., far away) from the receiver. The network specific transport control protocols may be implemented at a backbone server and the backbone server may select (e.g., choose) a suitable transport protocol for a specific connection. A receiver-driven routing scheme may be advantageous in order to deal with receiver side heterogeneity, as it may be easier for the receiver to choose a corresponding congestion control scheme for its interface (e.g., one or more multi-homed interfaces). In the multipoint-to-point data transmission scenario, the receiver may be disposed in the center of control and may coordinate (e.g., easily coordinate) the transmission of multiple senders internally, without any explicit coordination between the senders themselves.

[0127] A receiver-driven routing mechanism may support specific characteristics of content delivery. Conventionally, the sender has specific content to send to the receiver and it is the sender that knows the amount of content to be sent and/or the QoS, among others. In content delivery networks, the content "store" has a lot of content, but the receiver may desire a portion (e.g., a subset) of the stored content. The receiver may know how well the content is to be resolved, how fast the receiver desires the content, and/or the specific pieces the receiver is requesting.

[0128] In certain representative embodiments, a HBH receiver-driven transport protocol may be used such as Hop Pull Control Protocol (HoPCoP). Receiver-driven generally refers to the transport control functions being accomplished at the receiver side (e.g., moving from a sender-driven transport control protocol such as TCP to a receiver side protocol). HBH principles may emphasize the participation of intermediate nodes in transport control. In certain representative embodiments, control functions may be distributed among senders, receivers, and intermediate routers. For HoPCoP, the receiver may have one (e.g., only one) active connection (e.g., communication session) when communicating with the network (e.g., downloading specific content) and, for parallel HoPCoP (pHoPCoP), the receiver may have multiple active connections (e.g., communication sessions) when communicating with the network (e.g., downloading specific content).

[0129] FIG. 3A is a diagram illustrating a sender/receiver architecture with an intermediate node and FIG. 3B is a diagram illustrating the protocol stacks used in the architecture of FIG. 3A.

[0130] Referring to FIG. 3A, the receiver 305 may request content from the sender 301 via the intermediate node 303. For example, the receiver 305 may send a request 307, which may be forwarded by the intermediate node 303 to the sender 301 . Responsive to reception of the forwarded request, the sender 301 may send data 309 (e.g., data packets) which may be forwarded by the intermediate node 303 to the receiver 305.

[0131 ] Referring now to FIG. 3B, the receiver 305 may include a protocol stack having a network layer 305a, a hop-to-hop sublayer 305b, an end-to-end sublayer 305c and an application layer 305d. The sender may include an identical protocol stack to that of the receiver. The intermediate node 303 may include a protocol stack having a network layer 303a and the hop-to-hop sublayer 305b. In certain exemplary embodiments, the hop-to-hop and the end-to-end sublayers may replace a TCP protocol layer. As shown in FIG. 3B, the sending of the request and the reception of the data may be hop-by-hop operations via the hop-to-hop sublayer. For example, the traffic (e.g., data packets) may be controlled at each intermediate node based on flow and/or congestion, among other things, and connection management and reliability may be controlled by end-to-end sublayers of the protocol stacks at the sender and the receiver.

[0132] In certain representative embodiments, a HBH transport protocol, Hop Pull Control Protocol (HoPCoP), may be used for hosts (e.g., mobile hosts) with an active connection for downloading of one or more selected files or content. The HoPCoP may be a receiver-driven transport protocol. It may distribute transport functionalities among the sender, receiver and/or one or more intermediate nodes.

[0133] In certain representative embodiments, a HBH transport protocol, pHoPCoP may be used for hosts (e.g., mobile hosts) with multiple active connections. It is contemplated that the multiple HoPCoP connections may be coordinated to aggregate (e.g., effectively aggregate) them into one abstract, virtual and/or composite connection for a higher layer application.

Transport control framework:

[0134] FIG. 3B shows a representative distribution of transport control functions among the sender 301 , intermediate node 303, and the receiver 305. The receiver may participate in transport control functionalities or operations (e.g., all transport control operations) including, for example, reliability, flow control and/or congestion control. The intermediate node may participate in flow and congestion control, and the sender may respond to its corresponding neighboring hop. The framework may be implemented on two sublayers. The HBH layer may operate on selected nodes or every node while the end-to-end layer may operate at the end points of the connection (e.g., session).

[0135] The control framework of HoPCoP may include different types of controlling loops (e.g. HBH controlling loops and end-to-end controlling loops). Congestion control and flow control may be supported in a HBH fashion and reliability and connection management may be supported in an end-to-end fashion. The HBH scheme may also provide reliability to a certain extent (e.g., HBH packet reliability). In certain representative embodiments, different types of feedback may be used (e.g., hop-by-hop feedback and/or end-to-end feedback). The HBH feedback may be used for congestion and flow control, while the end-to-end feedback may be used for reliability. The end-to-end feedback may address data packet reliability and may be used to calculate end-to-end round trip time (RTT) which may be used for packet scheduling in multi-homed scenarios. The HBH feedback may be used to provide a credit, a price and/or a HBH acknowledge.

[0136] FIG. 4 is a diagram illustrating another representative routing operation using a router.

[0137] Referring to FIG. 4, as a receiver-driven scheme, HoPCoP may use the REQ-DATA handshake for data transfer, where data (e.g., any data) transferred from the HoPCoP sender may be preceded with an explicit request (REQ) from the HoPCoP receiver (e.g., instead of using the DATA-ACK style of handshaking in TCP). As a HBH protocol, the REQ and DATA packets (e.g., all of the packets) may be transmitted in a HBH style. For example, the receiver may control the bandwidth of an established connection (e.g., session), how much data the intermediate node may send (e.g., and in turn the data quantities of the other segments of the transmission path from the sender to the receiver) and which data the segments of the data path may send.

[0138] The HoPCoP sender 400 may include a send buffer 402 and a SND.NXT data structure 404 used to maintain transport control functionalities. The HoPCoP receiver 430 may include a plurality of data structures that may be used to maintain transport control functionalities. For example, the HoPCoP receiver 430 may include a reliability function or module 431 including a RCV.NXT data structure 432, a REQ.NXT data structure 433, and/or a PENDING data structure 434. The reliability module 431 may receive via the intermediate node 460 from the sender, SEG.DAT 405, which may be a field in the data packet header of the data packets received that indicates the sequence number of a data segment that has been transmitted. The reliability module 431 may send to the sender 400 via the intermediate node 460, SEG.DEQ 463, which may indicate the highest data in-sequence received so far and may be sent in the REQ packet header of the request packet to notify the sender and/or intermediate node or nodes. The sender node 400 or intermediate node or nodes 460 may then have the choice to purge data from their buffers based on the indicated information.

[0139] The receiver may also include a congestion control function or module 435 that may receive from the reliability module 431 information 436 regarding the next request and/or information 437 regarding data packet loss and/or progress (e.g., the data segments or data packets unsuccessfully sent and/or those successfully received). The congestion control module 435 may send to the sender 400 via the intermediate node 460, SEG.REQ 438, which may be a field in the REQ packet header indicating the sequence number of the data packets or segments that may be requested. The sender and intermediate nodes may cache data packets in their send buffer 406 or data/req buffer 466. When the receiver 430 has requested multiple flows, the flow control module 444 may control the various priorities among the required flows. In the case when the receiver has access to multiple paths (i.e., it is multi-homed), the flow control module may also establish the preference for the various possible paths for each flow (discussed in more detail below in connection with pHoPCoP). The flow control module 444 communicates these preferences to the congestion control module 435, which uses this information to select which flow a SEG.REQ 438 may be sent for.

[0140] The operations of the congestion control module 462 and the flow control module 464 in the router 460 are similar to those in the receiver as described above. The difference is that the flow control policies in the router may be "local" (i.e., come from the router owner/network operator) and/or be communicated from a downstream router and/or the receiver using a side communication channel for policy control.

[0141 ] In certain representative embodiments, the HoPCoP may use a plurality of fields in packet headers of the data and request packets for interfaces among the sender, intermediate nodes and the receiver. The fields may include the SEG.REQ field; the SEG.DEQ field; and/or the SEG.DAT field. The SND.NXT may be a pointer maintained at the send buffer of the sender to indicate the maximum sequence number currently sent (e.g., sent thus far). The RCV.NXT may be a pointer maintained by the receiver to indicate the maximum sequence number received in order currently (e.g., thus far). The REQ.NXT may be a pointer used at the receiver to indicate the maximum sequence number requested currently (e.g., thus far).
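
As a hedged illustration of the header fields and endpoint pointers described above, the following Python sketch groups them into simple records. Only the field names (SEG.REQ, SEG.DEQ, SEG.DAT, SND.NXT, RCV.NXT, REQ.NXT) come from the description; the dataclass wrappers and the example values are assumptions.

# Sketch of the HoPCoP packet-header fields and endpoint pointers.
from dataclasses import dataclass

@dataclass
class ReqHeader:
    seg_req: int     # SEG.REQ: sequence number of the data segment being requested
    seg_deq: int     # SEG.DEQ: highest in-sequence data received so far

@dataclass
class DataHeader:
    seg_dat: int     # SEG.DAT: sequence number of the transmitted data segment

@dataclass
class SenderState:
    snd_nxt: int = 0     # SND.NXT: maximum sequence number sent so far

@dataclass
class ReceiverState:
    rcv_nxt: int = 0     # RCV.NXT: maximum sequence number received in order
    req_nxt: int = 0     # REQ.NXT: maximum sequence number requested so far

# Example: the receiver requests the next segment and advertises SEG.DEQ so
# that the sender and intermediate nodes may purge buffered data up to it.
rx = ReceiverState(rcv_nxt=7, req_nxt=12)
req = ReqHeader(seg_req=rx.req_nxt, seg_deq=rx.rcv_nxt)
print(req)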

[0142] After the connection (e.g., session) is established, the receiver may request data from the sender based on the size of an initial congestion window and the size may be adjusted based on the loss/progress indication from the reliability module to the congestion control module of the receiver.

[0143] In certain representative embodiments, selected ones of the intermediate nodes may have enhanced functions or operations. Some hops or nodes may continue to act as a classical router specified for routing, and other hops may act as an enhanced router, which participates in transport control. A transmission path may be partitioned into several segments. A segment may be a network domain operated by a specific ISP or several hops with the same network characteristics. In certain representative embodiments, end points (e.g., end nodes) of a segment may be enhanced routers and inner nodes to the segment may be ordinary routers or may have other packet processing functions or operations, which may be static or dynamically set. For example, a router may dynamically adapt its behavior during the communication session. Each segment may have a different congestion control algorithm. Each segment may operate with a congestion window based on, for example, Additive Increase Multiplicative Decrease (AIMD), Multiplicative Increase Multiplicative Decrease (MIMD) or Additive Increase Additive Decrease (AIAD) operations.

[0144] Data transfer may be initiated by request packets sent out from the receiver and the sender may respond with corresponding data packets after receiving the request packets. The request packets and the data packets may be transmitted in a hop-by-hop style. The receiver may send a request either in a cumulative mode or in a pull mode by appropriately setting a corresponding pull flag in the packet header. When the sender receives a request packet with the pull flag set, it may send the data segment with the sequence number indicated in the request or the sender may cumulatively transmit data from SND.NXT that has not been sent yet.

[0145] When an intermediate node receives the request, it may check its memory and determine whether it has the corresponding data cached. If it has the data cached, it may send the data packet to the downstream hop or node and may delete the request. If it does not have the data cached, the request packet may enter the REQ buffer and may then be transmitted to the upstream hop or node according to the congestion window. The request packet may be then deleted from the REQ buffer after transmission upstream. The request packet may be dropped before transmission due to some packet management scheme or policy. For data packets received from upstream, the data packets may be cached even after transmission towards the receiver.
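
The request handling described above may be sketched, purely for illustration, as follows (Python). The class HbhRouter, the REQ buffer modeled with collections.deque, and the simple outstanding-request counter standing in for the congestion window are all assumptions and not the claimed implementation.

# Illustrative sketch of request/data handling at an HBH intermediate node.
from collections import deque

class HbhRouter:
    def __init__(self, cwnd=4):
        self.cache = {}               # seq -> data cached from upstream
        self.req_buffer = deque()     # requests awaiting upstream transmission
        self.cwnd = cwnd              # max outstanding upstream requests
        self.outstanding = 0

    def on_request(self, seq, send_downstream, send_upstream):
        if seq in self.cache:
            # Cache hit: send the data downstream and delete the request.
            send_downstream(seq, self.cache[seq])
            return
        # Cache miss: the request enters the REQ buffer and is forwarded
        # upstream subject to the congestion window.
        self.req_buffer.append(seq)
        self._drain(send_upstream)

    def on_data(self, seq, data, send_downstream):
        # Data from upstream may be cached even after being forwarded.
        self.cache[seq] = data
        self.outstanding = max(0, self.outstanding - 1)
        send_downstream(seq, data)

    def _drain(self, send_upstream):
        while self.req_buffer and self.outstanding < self.cwnd:
            seq = self.req_buffer.popleft()   # deleted after transmission upstream
            self.outstanding += 1
            send_upstream(seq)

# Example use with print-based stand-ins for the transmit primitives:
router = HbhRouter()
router.on_request(5, send_downstream=lambda s, d: print("DATA", s, "downstream"),
                  send_upstream=lambda s: print("REQ", s, "upstream"))
router.on_data(5, b"segment-5", send_downstream=lambda s, d: print("DATA", s, "downstream"))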

[0146] In certain representative embodiments, a three-way handshake may be used for a reliable and secure connection. In other representative embodiments (e.g., for file delivery), a three-way handshake may not be used for connection setup and/or teardown. Because data transmission may be client oriented, the receiver may initiate and/or terminate a connection at its discretion (e.g., automatically without external intervention). As long as the receiver is aware of the address (e.g., IP address) of the sender (e.g., that it is requesting data from), the connection may be initiated by requesting a profile packet from that address. This initial profile may be a feature (e.g., standard feature) of the file and the receiver may request (e.g., always request) the initial profile first. The initial profile may indicate to the receiver the portion of the file that the content provider has stored and how to fragment the content. If appropriate, the receiver may reply with another request packet having the new content fragmentation information adapted by the receiver. Normal data transmission may begin. A flow may be terminated when the receiver does not request further (e.g., any further) packets.

[0147] In certain representative embodiments, congestion control may be supported in a HBH fashion, for example, to maintain the appropriate amount of extra request packets in each respective enhanced hop or node. Different route segments or transmission path segments along the path may use the same or different congestion control algorithms. They may maintain the same or similar congestion control parameters including, for example, a congestion window and round-trip time information. The size of the congestion window may limit the amount of outstanding requests in each route segment and the mechanism for adapting or adjusting the congestion window (e.g., the size of the window) may be different for different kinds of HBH feedback. A plurality of different round-trip times (RTTs) may be used including HBH RTT, end-to-end RTT, and hop-to-end RTT. Hop-to-end RTT generally refers to the round-trip time between the enhanced hop and the sender. End-to-end RTT may be calculated at the receiver, while the HBH RTT and hop-to-end RTT may be calculated at each enhanced hop.

[0148] By way of example, for a congestion control algorithm, it is contemplated that A and B may be two neighboring enhanced hops, with A closer to the content provider, and they may use HBH acknowledgements as feedback. A may send a corresponding feedback packet to B when it sends a request packet to its upstream node (e.g., and not when it receives a request packet). After B receives the feedback packet, B may calculate an HBH RTT. By defining BaseRTT as the minimum of measured (e.g., all measured) round-trip times for a given flow, the expected throughput may be given by ExpectedRate = CongestionWindow/BaseRTT. The current sending rate, ActualRate, may be calculated by computing a sample RTT. The calculation of ActualRate may be provided once per RTT. Diff may be defined as Diff = ExpectedRate - ActualRate, and two thresholds a and b, where a < b, may correspond to having too few and too many extra request packets in the network segment, respectively. When Diff < a, the congestion window may be increased linearly during the next RTT; when Diff > b, the congestion window may be decreased linearly during the next RTT; otherwise, the congestion window may be kept unchanged. Slow start may be used when a timeout occurs.
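
A minimal, hedged sketch of the window adjustment described above follows (Python). BaseRTT, ExpectedRate, ActualRate, Diff and the thresholds a and b come from the description; the function name adjust_window, the linear step size, and the interpretation of ActualRate as the window divided by a sample RTT are assumptions for illustration only.

# Sketch of the Diff-based congestion window adjustment described above.
def adjust_window(cwnd, base_rtt, sample_rtt, a=1.0, b=3.0, step=1.0):
    # Returns the congestion window to use during the next RTT.
    expected_rate = cwnd / base_rtt     # ExpectedRate = CongestionWindow/BaseRTT
    actual_rate = cwnd / sample_rtt     # one possible estimate of the sending rate
    diff = expected_rate - actual_rate  # Diff = ExpectedRate - ActualRate
    if diff < a:                        # too few extra requests in the segment
        return cwnd + step              # increase linearly during the next RTT
    if diff > b:                        # too many extra requests in the segment
        return max(1.0, cwnd - step)    # decrease linearly during the next RTT
    return cwnd                         # otherwise keep the window unchanged

def on_timeout():
    return 1.0                          # slow start: restart from a small window

print(adjust_window(cwnd=10, base_rtt=0.10, sample_rtt=0.12))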

[0149] In certain representative embodiments, flow control may allow each enhanced hop, node or router to limit (or restrict) the amount of data in transit to the available buffer space. This may be supported in the HBH operation. For each enhanced hop, a request may be sent (e.g., only sent) if the corresponding data, when received, cannot cause an overflow for the enhanced hop.
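
The per-hop admission check described above may be illustrated with the following minimal sketch (Python). The function name may_send_request, the fixed segment size and the bookkeeping variables are hypothetical.

# Sketch of the HBH flow-control check: a request is forwarded only if the
# data it will pull back cannot overflow this hop's available buffer space.
def may_send_request(buffer_used, buffer_capacity, requests_in_transit, segment_size=1500):
    free = buffer_capacity - buffer_used - requests_in_transit * segment_size
    return free >= segment_size

print(may_send_request(buffer_used=40_000, buffer_capacity=64_000, requests_in_transit=10))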

[0150] In a network of unreliable nodes and links, reliability may be (e.g., may only be) ensured by the end points of the connection (e.g., the sender and the receiver). An end-to-end retransmission mechanism may be provided at the end-to-end sub-layer of the framework. If no corresponding data is received for a transmitted request within a certain time, the receiver may retransmit the request packet. It is contemplated that, in a receiver-driven scheme, the receiver may have access (e.g., direct access) to the receiver buffer and may perform loss detection and loss recovery better (e.g., in a more timely and accurate manner) than in a sender-driven scheme.
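
A hedged sketch of the end-to-end retransmission rule described above follows (Python). The class RetransmitTracker, the dictionary-based bookkeeping and the example timeout value are illustrative assumptions.

# Sketch of receiver-side request retransmission: if no data arrives for a
# transmitted request within a timeout, the request is resent.
import time

class RetransmitTracker:
    def __init__(self, timeout_s=2.0):
        self.timeout_s = timeout_s
        self.pending = {}                       # seq -> time the request was sent

    def on_request_sent(self, seq):
        self.pending[seq] = time.monotonic()

    def on_data_received(self, seq):
        self.pending.pop(seq, None)             # request satisfied

    def requests_to_retransmit(self, now=None):
        now = time.monotonic() if now is None else now
        return [seq for seq, sent in self.pending.items()
                if now - sent > self.timeout_s]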

[0151 ] Although a HoPCoP is shown in which the receiver has one active connection for content, it is contemplated that in a multi-homed scenario, a mobile host (e.g., a receiver) may have multiple active connections (or sessions) with several content providers, for example, to download a file. An extension for HoPCoP may include the use of parallel HoPCoPs (pHoPCoP). A pHoPCoP connection or session may include one receiver, one or more senders and intermediate nodes. A unicast pHoPCoP connection may be similar or equivalent to a HoPCoP connection and a multipoint-to-point pHoPCoP connection may be modeled as multiple HoPCoP connections running in parallel, for example, to download a file from several content providers. HoPCoP connections and multiple connection states may be coordinated. It is contemplated that the number of HoPCoP connections in a pHoPCoP connection may be larger than the number of interfaces that the receiver is equipped with since one interface may generate multiple connections for downloading a file from several senders, simultaneously.

[0152] FIG. 5 is a diagram illustrating multi-homing with different content providers for downloading the same file. Each access network A and B may have a HoPCoP controller 505 executing (e.g., running) at the same time, simultaneously and/or in parallel. The senders (e.g., servers I and II) and/or the intermediate nodes (e.g., a first AP having a first access technology (AT) or radio AT and a second AP having the same or a different AT or RAT) may operate in the same or a similar manner to those of the HoPCoP in FIG. 4. The receiver or mobile host 507 may include a pHoPCoP engine communicating with a plurality of HoPCoP controllers. Each HoPCoP controller may communicate with the HoPCoP of an intermediate node, for example. An application at the mobile host may communicate, via the end-to-end sublayer, the HBH sublayer and the IP layer of the pHoPCoP, the HBH sublayer and the IP layer of the intermediate node, and the end-to-end sublayer, the HBH sublayer and the IP layer of the sender, with an application at the sender, for example to download content from the sender to the receiver.

[0153] FIG. 6 is a diagram illustrating a pHoPCoP receiver 600 that may include a transport layer having a plurality of functional units. The pHoPCoP receiver 600 may include the pHoPCoP engine 602 and the HoPCoP controller 604. The pHoPCoP engine 602 may control the initialization, termination and/or cooperation of different HoPCoP controllers. An application 606 may push request packets to the pHoPCoP engine 602. The pHoPCoP engine may distribute the request packets to one or more HoPCoP controllers 604. The HoPCoP controllers 604 may get data packets from different IP interfaces and may send the data packets to the pHoPCoP engine 602, and the application 606 may receive (e.g., pump) data from the pHoPCoP engine 602.

[0154] A pHoPCoP connection may include multiple HoPCoP connections, each of which may have a different state at a specific time. For example, the pHoPCoP engine may desire to create or terminate a HoPCoP connection due to or based on one or more policies, and/or, during periods of mobility, the mobile host may hand off from one server to another or may change the number of servers it connects to. The states of the HoPCoP connections may be dynamically changing over a period of time. To support the changing of multiple HoPCoP connections, the pHoPCoP engine may handle multiple states as a multi-state extension of HoPCoP, and dynamically create and/or delete HoPCoP states according to the number of active connections in use. The states (e.g., multiple states) of the pHoPCoP may be managed at the receiver such that no change may be provided at the senders or intermediate nodes to support the multi-state operation.

[0155] A pHoPCoP connection with n active HoPCoP connections may include n states at the pHoPCoP receiver. In certain representative embodiments, the pHoPCoP may decouple the transport layer functionalities associated with the per-connection characteristics from those that pertain to the aggregate connection. This is done by implementing the transport control functions related to per-connection characteristics at respective HoPCoP controllers (e.g., each HoPCoP controller), while implementing functions pertaining to the aggregate connection at the pHoPCoP engine. By contrast, the HoPCoP may have a HBH sub-layer and an end-to-end sub-layer that controls transport functionalities (e.g., all transport functionalities). Reliability and buffer management may pertain to the aggregate connection, and may be handled by the pHoPCoP engine. Flow control may also be handled (e.g., reside) at the pHoPCoP engine. Congestion control, which may be a per-connection functionality or operation, may be handled by individual HoPCoP controllers. The pHoPCoP engine may control the data to be requested from each sender, and individual HoPCoP controllers may control the amount of the requested data the HoPCoP may request along a path.

[0156] The pHoPCoP connection may include multiple HoPCoP connections. In certain representative embodiments, different HoPCoP connections may have mismatched characteristics, for example, regarding bandwidths, delays and/or loss rates, among others. For example, data segments with larger sequence numbers traversing the faster connections may arrive earlier than those with smaller sequence numbers traversing the slower connections. Out-of-order arrivals at the receiver buffer may cause head-of-line blocking and may make the aggregate connection stall. The pHoPCoP may achieve multiplexing and bandwidth aggregation by scheduling transmissions (and/or requests) based on the RTT and the congestion window of one or more HoPCoP connections. The HoPCoP controller may call for requests from the pHoPCoP engine when space exists in the corresponding congestion window and the pHoPCoP may assign the sequence of requests to the HoPCoP connection based on the estimated or modeled time the requested data segment is to arrive through the concerned connection (e.g., regardless of when the request is to be sent). The individual HoPCoP controllers may be responsible for and configured for loss detection and may report a loss (e.g., any loss) detected to the pHoPCoP engine such that the corresponding request may be reassigned to another HoPCoP controller that has space in its congestion window, to prevent the aggregate connection from stalling.

Representative Data Structures and Representative Packet Scheduling Algorithms

[0157] The pHoPCoP may maintain data structures and a packet-ranking algorithm for performing packet scheduling. For example, when there are plural HoPCoP connections (e.g., n HoPCoP connections, where n is an integer number of HoPCoP connections in total), the pHoPCoP may include the following data structures: (1) a binding data structure that may maintain a mapping between: (i) transmitted requests that have a pending data reply (e.g., requests that have been sent but corresponding data segments have not yet been received); and (ii) the HoPCoP controller through which those requests were transmitted;

(2) a pending data structure that may maintain (or keep) ranges of sequence numbers for data yet to be requested, which may include the sequence numbers of data segments that are to be retransmitted, and sequence numbers greater than the highest sequence number requested so far (e.g., at the current time);

(3) a timelist data structure that may maintain (or keep): (a) the latency period information (such as measured or estimated round-trip-time (RTT) information and/or measured or estimated one-way time information, among others); (b) congestion window (CW) information, and/or (c) timestamp information (e.g., timestamps) of transmitted request packets (e.g., all request packets) that are pending data replies (e.g., transmitted request packets that may be sent without corresponding data packets received for each HoPCoP connection); and/or

(4) an active data structure that may maintain (or keep track of) which HoPCoP connections may be active and which may not be active. As the memory for the pHoPCoP controller may be finite in size (and/or limited), the pHoPCoP engine may determine whether it has enough space in its receive buffer when the HoPCoP controller calls for a request. If there is not enough space in the receive buffer, the pHoPCoP engine may return with a FREEZE command or signal to the corresponding HoPCoP controller. When the pHoPCoP determines that there is enough space after a FREEZE command or signal, the pHoPCoP engine may issue a resume() call to those HoPCoP connections that are in sleep mode (or not active). In certain representative embodiments, the RTTs and/or timestamps are parameters for packet scheduling and may be used in the packet-ranking algorithm. A non-limiting illustrative sketch of these data structures is provided below.
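
The following Python sketch, provided purely for illustration, gives one possible concrete shape for the four data structures listed above (binding, pending, timelist and active). The container choices, class names and field names are assumptions; only the roles of the structures come from the description.

# Illustrative sketch of the pHoPCoP receiver data structures.
from dataclasses import dataclass, field
from typing import Dict, List, Set, Tuple

@dataclass
class TimelistEntry:
    rtt: float                                              # measured/estimated round-trip time
    cwnd: int                                               # congestion window of the connection
    timestamps: List[float] = field(default_factory=list)   # send times of pending requests

@dataclass
class PHoPCoPState:
    binding: Dict[int, int] = field(default_factory=dict)             # seq -> HoPCoP connection id
    pending: List[Tuple[int, int]] = field(default_factory=list)      # (lo, hi) ranges yet to be requested
    timelist: Dict[int, TimelistEntry] = field(default_factory=dict)  # connection id -> entry
    active: Set[int] = field(default_factory=set)                     # ids of active connections

# Example population for a single active connection (hypothetical values):
state = PHoPCoPState()
state.active.add(0)
state.timelist[0] = TimelistEntry(rtt=0.08, cwnd=4)
state.pending.append((0, 99))     # data yet to be requested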

[0158] FIG. 7 is a timing diagram illustrating transmission timing of data packets.

[0159] Referring to FIG. 7, the pHoPCoP may include a packet-ranking algorithm. For example, in the pHoPCoP the self-clocking in individual HoPCoP connections may drive the transmissions of requests to pull data from a sender (e.g., each sender). The HoPCoP controller may send the data packet to the pHoPCoP engine upon receipt (e.g., as soon as the HoPCoP controller receives the data packet). The pHoPCoP engine may distribute a suitable request packet for the corresponding HoPCoP controller according to the packet-ranking algorithm, for example, to reduce or to avoid out-of-order data arrivals. The packet-ranking algorithm may determine the assignment of a respective data segment to a request based on a time the corresponding data is estimated to arrive (e.g., regardless of the time the request is sent). For the packet-ranking algorithm, in the timelist data structure, for the i-th connection, the RTT is denoted as RTT_i, the congestion window is denoted as cwnd_i, and the timestamps of request packets are denoted as T_1^i, T_2^i, T_3^i, ... (where T_1^i is the oldest request packet pending a reply data packet, T_2^i is the second oldest request packet pending a reply data packet, and so forth). At time t = T, the j-th HoPCoP controller may receive a data packet and may issue a receive() call to the pHoPCoP engine.

[0160] The pHoPCoP engine may locate the rank k of the request that is to be scheduled according to Equation 1 as follows:

k = sum over the connections i of n_i     (Equation 1)

where n_i is the estimated number of packets for the i-th connection that are requested and whose corresponding data packets arrive before the arrival of the newly requested data packet in the j-th connection, as shown in FIG. 7. The estimate n_i may be derived from the congestion window cwnd_i, the round-trip time RTT_i and the timestamps maintained in the timelist data structure for the i-th connection.

[0161] The pHoPCoP may distribute the request packet, which may be the k-th segment to be requested in the pending data structure, to the HoPCoP controller. The pHoPCoP engine may insert the entry for the segment in the binding data structure, may delete the corresponding entry in the pending data structure, and may update an estimated value of the RTT and/or the timestamp or timestamps in the timelist data structure.
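
The ranking step above may be illustrated, under stated assumptions, with the following Python sketch. Because the exact claimed expression is not reproduced here, the arrival-time estimate (request timestamp plus the connection's RTT) and the per-connection cap at cwnd_i are interpretive assumptions; the function and variable names are hypothetical.

# Hedged sketch of the packet-ranking computation: count, over the
# connections, the outstanding requests whose data is expected to arrive
# before the data for a new request issued at time `now` on connection j.
def rank_of_new_request(timelist, j, now):
    # timelist: dict of connection id -> object with .rtt, .cwnd, .timestamps
    new_arrival = now + timelist[j].rtt            # estimated arrival of the new data
    k = 0
    for i, entry in timelist.items():
        earlier = sum(1 for t in entry.timestamps if t + entry.rtt < new_arrival)
        k += min(earlier, entry.cwnd)              # n_i: estimated earlier arrivals on i
    return k

# Example with two connections (hypothetical RTTs, windows and timestamps):
from types import SimpleNamespace
timelist = {0: SimpleNamespace(rtt=0.05, cwnd=4, timestamps=[0.00, 0.01]),
            1: SimpleNamespace(rtt=0.20, cwnd=4, timestamps=[0.00])}
print(rank_of_new_request(timelist, j=1, now=0.02))   # -> 3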

[0162] Referring again to FIG. 6, representative commands, functions, application and/or modules may act as an interface between the application 606 and the pHoPCoP engine 602 including: (1) OPEN(); (2) CLOSE(); (3) write(), and/or (4) read(), among others.

[0163] The application may open a pHoPCoP socket by using the OPEN() call and may terminate the pHoPCoP socket by using the CLOSE() call. The application may publish its ranges of sequence numbers for data to be requested to the pHoPCoP engine using the write() call and may fetch data from the receive buffer of the pHoPCoP engine using the read() call.

[0164] Representative interfaces between the pHoPCoP engine and the HoPCoP controllers may include, for example: (1) open() interfaces; (2) close() interfaces; (3) established() interfaces; (4) closed() interfaces; (5) send() interfaces; (6) receive() interfaces; (7) freeze() interfaces; (8) resume() interfaces; (9) loss() interfaces; and/or (10) update() interfaces. These interfaces are discussed in detail, as follows.

(1) The pHoPCoP engine may create a HoPCoP controller by using the open() call, and the HoPCoP controller may start a connection setup procedure. The connection setup procedure for each HoPCoP connection is described below. The pHoPCoP engine may delete a HoPCoP connection by sending a close() call to the corresponding HoPCoP controller. A HoPCoP connection may be terminated when the HoPCoP controller no longer requests any further packets.

(2) When a HoPCoP connection is established, the corresponding HoPCoP controller may issue an established() call to notify the pHoPCoP engine. When the HoPCoP connection is terminated, the HoPCoP controller may issue a closed() call. As no further sender-receiver interaction is used for terminating a HoPCoP connection, in certain representative embodiments, the HoPCoP controller may send (e.g., may immediately send) back a closed() call when it receives a close() call from the pHoPCoP engine.

(3) When a HoPCoP controller gets a data packet, it may send the data packet to the pHoPCoP engine with a new request packet query by using the receive() call. The pHoPCoP engine may distribute a suitable request packet to the HoPCoP controller using the send() call.

(4) When the pHoPCoP engine does not have sufficient buffer space to buffer incoming data packets (e.g., the buffer space is below a threshold level), the pHoPCoP engine may use a freeze() call to freeze some or all of the HoPCoP connections and may update the active data structure. When sufficient buffer space is available (e.g., created again), the pHoPCoP engine may use the resume() call to reactivate some sleeping (or inactive) connections according to or based on information in the active data structure.

(5) When the HoPCoP controller detects a loss, the HoPCoP controller may use the loss() call to inform the pHoPCoP engine. The pHoPCoP engine may unbind the lost segment in the binding data structure, may insert a sequence number in the pending data structure, and may delete a corresponding entry from the timelist data structure. When (e.g., whenever) a HoPCoP controller updates its RTT estimate and congestion window, it may use the update() call to inform the pHoPCoP engine, which may update the timelist data structure, as illustrated in the sketch below.
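A minimal sketch of the loss() and update() handling described in item (5), again using the hypothetical PHoPCoPEngine fields introduced above; the method names and bookkeeping details are illustrative assumptions, not mandated by the protocol.

```python
def on_loss(engine, conn_id, seq, t_sent):
    """Handle a loss() notification from a HoPCoP controller."""
    if engine.binding.get(seq) == conn_id:
        del engine.binding[seq]                       # unbind the lost segment
    engine.pending.appendleft(seq)                    # schedule the segment for re-request
    entry = engine.timelist[conn_id]
    if t_sent in entry.timestamps:
        entry.timestamps.remove(t_sent)               # drop the stale request timestamp

def on_update(engine, conn_id, rtt_estimate, cwnd):
    """Handle an update() carrying a new RTT estimate and congestion window."""
    entry = engine.timelist[conn_id]
    entry.rtt = rtt_estimate
    entry.cwnd = cwnd
```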

[0165] FIG. 8 illustrates a state diagram of the connection management for the pHoPCoP, for example to establish multi-homed connections to a receiver or a mobile host.

[0166] Referring to FIG. 8, when the application opens a pHoPCoP socket (801), the pHoPCoP engine may create at least one HoPCoP connection (803). After waiting for the connection setup procedure to complete (during which the application is in a Wait state (808)), the HoPCoP controller may issue an established() call (803) to the pHoPCoP engine, and the pHoPCoP connection may be established (807). During the lifetime of the connection, the pHoPCoP engine may create more HoPCoP connections (813) based on the callbacks from the lower layers, or may delete a HoPCoP connection (815) when its sender is disconnected or the corresponding interface encounters a problem. The pHoPCoP may enter the ESTABLISHED(n) states (807) when the pHoPCoP has n HoPCoP connections opened (e.g., in total). When the application closes the pHoPCoP socket, the pHoPCoP engine may send a close() message (809) to the HoPCoP controllers (e.g., all HoPCoP controllers). When all closed() messages are received by the pHoPCoP engine, the pHoPCoP connection may be closed (811).
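By way of illustration only, the connection management of FIG. 8 could be organized as a small state machine such as the following sketch; the state names and method names are hypothetical, and the reference numerals of FIG. 8 are not modeled.

```python
class PHoPCoPConnectionManager:
    """Illustrative state machine for pHoPCoP connection management."""
    def __init__(self):
        self.state = "CLOSED"
        self.established = 0        # number of established HoPCoP connections (n)
        self.closing = 0            # controllers still expected to answer close()

    def open_socket(self, initial_connections=1):
        self.state = "WAIT"         # waiting for the first connection setup to complete
        return ["open()"] * initial_connections    # calls issued toward new controllers

    def on_established(self):
        self.established += 1
        self.state = f"ESTABLISHED({self.established})"

    def on_connection_deleted(self):
        self.established = max(0, self.established - 1)
        self.state = "WAIT" if self.established == 0 else f"ESTABLISHED({self.established})"

    def close_socket(self):
        self.closing = self.established
        self.state = "CLOSING"
        return ["close()"] * self.closing          # close() sent to every controller

    def on_closed(self):
        self.closing -= 1
        if self.closing == 0:
            self.state = "CLOSED"
            self.established = 0
```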

[0167] Congestion control may be handled by the HoPCoP controllers associated with (e.g., in) a pHoPCoP connection. Each HoPCoP controller may control the quantity and timing of data transferred through a respective connection. As the receiver may be equipped with multiple interfaces with different access technologies, the respective HoPCoP controller, which may be related to a corresponding interface, may be aware of the specific network to which it has access. The HoPCoP controller may determine the congestion control mechanism to use and may adapt a congestion window accordingly for its HoPCoP connection, for example, by itself or through an initial setup procedure.

[0168] The pHoPCoP engine may manage flow control of the pHoPCoP connections, as the pHoPCoP engine may have control over the receive buffer. The pHoPCoP engine may freeze a certain HoPCoP controller because the available buffer size or space is below a threshold, QoS on the respective connection is below a threshold, and/or based on one or more pre-established or dynamic policies. The sleeping or inactive HoPCoP connections may be re-activated when such restrictions are removed.

[0169] Reliability, including loss detection and/or loss recovery, may be supported (or managed) by each HoPCoP controller, and the pHoPCoP engine may manage loss recovery. For example, the pHoPCoP engine may maintain the binding data structure and the pending data structure. When a request (REQ) packet is sent through a HoPCoP connection, the relevant mapping information (e.g., of the request that is pending a reply to the HoPCoP controller that was used to send the request) may be recorded in the binding data structure. In response to the corresponding data packet being successfully received (e.g., the request no longer has a pending reply), the binding entry may be deleted. If the HoPCoP controller detects a loss of the corresponding data packet (based on, for example, a timeout condition in which the corresponding data packet does not arrive before a timeout occurs), the HoPCoP controller may issue a loss() call to the pHoPCoP engine. The pHoPCoP engine may delete the binding entry and add the corresponding sequence number to the pending data structure to enable the REQ packet to be retransmitted. The pHoPCoP engine may be responsible for maintaining (e.g., may manage or control maintenance of) the SEG.DEQ field of the REQ packet, which may be used by senders and intermediate nodes to purge data appropriately.

[0170] Because, in a HBH protocol, each intermediate router (e.g., network element) may be a decision maker (e.g., the intermediate routers may enable packet processing operations other than routing), the router may enable functionality or operations at the intermediate node. The decisions that may be made by the router at each hop and the algorithm used to make those decisions may be a part of the protocol design.

[0171 ] Each router may operate using the HoPCoP and/or pHoPCoP. In certain representative embodiments, an overall router architecture may be defined using localized optimization by each router independent of other routers, or based on a transmission path segment including a plurality of routers (e.g., some or all of which may be HBH enabled). For example, the protocol operation may be an instance of a global optimization that is effectively partitioned into a number of local optimizations. Several specific instances of the localized optimizations may be defined that may lead to representative algorithms and associated performance points thereof.

[0172] FIG. 9 is a block diagram illustrating a representative router for implementing routing operations in accordance with one or more disclosed embodiments.

[0173] Referring to FIG. 9, the router 900 may include: (1) a packet management controller 903; (2) a forwarding table 905; (3) a routing algorithm 907; (4) one or more input queues 909, as a first type of buffer; (5) one or more output queues 911, as a second type of buffer; and/or (6) cache memory 913, as a third type of buffer. The packet management controller 903 may manage: (1) lookups in the forwarding table 905; (2) the input and output queues 909, 911; (3) the cache memory 913; (4) packet scheduling; and/or (5) transport control.
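As a purely illustrative sketch of how the components of FIG. 9 could be tied together, the following class groups the buffers, forwarding table and cache under a simple controller; the class and field names are assumptions and do not correspond to the reference numerals or to the patented design.

```python
from collections import deque

class Router:
    """Illustrative HBH-enabled router: buffers, forwarding table, cache, controller."""
    def __init__(self, cache_capacity):
        self.input_queues = {}        # ingress interface -> deque of packets (first buffer type)
        self.output_queues = {}       # egress interface -> deque of packets (second buffer type)
        self.cache = {}               # content name -> cached data packet (third buffer type)
        self.cache_capacity = cache_capacity
        self.forwarding_table = {}    # destination prefix/name -> egress interface
        self.request_pool = deque()   # request packets accepted for transmission

    def lookup(self, name):
        """Forwarding-table lookup performed by the packet management controller."""
        return self.forwarding_table.get(name)

    def enqueue_out(self, egress, packet):
        self.output_queues.setdefault(egress, deque()).append(packet)

    def cache_packet(self, name, packet):
        """Cache a data packet only while cache occupancy stays under capacity."""
        if len(self.cache) < self.cache_capacity:
            self.cache[name] = packet
            return True
        return False
```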

[0174] Network congestion may be divided into link congestion and node congestion, such that when a link is bottlenecked (e.g., having too much data passing therethrough, having an inputted data rate to the link exceeding the outputted data rate from the link, and/or having a data rate exceeding a threshold rate), link congestion may occur. For example, when too much data enters a bottlenecked node and the data entering rate is larger than the data leaving rate, node congestion may occur. Different congestion types may have different influence, for example, on packets being dropped.

[0175] Upon receiving a request, the router may select one of a plurality of request reception modes (e.g., may initiate one or more operations corresponding to the selected request reception mode). A first one of the request reception modes may include certain operations that include pushing (e.g., placing or moving) the request packet into the request pool (e.g., a region of memory, for example the cache memory), and transmitting request packets with a threshold reliability (e.g., with a reliability exceeding a threshold) using congestion control and flow control policies, procedures and/or operations. After transmitting a request packet, the router may send (e.g., transmit) corresponding HBH feedback to the requesting node. A second one of the request reception modes may include dropping the request packet in an adaptive drop operation, for example, using an adaptive drop probability (or algorithm) in accordance with, based on and/or because of heavy traffic (e.g., traffic measurement values or traffic congestion values exceeding a traffic threshold value) and/or limited or reduced cache space (e.g., available cache space below a cache threshold value). The adaptive drop probability (or algorithm) may be a function of flow fairness index, congestion type, congestion metric and/or available cache space, among others, and may use a weighting.
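The following sketch illustrates one possible way a router could choose between the two request reception modes of paragraph [0175] using a weighted adaptive drop probability. It reuses the hypothetical Router sketch above; the weights, the random-drop formulation, and the dictionary packet representation are assumptions for illustration only.

```python
import random

def handle_request(router, request, fairness_index, congestion_metric,
                   free_cache_ratio, w_fair=0.4, w_cong=0.4, w_cache=0.2):
    """Select a request reception mode: forward with HBH feedback, or adaptively drop."""
    # Higher congestion, lower fairness headroom and less free cache raise the drop probability.
    drop_p = (w_fair * max(0.0, 1.0 - fairness_index)
              + w_cong * min(1.0, congestion_metric)
              + w_cache * (1.0 - free_cache_ratio))
    if random.random() < drop_p:
        return "DROPPED"                         # second reception mode: adaptive drop
    router.request_pool.append(request)          # first reception mode: queue the request
    egress = router.lookup(request["name"])      # forward toward the content source
    if egress is not None:
        router.enqueue_out(egress, request)
    return "FEEDBACK_SENT"                       # HBH feedback returned to the requester
```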

[0176] Upon receiving a data packet, the router may select one of a plurality of data reception modes (e.g., may initiate one or more operations corresponding to a selected data reception mode). A first one of the data reception modes may transmit and cache the data packet. A second one of the data reception modes may transmit and discard the data packet. A third one of the data reception modes may cache the data packet prior to transmission of the data packet (e.g., caching the data packet, waiting a period of time, then transmitting the data packet from the cache memory). A fourth one of the data reception modes may drop the data packet. In certain representative embodiments, the first and/or second data reception modes may be used responsive to low or no congestion (e.g., below a threshold) and/or the third and/or fourth data reception modes may be used responsive to higher congestion (e.g., at or above a threshold). In certain representative embodiments, the dropping of data packets may be avoided using congestion control via request packets. In certain representative embodiments, the flow fairness index may be defined as the ratio of the flow's rate to the corresponding TCP-friendly flow throughput. TCP-friendly flow throughput may be proportional to $\frac{1}{RTT\sqrt{p}}$, where RTT is the HBH RTT and p is a loss rate. Routing path selection and adaptation may be determined by routing algorithms and forwarding table updates. A link metric may change according to the corresponding link condition, and may influence routing.
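A small sketch of the data reception mode selection and the flow fairness index of paragraph [0176], again using the hypothetical Router sketch above. The single congestion threshold, the proportionality constant k, and the return labels are assumptions made for the example, not defined behavior.

```python
import math

def flow_fairness_index(flow_rate, hbh_rtt, loss_rate, k=1.0):
    """Ratio of the flow's rate to a TCP-friendly throughput proportional to 1/(RTT*sqrt(p))."""
    tcp_friendly = k / (hbh_rtt * math.sqrt(max(loss_rate, 1e-9)))
    return flow_rate / tcp_friendly

def handle_data_packet(router, name, packet, congestion_metric, threshold=0.5):
    """Pick one of the four data reception modes of paragraph [0176]."""
    if congestion_metric < threshold:             # low or no congestion
        cached = router.cache_packet(name, packet)
        egress = router.lookup(name)
        if egress is not None:
            router.enqueue_out(egress, packet)
        return "TRANSMIT_AND_CACHE" if cached else "TRANSMIT_AND_DISCARD"
    # Higher congestion: hold the packet back, or drop it if the cache is full.
    if router.cache_packet(name, packet):
        return "CACHE_THEN_TRANSMIT_LATER"
    return "DROPPED"
```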

[0177] FIG. 10 is a block diagram illustrating representative input and output parameters of a router or intermediate node 1000 with respect to multiple senders 1001 and multiple receivers 1002.

[0178] Referring to FIG. 10, the flow control may be modeled by a plurality of parameters. The parameters may include: (1) link set $L$; (2) link capacity array $T$; (3) transmission session set $S$; (4) input transmission rate $x_r^I$ for session $r$; (5) output transmission rate $x_r^O$ for session $r$; (6) input transmission credit $c_r^I$ for session $r$; (7) output transmission credit $c_r^O$ for session $r$; (8) buffer size $q_r$ used by session $r$; (9) input link $L_I(r)$ for session $r$; (10) output link $L_O(r)$ for session $r$; (11) a routing indicator

$$R_{l,r}^I = \begin{cases} 1, & \text{where session } r \text{ traverses into the router through link } l \\ 0, & \text{otherwise;} \end{cases}$$

and/or (12) a routing indicator

$$R_{l,r}^O = \begin{cases} 1, & \text{where session } r \text{ traverses away from the router through link } l \\ 0, & \text{otherwise.} \end{cases}$$

[0179] Although different representative optimizations for transmission and/or reception of data packets are shown below, it is contemplated that any number of such transmissions and/or receptions may be possible.

[0180] In certain representative embodiments, it is contemplated that, for session $r$: (1) the output transmission rate $x_r^O$ and (2) the input transmission credit $c_r^I$ may be measured or estimated. The output transmission rate $x_r^O$ and the input transmission credit $c_r^I$ may be maintained or kept equal or substantially equal to each other (such that $x_r^O = c_r^I$). Because $x_r^I$ and $c_r^O$ for one router can be equal to $x_r^O$ and $c_r^I$, respectively, for its corresponding neighboring router (e.g., the one-hop or next-segment router), the output transmission credit $c_r^O$ for session $r$ may be set or established (e.g., only this variable is to be set). By way of example, the following Equations/Expressions may be used:

$$R^I c^I + R^O c^O \le T$$
$$c_r^O - c_r^I \le (q_r^* - q_r)^p \cdot u, \quad r \in S$$

where $R^I$ is an $L \times S$ routing matrix with elements $R_{l,r}^I$ and $R^O$ is an $L \times S$ routing matrix with elements $R_{l,r}^O$; $c^I = (c_r^I, r \in S)$ and $c^O = (c_r^O, r \in S)$. The target queue occupancy is $q_r^*$ for session $r$ while the actual queue occupancy is $q_r$; $p$ and $u$ are two operating parameters. The objective function may maximize $\sum_{r \in S} c_r^O$.
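To make the constraint structure of paragraph [0180] concrete, the following sketch evaluates whether a candidate output-credit vector satisfies the link-capacity and queue-backlog constraints. The function name and the use of numpy are assumptions, and the signed power is an illustrative choice that keeps the backlog term real-valued for fractional p.

```python
import numpy as np

def credits_feasible(R_in, R_out, c_in, c_out, T, q_target, q_actual, p=1.0, u=1.0):
    """Check R^I c^I + R^O c^O <= T and c_r^O - c_r^I <= (q_r* - q_r)^p * u for all r."""
    link_load = R_in @ c_in + R_out @ c_out            # aggregate credit per link
    if np.any(link_load > T):
        return False
    headroom = np.sign(q_target - q_actual) * np.abs(q_target - q_actual) ** p * u
    return bool(np.all(c_out - c_in <= headroom))
```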

[0181 ] In certain representative embodiments, the output transmission rate $x_r^O$ may be smaller than $c_r^I$, so, correspondingly, $x_r^I$ may be smaller than $c_r^O$. The variables $x_r^O$ and $c_r^O$ may be set to different values and may represent different types of variables.

[0182] A representative optimization framework may determine $x^O$ by maximizing:

$$\sum_{r \in S} x_r^O$$

subject to (or based on):

$$x_r^O \le c_r^O, \quad r \in S$$
$$R^I x^I + R^O x^O \le T$$
$$x_r^O \le x_r^I + (q_r)^p \cdot u, \quad r \in S$$
$$x_r^O \ge 0, \quad r \in S$$

and may use the following functions to determine $c^O$:

$$c^O(n+1) = F\big(c^O(n),\, x^I(n),\, x^O(n),\, q(n)\big)$$
$$q(n+1) = q(n) + RTT \cdot \big(x^I - x^O\big) - D(\cdot)$$

where $F(\cdot)$ may be a congestion control algorithm and $D(\cdot)$ may be a packet drop algorithm.

[0183] In certain representative embodiments, an additive increase multiplicative decrease (AIMD) algorithm may provide feedback control, in which linear growth of a congestion window may be combined with an exponential reduction in the congestion window responsive to congestion occurring. For example, the transmission rate (e.g., window size) may be increased to use additional available bandwidth, until a loss occurs. When loss is detected, the policy may be changed or adjusted to be a policy of multiplicative decrease, which may cut the congestion window by a percentage after such a loss.

[0184] A loss generally refers to either a timeout or the event of receiving a predetermined number of duplicate acknowledgements (ACKs). Other representative policies and/or algorithms for fairness in congestion control may include additive increase additive decrease (AIAD), multiplicative increase additive decrease (MIAD) and multiplicative increase multiplicative decrease (MIMD).

[0185] By way of example, the output transmission credit for an AIMD algorithm may be set forth as follows:

$$c_r^O(n+1) = \begin{cases} c_r^O(n) + a, & q_r < \text{threshold1} \\ c_r^O(n), & \text{threshold1} \le q_r \le \text{threshold2} \\ \dfrac{c_r^O(n)}{2}, & q_r > \text{threshold2} \end{cases}$$

where $a$ denotes an additive increase amount, and threshold1 and threshold2 may be fixed or dynamic parameters that indicate router congestion. The target queue occupancy approximation may be derived from the output transmission credit algorithm, where $q_r^*$ is the target queue occupancy for session $r$ and $g$ is a gain parameter.
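An illustrative implementation of the AIMD credit update of paragraph [0185] follows; the additive step a, the halving factor, and the example thresholds are assumptions for this sketch.

```python
def aimd_output_credit(c_out, q_r, threshold1, threshold2, a=1.0):
    """One AIMD step for the per-session output transmission credit c_r^O."""
    if q_r < threshold1:               # queue is short: additively increase the credit
        return c_out + a
    if q_r <= threshold2:              # queue within the target band: hold the credit
        return c_out
    return c_out / 2.0                 # queue too long: multiplicatively decrease

# Example: credit evolution while the queue occupancy grows past the thresholds.
credit = 10.0
for q in (2, 5, 9, 14, 20):
    credit = aimd_output_credit(credit, q, threshold1=8, threshold2=15)
```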

[0186] In certain representative embodiments, fairness may be considered in the optimization (e.g., the optimization framework). One type of fairness may include proportional fairness, which may be defined such that a vector of rates $x = (x_r, r \in S)$ is proportionally fair if it is feasible and if, for any other feasible vector $x^*$, the aggregate of proportional changes is zero or negative:

$$\sum_{r \in S} \frac{x_r^* - x_r}{x_r} \le 0$$

The objective function may be replaced by $\sum_{r \in S} \log(x_r)$. From the optimality criterion, if $x$ is the optimal point then, for any feasible vector $y$, $\nabla f_0(x)^T (y - x) \le 0$, which is the same as the proportional fairness criterion. So the logarithmic utility function may be associated with proportional fairness. Vector $x$ may be weighted proportionally fair if it is feasible and if, for any other feasible vector $x^*$, the following inequality holds:

$$\sum_{r \in S} w_r \, \frac{x_r^* - x_r}{x_r} \le 0$$
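The link between the logarithmic utility and the proportional fairness criterion mentioned in paragraph [0186] can be made explicit with a short standard derivation (convex-optimization reasoning, not language taken from the claims):

```latex
For $f_0(x) = \sum_{r \in S} \log(x_r)$, the gradient has components
$\partial f_0 / \partial x_r = 1/x_r$. The first-order optimality condition at an
optimum $x$, for any feasible $y$, is
\[
  \nabla f_0(x)^{T} (y - x) \;=\; \sum_{r \in S} \frac{y_r - x_r}{x_r} \;\le\; 0 ,
\]
which coincides with the proportional fairness criterion upon identifying $x^{*}$ with $y$.
```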

[0187] The corresponding objective function may be $\sum_{r \in S} w_r \log(x_r)$.

[0188] In certain representative embodiments, the maximization for the optimization may be as follows:

$$\max \sum_{r \in S} w_r \log(c_r^O)$$

subject to:

$$R^O c^O \le T - R^I c^I = T'$$
$$c_r^O \le c_r^I + (q_r^* - q_r)^p \cdot u, \quad r \in S$$
$$c_r^O \ge 0, \quad r \in S$$

[0189] The constraint $R^O c^O \le T'$ may contain (e.g., at most) $s$ inequalities (where $s$ is the total number of sessions). Those inequalities may be divided into two categories. In a first category, the inequality may have one variable and, in a second category, the inequality may have at least two variables. In the first category, $c_j^O \le T_j'$, and the optimal point for $c_j^O$ may be $\min\{T_j',\ c_j^I + (q_j^* - q_j)^p \cdot u\}$. In the second category, it is contemplated that $S'$ may be the set of sessions that traverse out of the router through the same link $j$, and the maximization for the optimization may be as follows:

$$\max \sum_{r \in S'} w_r \log(c_r^O)$$

subject to:

$$\big(R^O c^O\big)_j \le T_j' \quad \left(\text{i.e., } \sum_{r \in S'} c_r^O \le T_j'\right)$$
$$c_r^O \ge 0, \quad r \in S'$$
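For the second category of paragraph [0189], the weighted-log objective with a single link constraint has a well-known proportional-allocation solution when the per-session caps do not bind. The following sketch (a hypothetical helper that ignores the per-session upper bounds of the full problem) illustrates it:

```python
def per_link_credits(weights, link_capacity):
    """Maximize sum_r w_r*log(c_r) subject to sum_r c_r <= capacity, c_r >= 0.
    The optimum allocates the link capacity in proportion to the weights."""
    total_weight = sum(weights.values())
    return {r: link_capacity * w / total_weight for r, w in weights.items()}

# Example: three sessions sharing an outgoing link of capacity 12
credits = per_link_credits({"r1": 1.0, "r2": 2.0, "r3": 1.0}, link_capacity=12.0)
# -> {"r1": 3.0, "r2": 6.0, "r3": 3.0}
```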

[0190] In certain representative embodiments, a combination of HBH routing and receiver-driven routing may be implemented using a corresponding transport layer protocol and building (or generating) enhanced router functionality, operations and/or models. Certain representative embodiments may enable an enhanced transport layer used for multi-homed mobile hosts to access Internet content.

[0191 ] It is contemplated that different objective functions may be used with different impacts on router behavior. In certain representative embodiments, a mobile host may consume data from a backbone server and/or upload data to the backbone server. When a mobile host uploads data to the backbone server, the backbone server may become the HoPCoP receiver. It is contemplated that certain controlling functionalities may continue to be located at (e.g., provided by) the mobile host. For example, the transport protocol may be one that is transpositional (e.g., the transport protocol may dynamically redistribute the transport controlling functionalities to the sender or the receiver depending on, for example, the direction of the traffic and/or the capability or type of the devices).

[0192] It is also contemplated that network coding may be adopted such that, when data packets are encoded using a network coding scheme, an ordered sequence of packets may not be provided. The receiver may receive a certain number of data packets and may decode them into a complete file, without considering the timeliness or sequence of incoming data packets. Network coding may establish new operations, policies and/or procedures for transport control, for example, because the packet order may not be used and reliability may be satisfied by the coding scheme itself.

[0193] In one embodiment, the method may comprise: receiving, by a first intermediate node, a signal indicating an allocation of data packets to be sent from the sending node to the receiving node; determining, by the first intermediate node, whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the first intermediate node; and, responsive to a determination that one or more data packets of the allocation of data packets is to be provided by the cached data packets, sending, by the first intermediate node, one or more of the cached data packets.

[0194] In an embodiment, the method may further include wherein the signal received by the intermediate node is received from one of the receiving node and a second intermediate node disposed between the first intermediate node and the receiving node.

[0195] In an embodiment, one or more of the above-noted methods may further include wherein the communication session is an internet protocol (IP) communication session.

[0196] In an embodiment, one or more of the above-noted methods may further include wherein the determining of whether one or more data packets of the allocation of the data packets is to be provided by cached data packets includes determining whether data packets of the communication session pending reception by the receiving node are cached by the first intermediate node; and the sending of one or more of the cached data packets includes sending one or more of the cached data packets that are pending reception toward the receiving node.

[0197] In an embodiment, one or more of the above-noted methods may further include wherein the signal received by the first intermediate node indicates at least a quantity of data packets of the communication session to be sent to the receiving node.

[0198] In an embodiment, one or more of the above-noted methods may further include wherein the signal received by the first intermediate node further indicates pending data packets of the communication session that are pending reception by the receiving node.

[0199] In an embodiment, one or more of the above-noted methods may further include sending, by the first intermediate node, data packets that are pending reception by the receiving node, including one or more of the cached data packets, based on the received signal by (1 ) selecting respective data packets that are pending reception by the receiving node, including at least one cached data packet, based on the indicated allocation in the received signal, and (2) sending the selected, respective, data packets toward the receiving node.

[0200] In an embodiment, one or more of the above-noted methods may further include caching, by the first intermediate node, one or more of the data packets of the communication session destined for the receiving node responsive to available cache space at the intermediate node exceeding a threshold amount.

[0201 ] In an embodiment, the method may include: receiving, by a first intermediate node, a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; determining, by the first intermediate node, whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is not to be provided by the cached packets, sending, by the first intermediate node, an allocation signal toward one of the sending node and a second intermediate node disposed between the sending node and the first intermediate node in the communication session and receiving one or more data packets of the communication session.

[0202] In an embodiment, the above-noted method may further include forwarding the one or more data packets of the communication session received from the one of the sending node and the second intermediate node to the receiving node.

[0203] In an embodiment, the method may include: sending upstream, by the intermediate node, a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node; receiving, by the intermediate node, at least one data packet of the communication session; selecting, by the intermediate node, one of a plurality of operating modes including at least first and second operating modes; and responsive to selection of the first operating mode, caching, by the intermediate node, the received at least one data packet for forwarding toward the receiving node.

[0204] In an embodiment, one or more of the above-noted methods may further include wherein the selecting is based on a received signal from at least one of the receiving node and a downstream node and wherein, (1 ) the first operating mode includes caching the received one or more data packets by the intermediate node and forwarding them toward the receiving node; and (2) the second operating mode includes routing the received one or more data packets toward the receiving node without being cached.

[0205] In an embodiment, one or more of the above-noted methods may further include: receiving, by the intermediate node, a further signal sent by at least one of a downstream node in the communication session and the receiving node indicating another allocation of data packets to be sent toward the receiving node; and sending, by the intermediate node, cached data packets cached by the intermediate node, in accordance with the further signal toward the receiving node.

[0206] In an embodiment, one or more of the above-noted methods may further include wherein the sending of the signal upstream by the intermediate node is to at least one of an upstream node in the communication session and the sending node.

[0207] In an embodiment, one or more of the above-noted methods may further include wherein the communication session is an IP communication session.

[0208] In an embodiment, one or more of the above-noted methods may further include wherein the selecting of the operating mode is based on at least one of: (1) congestion of a transmission path downstream of the intermediate node; and (2) flow control in accordance with information sent by the downstream node or the receiving node.

[0209] In an embodiment, one or more of the above-noted methods may further include: storing policies for processing data packets; inspecting, by the intermediate node, from the one or more data packets received, packet information used for packet processing; and packet processing, by the intermediate node, the inspected packets based on the packet information and the stored policies.

[0210] In an embodiment, one or more of the above-noted methods may further include wherein the caching of the at least one received data packet includes caching the at least one received data packet, responsive to available cache space at the intermediate node exceeding a threshold amount.

[021 1 ] In an embodiment, one or more of the above-noted methods may further include dropping one or more of the received data packets, responsive to available cache space being below a threshold amount.

[0212] In an embodiment, the method may include: determining, by the receiving node, a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; sending, by the receiving node to the first intermediate node, a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; sending, by the receiving node to the second intermediate node, a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and receiving, by the receiving node, the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.

[0213] In an embodiment, the method may further include wherein the receiving of the first allocation of data packets is independent of the receiving of the second allocation of data packets.

[0214] In an embodiment, one or more of the above-noted methods may further include wherein a first transmission path for the first allocation of data packets to the receiving node is independent of a second transmission path for the second allocation of data packets to the receiving node.

[0215] In an embodiment, one or more of the above-noted methods may further include wherein the receiving node is multi-homed on a plurality of networks and the first transmission path and the second transmission path comprise different networks.

[0216] In an embodiment, one or more of the above-noted methods may further include determining an estimated time of arrival at the receiving node of a respective data packet requested; and scheduling the respective data packet for the first or second allocations or a subsequent allocation based on the estimated time of arrival.

[0217] In an embodiment, one or more of the above-noted methods may further include wherein the receiving of the first and second allocations of data packets includes receiving at least one data packet of the first and second allocations that has been cached at one of the first intermediate node and the second intermediate node.

[0218] In an embodiment, one or more of the above-noted methods may further include implementing, by the receiving node, one of Hop Pull Control Protocol (HoPCoP) or parallel HoPCoP (pHoPCoP).

[0219] In an embodiment, the method may include: receiving, by the intermediate node, a signal indicating an allocation of data packets to be sent from the sending node to the receiving node; determining, by the intermediate node, whether one or more data packets of the allocation of the data packets is to be provided to the receiving node subject to a policy preconfigured in the intermediate node; and responsive to a determination that one or more data packets of the allocation of data packets is not to be provided, ignoring the signal indicating an allocation of data packets to be sent from the sending node to the receiving node.

[0220] In an embodiment, the apparatus may include: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit, wherein, responsive to the processor determining that one or more data packets of the allocation of data packets is to be provided by the cached data packets, the transmitter/receiver unit sends one or more of the cached data packets.

[0221 ] In an embodiment, the apparatus may include: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided by cached data packets cached by the storage unit, wherein responsive to the processor determining that one or more data packets of the allocation of data packets is not to be provided by the cached data packets, the transmitter/receiver unit sends one or more of the data packets forwarded from upstream nodes.

[0222] In an embodiment, the apparatus may include: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to send upstream in the communication session a signal indicating an allocation of data packets of the communication session to be sent toward the receiving node and to receive one or more data packets of the communication session; and a processor configured to determine whether to forward the received data packets toward the receiving node or to cache the one or more received data packets based on a received signal from the receiving node or a downstream node; wherein responsive to the processor determining that one or more data packets are to be cached by the storage unit, the storage unit caches the received one or more data packets subsequent to forwarding toward the receiving node.

[0223] In an embodiment, the apparatus may include: a receiving node for managing data packets associated with first and second communication sessions with one or more sending nodes using first and second intermediate nodes, comprising: a processor configured to determine a first allocation of data packets to be provided to the receiving node via the first communication session and a second allocation of data packets to be provided to the receiving node via the second communication session; and a transmitter/receiver unit configured to: (1) send to the first intermediate node a first signal indicating the first allocation of data packets of the first communication session to be sent to the receiving node; (2) send to the second intermediate node a second signal indicating the second allocation of data packets of the second communication session to be sent to the receiving node; and (3) receive the first allocation of data packets via the first communication session and the second allocation of data packets via the second communication session.

[0224] In an embodiment, the apparatus may include: a storage unit configured to cache received data packets; a transmitter/receiver unit configured to receive a signal indicating an allocation of data packets of the communication session to be sent to the receiving node; and a processor configured to determine whether one or more data packets of the allocation of the data packets is to be provided to the receiver subject to a policy preconfigured in the intermediate node; wherein, responsive to the processor determining that one or more data packets of the allocation of data packets is not to be provided, the transmitter/receiver unit ignores the signal indicating an allocation of data packets of the communication session to be sent to the receiving node.

[0225] The contents of the following publications: (1) "A transmission control scheme for media access in sensor networks," in Proceedings of ACM MobiCom'01, Jul. 16-21, 2004, Rome, Italy; (2) "Reliable and Efficient Hop-by-Hop Flow Control," ACM SIGCOMM, 1994, London, England; (3) "HxH: A Hop-by-Hop Transport Protocol for Multi-Hop Wireless Networks," in WICON 2008; (4) "On Hop-by-Hop Rate-Based Congestion Control," IEEE/ACM Transactions on Networking, Vol. 4, No. 2, April 1996; (5) "Hop-by-Hop Congestion Control over a Wireless Multi-Hop Network," IEEE/ACM Transactions on Networking, 15(1), 2007; (6) "The transport layer revisited," in The 2nd International Conference on Communication Systems Software and Middleware (COMSWARE 2007), Jan. 2007; (7) "A receiver-centric transport protocol for mobile hosts with heterogeneous wireless interfaces," ACM MobiCom, San Diego, CA, pp. 1-15, September 14-19, 2003; (8) "WebTP: A Receiver-Driven Web Transport Protocol," in Proceedings of IEEE INFOCOM'99; (9) "Multiple Sender Distributed Video Streaming," IEEE Transactions on Multimedia, Vol. 6, No. 2, April 2004; (10) "Networking Named Content," CoNEXT '09, New York, NY, 2009, pp. 1-12; (11) "Rate control for communication networks: Shadow prices, proportional fairness and stability," Journal of Operations Research Society, 49(3):237-252, March 1998; (12) "Charging and rate control for elastic traffic," Eur. Trans. on Telecommun., vol. 8, pp. 33-37, 1997; and (13) "A Duality Model of TCP and Queue Management Algorithms," IEEE/ACM Trans. on Networking, October 2003, are each incorporated herein by reference.

[0226] Although features and elements are described above in particular combinations, one of ordinary skill in the art will appreciate that each feature or element can be used alone or in any combination with the other features and elements. In addition, the methods described herein may be implemented in a computer program, software, or firmware incorporated in a computer readable medium for execution by a computer or processor. Examples of non-transitory computer-readable storage media include, but are not limited to, a read only memory (ROM), random access memory (RAM), a register, cache memory, semiconductor memory devices, magnetic media such as internal hard disks and removable disks, magneto-optical media, and optical media such as CD-ROM disks, and digital versatile disks (DVDs). A processor in association with software may be used to implement a radio frequency transceiver for use in a WTRU, UE, terminal, base station, RNC, or any host computer.

[0227] Moreover, in the embodiments described above, processing platforms, computing systems, controllers, and other devices containing processors are noted. These devices may contain at least one Central Processing Unit ("CPU") and memory. In accordance with the practices of persons skilled in the art of computer programming, reference to acts and symbolic representations of operations or instructions may be performed by the various CPUs and memories. Such acts and operations or instructions may be referred to as being "executed," "computer executed" or "CPU executed."

[0228] One of ordinary skill in the art will appreciate that the acts and symbolically represented operations or instructions include the manipulation of electrical signals by the CPU. An electrical system represents data bits that can cause a resulting transformation or reduction of the electrical signals and the maintenance of data bits at memory locations in a memory system to thereby reconfigure or otherwise alter the CPU's operation, as well as other processing of signals. The memory locations where data bits are maintained are physical locations that have particular electrical, magnetic, optical, or organic properties corresponding to or representative of the data bits.

[0229] The data bits may also be maintained on a computer readable medium including magnetic disks, optical disks, and any other volatile (e.g., Random Access Memory ("RAM")) or non-volatile (e.g., Read-Only Memory ("ROM")) mass storage system readable by the CPU. The computer readable medium may include cooperating or interconnected computer readable medium, which exist exclusively on the processing system or are distributed among multiple interconnected processing systems that may be local or remote to the processing system. It is understood that the representative embodiments are not limited to the above-mentioned memories and that other platforms and memories may support the described methods.

[0230] No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article "a" is intended to include one or more items. Where only one item is intended, the term "one" or similar language is used. Further, the terms "any of" followed by a listing of a plurality of items and/or a plurality of categories of items, as used herein, are intended to include "any of," "any combination of," "any multiple of," and/or "any combination of multiples of" the items and/or the categories of items, individually or in conjunction with other items and/or other categories of items. Further, as used herein, the term "set" is intended to include any number of items, including zero. Further, as used herein, the term "number" is intended to include any number, including zero.

[0231 ] Moreover, the claims should not be read as limited to the described order or elements unless stated to that effect. In addition, use of the term "means" in any claim is intended to invoke 35 U.S.C. §112, ¶6, and any claim without the word "means" is not so intended.

[0232] Suitable processors include, by way of example, a general purpose processor, a special purpose processor, a conventional processor, a digital signal processor (DSP), a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Field Programmable Gate Array (FPGA) circuits, any other type of integrated circuit (IC), and/or a state machine.

[0233] A processor in association with software may be used to implement a radio frequency transceiver for use in a wireless transmit receive unit (WTRU), user equipment (UE), terminal, base station, Mobility Management Entity (MME) or Evolved Packet Core (EPC), or any host computer. The WTRU may be used in conjunction with modules, implemented in hardware and/or software including a Software Defined Radio (SDR), and other components such as a camera, a video camera module, a videophone, a speakerphone, a vibration device, a speaker, a microphone, a television transceiver, a hands free headset, a keyboard, a Bluetooth® module, a frequency modulated (FM) radio unit, a Near Field Communication (NFC) module, a liquid crystal display (LCD) display unit, an organic light-emitting diode (OLED) display unit, a digital music player, a media player, a video game player module, an Internet browser, and/or any Wireless Local Area Network (WLAN) or Ultra Wide Band (UWB) module.

[0234] Although the invention has been described in terms of communication systems, it is contemplated that the systems may be implemented in software on microprocessors/general purpose computers (not shown). In certain embodiments, one or more of the functions of the various components may be implemented in software that controls a general-purpose computer.

[0235] In addition, although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the details within the scope and range of equivalents of the claims and without departing from the invention.