


Title:
LOCATION BASED COOPERATIVE CACHING AT THE RAN
Document Type and Number:
WIPO Patent Application WO/2016/114767
Kind Code:
A1
Abstract:
Example implementations described herein are directed to systems and methods by which cooperative strategies at the Radio Access Network (RAN) are modified to take into account cached content at the enhanced Node Bs in addition to channel quality information. This results in a reduction of the overall network traffic, and improves the Quality of Experience for the user.

Inventors:
AKOUM SALAM (US)
ACHARYA JOYDEEP (US)
Application Number:
PCT/US2015/011254
Publication Date:
July 21, 2016
Filing Date:
January 13, 2015
Assignee:
HITACHI LTD (JP)
International Classes:
H04L1/00
Foreign References:
US20130176988A12013-07-11
US20100322171A12010-12-23
US20140071841A12014-03-13
Attorney, Agent or Firm:
MEHTA, Mainak, H. et al. (Cory Hargreaves & Savitch LLP,525 B Street, Suite 220, San Diego CA, US)
Claims:
CLAIMS

What is claimed is:

1. A base station, comprising: a memory configured to store cached content; and a processor, configured to: communicate with one or more other base stations having cached content associated with the cached content stored in the memory; and join a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE).

2. The base station of claim 1, wherein the processor is further configured to transmit the cached content to the given UE according to a dynamic point selection (DPS) of the CoMP set.

3. The base station of claim 2, wherein the processor is configured to remove the base station from the CoMP set for the given UE after the base station transmits the cached content to the given UE.

4. The base station of claim 1, wherein the processor is configured to: determine a first delay based on requesting content from a content delivery network over a backhaul for transmission to the given UE from a current serving base station in the CoMP set; determine a second delay based on transmitting the stored cached content to the given UE from another base station in the CoMP set based on a channel quality to the given UE and a number of retransmissions required to transmit the stored cached content to the given UE from the another base station; request the content from the content delivery network and transmit the requested content to the given UE for the first delay being less than or equal to the second delay; and transmit the stored cached content in the memory to the given UE for the first delay being greater than the second delay.

5. The base station of claim 4, wherein the processor is configured to: determine the first delay based on a quality of the backhaul; and receive the content from the content delivery network through the backhaul for the first delay being less than or equal to the second delay.

6. The base station of claim 1, wherein the processor is configured to: select the ones of the one or more other base stations having the cached content associated with the cached content stored in the memory based on Reference Signal Received Power (RSRP); and join the coordinated multipoint (CoMP) set by forming the CoMP set with the selected ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for the given user equipment (UE).

7. A method for a base station, comprising: managing cached content; communicating with one or more other base stations having cached content associated with the cached content managed by the base station; and joining a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE).

8. The method of claim 7, further comprising transmitting the cached content to the given UE according to a dynamic point selection (DPS) of the CoMP set.

9. The method of claim 8, further comprising removing the base station from the CoMP set for the given UE after transmitting the cached content to the given UE.

10. The method of claim 7, further comprising: determining a first delay based on requesting content from a content delivery network over a backhaul for transmission to the given UE; determining a second delay based on transmitting the stored cached content to the given UE based on a channel quality to the given UE; requesting the content from the content delivery network and transmitting the requested content to the given UE for the first delay being less than or equal to the second delay; and transmitting the stored cached content in the memory to the given UE for the first delay being greater than the second delay.

11. The method of claim 10, further comprising: determining the first delay based on a quality of the backhaul; and receiving the content from the content delivery network through the backhaul for the first delay being less than or equal to the second delay.

12. The method of claim 7, further comprising: selecting the ones of the one or more other base stations having the cached content associated with the cached content stored in the memory based on Reference Signal Received Power (RSRP); and joining the coordinated multipoint (CoMP) set by forming the CoMP set with the selected ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for the given user equipment (UE).

13. A computer program for a base station, storing instructions for executing a process, the instructions comprising: managing cached content; communicating with one or more other base stations having cached content associated with the cached content managed by the base station; and joining a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE).
Description:
LOCATION BASED COOPERATIVE CACHING AT THE RAN

Field

[0001] The present disclosure relates generally to wireless systems, and more specifically, to cooperative caching for a Radio Access Network (RAN).

Related Art

[0002] Related art caching primarily involves web caching schemes in content delivery networks (CDNs) or private delivery networks (PDNs). An illustration of a related art Long Term Evolution (LTE) system with PDN or evolved packet core (EPC) caching and RAN caching is illustrated in FIG. 1. The system involves an EPC 100 that is connected to the internet 102 for receiving cached content which can be provided to one or more base stations in the RAN 101. Related art caching algorithms are directed to reducing the number of hops required to fetch content, load balancing to reduce the response time, and optimizing the usage of the network. It may also be technically simpler to do caching at the EPC or the mobile CDN, rather than caching at the RAN, as contents are delivered to the user equipment (UE) via enhanced node Bs (eNodeBs) by using the General Packet Radio Service (GPRS) tunneling protocol (GTP) with encapsulation in related art implementations. Such implementations may present difficulties for content-aware caching at the RAN level.

[0003] Moving the application processing resources closer to the edge of the network, i.e. the RAN, however, may have the effect of reducing network traffic, improving quality of experience, and reducing the traffic congestion problems related to backhaul capacity and delay at the RAN network. The related art implementation issues related to GTP tunneling may be overcome using byte caching. Combining both EPC caching and RAN caching may improve mobile traffic.

[0004] Related art solutions targeting RAN caching are mostly directed to what to cache where, taking into account the storage capacity at the base stations, the file popularity, and the backhaul capabilities.
However, the related art solutions are based on heuristics and approximate solutions. Femtocaching was introduced in the related art to evaluate content delivery through distributed caching helpers in femtocell networks. Another implementation in the related art applies a game theoretical formulation in which a many-to-many matching game is formulated between small cell base stations and a set of videos to cache, based on minimizing the download time for the requesting users and reducing the backhaul load at the small cell base stations. Another implementation in the related art exploits user file correlations to create a popularity matrix of the different files and push them proactively onto small cells. Other caching implementations in the related art take into account device-to-device (D2D) technology, where a set of influential users is determined using the centrality metric, and the content dissemination process is determined.

SUMMARY

[0005] Network operators and vendors are struggling to find ways to meet the exploding demand for data and multimedia content, triggered by the proliferation of tablets and smartphones. The ever-increasing demand for multimedia services creates traffic congestion, and a reduction in the quality of service, not only at the RAN side, but also at the Core Network (CN) and PDN sides. One way to deal with the traffic explosion is to reduce duplicate content transmissions, triggered by users requesting the same content, via adopting intelligent caching strategies inside the PDNs as well as at the RAN side. Caching may reduce the traffic exchanged at the inter- and intra-internet service provider (ISP) levels and may also reduce the response time or latency needed to fetch a file. Caching not only alleviates congestion at the network, but also reduces the energy consumption, and reduces the peak backhaul capacity required at the RAN side.
[0006] Caching has been applied in wired networks to reduce the number of hops required to fetch content, as well as to do load balancing between different servers. Caching at the RAN, however, has not been considered in the related art due to challenges related to storage space at the base stations, and GTP tunneling at the core network. In example implementations of the present disclosure, there are location-based cooperative RAN caching strategies wherein cooperation decisions, usually based on short term and long term physical (PHY) layer metrics, take into account the content requested by the users, the cacheability of the content at the affected base stations, and the type of the base station.

[0007] Example implementations may involve new systems and methods for forming Coordinated Multipoint (CoMP) sets taking into account, in addition to Radio Resource Management (RRM) measurements and Channel State Information (CSI) measurements, the availability of the requested content at the neighboring base stations. In example implementations, there is an optimization module at the cooperating set taking into account the backhaul quality and delay as well as the channel quality and the content at the transmission points.

[0008] In example implementations, there is a CoMP strategy wherein Dynamic Point Selection (DPS) is applied at the CoMP cooperating set to reduce the response time needed to fetch a file at the UE, taking into account the CoMP cooperating set, the CoMP scenario, the file popularity, and the UE type.

[0009] In example implementations, there is a CoMP strategy for a heterogeneous network scenario, taking into account the backhaul capacity, the storage size, and the type of content.
[0010] Aspects of the present disclosure include a base station, which may involve a memory configured to store cached content; and a processor, configured to communicate with one or more other base stations having cached content associated with the cached content stored in the memory; and join a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE).

[0011] Aspects of the present disclosure further include a method for a base station, which can include managing cached content; communicating with one or more other base stations having cached content associated with the cached content managed by the base station; and joining a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE).

[0012] Aspects of the present disclosure further include a computer program for a base station, storing instructions for executing a process, which can include managing cached content; communicating with one or more other base stations having cached content associated with the cached content managed by the base station; and joining a coordinated multipoint (CoMP) set with ones of the one or more other base stations having the cached content associated with the cached content stored in the memory, for a given user equipment (UE). The computer program can be stored on a non-transitory computer readable medium and executed by one or more processors.
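The base station behavior summarized above — joining a CoMP set for a UE based on shared cached content, serving the UE via dynamic point selection, and leaving the set once the cached content is delivered — can be sketched in simplified form as follows. This is an illustrative model only: the function names, data layout, and per-subframe channel-quality scores are invented for the example and are not part of the disclosure.

```python
# Illustrative sketch (not from the disclosure): base stations join a
# CoMP set for a UE based on cached content, then dynamic point
# selection (DPS) picks one transmission point per subframe; a point is
# dropped once its cached content for the UE has been delivered.

def form_comp_set(stations, requested_content):
    """stations: dict name -> set of cached chunks.
    A station joins the CoMP set if it caches part of the requested content."""
    return {name: (cache & requested_content)
            for name, cache in stations.items() if cache & requested_content}

def dps_deliver(comp_set, channel_quality):
    """channel_quality: fn(subframe, station) -> float, a made-up score.
    Returns the (station, chunk) delivery order."""
    pending = {name: sorted(chunks) for name, chunks in comp_set.items()}
    order, subframe = [], 0
    while pending:
        # DPS: select the point with the best channel quality this subframe
        best = max(pending, key=lambda n: channel_quality(subframe, n))
        order.append((best, pending[best].pop(0)))
        if not pending[best]:      # cached content fully delivered:
            del pending[best]      # remove the point from the cooperating set
        subframe += 1
    return order

stations = {"A": {"c1"}, "B": {"c2", "c3"}, "C": set()}
comp = form_comp_set(stations, {"c1", "c2", "c3"})   # C has nothing cached
print(dps_deliver(comp, lambda sf, n: {"A": 0.9, "B": 0.4}[n]))
# [('A', 'c1'), ('B', 'c2'), ('B', 'c3')]
```

Note how station A drops out of the cooperating set after delivering its only cached chunk, mirroring the removal step claimed for DPS above.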

BRIEF DESCRIPTION OF DRAWINGS

[0013] FIG. 1 illustrates a Long Term Evolution (LTE) system with PDN or evolved packet core (EPC) caching and RAN caching.

[0014] FIG. 2 illustrates a flow diagram of CoMP set formation, in accordance with an example implementation.

[0015] FIG. 3 illustrates a cache CoMP measurement set formation module, in accordance with an example implementation.

[0016] FIG. 4 illustrates a flow diagram of the cache CoMP cooperating set transmission module, in accordance with an example implementation.

[0017] FIG. 5 illustrates an example of a cache cooperating set having three transmission points, in accordance with an example implementation.

[0018] FIG. 6 illustrates a flowchart for transmitting content, in accordance with an example implementation.

[0019] FIG. 7 illustrates an example of an application of FIG. 6, in accordance with an example implementation.

[0020] FIG. 8 illustrates an example apparatus implementation for a core network, in accordance with an example implementation.

[0021] FIG. 9 illustrates an example base station upon which example implementations can be implemented.

[0022] FIG. 10 illustrates an example user equipment upon which example implementations can be implemented.

DETAILED DESCRIPTION

[0023] The following detailed description provides further details of the figures and example implementations of the present application. Reference numerals and descriptions of redundant elements between figures are omitted for clarity. Terms used throughout the description are provided as examples and are not intended to be limiting. For example, the use of the term “automatic” may involve fully automatic or semi-automatic implementations involving user or administrator control over certain aspects of the implementation, depending on the desired implementation of one of ordinary skill in the art practicing implementations of the present application. The terms enhanced node B (eNodeB), small cell (SC), base station (BS) and pico cell may be utilized interchangeably throughout the example implementations. The terms “traffic” and “data” may also be utilized interchangeably throughout the example implementations. The implementations described herein are also not intended to be limiting, and can be implemented in various ways, depending on the desired implementation.

[0024] In example implementations, there are changes to the network topology and the cooperation and transmission strategies based, not only on the long term and short term PHY layer metrics, but also on the availability of content at the base stations. An opportunistic cooperative Multiple Input Multiple Output (MIMO) caching approach in the related art only considers joint transmission and changing the transmission strategy according to the availability of content. However, example implementations described herein consider changing the network topology to account for content availability and reduce the response time for the requested file.

[0025] FIG. 2 illustrates a flow diagram of CoMP set formation, in accordance with an example implementation.
[0026] At 200, the UE initially measures Reference Signal Received Power (RSRP) or Reference Signal Received Quality (RSRQ) of different neighboring base stations, and sends the measurements to the eNodeB. This is implemented for CoMP resource management set formation or Radio Resource Management (RRM) measurement set formation. The measurements convey the long-term downlink channel quality from the various transmitting points. Based on the measurements, potential points can be selected for CoMP transmission to the UE. The selection of these points is illustrated at 201 for forming the cache CoMP measurement set.

[0027] For the formation of the cache CoMP measurement set, the base station selects the CoMP measurement set for the UE from the CoMP resource management set based on the RSRP/RSRQ measurements 201-1, and/or finds the CoMP caching set for the UE based on content available at the CoMP resource management set 201-2. The sets are then passed to an optimization module 201-3, wherein a CoMP cooperating set is formed and transmission is performed at 202.

[0028] The CoMP RRM measurement set includes transmission points (e.g., 32) wherein the downlink channel quality to the UE is measured via RSRP/RSRQ. Out of the transmission points in the RRM set, a cached-CoMP measurement set is formed based on the required content of the UE, taking into account the storage or cache content at each transmission point, in addition to the downlink channel quality from the transmission point to the UE (RSRP/RSRQ) and the backhaul quality of each transmission point.

[0029] FIG. 3 illustrates a cache CoMP measurement set formation module 300, in accordance with an example implementation.

[0030] At 301, the RRM measurement set is first parsed to determine which of the transmitting points contains any cached components of the UE requested content. Note that due to GTP tunneling, the caching can be byte caching or a form of packet caching.
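The set formation at 201-1 through 201-3 can be sketched roughly as follows. The dictionary layout, the RSRP threshold value, and the reduction of the optimization module to a simple set intersection are assumptions made for illustration; the disclosure's optimization also weighs backhaul quality and transmission point load.

```python
# Rough sketch of cache CoMP measurement set formation (201-1 to 201-3).
# Data layout and the threshold are invented for illustration.

def form_cache_comp_measurement_set(rrm_set, requested_content,
                                    rsrp_threshold=-110.0):
    """rrm_set: dict point -> {"rsrp": dBm, "cache": set of content chunks}.
    requested_content: set of chunks the UE requested."""
    # 201-1: select points by long-term downlink channel quality (RSRP)
    measurement_set = {p for p, info in rrm_set.items()
                       if info["rsrp"] >= rsrp_threshold}
    # 201-2: find points caching any component of the requested content
    caching_set = {p for p, info in rrm_set.items()
                   if info["cache"] & requested_content}
    # 201-3: the optimization module is reduced here to an intersection
    return measurement_set & caching_set

rrm = {
    "A": {"rsrp": -95.0, "cache": {"chunk1", "chunk2"}},
    "B": {"rsrp": -120.0, "cache": {"chunk1"}},  # channel too weak
    "C": {"rsrp": -100.0, "cache": set()},       # nothing cached
}
print(form_cache_comp_measurement_set(rrm, {"chunk1"}))  # {'A'}
```

An empty result would correspond to the "No" branch at 302 below, where the conventional LTE-A CoMP strategy proceeds unchanged.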
At 302, a determination is made as to whether the RRM set contains cached content for the UE. If none of the transmission points contains UE content (No), the CoMP measurement set proceeds without any change, and the CoMP strategy as applied in LTE-A without caching is implemented without change, as shown at 303.

[0031] If, however, the RRM set contains cached content for the UE (Yes), the set of all the transmission points with available content is collected at 304. The subset of the RRM set containing UE information need not contain all the content required by the UE. This subset can further contain duplicate copies of some components of the UE content; such duplicates are then differentiated by, for example, the transmission point backhaul quality or the channel quality to the UE.

[0032] At 305, the newly formed subset is then downsized into the cached CoMP measurement set by selecting, for example, the transmission points that have the information, have an RSRP measurement meeting a desired threshold, have a backhaul quality meeting a desired threshold, and have no duplications. This selection is done based on an optimization problem that takes into account the critical content, the channel quality to the UE and to the transmission point, and the number of UEs being served by that particular transmission point. For example, if the transmission point is serving many UEs such that its resources are scarce, that constraint is taken into account in the optimization problem.

[0033] At 306, the cached-CoMP measurement set is then used to determine the CoMP cooperating set.

[0034] Note that the cached CoMP measurement set need not be restricted to a maximum of three transmission points as in Release 11 (Rel. 11) CoMP. The number of transmission points in the cached CoMP measurement set is determined such that the feedback required from the UE is minimized, but the cached content is maximized.

[0035] As illustrated at 306 in FIG.
3, another aspect in the overall system is the CoMP cooperating set formation. The CoMP cooperating set includes points that participate in data transmission to the UE. An example of a CoMP cooperating set transmission based on dynamic point selection (DPS) is shown in FIG. 4 and FIG. 5.

[0036] FIG. 4 illustrates a flow diagram of the cache CoMP cooperating set transmission module 400, in accordance with an example implementation. At 401, DPS transmission to the UE from the cache CoMP set is conducted. Transmission points are dynamically selected from the cache CoMP measurement set at 402, according to the channel quality in the corresponding subframe. As long as the cached content has not been completely delivered (No), the point is kept in the cooperating set at 403, and the flow proceeds to 401. When the cached content for that particular UE is completely delivered at a given transmission point (Yes), the transmission point is removed from the cache-CoMP cooperating set at 404, resulting in reduced feedback requirements for the UE because of the smaller number of transmission points in the cooperating set, and the flow proceeds back to 401.

[0037] FIG. 5 illustrates an example of a cache cooperating set having three transmission points, in accordance with an example implementation. In this example and based on the flow in FIG. 4, for a given frame, the cached content of base station A is delivered after the fifth subframe. Base station A is subsequently removed from the cache-cooperating set, and DPS is performed for the remaining two transmission points in the cooperating set.

[0038] FIG. 6 illustrates a flowchart for transmitting content, in accordance with an example implementation. The transmission can be handled by a content transmission module 600. At 601, the delay is estimated from requesting the content from the CDN while accounting for backhaul delay.
The delay will be estimated based on the type of link between the base station and the core network, and the level of congestion at the core network and the RAN. For example, in the absence of congestion, this delay is the time it takes a packet to arrive from the server to the base station after requesting the information.

At 602, the total delay for transmitting the cached content to the UE with potential retransmissions due to bad channel quality is estimated. This includes an estimate of the delay at the backhaul between the different base stations in the CoMP set, and an estimate of the delay given the RSSI from the base station to the UE and given previously transmitted packets from this base station with the same channel quality. At 601 and 602, a worst-case delay can be estimated, wherein the maximum delay is estimated given the backhaul and the channel quality.

At 603, a comparison is made between the two delays. If the delay from the CDN is more than the delay of transmission (Yes), then the cached content is transmitted at 605. Otherwise (No), the content requested from the CDN is transmitted at 604.

[0039] FIG. 7 illustrates an example of an application of FIG. 6, in accordance with an example implementation. In the example of FIG. 7, the application requested by the UE 700 is delay sensitive, and one of the cache cooperating set transmission points experiences a bad channel quality to the UE, but contains part of the content requested by the UE. The transmission point in this case makes a decision (e.g., by the content transmission module 600 in FIG. 6) between (a) transmitting the local cached content at the BS level, and enduring retransmission delays due to bad channel quality, and (b) requesting the content from the core network 100, while enduring backhaul delays. An optimization is done so that the overall delay in delivering the information to the user is reduced, as described above in FIG. 6.
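The comparison at 601 through 605 can be sketched as follows. The delay models here are deliberately simple placeholders — a fixed backhaul delay plus per-packet time for the CDN path, and a retransmission probability derived from channel quality for the cached path; the disclosure does not specify these formulas.

```python
# Simplified sketch of the delay comparison in FIG. 6 (601-605).
# The linear delay models below are assumptions for illustration only.

def choose_source(backhaul_delay_ms, per_packet_ms, num_packets,
                  retx_prob, air_time_ms):
    """Return 'cache' or 'cdn' depending on which path is estimated faster."""
    # 601: delay of fetching the content from the CDN over the backhaul
    cdn_delay = backhaul_delay_ms + num_packets * per_packet_ms
    # 602: delay of serving cached content, inflated by expected
    # retransmissions estimated from the channel quality
    cached_delay = num_packets * (1.0 + retx_prob) * air_time_ms
    # 603-605: transmit cached content only if the CDN path is slower
    return "cache" if cdn_delay > cached_delay else "cdn"

# Good channel, slow backhaul: serve from the local cache.
print(choose_source(50.0, 1.0, 10, 0.1, 1.0))  # cache
# Bad channel, fast backhaul: fetch from the CDN instead.
print(choose_source(1.0, 1.0, 10, 2.0, 1.0))   # cdn
```

The second call illustrates the FIG. 7 scenario: a point holding part of the content but facing a bad channel may still defer to the core network when retransmission delays dominate.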
Note that this decision does not exclude getting the information for the UE from another base station containing the information, not necessarily the original serving base station, but inside the CoMP set (e.g., through the X2/Xn interface), instead of getting it from the core network, if the delay is reduced.

[0040] Forming the cache CoMP cooperating sets is further illustrated in Table 1, where different base stations from the cache CoMP set are chosen to serve different UEs depending on the channel quality towards the UE and the cached content at each base station. In the example given in Table 1, there are six base stations in the cache-measurement set (BS A-F) for two UEs 1 and 2. Out of these base stations, for this particular transmission, these base stations form different CoMP-cache cooperating sets to transmit to the different UEs. Note that this example is given for illustration purposes only and is not intended to be limiting.

Table 1: CoMP cooperating set formation and UE assignment to each group

[0041] FIG. 8 illustrates an example apparatus implementation for a core network, in accordance with an example implementation. The apparatus implementation may be in the form of a Mobility Management Entity (MME), a packet gateway (P-GW), a serving gateway (S-GW), a home subscriber server (HSS), a policy control and charging rules function (PCRF), or a device configured to perform the functions of the core network 100, or a combination of devices thereof, and implemented in the form of a server or computer depending on the desired implementation. The apparatus 800 may include a CPU 801, a memory 802 and a RAN interface 803. The CPU 801 may invoke one or more functions that facilitate the apparatus to provide content to one or more base stations of associated RANs. The memory 802 may be configured to store information to manage functionality of the apparatus and the associated RANs.
[0042] CPU 801 may include one or more functions such as UE ID manager 801-1, Mobility Management 801-2 and Offload Function 801-3. UE ID manager 801-1 may be configured to refer to Subscriber Database 802-3 in the memory 802 to manage UEs that are associated with the apparatus 800. Mobility Management 801-2 may utilize RAN interface 803 to communicate with the RAN and associated base station to process the receiving or transferring of UEs. Offload Function 801-3 may be configured to receive a request to load balance the UEs associated with the RAN and refer to Subscriber Database 802-3 to determine UEs to offload.

[0043] Memory 802 may manage information such as RAN Management 802-1, UE Management 802-2, and Subscriber Database 802-3. RAN Management 802-1 may indicate a list of the RANs managed by the apparatus 800. UE Management 802-2 can include UE latency and throughput information for the UEs managed by the apparatus 800. Subscriber Database 802-3 may include UE service class information and desired Quality of Experience (QoE) levels.

[0044] FIG. 9 illustrates an example base station upon which example implementations can be implemented. The block diagram of a base station 900 in the RAN of the example implementations is shown in FIG. 9, which could be a macro base station, a pico base station, an eNodeB and so forth. The base station 900 may include the following modules: the Central Processing Unit (CPU) 901, the baseband processor 902, the transmission/receiving (Tx/Rx) array 903, the X2/Xn interface 904, and the memory 905. The CPU 901 is configured to execute one or more modules or flows as described, for example, in FIGS.
3, 4 and 6 to transmit cached content for a given UE by communicating with one or more other base stations through the X2/Xn interface 904 that have the cached content associated with the cached content stored in the memory 905 for the given UE; and join a CoMP set to provide the content to the given UE either by obtaining the content through the CDN such as apparatus 800, or by transmitting the cached content from memory 905 as part of the CoMP set.

[0045] The baseband processor 902 generates baseband signaling including the reference signal and the system information such as the cell-ID information. The Tx/Rx array 903 contains an array of antennas which are configured to facilitate communications with associated UEs. Associated UEs may communicate with the Tx/Rx array to transmit signals containing channel quality information, precoding matrix index, received signal strength and so forth. The X2/Xn interface 904 is used to exchange information between one or more base stations and/or the apparatus of FIG. 8 via a backhaul to join/leave a CoMP set, obtain cached content from other base stations for transmitting to the given UE, obtain the content from the CDN to transmit to the UE, and so on. The memory 905 can be configured to store and manage cached content as provided by the CDN. Memory 905 may take the form of a computer readable storage medium or can be replaced with a computer readable signal medium as described below.

[0046] FIG. 10 illustrates an example user equipment upon which example implementations can be implemented. The UE 1000 may involve the following modules: the CPU module 1001, the Tx/Rx array 1002, the baseband processor 1003, and the memory 1004. The CPU module 1001 can be configured to perform one or more functions, such as execution of one or more applications (e.g., voice, internet, video streaming, etc.) as well as the requesting and receiving of cached content from the associated base station.
The Tx/Rx array 1002 may be implemented as an array of one or more antennas to communicate with the one or more base stations. The memory 1004 can be configured to store cached information from the base station. The baseband digital signal processing (DSP) module 1003 can be configured to perform one or more functions, such as to conduct measurements to generate the position reference signal for the serving base station to estimate the location of the UE.

[0047] Finally, some portions of the detailed description are presented in terms of algorithms and symbolic representations of operations within a computer. These algorithmic descriptions and symbolic representations are the means used by those skilled in the data processing arts to convey the essence of their innovations to others skilled in the art. An algorithm is a series of defined steps leading to a desired end state or result. In example implementations, the steps carried out require physical manipulations of tangible quantities for achieving a tangible result.

[0048] Unless specifically stated otherwise, as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” “displaying,” or the like, can include the actions and processes of a computer system or other information processing device that manipulates and transforms data represented as physical (electronic) quantities within the computer system’s registers and memories into other data similarly represented as physical quantities within the computer system’s memories or registers or other information storage, transmission or display devices.

[0049] Example implementations may also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may include one or more general-purpose computers selectively activated or reconfigured by one or more computer programs.
Such computer programs may be stored in a computer readable medium, such as a computer-readable storage medium or a computer-readable signal medium. A computer-readable storage medium may involve tangible mediums such as, but not limited to, optical disks, magnetic disks, read-only memories, random access memories, solid state devices and drives, or any other types of tangible or non-transitory media suitable for storing electronic information. A computer readable signal medium may include mediums such as carrier waves. The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Computer programs can involve pure software implementations that involve instructions that perform the operations of the desired implementation.

[0050] Various general-purpose systems may be used with programs and modules in accordance with the examples herein, or it may prove convenient to construct a more specialized apparatus to perform desired method steps. In addition, the example implementations are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the example implementations as described herein. The instructions of the programming language(s) may be executed by one or more processing devices, e.g., central processing units (CPUs), processors, or controllers.

[0051] As is known in the art, the operations described above can be performed by hardware, software, or some combination of software and hardware. Various aspects of the example implementations may be implemented using circuits and logic devices (hardware), while other aspects may be implemented using instructions stored on a machine-readable medium (software), which if executed by a processor, would cause the processor to perform a method to carry out implementations of the present application.
Further, some example implementations of the present application may be performed solely in hardware, whereas other example implementations may be performed solely in software. Moreover, the various functions described can be performed in a single unit, or can be spread across a number of components in any number of ways. When performed by software, the methods may be executed by a processor, such as a general purpose computer, based on instructions stored on a computer-readable medium. If desired, the instructions can be stored on the medium in a compressed and/or encrypted format.

[0052] Moreover, other implementations of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the teachings of the present application. Various aspects and/or components of the described example implementations may be used singly or in any combination. It is intended that the specification and example implementations be considered as examples only, with the true scope and spirit of the present application being indicated by the following claims.