

Document Type and Number:
WIPO Patent Application WO/2001/017183
A system, method and article of manufacture are provided for contract negotiation in a bandwidth market environment. First, bandwidth on a network is allocated among a plurality of users. An amount of unused bandwidth of a first user is identified. A request for bandwidth on the network is received from a second user. Then, a negotiation between the first and second users is allowed to determine transaction terms for reallocation of the unused bandwidth from the first user to the second user. Upon acceptance of the transaction terms by the first and second users, contract information relating to the transaction terms is sent to the first and second users.
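The flow described in the abstract can be sketched in code. This is a minimal illustrative sketch, not the application's implementation; the class names, fields, and units (`User`, `Contract`, `negotiate`, Kbps figures) are assumptions introduced here for clarity.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    name: str
    allocated_kbps: int          # bandwidth allocated to this user on the network
    used_kbps: int = 0           # bandwidth currently in use

    @property
    def unused_kbps(self) -> int:
        return self.allocated_kbps - self.used_kbps

@dataclass
class Contract:
    seller: str
    buyer: str
    amount_kbps: int
    duration_hours: int
    price: float

def negotiate(seller: User, buyer: User, requested_kbps: int,
              duration_hours: int, offered_price: float) -> Optional[Contract]:
    """Steps (b)-(e): identify the seller's unused bandwidth, take the
    buyer's request, and, if the seller can cover it, form a contract
    whose terms would be sent to both parties."""
    if requested_kbps > seller.unused_kbps:
        return None                          # seller cannot cover the request
    contract = Contract(seller.name, buyer.name,
                        requested_kbps, duration_hours, offered_price)
    # reallocate the unused bandwidth under the agreed terms
    seller.allocated_kbps -= requested_kbps
    buyer.allocated_kbps += requested_kbps
    return contract
```

For example, a seller allocated 512 Kbps but using only 128 Kbps has 384 Kbps of unused bandwidth and can accept a 256 Kbps request; a larger request would fail the check and no contract would form.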

Inventors:
Socher, Larry (2734 Valestra Circle Oakton, VA, 22124, US)
Publication Date:
March 08, 2001
Filing Date:
August 31, 2000
Applicants:
ANDERSEN CONSULTING, LLP (1661 Page Mill Road Palo Alto, CA, 94304, US)
International Classes:
H04L12/14; H04L12/54; H04L12/911; H04L12/919; H04L12/927; H04L12/70; (IPC1-7): H04L12/56; H04L12/14
Other References:
HOL K: "BIT BY BID BY BIT DEMAND AND SUPPLY OF BANDWIDTH THROUGH ELECTRONIC AUCTIONS", UTRECHT, NL, AUG. 24 - 28, 1999,LONDON: IBTE,GB, 24 August 1999 (1999-08-24), pages 143 - 147, XP000847185
FULP E W ET AL: "Paying for QoS: an optimal distributed algorithm for pricing network resources", INTERNATIONAL WORKSHOP ON QUALITY OF SERVICE (IWQOS),XX,XX, 18 May 1998 (1998-05-18), pages 75 - 84, XP002154150
Attorney, Agent or Firm:
Smith, Guy Porter (Oppenheimer Wolff & Donnelly LLP 2029 Century Park East, Suite 3800 Los Angeles, CA, 90067-3024, US)
CLAIMS

What is claimed is:
1. A method for contract negotiation in a bandwidth market environment comprising the steps of: (a) allocating bandwidth on a network among a plurality of users; (b) identifying an amount of unused bandwidth of a first user; (c) receiving a request for bandwidth on the network from a second user; (d) allowing a negotiation between the first and second users for determining transaction terms for reallocation of the unused bandwidth from the first user to the second user; and (e) sending contract information relating to the transaction terms to the first and second users after acceptance of the transaction terms by the first and second users.
2. A method as recited in claim 1, wherein the contract information defines at least one of the amount of unused bandwidth, a duration of use of the unused bandwidth, a service level, and a price.
3. A method as recited in claim 1, and further comprising the step of sending the contract information to a third party, where the third party requests bandwidth from the second user.
4. A method as recited in claim 3, wherein the contract information includes a contract identifier.
5. A method as recited in claim 1, and further comprising the step of charging a transaction fee for allowing the negotiation between the first and second users.
6. A method as recited in claim 1, wherein the step of allowing the negotiation between the first and second users occurs in real time.
7. A computer program embodied on a computer readable medium for contract negotiation in a bandwidth market environment comprising: (a) a code segment that allocates bandwidth on a network among a plurality of users; (b) a code segment that identifies an amount of unused bandwidth of a first user; (c) a code segment that receives a request for bandwidth on the network from a second user; (d) a code segment that allows a negotiation between the first and second users for determining transaction terms for reallocation of the unused bandwidth from the first user to the second user; and (e) a code segment that sends contract information relating to the transaction terms to the first and second users after acceptance of the transaction terms by the first and second users.
8. A computer program as recited in claim 7, wherein the contract information defines at least one of the amount of unused bandwidth, a duration of use of the unused bandwidth, a service level, and a price.
9. A computer program as recited in claim 7, and further comprising a code segment that sends the contract information to a third party, where the third party requests bandwidth from the second user.
10. A computer program as recited in claim 9, wherein the contract information includes a contract identifier.
11. A computer program as recited in claim 7, and further comprising a code segment that charges a transaction fee for allowing the negotiation between the first and second users.
12. A computer program as recited in claim 7, wherein the negotiation between the first and second users occurs in real time.
13. A system for contract negotiation in a bandwidth market environment comprising: (a) logic that allocates bandwidth on a network among a plurality of users; (b) logic that identifies an amount of unused bandwidth of a first user; (c) logic that receives a request for bandwidth on the network from a second user; (d) logic that allows a negotiation between the first and second users for determining transaction terms for reallocation of the unused bandwidth from the first user to the second user; and (e) logic that sends contract information relating to the transaction terms to the first and second users after acceptance of the transaction terms by the first and second users.
14. A system as recited in claim 13, wherein the contract information defines at least one of the amount of unused bandwidth, a duration of use of the unused bandwidth, a service level, and a price.
15. A system as recited in claim 13, and further comprising logic that sends the contract information to a third party, where the third party requests bandwidth from the second user.
16. A system as recited in claim 15, wherein the contract information includes a contract identifier.
17. A system as recited in claim 13, and further comprising logic that charges a transaction fee for allowing the negotiation between the first and second users.
18. A system as recited in claim 13, wherein the negotiation between the first and second users occurs in real time.
A SYSTEM, METHOD, AND ARTICLE OF MANUFACTURE FOR AUTOMATED NEGOTIATION OF A CONTRACT DURING A TRANSACTION INVOLVING BANDWIDTH

FIELD OF THE INVENTION

The present invention relates to negotiating contracts and more particularly to automating a negotiation for formation of a contract for the sale, trade, and purchase of bandwidth.

BACKGROUND OF THE INVENTION

The market for data networking services is rapidly changing. Flat rate bandwidth pricing structures that mimic telephony models are becoming obsolete. As the Regional Bell Operating Companies (RBOCs) are currently discovering, they can no longer afford to offer flat monthly fees to their customers for analog phone lines that are used for local data calls, most commonly to connect to the Internet. This is because residential customers, who have no economic incentive to disconnect from their on-line service providers, are keeping circuits open much longer than typical voice calls last. As a result, the traditional Erlang model that carriers have relied on for decades is no longer reliable. In order to avoid huge losses, the RBOCs will be forced to abandon their flat rate structure in favor of usage-based pricing.

In addition, carriers are at a loss over how to price new network technologies, as illustrated by the wide range of pricing for ATM services. New features, such as robust Quality of Service (QoS) guarantees, will demand more flexible pricing structures. Although network providers are doing their best to postpone adjusting their pricing models, they cannot continue to keep high-demand services such as Switched Virtual Circuits (SVCs) off the market just to maintain the status quo.

As more network options become available, competing technologies and providers are starting to drive down bandwidth prices. Customers are increasingly better educated, and it is becoming much harder for vendors to differentiate their product offerings. This is illustrated by the on-line access market. Internet Service Providers (ISPs) are currently having a very difficult time keeping subscribers. Because customers view their access services as a commodity, they are quick to abandon their ISP for a lower cost provider. The situation has become so bad that many vendors have abandoned the residential market.

This trend toward commoditization of networking services and bandwidth is also evident in the market for frame relay services. As the market has become more competitive and prices have dropped, vendor pricing is quickly converging. Frame relay providers are no longer able to differentiate their services, despite attempts to provide complex Service Level Agreements (SLAs).

Not only will communication system operators see a rapidly changing market in which bandwidth is commoditized, but they will also be faced with the difficult task of predicting demand. As newer network technologies and higher bandwidth availability allow a wider variety of applications to operate over the communications network, predicting demand for this bandwidth becomes more complex. To make the situation worse, communication system operators need to determine demand by location in order to maximize revenues. It will be very difficult for communication system operators to determine an optimized pricing structure while keeping their model simple and understandable. Pricing determinants in the model are much more complex than those seen in today's networks.

In addition, communication system operators face a difficult situation with their distributors. Unless distributors can guarantee service, customers will be unwilling to buy. In order to make sure that they can service their customers, distributors will be forced to purchase excess capacity, since demand will be difficult to predict. Distributors will only commit to this excess capacity, however, if communication system operators price their bandwidth low enough. This means that unless another alternative is found, communication system operators will be forced to price their bandwidth in a way that leaves money on the table.

A history of bandwidth pricing leading up to today's environment will now be set forth. One of the major challenges of the communication industry is determining how to price wholesale bandwidth. As a substantial part of the business case will rest upon the pricing model, it is important that bandwidth is priced in a manner that maximizes revenue over the life of the system.

In order to establish a wholesale pricing model, communication system operators must first look at how bandwidth has been priced in the past. This will provide the background for analyzing how the industry has reached its current state. These past pricing models should then be compared to current vendor pricing to help determine industry best practices.

Although a pricing model based on the foregoing factors would be sufficient for a network that is being rolled out this year, it is not adequate for a network that will go operational five years from now, because of the rapid growth in new technologies and increasing competition among network providers. To establish a pricing model that can be used to justify a network that will go live in the future, it is important to look beyond current industry pricing practices.

The following additional factors need to be considered: future availability of bandwidth; customer demand for that bandwidth; future internetworking between competing vendors' networks; customer expectations for bandwidth pricing; and customer procurement patterns.

Two major factors have historically driven bandwidth pricing: standard telephony pricing and networking technologies. As Wide Area Network (WAN) vendors are typically the same vendors selling long-distance telephony service, most data pricing structures have evolved from telephony models. Although this helped keep pricing plans and billing systems relatively simple, it has resulted in a number of limitations. While these limitations haven't been too debilitating in the past, they are starting to have a greater impact as technologies evolve.

Historically, the introduction of new network technologies, such as packet switched networks, has often required changes to existing pricing structures.

The first real data communications services were mainly dedicated leased lines. Conditioned analog circuits and, later, Digital Data Services (DDS) were installed to support what was typically host-terminated traffic. These circuits were usually used to access an IBM mainframe using BSC or SNA protocols.

Initial speeds started around 300 bits per second and eventually grew to higher speed T1 or E1 services (1.544 Mbps and 2.048 Mbps). In today's world, customers can lease high speed T3 circuits operating at 45 Mbps.

As circuit speeds increased, the industry saw a rise in time division multiplexing (TDM) bandwidth managers that subdivided these higher speed circuits into multiple smaller circuits that could be used for different types of traffic (e.g. voice, data, etc.). This allowed the cost of these circuits to be distributed over multiple applications. Although multiplexing did allow better utilization of high speed circuits, it was not a very efficient mechanism for dividing up the bandwidth. When using time division multiplexing, each application receives dedicated bandwidth regardless of whether or not it is actually using it. This means that if a 64 Kbps channel from a T1 circuit is assigned to a data application, that 64 Kbps will go unused whenever the application has no data to transfer. There is no reassignment of bandwidth to other applications.
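The inefficiency of dedicated TDM channels can be made concrete with a small calculation. The channel counts are standard for a T1, but the duty cycles below are illustrative assumptions:

```python
# A T1 carries 24 x 64 Kbps channels (1,536 Kbps of payload). Under TDM,
# each application owns its channel full-time, whether or not it is sending.
CHANNEL_KBPS = 64
CHANNELS = 24

# Illustrative duty cycles: the fraction of time each application
# actually transmits on its dedicated channel.
duty_cycles = [0.10] * 16 + [0.50] * 8   # 16 bursty apps, 8 busier ones

allocated_kbps = CHANNEL_KBPS * CHANNELS
used_kbps = sum(CHANNEL_KBPS * d for d in duty_cycles)
print(f"allocated: {allocated_kbps} Kbps, average use: {used_kbps:.0f} Kbps")
print(f"stranded by TDM on average: {allocated_kbps - used_kbps:.0f} Kbps")
```

Under these assumptions roughly three quarters of the T1's capacity sits idle on average, which is exactly the bandwidth that statistical multiplexing (discussed below for packet switched networks) is able to reclaim.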

Dedicated leased lines can be considered analogous to "hotlines" in the telephony world. A dedicated circuit remains open throughout the phone network so that if a person picks up the line on one end, they can automatically talk to someone on the other end. No call needs to be placed to establish the circuit. As this was well understood in the telephony world, it was not difficult to come up with a pricing model to account for dedicated leased lines. The network vendors treated these leased lines in much the same fashion as they would handle dedicated voice circuits, charging a fixed monthly fee based on the locations, the distance between the end points, and the bandwidth of the circuit. Pricing for a circuit is established by looking at the NPA/NXX combinations of the end points and determining the cost of different bandwidth connections between the endpoints.

When the US government broke up AT&T in 1984, an interesting thing happened to the leased line pricing structure. As the Regional Bell Operating Companies (RBOCs) now had control of the local loop (from the customer premises to the long distance provider's Point of Presence [POP]), there were now multiple components to a long distance leased line: the two local loops and the Interexchange Carrier (IXC) connection. Although the long distance network providers were still willing to offer "fractional" T1 services (between 64 Kbps and 1.544 Mbps), the Local Exchange Carriers (LECs) only offer 64 Kbps (DS0) or T1 (DS1) circuits.

This means that if a customer wants to lease a dedicated leased line between 64 Kbps and T1 speeds, they must lease an entire T1 to the POP from each of the RBOCs. As the IXC will turnkey the connection, these costs are typically directly passed through to the customer. Some customers will multiplex out the additional capacity and use it for other purposes, such as voice long distance services. As dedicated leased lines were close enough to conventional telephony pricing, vendors had no problem using similar pricing models to support early data networks.

As users began to need access to more than one host, they started to use the dial-up capabilities of the Public Switched Telephone Network (PSTN). Typically armed with an analog modem, these users would dial into a host either directly (e.g. Vax or Unix) or through a protocol converter (e.g. IBM mainframe). Although some synchronous modem standards such as V.32 have been supported, the majority of this traffic has been asynchronous. With the popularity of on-line services and the rise of the Internet, the growth of asynchronous dial-up has recently skyrocketed. This growth has also been fueled by increased modem speeds (from 300 bps to 56 Kbps) and improved communications methods such as the Point to Point Protocol (PPP). PPP provides very good support for higher level protocols such as TCP/IP and IPX over asynchronous connections. Although the growth in modem speeds has been tremendous, we do appear to be hitting the practical limitations of the Plain Old Telephone System (POTS).

As dial-up connections use the same PSTN connections as normal analog voice calls, telecommunications providers typically price these connections the same as normal phone calls.

In the United States, local phone calls are usually free for residential customers and carry a nominal charge for businesses. Long distance calls have typically been time-of-day and distance sensitive, although distance based pricing seems to be breaking down quickly in favor of flat rate structures. As with leased lines, the long distance carrier must pay both Local Exchange Carriers (LECs) for their portions of the call.

While analog data pricing seems to be a logical extension of normal voice calls, an interesting phenomenon is taking place in the local market. Many Internet Service Providers and on-line access providers have started to offer flat rate monthly pricing to customers. This means that the customer pays the same rate whether they are connected one hour or 500 hours a month.

Because it is sometimes difficult to connect to these services, many customers get a second telephone line, dial into their provider, and stay connected for long periods of time. As there are no additional connect-time costs or local phone call charges, there is no economic incentive for the customer to get off the phone line. However, because the local phone system requires a dedicated 64 Kbps circuit to support the phone call, this open circuit signifies lost resources for the LEC.

Although the revenues received from the additional phone lines going to their residential customers were initially attractive to the RBOCs, an interesting effect has been happening. As more and more users remain connected to their on-line service providers for increasingly longer durations, the Erlang model that has been used for decades to predict PSTN capacity is rapidly breaking down. Rather than calls lasting a couple of minutes, people keep the phone line open for hours at a time, whether or not they are using the system. As usage patterns change dramatically, the Erlang model needs some serious adjustments. RBOCs are quickly finding that their infrastructure is no longer adequate for supporting this new generation of on-line customers. If the RBOCs do not upgrade their infrastructure, phone calls will eventually start blocking, and they will be unable to complete calls. Customers will not tolerate these outages in service.
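The capacity model referred to above is the classic Erlang B formula, which gives the blocking probability for m circuits offered a load of E erlangs (call rate times mean hold time). The sketch below shows why longer hold times break the model; the subscriber counts and call durations are illustrative assumptions:

```python
def erlang_b(erlangs: float, circuits: int) -> float:
    """Blocking probability for `circuits` trunks offered `erlangs` of load,
    computed via the standard recursion
    B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1)), with B(E, 0) = 1."""
    b = 1.0
    for m in range(1, circuits + 1):
        b = erlangs * b / (m + erlangs * b)
    return b

# 100 subscribers making 2 calls per hour each. A voice-era call lasts
# about 3 minutes; an on-line session holds the circuit for about an hour.
voice_load = 100 * 2 * (3 / 60)    # 10 erlangs
data_load = 100 * 2 * (60 / 60)    # 200 erlangs

for label, load in [("voice", voice_load), ("data", data_load)]:
    print(f"{label}: {load:.0f} erlangs over 20 circuits -> "
          f"{erlang_b(load, 20):.1%} blocked")
```

With the same 20 circuits, blocking goes from a fraction of a percent under voice-style usage to the vast majority of call attempts under on-line usage, which is the breakdown the RBOCs are experiencing.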

Even if they do upgrade their infrastructure to support this new environment, because the RBOCs are not seeing additional revenue for these local calls, they cannot expect to receive the return on investment that they have realized in the past. The RBOCs are in a very difficult position and will need a fundamental change to relieve the pressure. One possible solution, already seen in ISDN service, is charging for local data calls.

Although pricing analog data traffic the same as normal telephone calls initially appeared very attractive to network providers, it has proven very problematic. As the breakdown of the Erlang model suggests, carriers should be very careful of relying on traditional voice pricing structures for handling data. RBOCs are now faced with the possibility of having to take away flat rate data services from their customers in order to avoid huge losses.

Network providers started using packet switched networks in order to take advantage of the bursty nature of most data traffic. Based on the premise that not all users transmit at once, packet switched networks use statistical multiplexing techniques to interleave traffic from multiple users over shared network connections. The CCITT introduced its X.25 network interface specifications to provide a standardized framework for taking advantage of packet switched networks. The X.25 specifications were designed to simulate dedicated leased lines using Permanent Virtual Circuits (PVCs) and dial-up connections using Switched Virtual Circuits (SVCs). In order to support switched services, the X.25 protocol implements a network layer function which provides routing and addressing (X.121) capabilities.

As bit errors were still common on communications circuits at the inception of X.25, the protocol had a number of error correction capabilities (LAPB/HDLC) and congestion management techniques (modulo 8 or 128 acknowledgments). X.25 is capable of handling varying maximum packet sizes, which can be optimized based on the quality of the circuit and the type of traffic being transmitted (usually 128, 256, and 512 octet frames). Typical X.25 speeds range from 2.4 Kbps to 512 Kbps (with an occasional T1 or E1). Packet Assemblers/Disassemblers (PADs) are often used to support legacy synchronous and asynchronous circuit based traffic. In addition, PADs also support asynchronous devices that dial into the PAD and establish a call across the network by providing an X.121 address.

X.25 has typically been priced by a combination of access costs and usage charges. The customer pays a monthly charge for access to the X.25 port on the network. This charge is calculated based on the access speed of the port (e.g. 128 Kbps). In addition, the cost of the local loop (either 64 Kbps or T1) that the X.25 provider must pay the LEC will be passed through to the customer. The total fixed monthly recurring cost is therefore the port cost plus the cost of the local loop. In the United States, PVC and SVC pricing is typically not distance based. For this reason, in most domestic X.25 networks, the vendor will not typically charge for PVCs or SVCs, although some vendors may charge nominal administration costs. However, outside the United States, one will often find that PVC and SVC prices are based on the distance between the two end points. In these cases, one may see high monthly surcharges for PVCs and call establishment costs for SVCs.

As an X.25 host can transfer anywhere from no data up to the port speed, the amount that devices use the network may vary dramatically. For this reason, X.25 network providers typically include a variable monthly charge based on usage, usually measured in kilochars (1,024 octets). Some countries base their measurement on the number of packets, or kilopackets, assuming an average packet size for calculations. The more one uses the network, the more one pays. As users are charged for the amount of data transferred, they are more likely to avoid needless transmissions. If one user is idle, additional bandwidth is available to other subscribers. This allows the provider to "oversubscribe" access to the network, as statistical multiplexing will allow multiple users to share the same backbone bandwidth. What results is a network in which bandwidth is efficiently allocated, and where the economics provide an incentive for the customer not to use unnecessary bandwidth.
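The X.25 charging structure just described, a fixed port cost plus a passed-through local loop cost plus a variable per-kilochar usage charge, can be sketched as follows. All rates in the example are illustrative assumptions, not actual tariffs:

```python
def x25_monthly_cost(port_cost: float, local_loop_cost: float,
                     octets_sent: int, rate_per_kilochar: float) -> float:
    """Fixed monthly recurring cost (port charge plus the local loop cost
    passed through from the LEC) plus a variable usage charge measured
    in kilochars (1,024 octets)."""
    kilochars = octets_sent / 1024
    return port_cost + local_loop_cost + kilochars * rate_per_kilochar

# Illustrative: a 128 Kbps port, a 64 Kbps local loop, and 500 MB
# of traffic in the month, at assumed rates.
cost = x25_monthly_cost(port_cost=400.0, local_loop_cost=150.0,
                        octets_sent=500 * 1024 * 1024,
                        rate_per_kilochar=0.001)
print(f"monthly charge: ${cost:.2f}")
```

Note that only the last term varies with traffic; an idle subscriber still pays the fixed port and loop charges, which is precisely the incentive structure the paragraph above describes.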

Many vendors offer X.75 gateways between their X.25 network and other providers. These gateways usually provide PVC services and often offer SVC capabilities. The X.121 domain structure ensures that addresses will be unique across networks. X.25 network providers usually negotiate bilateral agreements to establish pricing between two networks.

A dramatic improvement has been seen in circuit quality with the growth of digital technologies. Bit Error Rates (BERs) have dropped significantly as circuit quality improves. In addition, higher layer communications protocols such as TCP introduce sophisticated sequencing, error correction, and congestion control mechanisms, rapidly diminishing the need for the network to handle these functions. The introduction of higher quality circuits and sophisticated upper layer protocols eliminates the usefulness of many of X.25's capabilities. By removing the network layer and avoiding the costly overhead of acknowledging data transmissions, frame relay provides a streamlined alternative to its packet switched predecessor.

Frame relay connections typically range from 56 Kbps to T1 (or E1) speeds. Some vendors are currently evaluating T3 (45 Mbps) service. As with X.25, Frame Relay Access Devices (FRADs) allow legacy applications (e.g. BSC and SNA) to operate over frame relay connections by emulating dedicated leased lines. Some of these FRADs even support voice over frame relay.

One new feature that frame relay introduced was the concept of the Committed Information Rate (CIR). The CIR is the amount of data that the network agrees to transfer between the ingress and egress points under normal conditions, also expressed as Bc (the committed burst size). This is a loose guarantee of transmission rate. Frame relay also allows the user to burst beyond the Committed Information Rate up to an Excess Burst Size, or Be, at times when additional network capacity is available.

Most network providers allow the user to burst up to the port speed. All data in excess of the committed burst size Bc is marked Discard Eligible (DE) by the network. In the event that the network experiences congestion, switches will drop frames that are marked discard eligible first and give priority to committed traffic (Bc). Although frame relay CIR starts to offer the user a rudimentary form of Quality of Service (QoS), the approach is very crude and falls dramatically short of user requirements.
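The Bc/Be mechanics above amount to a three-way classification of traffic within each measurement interval. The sketch below captures that rule; the burst sizes used are illustrative assumptions:

```python
def classify_burst(bits_sent: int, bc: int, be: int) -> str:
    """Classify one measurement interval's traffic against the committed
    burst size Bc and the excess burst size Be:
    - within Bc: delivered as committed traffic
    - between Bc and Bc + Be: forwarded but marked Discard Eligible (DE)
    - beyond Bc + Be: dropped at the ingress switch"""
    if bits_sent <= bc:
        return "committed"
    if bits_sent <= bc + be:
        return "marked DE"
    return "dropped"

BC, BE = 64_000, 32_000   # e.g. a 64 Kb committed burst with 32 Kb excess
for bits in (48_000, 80_000, 120_000):
    print(bits, "->", classify_burst(bits, BC, BE))
```

Under congestion, the switches drop the "marked DE" traffic first, which is how the network protects the committed (Bc) traffic it has guaranteed.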

Most vendors provide pricing models based on CIR. Vendor pricing usually consists of a recurring access cost: the customer pays a monthly charge for access to a frame relay port on the network, calculated based on the access speed of the port (e.g. 128 Kbps). In addition, the cost of the local loop (either 64 Kbps or T1) that the frame relay provider must pay the LEC will be passed through to the customer. The total fixed monthly recurring cost is therefore the port cost plus the cost of the local loop. The vendor will also usually charge an additional monthly cost for each Permanent Virtual Circuit (PVC). The cost of a PVC increases with its Committed Information Rate: the higher the CIR, the greater the cost.
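Unlike X.25, this structure has no usage term; the monthly cost is entirely fixed, composed of the port charge, the local loop pass-through, and a per-PVC charge that grows with CIR. A sketch, with all rates as illustrative assumptions:

```python
def frame_relay_monthly_cost(port_cost: float, local_loop_cost: float,
                             pvc_cirs_kbps: list[int],
                             rate_per_cir_kbps: float) -> float:
    """Fixed port and local-loop costs plus a monthly charge per PVC that
    grows with its Committed Information Rate. There is no usage component:
    the bill is the same however much data is actually transferred."""
    pvc_costs = sum(cir * rate_per_cir_kbps for cir in pvc_cirs_kbps)
    return port_cost + local_loop_cost + pvc_costs

# Illustrative: a 128 Kbps port with two PVCs at 32 and 64 Kbps CIR.
cost = frame_relay_monthly_cost(350.0, 150.0, [32, 64], 2.0)
print(f"monthly charge: ${cost:.2f}")
```

The absence of any per-octet term in this formula is exactly the "missing usage charges" the following paragraphs discuss.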

PVC pricing is typically not distance based in the United States. Outside the United States, however, PVC pricing rises dramatically with distance and location, particularly when circuits cross international boundaries.

Although pricing based on CIR appears to make a lot of sense on the surface, it can be problematic. As different switch vendors have drastically different approaches to implementing the frame relay recommendations, it is very difficult to make an effective comparison between vendor offerings.

One interesting thing missing from frame relay network pricing models is usage-based charges. All frame relay pricing is based on port access costs and PVC CIRs. There is typically no variable usage component to customer costs. This means that the customer pays a fixed monthly fee only. There are two possible explanations for this approach. First, in order to simplify their monthly billing, customers may have demanded flat monthly pricing. Second, by avoiding variable usage charges, carriers could follow a model similar to their leased line pricing, allowing them to use the same billing systems.

Another interesting item missing from most frame relay networks is Switched Virtual Circuits (SVCs). The original frame relay specifications eliminated the network layer functionality found in X.25 in order to improve efficiency. Without the network layer, it is not possible to dynamically establish and route a call through the network.

There are a number of possible explanations for this lack of SVC support. First, as the carriers' billing systems were not designed to handle usage-based charges for frame relay services, the dynamic nature of SVCs makes them virtually impossible to bill. As SVCs are temporary by definition, they automatically imply a variable or usage-based charge. Second, as frame relay typically does not have usage charges, carriers have not structured their networks to collect the usage information that would be required to bill SVCs. Although most switches can make this information available, some mechanism is needed to capture the data, transfer it back to a management system, consolidate it, and process it in the billing system in order to handle variable usage charges. This could require drastic changes to the billing system. Third, unlike older X.25 networks that were well equipped to handle Switched Virtual Circuits, the lack of congestion control makes it very difficult for a frame relay network to handle SVCs and still maintain its CIR guarantees. Without the flow control mechanisms provided by X.25, the only methods available for frame relay to throttle a transmission are Backward Explicit Congestion Notification (BECN) and Forward Explicit Congestion Notification (FECN). As neither of these techniques has been implemented well, it is very difficult for a frame relay network to control the transmissions of the attached devices. For this reason, a network has limited means of managing the Committed Information Rates that it has guaranteed its Virtual Circuits. The greater the uncertainty of the environment, the harder it becomes to predict the network infrastructure required to guarantee transmissions. SVCs can increase uncertainty exponentially, and can therefore present a significant challenge for frame relay network providers. In addition, users typically will want to be able to negotiate a CIR for their SVCs, making network design even more complex.

As the frame relay market in the United States matures, vendors are quickly looking for ways to differentiate their service offerings in order to capture market share. It is becoming increasingly difficult to distinguish one vendor's service from another. In order to combat this trend, vendors are beginning to offer a variety of Service Level Agreements (SLAs) to differentiate their products.

The introduction of value added services and SLAs to differentiate vendor offerings has significant implications. It suggests that network providers are trying to avoid having their frame relay services commoditized. Using complex SLAs, they are trying to differentiate their service offerings enough to avoid the market treating their product as a commodity. As this will have major implications for future network offerings, we will return to this discussion later in this document.

With the increased popularity of the Internet and its TCP/IP protocols, some interesting changes have been seen in the data world. Rather than create their own private networks using traditional network options, customers have started to take advantage of the Internet to transmit their data.

By connecting their offices directly to an Internet Service Provider (ISP) using either leased lines or frame relay connections, customers can take advantage of an almost ubiquitous, shared global network. By connecting all of their offices to local ISPs and using the Internet to communicate with one another, customers can avoid having to create costly wide area networks. Not only can they transfer data between their corporate offices, but they can also communicate with other companies connected to the Internet. In addition, as many ISPs have POPs located in major metropolitan areas, business travelers can dial into these POPs and access their corporate networks when they are on the road. Recent improvements in security software providing firewall and encryption capabilities have reduced privacy concerns about using the Internet for sensitive data.

Although the service levels provided by private leased line and frame relay networks may be sacrificed, customers usually save a significant amount of money by using the Internet as a backbone. As many applications such as email can tolerate lower service levels, the cost savings that the customer experiences are typically worth the lower service guarantees inherent in using the Internet.

As customers become increasingly frustrated with some of the reliability problems experienced on the Internet, a number of carriers are starting to offer shared, public IP networks. By isolating these networks from the Internet, vendors can provide greater service level guarantees while keeping costs lower than traditional leased line and frame relay services. These networks offer customers a variety of service level options that are not available on the Internet, including guaranteed availability, traffic priority, and security services. As Internet performance and reliability decrease as a result of its tremendous growth in traffic, these private IP networks are quickly becoming a popular alternative.

Internet Service Providers (ISPs) typically price their services based on the port speed at which the customer connects. A router at the customer's site will usually connect to the ISP's router using either dedicated leased lines or frame relay connections. Connections usually range from 64 Kbps to T3 speeds. The greater the port speed, the greater the fixed monthly cost that the ISP charges. In addition to the port costs, the ISP will pass through the charges for the leased line or frame relay connection. As ISPs typically have POPs in most major metropolitan areas, these circuits usually avoid IXC charges and are therefore less expensive than typical WAN connections.
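The port-speed pricing model described above can be sketched as a simple lookup: a fixed monthly port charge by speed, plus the passed-through circuit cost. The dollar figures below are hypothetical illustrations only, not actual ISP rates.

```python
# Hypothetical fixed monthly port charges in USD, keyed by port speed.
# These numbers are illustrative assumptions, not real price-list figures.
PORT_RATES = {
    "64K": 300.0,
    "T1": 1_200.0,
    "T3": 9_000.0,
}

def monthly_isp_charge(port: str, circuit_passthrough: float) -> float:
    """Fixed port charge plus the passed-through leased-line or frame relay cost."""
    return PORT_RATES[port] + circuit_passthrough

# A T1 customer with a $400/month local loop pays $1,600/month in this sketch.
print(monthly_isp_charge("T1", 400.0))
```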

Vendors usually charge for connect time to their POPs for dial-up services. The greater the port speed, the more the user pays for connect time. In order to remain competitive, many vendors have started to offer flat monthly rates with unlimited connect time. Although it is rare, some international providers are starting to charge by the amount of data transmitted.

Most Internet Service Providers also offer an array of value added services to their customers. These services usually consist of directory servers and news feeds, but are becoming increasingly sophisticated. The line between bandwidth and content is slowly diminishing.

Although the rate structure usually incorporates significant premiums for increased service levels, corporate intranet pricing is very similar to Internet pricing models. Some vendors will prioritize their customers' traffic higher for an additional premium.

With the introduction of fiber optic networks, circuit speeds and reliability have increased dramatically. In today's world, it is not uncommon to see high-speed SONET connections operating at OC3 (155 Mbps) and OC12 (622 Mbps) speeds. At the same time, we have seen a tremendous increase in computing power. This combination of increased computing power and high bandwidth connectivity is slowly creating a demand for a new generation of multimedia applications. Due to their unique requirements, many of these applications are very sensitive to the performance characteristics of the underlying network. For example, the half-second transmission latency that a telephony voice call sees when bouncing off a Geo-Stationary Orbiting (GSO) satellite is extremely irritating to a caller. In a similar fashion, interpacket delay, or jitter, can be just as annoying during a video conference call.

Due to stricter application performance requirements and the dramatic increases in circuit speeds, a new network technology was required to handle the next generation multimedia applications. This network technology needed to be fast enough to support huge data transfer rates while at the same time maintain enough flexibility to handle a wide variety of application requirements. Broadband ISDN, otherwise known as Asynchronous Transfer Mode (ATM), was designed to meet this challenge.

By replacing X.25 and frame relay's variable length packet sizes with fixed length cells, switching decisions could be moved from software to hardware, allowing extremely fast processing. After much debate between the Americans (64 octets) and the Europeans (32 octets), a compromise was reached with a cell payload size of 48 octets and a five octet header. This small cell size allows for minimal interpacket delay, which is required for voice and video.
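The arithmetic behind the compromise cell is straightforward; a minimal sketch of the resulting cell size and payload efficiency:

```python
# The 48-octet payload and 5-octet header described above.
HEADER_OCTETS = 5
PAYLOAD_OCTETS = 48
CELL_OCTETS = HEADER_OCTETS + PAYLOAD_OCTETS  # 53 octets per cell

# Fraction of each cell that carries user data rather than header.
efficiency = PAYLOAD_OCTETS / CELL_OCTETS
print(f"Cell size: {CELL_OCTETS} octets, payload efficiency: {efficiency:.1%}")
```

The roughly 9% header overhead is the price paid for the hardware-switchable fixed cell format.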

In addition to being very fast, ATM also needs to be able to handle a wide range of Quality of Service (QoS) requirements. Some applications such as voice require minimal interpacket delay but can tolerate lost packets. Other applications cannot afford to lose packets but may tolerate some delay (e.g. data transfer). ATM introduces sophisticated ATM Transfer Capabilities (ATCs) in order to meet these different traffic requirements.

The ATM Forum defines five ATM Transfer Capabilities: Constant Bit Rate (CBR), e.g. voice; Real Time-Variable Bit Rate (RT-VBR), e.g. compressed voice; Non-Real Time Variable Bit Rate (NRT-VBR), e.g. data with a minimum bandwidth requirement; Available Bit Rate (ABR), e.g. games (can tolerate slowdown); and Unspecified Bit Rate (UBR), e.g. IP.
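The five transfer capabilities and the example traffic types above can be captured as a simple lookup table. The Python structure and function name are illustrative only; the category names and examples come from the text.

```python
# The five ATM Transfer Capabilities listed above, mapped to
# (full name, example application). Structure is a sketch, not an API.
ATM_TRANSFER_CAPABILITIES = {
    "CBR":     ("Constant Bit Rate", "voice"),
    "RT-VBR":  ("Real Time-Variable Bit Rate", "compressed voice"),
    "NRT-VBR": ("Non-Real Time Variable Bit Rate", "data with minimum bandwidth requirement"),
    "ABR":     ("Available Bit Rate", "games (can tolerate slowdown)"),
    "UBR":     ("Unspecified Bit Rate", "IP"),
}

def example_traffic(atc: str) -> str:
    """Return the example application for a given transfer capability."""
    return ATM_TRANSFER_CAPABILITIES[atc][1]

print(example_traffic("CBR"))
```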

Vendors appear to be having a tough time figuring out how to price ATM services. Very few vendors actually publish their price lists and pricing models vary significantly across carriers. It appears that many vendors are reluctant to disclose detailed pricing information since vendors approach ATM pricing using very different cost components.

Further, most carriers do not appear to have usage based pricing (e.g. per cell or kilochar).

Although not well established, ATM pricing models seem to be very close to their frame relay counterparts, including fixed monthly costs based on port access speeds. Like frame relay, some vendors charge additional fixed monthly fees for PVCs based on their QoS (e.g. CBR, VBR, or ABR).

Interestingly, what appears to be a simple RFP resulted in very different pricing proposals from ATM providers. Based on the wide difference in approaches, it appears that ATM vendors have not yet figured out how to price their services. As we have seen with other technologies, ATM pricing will probably come closer together as customers become more sophisticated and the market matures. However, as bandwidth becomes more available and competition increases, ATM vendors will eventually be forced to lower their prices in order to capture or maintain market share.

As is seen in the frame relay market, vendors are reluctant to offer SVCs to their ATM customers. It is very likely that this is a result of the same forces at play in the frame relay market: inadequate billing systems, a lack of data collection applications, and the difficulty of engineering for SVCs. Time will tell whether or not ATM providers will start to offer SVCs as customer demand increases.

Over the past few decades, wide area network vendors have continued to upgrade their backbone infrastructures. Millions of miles of fiber have been run between major metropolitan areas, creating high-bandwidth backbone networks. Despite the huge amount of available capacity that has resulted from these massive infrastructure upgrades, a significant obstacle still remains: how to traverse the "last mile" to commercial and residential customers. In most cases, it is not economically feasible to run fiber to every small business and household. Numerous "last mile" technologies will now be set forth.

Integrated Services Digital Network (ISDN) is rapidly becoming a popular method for increasing bandwidth to residential and business customers. ISDN operates over standard copper phone lines up to 4 km. As local carriers upgrade the switches in their Central Offices (CO), more and more residential customers have Basic Rate ISDN (BRI) capabilities, which can operate at up to 128 Kbps. Businesses can often get Primary Rate ISDN (PRI) services, which operate at up to T1 (1.544 Mbps) and E1 (2.048 Mbps) speeds.

Most ISDN vendors price their offerings based on their telephony services. This suggests that ISDN prices will drop in conjunction with voice services. However, unlike standard telephony services, vendors typically add additional usage charges for ISDN data calls. This avoids a scenario in which users have no economic incentive to release a circuit.

Digital Subscriber Line technologies support high-speed data networking over standard copper wiring. High-bit-rate Digital Subscriber Line, or HDSL, supports data rates from 384 Kbps to 2 Mbps at distances of 3 to 5 km. Three copper pairs are required to support 2 Mbps in both directions.

As most users typically receive larger amounts of data than they send, Asymmetric Digital Subscriber Line (ADSL) is a more attractive alternative. Like HDSL, it also uses standard copper wiring. However, ADSL runs at 560 Kbps upstream and 6 Mbps downstream at distances of 2 to 4 km. Another alternative is Very-high-speed Digital Subscriber Line (VDSL). VDSL can operate at rates as high as 52 Mbps at distances of up to 1 km.

Cable modems use the existing coaxial cable TV network infrastructure to provide data services to customers. Although cable modems are theoretically supposed to operate at speeds up to 10 Mbps, they more typically run at 64 Kbps upstream and 2 Mbps downstream.

A few operators are testing high-speed wireless technologies. One of the leading contenders is Local Multipoint Distribution Services (LMDS). LMDS is a form of cellular radio that operates at frequencies of 28 and 40 GHz. LMDS service runs up to 30 Mbps over a 3 km radius.

Very few vendors have started offering LMDS and other broadband wireless services. However, companies are currently trying to launch LMDS services in the United States. The FCC is currently planning to auction off LMDS frequency ranges.

Other wireless options include Fixed Wireless Services and Mobile Wireless Services. Due to technical limitations, no vendors are offering mobile wireless services over 28.8 Kbps. A number of other vendors will offer a variety of satellite data networking services.

GSO satellite services typically offer dedicated bandwidth similar to leased lines and have pricing models based on flat rate monthly service. Most of the MEO and LEO satellite providers have either not yet determined or disclosed their pricing models.

It is interesting to note that, with the exception of cable modems, all services have a usage charge. As cable modems are currently in a trial period, it is very likely that usage based pricing will be adopted when the service is fully rolled out.

For the past decade, industry experts have predicted a dramatic decrease in network and bandwidth pricing. Many networking visionaries have suggested that improvements in technologies and increases in competition will result in a world in which bandwidth will essentially be free.

Although bandwidth pricing will come down significantly, it will probably not reach the point where it is free. However, the notion of bandwidth becoming a utility is very real. It is likely that bandwidth will someday be treated in a similar fashion to power, water, and sewer services.

Like the other utilities, bandwidth will be metered and commoditized. Just as deregulation is rapidly driving to an efficient market in the power industry in which megawatts are traded among suppliers, similar deregulation in the telecommunications industry suggests that we will start to see bandwidth bought and sold as a commodity.

In order to capture the residential market for on-line access, many service providers changed their pricing model to a flat rate structure. Rather than being charged for connect time, users could choose to pay a flat monthly rate for unlimited access to the service provider. Although this helped the service providers capture additional market share, it opened up Pandora's box for ISPs. As customers no longer had an economic incentive to disconnect from the service providers, they could essentially stay on-line and use the service for as long as they liked. Based on the public's response, the flat rate price was extremely attractive. As customers would stay on-line for longer periods, it became very difficult to connect to the service providers. As such, the service providers were forced to upgrade their infrastructure and install additional ports and circuits in order to support the new usage patterns. This changed the economics of being an on-line service provider.

In order to maintain market share, many ISPs were forced to respond with similar pricing plans. Subscribers were very quick to switch to a lower cost provider, demonstrating very little loyalty to their Internet Service Providers. The ISPs started to feel increased pressure on profits. As profits decreased, many vendors chose to exit the residential market and focus their efforts on more profitable corporate accounts.

The above scenario illustrates two important points. First, the demand for on-line access is very elastic. The lower the cost of connect time, the greater the demand. The same thing applies to bandwidth. The lower the cost of bandwidth, the more bandwidth customers will purchase.

Corporations have been very quick to increase their Internet connection speeds as ISP access prices have decreased. Very few corporations connect at lower than T1 speeds in today's world.

Second, as data networking technologies mature, they have a tendency to become commodity services. The willingness of customers to switch to lower cost on-line service providers solely to save money suggests that data services can be commoditized. Customers did not perceive enough difference in the services offered by one provider versus another, and would therefore switch solely based on price. This illustrates that the market for data networking services is subject to the same economic forces that we commonly see in our utilities and commodities markets.

Network access and bandwidth will ultimately become a commodity.

A major trend that network providers are facing is an increasingly smarter consumer. As the on-line access market suggests, the better educated the consumer, the more likely they are to switch services. When on-line customers realized that they could get essentially the same services from other access providers at a lower price, they were very quick to switch providers. A similar trend has taken place in the long-distance telephony market. In countries with competitive long-distance providers, customers jump from one carrier to another in order to get the best deals. Things have gotten so bad in some countries that the carriers are starting to pay people to switch over.

Another example of smarter consumers can be found in the cellular phone market. A number of vendors are starting to market pricing structures that do not round up to the next minute. Appealing to more sophisticated customers, some companies are hoping that people will switch to their service in order to avoid paying for time that they have not used. Some long-distance phone companies are using similar approaches to capture market share. The smarter consumer will make it more difficult for network providers to differentiate their offerings.

In the future, the data communications market should see some considerable changes. With more and more vendors entering the market, competition is driving prices significantly lower. As competition increases, network vendors' margins will start to shrink. After a while, some vendors will drop their rates so low that they will take a loss in order to retain or capture market share. Not being able to withstand losses, smaller carriers will either go out of business or be bought out by larger providers. Alternatively, they may choose to refocus their efforts on more lucrative markets.

As the market stabilizes, vendors will be forced to adjust their pricing models in order to make money. Vendors will have a couple of alternatives to maintain profitability. First, network providers could continue to offer their services with low, flat-rate pricing structures and cut back on their investment in infrastructure. Rather than increase the number of dial-up ports or the bandwidth in their backbone networks, vendors could continue to add users without adding capacity. What will result is more congested networks with lower service levels. As the on-line access market and the Internet have already demonstrated, users may not be able to reliably connect to their service providers. In addition, once connected, competition for bandwidth will be so high that performance will become intolerable. Eventually these vendors will start to lose their customers to other providers. However, as other providers will be facing the same economic pressures as the original vendor, they too will be forced to curb the growth of their infrastructure. Eventually customers will become so discouraged that they will stop using networking services.

Second, as network performance levels continue to degrade, customers will start to seek premium options with higher service level guarantees. As the increase in the popularity of private IP networks suggests, users are willing to pay a premium for higher quality services. This notion of getting what you pay for will be critical in the future of data networking.

Although bandwidth pricing will drop significantly over the next few years, the upcoming generation of multimedia applications will have much greater bandwidth requirements. It is difficult to tell whether the price decreases in bandwidth will be enough to offset increased demand. However, one thing is certain. Customers will continue to have high monthly networking bills, and will continue to look for ways to decrease their costs.

While customers paying more for premium services may appear to be a swing back in the pendulum, there will be two fundamental differences from the past. First, newer network technologies such as ATM provide robust Quality of Service to customers. As long as all vendors can demonstrate that they are living up to their QoS commitments, they will be unable to use service level guarantees as a means of differentiating their products.

Second, the customer will be much better educated. While customers will be willing to pay a premium for a higher quality of service, they will not want to pay for bandwidth that they do not use. As customers must connect to a network at a speed capable of supporting their peak traffic loads, the majority of the time they are not using their full capacity. Although two customers may require similar bandwidth connections to a network to support peak loads, they may use the network very differently. If networks are priced on a flat rate structure based on port access speeds, a customer that needs OC3 access to an ATM network for five minutes a day to download sales information will pay the same amount as a customer who needs OC3 capacity for 8 hours a day to back up their mainframes. This pricing structure is not fair to the sales force customer, who uses only a fraction of the bandwidth that the mainframe customer does.
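The unfairness of flat-rate pricing in this scenario is easy to quantify: the ratio of effective cost per hour of actual use depends only on the usage patterns, not on the rate itself. The monthly OC3 port charge below is a hypothetical figure for illustration only.

```python
# Sketch of the flat-rate fairness problem described above.
# The $30,000/month OC3 port charge is a hypothetical assumption.
FLAT_MONTHLY_RATE = 30_000.0  # assumed flat OC3 port charge, USD/month
DAYS_PER_MONTH = 30

def cost_per_hour_used(hours_per_day: float) -> float:
    """Effective cost per hour of actual network use under flat-rate pricing."""
    hours_used = hours_per_day * DAYS_PER_MONTH
    return FLAT_MONTHLY_RATE / hours_used

sales_customer = cost_per_hour_used(5 / 60)  # five minutes a day
mainframe_customer = cost_per_hour_used(8)   # eight hours a day

# The light user pays 96x more per hour of actual use (240 hrs vs 2.5 hrs).
print(f"{sales_customer / mainframe_customer:.0f}x")
```

Whatever the flat rate, the sales force customer pays 96 times more per hour of bandwidth actually consumed.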

As customers start to become more sophisticated, they will begin to realize that a carrier who prices its services based on flat rate port access costs may be overcharging for the amount of bandwidth actually used. Alternatively, one customer may be subsidizing another customer who uses the network more than they do. As these lower-use customers realize that they are subsidizing their higher-use partners, they will eventually demand a fair pricing model that accurately reflects their network usage. Carriers will be forced to offer usage based pricing. If carriers do not provide usage based pricing, their customers will seek out other network providers who are willing to offer pay-as-you-go services.

As the breakdown of the Erlang model that the RBOCs are experiencing suggests, it may be advantageous for network vendors to discard the flat rate pricing structures of today for usage based pricing models. As soon as consumers start to realize that you get what you pay for, Internet Service Providers will also be able to move to usage based pricing. Signs of this are already showing in the high-speed residential market and the increased popularity of private IP networks.

Internetworking is the joining of two networks to exchange information between users. As the popularity of the Internet suggests, the desire to communicate and exchange information beyond one's natural boundaries is ubiquitous. Although it started as a small packet switched network established by ARPA to connect research facilities, the Internet has grown into a giant network connecting millions of hosts in over 190 countries.

The amazing thing about the Internet is that it has evolved over time through a number of bilateral agreements between different networks to route one another's data. No single organization or company controls the network. To a degree, it is managed by consensus.

While the Internet has done an excellent job in reaching millions of users and opening up new markets and opportunities, it does have its problems. First, the Internet is very fragile. Second, the current version (v4) of the IP protocol is limited in its support for QoS and high-bandwidth applications. Although IPv6 addresses many of Version 4's limitations, all Internet hosts and routers would need to be upgraded to support the new standard. In addition, even with its new features, IPv6 would have difficulty handling isochronous traffic such as voice and video.

Third, performance on the Internet has suffered due to poorly planned growth. Network Access Points (NAPs), where Internet Service Provider networks typically meet to exchange traffic, are quickly getting overrun. In many instances, large ISPs are negotiating bilateral agreements with other ISPs to route traffic between their networks bypassing the NAPs. In addition, ISPs are having a tough time upgrading their own network backbones and POPs to handle their growing customer base.

At the same time that Internet performance is decreasing, customers have come to rely more on the network. By standardizing on the IP protocol and connecting millions of hosts to one another, the Internet has set the model by which customers expect networks to talk to one another. As we are already starting to see, many customers will eventually migrate away from the Internet to other network options to support their next generation of multimedia applications.

However, as customers migrate over to these private IP and ATM networks, they will not want to give up the ubiquitous access that they have become accustomed to with the Internet. What good is a video-phone if you cannot call someone because they are on another vendor's network? The Internet has set the internetworking standard for future networks to emulate.

Although it will be much more challenging to provide internetworking capabilities between these new environments, the connecting of different vendor networks is not a new concept. Most networking technologies have well defined standards for connecting between networks. For the past two decades, X.25 networks have been joined to one another using X.75 gateways. Both frame relay and ATM have concepts similar to X.25 with their respective Network to Network Interfaces (NNI). There is even a Frame Relay User to Network Interface (FUNI) for connecting frame relay to ATM networks.

Although technology is in the very early stages of interconnecting frame relay and ATM networks, progress is starting to be seen. Besides potential regulatory barriers, there is no reason that all frame relay and ATM networks in the future could not be joined to one another creating one global network in much the same way that the Internet is connected today. The trick is to avoid the same mistakes as the Internet. It will be important for network providers to connect their networks in such a way that they can maintain their agreed upon service levels to their customers. Unlike the Internet, network vendors are much better equipped to maintain their SLAs with ATM's robust QoS capabilities.

Communication system operators will face a number of challenges in determining their pricing structures. Trying to predict the best means for maximizing revenue in a networking market that is undergoing fundamental changes will be a difficult task.

In order to determine the most appropriate pricing model, communication system operators may focus on a number of key factors. The most fundamental is the desire to keep the pricing model as simple as possible. Communication system operators would like to avoid pricing structures that require sophisticated sales representatives and complex billing systems.

Another stated goal of communication system operators is to price and sell bandwidth at the wholesale level. Rather than selling directly to end users, communication system operators may rely on a network of value added resellers to package and resell their services to customers. These distributors will purchase bandwidth from various communication system operators, incorporate additional value added services, and resell these services to their customers. Communication system operators are trying to structure their networks so that each distributor views its purchased bandwidth as its own private resource, and will manage and control the allocation of access and bandwidth to its users through Distributed Virtual Network Services (DVNS).

Ideally, distributors will view their piece of the network as their own Virtual Private Network (VPN).

As communication system operators are ultimately in business to make money, one obvious goal is to maximize revenues by charging what the market will bear. If communication system operators underprice their services, they could potentially forgo profits. At the same time, communication system operators would like to minimize risk. There will very likely be a trade-off between the amount of risk that communication system operators are willing to assume and the profits that they realize.

Perhaps the greatest marketing challenge for communication system operators is determining demand. Communication system operators and other broadband network operators could open up a wide range of multimedia applications and services in the marketplace. Demand for these services, which ultimately will dictate price, will vary greatly between applications. While one customer may be willing to pay a certain price per minute to download a software application at 2 Mbps speeds, they will probably not accept the same charge for downloading a video-on-demand movie. Likewise, very different pricing will be expected for Internet access. As some value added content distributors, such as video-on-demand providers, will need to recoup their wholesale bandwidth costs from the fees that they charge for content (i.e. movie rental), they will have very different pricing structures in comparison to distributors who could pass these costs directly through to the customer for services such as video conferencing.

As it will be very hard to determine the profile of services that will eventually be offered by distributors and the demand for these services, basing pricing on existing models could prove extremely difficult. While a flat rate structure may eliminate this problem and meet the goal of simplicity, communication system operators could potentially be leaving money on the table.

To further complicate the problem, the wide range in types of customers also makes demand more unpredictable. Although data network providers currently know that peak loads are typically during business hours, the variety of applications that communication system operators will support suggests different usage patterns. As high bandwidth applications such as video-on-demand may be more popular with residential customers, peak traffic loads for a location may be in the evenings and on weekends instead of during business hours.

Another problem that communication system operators face is the difficulty of predicting the supply of bandwidth. As discussed previously, communication system operators will be competing with a number of other "last mile" technologies. Although a better picture of which technologies will be successful and their likely pricing structures is now obtainable, a lot can change in the next few years. Communication system operators may be faced with low-cost competitors that force them to alter their pricing models. As different networking technologies will be available in different geographic regions, it is very likely that communication system operators will be faced with very different demand based on location. Customers in one location may have numerous low-cost alternatives such as ADSL and cable modems, while those in another place may not have any options. This inconsistency in demand may make it difficult for communication system operators to use a flat pricing model across regions. Due to competitive alternatives, customers in the first location may be unwilling to pay the same price for bandwidth as customers in the second place who do not have the same options.

In the foregoing example, if communication system operators were to price bandwidth at the same rate for customers in the first location as for those in the second location, they would have two options. They could either price the traffic for the customers in the second location at the lower rate of the first location, in which case communication system operators would be leaving money on the table. Alternatively, communication system operators could use the higher rate of the second location, in which case customers in the first location would select other options and communication system operators would forgo revenues in the first location. In order to maximize revenues, communication system operators would be better off pricing bandwidth at the highest rates that the markets will bear in both the first and second locations. This means that even though it costs the same to deliver the service to the first and second locations, communication system operators should price services differently based on the market for bandwidth at each location. Location based pricing is the only way to maximize revenues.

It is interesting to note that there will be multiple forces driving location demand for the services of communication system operators. Although larger population density will typically result in higher demand for bandwidth, locations that have alternative networking options will not command the same prices. Customers in certain locations, with their numerous network options, will be less willing to pay higher prices for communication system operator services than customers in other locations. Less developed countries and rural areas, in which the investment in "last mile" infrastructure may not be cost effective for carriers, will bear higher prices than well developed metropolitan areas. Communication system operators will be faced with a difficult problem in determining the appropriate pricing structure for each of their Location Area Codes (LACs).

The fundamental driver in economics is scarcity. The entire concept of supply and demand rests on the assumption that a resource is scarce. As the network will have 4.75 Gbps cross links between Space Vehicles (SVs), it is not likely that there will be bandwidth constraints across the constellation. However, due to atmospheric barriers and spectrum limitations, communication system operators will be constrained on the up and down links. This suggests that pricing for communication system services should be driven by demand within locations, and that distance between two end points should not be a factor. As many tethered networks are subject to distance constraints due to the extra infrastructure (i.e. cables and switches) required to complete a circuit between locations, some distance sensitive pricing may still be seen. However, as vendors continue to lay fiber and increase their backbone capacities, distance based pricing is rapidly breaking down. The major pricing component in the new model is port access speed.

The domestic market in the United States has already moved to an access-based rather than distance-based price structure for X.25, frame relay, and ATM services. It is very likely that the international market will follow as more and more fiber is run between countries. Although communication system operators may still see distance-sensitive international pricing when their service goes live, the ability to offer services based on access costs alone could prove a competitive advantage over other services.

In addition to location and bandwidth based pricing, communication system operators will also be concerned with other factors such as duration of call, priority of traffic, latency requirements, time of day, and bulk commitment levels. Although all may be important in determining the pricing model, duration of call and priority of traffic are discussed below.

Based on the wide array of services that distributors offer, call patterns vary significantly from one provider to another. In today's networks, a pair of routers typically keeps a connection open for a long period of time (e.g., months) to exchange data. On the other hand, a phone call between two people usually lasts a handful of minutes and may be established any time the individuals decide to talk. Customers that use these services may demand purchasing and pricing methods that differ based on the application. Network managers expect a very different pricing structure than a residential telephony user.

In order to simplify the analysis, communication system operators may want to break these usage patterns into two classifications of calls: long term (or fixed) and temporary (or variable).

Pricing models and purchasing channels could vary based on these different categories of services.

As communication system operators will offer robust QoS capabilities similar to ATM, it may be important for them to price bandwidth differently based on its priority. The pricing structure will most likely be based on priority, where: Priority 1 refers to guaranteed bandwidth, typically used for CBR traffic reserved in advance; Priority 2 refers to purchased bandwidth, typically used for VBR and ABR traffic with a reserved level of guarantee; and Priority 3 refers to excess shared bandwidth, typically UBR traffic with best effort delivery.

Priority 3 (P3) bandwidth is probably the easiest to understand. As P3 offers no guarantees and is based on best effort delivery, users could be charged solely for the amount that they use, based on cell or bit counts.

Priority 2 (P2) traffic is similar to frame relay. Using statistical multiplexing techniques, communication system operators can oversubscribe Priority 2 bandwidth allocation knowing that users transfer data at different times. Certain levels of guarantees may be offered based on Peak Cell Rate and Sustainable Cell Rate. This is somewhat analogous to frame relay's Committed Information Rate. Communication system operators could charge a flat rate for P2 connections based on the PCR and SCR. The higher the bandwidth or commitment level for PCR and SCR, the greater this cost. Unlike frame relay, communication system operators should also include a usage charge based on cell or bit counts. Without this charge, there is no economic incentive to limit usage. As discussed previously, this could result in one user subsidizing another user's traffic. The flat rate P2 charges could be priced low enough to attract and keep customers who are used to a frame relay pricing model.

As communication system operators must reserve Priority 1 bandwidth in order to guarantee transmission, calls using P1 resources are analogous to dedicated leased lines or time division multiplexing. Like leased lines, these calls could be priced based on the amount of bandwidth reserved and the duration of the call. As the bandwidth must be reserved irrespective of whether or not it is actually used, cell or bit count is not relevant for pricing.
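The three charging models just described (P1 reservation times duration, P2 flat-plus-usage, P3 usage-only) can be sketched as a single pricing function. The rates and field names below are hypothetical assumptions chosen purely for illustration; the text specifies no concrete tariffs.

```python
from dataclasses import dataclass

@dataclass
class Call:
    priority: int        # 1, 2, or 3
    reserved_kbps: int   # bandwidth reserved (P1) or committed PCR/SCR (P2)
    duration_s: int      # call duration in seconds
    cells_sent: int      # actual usage, in cells

# Hypothetical rates, purely illustrative.
P1_RATE_PER_KBPS_S = 0.0001   # reserved bandwidth x duration
P2_FLAT_PER_KBPS = 0.05       # flat charge on the committed rate
P2_USAGE_PER_CELL = 0.000001  # usage charge to discourage over-use
P3_USAGE_PER_CELL = 0.000002  # best-effort traffic pays for usage only

def charge(call: Call) -> float:
    if call.priority == 1:
        # P1: bandwidth is reserved whether used or not, so bill on
        # reservation x duration; cell counts are irrelevant.
        return call.reserved_kbps * call.duration_s * P1_RATE_PER_KBPS_S
    if call.priority == 2:
        # P2: flat rate on the commitment plus a usage charge, so that
        # one user does not subsidize another user's traffic.
        return (call.reserved_kbps * P2_FLAT_PER_KBPS
                + call.cells_sent * P2_USAGE_PER_CELL)
    # P3: no guarantees, so charge solely on what was actually sent.
    return call.cells_sent * P3_USAGE_PER_CELL
```

Note that the P1 branch ignores `cells_sent` entirely, mirroring the point that reserved bandwidth is billed whether or not it is used.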

Communication system operators are also examining whether or not to charge for access to DAMA channels, which are required for bandwidth allocation and maintenance. Connect time charges for access to DAMA channels would encourage users not to sit idly on DAMA channels, using up valuable resources of communication system operators.

Communication system operators may find it difficult to figure out how to price bandwidth in each of their LACs. Priority of traffic, latency requirements, time of day, duration of call, bulk commitment levels, and other factors may all influence the price. As a result, communication system operators will have a very difficult time determining their wholesale pricing. While trending may help marketing adjust pricing across LACs, the combination of sporadic data usage patterns and a large number of pricing determinants will make this analysis very complex.

Just as communication system operators will be faced with the difficult task of figuring out how to price bandwidth, distributors will be equally challenged trying to determine how much bandwidth to purchase. In the current model, communication system operators are planning on pre-selling bandwidth to distributors in large chunks for extended periods of time. Pricing will be determined based on the guarantee level, or priority, of the bandwidth. Given this model, distributors will be forced to make difficult bandwidth procurement decisions. If a distributor misjudges how much bandwidth to purchase, it may find itself either with too much bandwidth that it is unable to sell to its users, or with not enough bandwidth to satisfy customer demand. Not only must distributors accurately predict their customers' bandwidth requirements, but they must do so by location. As bandwidth will be bought and sold based on uplink and downlink capacity and priority within a Location Area Code (LAC), distributors will have to estimate their bandwidth purchases for each LAC in which they have customers. Bandwidth purchased in one LAC is useless to customers in another area. Purchasing bandwidth under the current model could be an extremely difficult task for a distributor. As some distributors may be more content focused, they may not have the skills necessary to estimate their overall network demand.

The more difficult it is for distributors to determine demand, the less likely they will be to commit to large bandwidth purchases. In order for communication system operators to encourage distributors to purchase large blocks of wholesale bandwidth, they will be forced to offer extremely low prices for volume commitments. Discount pricing will have to be low enough that distributors can cover their risk of not selling the bandwidth. As customers will not tolerate a scenario where their distributor does not have enough bandwidth to satisfy their demand, communication system operators will have to create an environment in which bandwidth is priced low enough to allow distributors to purchase excess capacity. As capacity will go unused, this suggests that communication system operators will be leaving money on the table by selling bulk bandwidth at low rates.

By pricing bulk bandwidth low enough to entice distributors, communication system operators will not be maximizing their potential revenues. Although they will have successfully shifted the risk of owning excess capacity to their distributors, they will do so at a cost: lower wholesale bandwidth prices. Overall risk will be mitigated at the expense of lost revenues.

Just as we have seen with ATM and frame relay, communication system operators will have a difficult time dealing with Switched Virtual Circuits. However, as much of communication system operator's business will be based on multimedia applications such as video conferencing which require Switched Virtual Circuits, communication system operators will be forced to support SVCs. In order to offer SVCs, it is essential that communication system operator's pricing model and billing system be structured to handle usage based pricing.

In addition, communication system operator's resource allocation model must be sophisticated enough to handle the dynamic assignment of bandwidth to requesting Customer Premise Equipment (CPE). This must be accomplished without jeopardizing the negotiated service levels of other calls. While today's ATM networks have this capability, most vendors do not offer SVC services for reasons discussed earlier.

As SVC traffic patterns are typically more difficult to predict and can change very quickly, a flexible pricing system will be needed to handle SVCs. Due to their shorter duration, applications that use SVCs typically display more sensitivity to price. If long term bandwidth purchases result in prices that are too high, SVC application bandwidth demand may drop rapidly, resulting in excess capacity. Alternatively, if bandwidth pricing is too low, users may flood the network with call requests, resulting in large numbers of blocked calls. Both scenarios result in lost revenue.

Although the same phenomenon exists with PVC based applications, the dynamic nature of SVCs makes demand fluctuations much more volatile. Applications that use SVCs are typically much more price elastic, as usage patterns can easily be adjusted based on the cost of connection. A good example of this phenomenon is the increase in long-distance calls made by residential voice customers as a result of aggressive pricing discounts. As soon as long-distance rates go down, customers increase their use.

As much of the communication system operator's traffic will be targeting SVC oriented customers (i.e., the Direct to Home market), communication system operators will need to provide their distributors with more dynamic mechanisms for procuring bandwidth for their customers. Long term bandwidth purchases may not be able to capture short term fluctuations in demand. A dynamic pricing model is ultimately needed to handle these fluctuations in demand.

Assuming communication system operators were able to predict an increase in demand for services the day before the election and raised prices, they may have a difficult time prioritizing bandwidth requests between customers. While communication system operators could maintain a pecking order of which distributors get preference over others, a more efficient mechanism would be to allow distributors to bid on the price.

Thus, there is a need for a Bandwidth Market in which distributors can buy and sell excess bandwidth. If communication system operators were to provide a place for distributors to trade bandwidth, this market could reduce distributors' risk significantly, encouraging them to purchase extra capacity. In addition, it would also allow communication system operators to price bandwidth efficiently using the simple economic principles of supply and demand.
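The core matching step of such a market can be sketched as follows, under the simplifying assumption that bandwidth within a single LAC and priority class is fungible: sellers post asks for excess capacity, buyers post bids, and a trade clears whenever the best bid meets the best ask. The clearing rule (trading at the ask price) is an illustrative assumption, not something specified by the text.

```python
import heapq

def match_orders(bids, asks):
    """Match buy bids against sell asks for bandwidth in one LAC.

    bids: list of (price_per_mbps, mbps) a buyer is willing to pay
    asks: list of (price_per_mbps, mbps) a seller is willing to accept
    Returns a list of (price, mbps) trades, clearing at the ask price.
    """
    # Max-heap of bids (prices negated), min-heap of asks.
    bid_heap = [(-p, q) for p, q in bids]
    ask_heap = list(asks)
    heapq.heapify(bid_heap)
    heapq.heapify(ask_heap)
    trades = []
    # Trade while the highest bid still covers the lowest ask.
    while bid_heap and ask_heap and -bid_heap[0][0] >= ask_heap[0][0]:
        bp, bq = heapq.heappop(bid_heap)
        ap, aq = heapq.heappop(ask_heap)
        qty = min(bq, aq)
        trades.append((ap, qty))          # clear at the seller's ask
        if bq > qty:                      # re-queue any unfilled remainder
            heapq.heappush(bid_heap, (bp, bq - qty))
        if aq > qty:
            heapq.heappush(ask_heap, (ap, aq - qty))
    return trades
```

For example, with bids of 5 Mbps at 10 and 3 Mbps at 8 against asks of 4 Mbps at 7 and 6 Mbps at 9, the market clears 4 Mbps at 7 and 1 Mbps at 9, leaving the 8-price bid unfilled.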

The Bandwidth Market would also remove the need for distributors to negotiate hundreds of bilateral agreements with one another and, by introducing bandwidth contracts, would simplify the billing and settlement processes.

SUMMARY OF THE INVENTION

A system, method and article of manufacture are provided for contract negotiation in a bandwidth market environment. First, bandwidth on a network is allocated among a plurality of users. An amount of unused bandwidth of a first user is identified. A request for bandwidth on the network is received from a second user. Then, a negotiation between the first and second users is allowed to determine transaction terms for reallocation of the unused bandwidth from the first user to the second user. Upon acceptance of the transaction terms by the first and second users, contract information relating to the transaction terms is sent to the first and second users.

In one aspect of the present invention, the contract information defines the amount of unused bandwidth, a duration of use of the unused bandwidth, a service level, and/or a price.

Optionally, a transaction fee may be charged for allowing the negotiation between the first and second users. Further, the step of allowing the negotiation between the first and second users may occur in real time.

In another aspect of the present invention, the contract information is sent to a third party after the third party requests bandwidth from the second user. Furthermore, the contract information may include a contract identifier.
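The contract information described above can be sketched as a simple record. All field names here are hypothetical, chosen only to mirror the terms listed (amount, duration, service level, price, and a contract identifier); the use of a random hex identifier is likewise an illustrative assumption.

```python
import uuid
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BandwidthContract:
    seller: str          # first user, releasing unused bandwidth
    buyer: str           # second user, requesting bandwidth
    mbps: float          # amount of unused bandwidth reallocated
    duration_s: int      # duration of use of the unused bandwidth
    service_level: int   # priority / QoS level agreed in negotiation
    price: float         # transaction price accepted by both parties
    # Contract identifier, generated when the terms are accepted.
    contract_id: str = field(default_factory=lambda: uuid.uuid4().hex)

def send_contract(contract: BandwidthContract) -> dict:
    """Render the contract information sent to both parties on acceptance."""
    return {party: contract for party in (contract.seller, contract.buyer)}
```

Because both parties receive the same identifier, a third party that later requests bandwidth from the buyer could be handed the same record, as described above.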

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be better understood when consideration is given to the following detailed description thereof. Such description makes reference to the annexed drawings wherein:

Figure 1 is a schematic diagram of a hardware implementation of one embodiment of the present invention;

Figure 2 is a representation of a bandwidth market in accordance with one embodiment of the present invention;

Figure 3 is a flowchart illustrating a contract negotiation in accordance with one embodiment of the present invention;

Figure 4 is a flowchart depicting a method for automatically identifying an amount of unused bandwidth of a user;

Figure 5 is a flowchart illustrating another method of identifying the amount of bandwidth of a user;

Figure 6 is a flowchart illustrating a method for exchanging money for bandwidth;

Figure 7 is an illustration of a summary of a contract negotiation process;

Figure 8 is an illustration of a more detailed contract negotiation process;

Figure 9 is a flow chart illustrating a method of performing clearing and settlement functions in a bandwidth market environment;

Figure 10 illustrates in overview a system arrangement for implementing the over the counter (or other) bandwidth market system of the instant invention;

Figure 11 is a flow chart of data processing for qualifying for execution of an order communicated from a branch order entry clerk or account executive;

Figure 12 illustrates data processing for executing and accounting for orders that have been qualified for execution by the order qualifying data processing of Figure 11;

Figure 13 is the left portion of a flow chart for the data processing of block 1214 of Figure 12 for updating the inventory cost (average price per unit of bandwidth AVCST (BWTH)) of the bandwidth BWTH and the running profit PR (BWTH) realized from the execution of each trade;

Figure 14 is the right portion of a flow chart for the data processing of block 1214 of Figure 12 for updating the inventory cost (average price per unit of bandwidth AVCST (BWTH)) of the bandwidth BWTH and the running profit PR (BWTH) realized from the execution of each trade;

Figure 15 is a flow chart illustrating data processing upon receipt of a new market maker quotation from the bandwidth market system;

Figure 16 is a block diagram of a bill pay system relying on postal mailed payments;

Figure 17 is a block diagram of a bill pay system wherein consumers pay bills using a bill pay service bureau which has the consumers as customers;

Figure 18 is a block diagram of a bill pay system where billers initiate automatic debits from consumers' bank accounts;

Figure 19 is a flow chart illustrating an open market environment for electronic content;

Figure 20 illustrates one manner of performing operations 1902 through 1906 of Figure 19;

Figure 21 illustrates an encryption technique designed to ensure payment by the customer; and

Figure 22 illustrates an alternative to operation 2114 of Figure 21.

DETAILED DESCRIPTION

In accordance with at least one embodiment of the present invention, a system is provided for affording various features which support a bandwidth market that allows buyers of bandwidth to buy, sell, and/or trade excess bandwidth. The market may be enabled using a hardware implementation such as that illustrated in Figure 1. Further, various functional and user interface features of one embodiment of the present invention may be enabled using software programming, i.e., object oriented programming (OOP).

HARDWARE OVERVIEW

A representative hardware environment of a preferred embodiment of the present invention is depicted in Figure 1, which illustrates a typical hardware configuration of a workstation having a central processing unit 110, such as a microprocessor, and a number of other units interconnected via a system bus 112. The workstation shown in Figure 1 includes Random Access Memory (RAM) 114, Read Only Memory (ROM) 116, an I/O adapter 118 for connecting peripheral devices such as disk storage units 120 to the bus 112, a user interface adapter 122 for connecting a keyboard 124, a mouse 126, a speaker 128, a microphone 132, and/or other user interface devices such as a touch screen (not shown) to the bus 112, a communication adapter 134 for connecting the workstation to a communication network (e.g., a data processing network), and a display adapter 136 for connecting the bus 112 to a display device 138. The workstation typically has resident thereon an operating system such as the Microsoft Windows NT or Windows/95 Operating System (OS), the IBM OS/2 operating system, the MAC OS, or the UNIX operating system.

SOFTWARE OVERVIEW

Object oriented programming (OOP) has become increasingly used to develop complex applications. As OOP moves toward the mainstream of software design and development, various software solutions require adaptation to make use of the benefits of OOP. A need exists for the principles of OOP to be applied to a messaging interface of an electronic messaging system such that a set of OOP classes and objects for the messaging interface can be provided.

OOP is a process of developing computer software using objects, including the steps of analyzing the problem, designing the system, and constructing the program. An object is a software package that contains both data and a collection of related structures and procedures.

Since it contains both data and a collection of structures and procedures, it can be visualized as a self-sufficient component that does not require other additional structures, procedures or data to perform its specific task. OOP, therefore, views a computer program as a collection of largely autonomous components, called objects, each of which is responsible for a specific task. This concept of packaging data, structures, and procedures together in one component or module is called encapsulation.

In general, OOP components are reusable software modules which present an interface that conforms to an object model and which are accessed at run-time through a component integration architecture. A component integration architecture is a set of architecture mechanisms which allow software modules in different process spaces to utilize each other's capabilities or functions. This is generally done by assuming a common component object model on which to build the architecture. It is worthwhile to differentiate between an object and a class of objects at this point. An object is a single instance of the class of objects, which is often just called a class. A class of objects can be viewed as a blueprint, from which many objects can be formed.

OOP allows the programmer to create an object that is a part of another object. For example, the object representing a piston engine is said to have a composition-relationship with the object representing a piston. In reality, a piston engine comprises a piston, valves and many other components; the fact that a piston is an element of a piston engine can be logically and semantically represented in OOP by two objects.

OOP also allows creation of an object that "depends from" another object. If there are two objects, one representing a piston engine and the other representing a piston engine wherein the piston is made of ceramic, then the relationship between the two objects is not that of composition. A ceramic piston engine does not make up a piston engine; rather, it is merely one kind of piston engine that has one more limitation than the piston engine: its piston is made of ceramic. In this case, the object representing the ceramic piston engine is called a derived object, and it inherits all of the aspects of the object representing the piston engine and adds further limitation or detail to it. The object representing the ceramic piston engine "depends from" the object representing the piston engine. The relationship between these objects is called inheritance.

When the object or class representing the ceramic piston engine inherits all of the aspects of the object representing the piston engine, it inherits the thermal characteristics of a standard piston defined in the piston engine class. However, the ceramic piston engine object overrides these thermal characteristics with ceramic-specific ones, which are typically different from those associated with a metal piston. It skips over the original and uses new functions related to ceramic pistons.

Different kinds of piston engines have different characteristics, but may have the same underlying functions associated with them (e. g., how many pistons in the engine, ignition sequences, lubrication, etc.). To access each of these functions in any piston engine object, a programmer would call the same functions with the same names, but each type of piston engine may have different/overriding implementations of functions behind the same name. This ability to hide different implementations of a function behind the same name is called polymorphism and it greatly simplifies communication among objects.
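The piston-engine discussion above translates directly into code. This sketch uses Python rather than the C++ discussed later, and the class and method names are purely illustrative: the derived class inherits everything, overrides the thermal characteristics, and the same call works polymorphically on either kind of engine.

```python
class PistonEngine:
    def piston_material(self) -> str:
        return "metal"

    def thermal_limit_c(self) -> int:
        # Thermal characteristics of a standard metal piston,
        # defined in the piston engine class.
        return 300

class CeramicPistonEngine(PistonEngine):
    """Derived object: inherits all aspects of PistonEngine and adds
    one further limitation (its piston is made of ceramic)."""

    def piston_material(self) -> str:
        return "ceramic"

    def thermal_limit_c(self) -> int:
        # Overrides the inherited metal characteristics with
        # ceramic-specific ones.
        return 900

def describe(engine: PistonEngine) -> str:
    # Polymorphism: the same function names work on any kind of piston
    # engine; each class hides its own implementation behind the name.
    return f"{engine.piston_material()} piston, limit {engine.thermal_limit_c()} C"
```

Calling `describe` on either object exercises the same interface while each class supplies a different implementation behind the same names.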

With the concepts of composition-relationship, encapsulation, inheritance and polymorphism, an object can represent just about anything in the real world. In fact, the logical perception of reality is the only limit on determining the kinds of things that can become objects in object-oriented software. Some typical categories are as follows:

Objects can represent physical objects, such as automobiles in a traffic-flow simulation, electrical components in a circuit-design program, countries in an economics model, or aircraft in an air-traffic-control system.

Objects can represent elements of the computer-user environment such as windows, menus or graphics objects.

An object can represent an inventory, such as a personnel file or a table of the latitudes and longitudes of cities.

An object can represent user-defined data types such as time, angles, and complex numbers, or points on the plane.

With this enormous capability of an object to represent just about any logically separable matter, OOP allows the software developer to design and implement a computer program that is a model of some aspect of reality, whether that reality is a physical entity, a process, a system, or a composition of matter. Since the object can represent anything, the software developer can create an object which can be used as a component in a larger software project in the future.

If 90% of a new OOP software program consists of proven, existing components made from preexisting reusable objects, then only the remaining 10% of the new software project has to be written and tested from scratch. Since 90% already came from an inventory of extensively tested reusable objects, the potential domain from which an error could originate is 10% of the program. As a result, OOP enables software developers to build objects out of other, previously built objects.

This process closely resembles complex machinery being built out of assemblies and sub- assemblies. OOP technology, therefore, makes software engineering more like hardware engineering in that software is built from existing components, which are available to the developer as objects. All this adds up to an improved quality of the software as well as an increase in the speed of its development.

Programming languages are beginning to fully support OOP principles, such as encapsulation, inheritance, polymorphism, and composition-relationship. With the advent of the C++ language, many commercial software developers have embraced OOP. C++ is an OOP language that offers fast, machine-executable code. Furthermore, C++ is suitable for both commercial-application and systems-programming projects. For now, C++ appears to be the most popular choice among many OOP programmers, but there is a host of other OOP languages, such as Smalltalk, Common Lisp Object System (CLOS), and Eiffel. Additionally, OOP capabilities are being added to more traditional popular computer programming languages such as Pascal.

The benefits of object classes can be summarized as follows:

Objects and their corresponding classes break down complex programming problems into many smaller, simpler problems.

Encapsulation enforces data abstraction through the organization of data into small, independent objects that can communicate with each other. Encapsulation protects the data in an object from accidental damage, but allows other objects to interact with that data by calling the object's member functions and structures.

Subclassing and inheritance make it possible to extend and modify objects through deriving new kinds of objects from the standard classes available in the system. Thus, new capabilities are created without having to start from scratch.

Polymorphism and multiple inheritance make it possible for different programmers to mix and match characteristics of many different classes and create specialized objects that can still work with related objects in predictable ways.

Class hierarchies and containment hierarchies provide a flexible mechanism for modeling real-world objects and the relationships among them.

Libraries of reusable classes are useful in many situations, but they also have some limitations. For example:

Complexity. In a complex system, the class hierarchies for related classes can become extremely confusing, with many dozens or even hundreds of classes.

Flow of control. A program written with the aid of class libraries is still responsible for the flow of control (i.e., it must control the interactions among all the objects created from a particular library). The programmer has to decide which functions to call at what times for which kinds of objects.

Duplication of effort. Although class libraries allow programmers to use and reuse many small pieces of code, each programmer puts those pieces together in a different way.

Two different programmers can use the same set of class libraries to write two programs that do exactly the same thing but whose internal structure (i. e., design) may be quite different, depending on hundreds of small decisions each programmer makes along the way. Inevitably, similar pieces of code end up doing similar things in slightly different ways and do not work as well together as they should.

Although class libraries are very flexible, as programs grow more complex, more programmers are forced to reinvent basic solutions to basic problems over and over again. A relatively new extension of the class library concept is to have a framework of class libraries. Such a framework is more complex and consists of significant collections of collaborating classes that capture both the small scale patterns and major mechanisms that implement the common requirements and design in a specific application domain. Frameworks were first developed to free application programmers from the chores involved in displaying menus, windows, dialog boxes, and other standard user interface elements for personal computers.

Frameworks also represent a change in the way programmers think about the interaction between the code they write and code written by others. In the early days of procedural programming, the programmer called libraries provided by the operating system to perform certain tasks, but basically the program executed down the page from start to finish, and the programmer was solely responsible for the flow of control. This was appropriate for printing out paychecks, calculating a mathematical table, or solving other problems with a program that executed in just one way.

The development of graphical user interfaces began to turn this procedural programming arrangement inside out. These interfaces allow the user, rather than program logic, to drive the program and decide when certain actions should be performed. Today, most personal computer software accomplishes this by means of an event loop which monitors the mouse, keyboard, and other sources of external events and calls the appropriate parts of the programmer's code according to actions that the user performs. The programmer no longer determines the order in which events occur. Instead, a program is divided into separate pieces that are called at unpredictable times and in an unpredictable order. By relinquishing control in this way to users, the developer creates a program that is much easier to use. Nevertheless, individual pieces of the program written by the developer still call libraries provided by the operating system to accomplish certain tasks, and the programmer must still determine the flow of control within each piece after it's called by the event loop. Application code still "sits on top of" the system.
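A toy event loop makes this inversion concrete: the programmer registers handlers, but the loop, not the programmer, decides when each one runs. The event kinds and handler names here are illustrative assumptions.

```python
def run_event_loop(events, handlers):
    """Dispatch each incoming event to the handler registered for it.

    events: iterable of (kind, payload) pairs, standing in for external
            sources such as the mouse or keyboard
    handlers: mapping from event kind to a callback function
    """
    results = []
    for kind, payload in events:
        handler = handlers.get(kind)
        if handler is not None:
            # The loop calls the programmer's code; the programmer no
            # longer determines the order in which events occur.
            results.append(handler(payload))
    return results
```

Events of an unregistered kind are simply ignored, just as an application ignores user actions it has no handler for.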

Even event loop programs require programmers to write a lot of code that should not need to be written separately for every application. The concept of an application framework carries the event loop concept further. Instead of dealing with all the nuts and bolts of constructing basic menus, windows, and dialog boxes and then making all these things work together, programmers using application frameworks start with working application code and basic user interface elements in place. Subsequently, they build from there by replacing some of the generic capabilities of the framework with the specific capabilities of the intended application.

Application frameworks reduce the total amount of code that a programmer has to write from scratch. However, because the framework is really a generic application that displays windows, supports copy and paste, and so on, the programmer can also relinquish control to a greater degree than event loop programs permit. The framework code takes care of almost all event handling and flow of control, and the programmer's code is called only when the framework needs it (e.g., to create or manipulate a proprietary data structure).

A programmer writing a framework program not only relinquishes control to the user (as is also true for event loop programs), but also relinquishes the detailed flow of control within the program to the framework. This approach allows the creation of more complex systems that work together in interesting ways, as opposed to isolated programs, having custom code, being created over and over again for similar problems.

Thus, as is explained above, a framework basically is a collection of cooperating classes that make up a reusable design solution for a given problem domain. It typically includes objects that provide default behavior (e. g., for menus and windows), and programmers use it by inheriting some of that default behavior and overriding other behavior so that the framework calls application code at the appropriate times.

There are three main differences between frameworks and class libraries:

Behavior versus protocol. Class libraries are essentially collections of behaviors that you can call when you want those individual behaviors in your program. A framework, on the other hand, provides not only behavior but also the protocol or set of rules that govern the ways in which behaviors can be combined, including rules for what a programmer is supposed to provide versus what the framework provides.

Call versus override. With a class library, the code the programmer writes instantiates objects and calls their member functions. It is possible to instantiate and call objects in the same way with a framework (i.e., to treat the framework as a class library), but to take full advantage of a framework's reusable design, a programmer typically writes code that overrides and is called by the framework. The framework manages the flow of control among its objects. Writing a program involves dividing responsibilities among the various pieces of software that are called by the framework rather than specifying how the different pieces should work together.

Implementation versus design. With class libraries, programmers reuse only implementations, whereas with frameworks, they reuse design. A framework embodies the way a family of related programs or pieces of software work. It represents a generic design solution that can be adapted to a variety of specific problems in a given domain.

For example, a single framework can embody the way a user interface works, even though two different user interfaces created with the same framework might solve quite different interface problems.

Thus, through the development of frameworks for solutions to various problems and programming tasks, significant reductions in the design and development effort for software can be achieved. A preferred embodiment of the invention utilizes HyperText Markup Language (HTML) to implement documents on the Internet together with a general-purpose secure communication protocol for a transport medium between the client and a company. Other markup languages or transport protocols could readily be substituted for HTML or HTTP without undue experimentation.

Information on these products is available in T. Berners-Lee and D. Connolly, "RFC 1866: Hypertext Markup Language - 2.0" (Nov. 1995); and R. Fielding, H. Frystyk, T. Berners-Lee, J. Gettys and J. C. Mogul, "Hypertext Transfer Protocol -- HTTP/1.1: HTTP Working Group Internet Draft" (May 2, 1996). HTML is a simple data format used to create hypertext documents that are portable from one platform to another. HTML documents are SGML documents with generic semantics that are appropriate for representing information from a wide range of domains.

HTML has been in use by the World-Wide Web global information initiative since 1990.

HTML is an application of ISO Standard 8879:1986, Information Processing - Text and Office Systems - Standard Generalized Markup Language (SGML).

To date, Web development tools have been limited in their ability to create dynamic Web applications which span from client to server and interoperate with existing computing resources.

Until recently, HTML has been the dominant technology used in development of Web-based solutions. However, HTML has proven to be inadequate in the following areas:
* Poor performance;
* Restricted user interface capabilities;
* Ability to produce only static Web pages;
* Lack of interoperability with existing applications and data; and
* Inability to scale.

Sun Microsystems' Java language solves many of the client-side problems by:
* Improving performance on the client side;
* Enabling the creation of dynamic, real-time Web applications; and
* Providing the ability to create a wide variety of user interface components.

With Java, developers can create robust User Interface (UI) components. Custom "widgets" (e.g., real-time stock tickers, animated icons, etc.) can be created, and client-side performance is improved. Unlike HTML, Java supports the notion of client-side validation, offloading appropriate processing onto the client for improved performance. Using the above-mentioned custom UI components, dynamic, real-time Web pages can also be created.

Sun's Java language has emerged as an industry-recognized language for "programming the Internet." Sun defines Java as "a simple, object-oriented, distributed, interpreted, robust, secure, architecture-neutral, portable, high-performance, multithreaded, dynamic, buzzword-compliant, general-purpose programming language. Java supports programming for the Internet in the form of platform-independent Java applets." Java applets are small, specialized applications that comply with Sun's Java Application Programming Interface (API), allowing developers to add "interactive content" to Web documents (e.g., simple animations, page adornments, basic games, etc.). Applets execute within a Java-compatible browser (e.g., Netscape Navigator) by copying code from the server to the client. From a language standpoint, Java's core feature set is based on C++. Sun's Java literature states that Java is basically "C++ with extensions from Objective C for more dynamic method resolution."

Another technology that provides similar functionality to Java is Microsoft's ActiveX Technologies, which gives developers and Web designers the wherewithal to build dynamic content for the Internet and personal computers. ActiveX includes tools for developing animation, 3-D virtual reality, video, and other multimedia content. The tools use Internet standards, work on multiple platforms, and are being supported by over 100 companies. The group's building blocks are called ActiveX Controls, which are fast components that enable developers to embed parts of software in HyperText Markup Language (HTML) pages. ActiveX Controls work with a variety of programming languages, including Microsoft Visual C++, Borland Delphi, the Microsoft Visual Basic programming system and, in the future, Microsoft's development tool for Java, code-named "Jakarta." ActiveX Technologies also includes the ActiveX Server Framework, allowing developers to create server applications.
One of ordinary skill in the art readily recognizes that ActiveX could be substituted for Java without undue experimentation to practice the invention.

THE BANDWIDTH MARKET

Objectives of a Bandwidth Market

Four important trends seem to dominate the future of data networking. First, customers will have a growing number of options when selecting a network vendor and technology. New, higher-bandwidth network technologies are being introduced to take advantage of existing carrier infrastructures. Second, as vendors continue to build out their network infrastructure, bandwidth is becoming increasingly available at lower cost. As more bandwidth becomes available, competition among vendors to capture market share will result in dramatically lower prices.

This increase in competition among network vendors will undoubtedly change the market significantly. Third, vendors will start to shift to usage-based pricing structures. As competition increases, they will not be able to continue to reap profits from flat rate pricing schemes. Fourth, the popularity of the Internet is driving a trend in internetworking. As the technology moves forward, more and more networks will be joined to one another, resulting in seamless transport between networks.

One objective of the instant bandwidth market is to provide a more efficient mechanism for buying and selling network bandwidth. By providing a market in which distributors can trade bandwidth, the fundamental forces of supply and demand drive the appropriate prices for the bandwidth providers' services.

Place to Buy and Sell Bandwidth

Without a bandwidth market, if a customer subscribes to a distributor who offers a service that typically requires lower data rates, such as Internet access, the distributor may not have purchased enough bandwidth for other, more bandwidth-intensive applications. If a customer decides that they want to use a bandwidth provider for higher-bandwidth or more demanding QoS applications such as video conferencing, their distributor may not be able to provide high enough access rates or guaranteed service levels within the customer's LAC. Without knowing all possible services that its customers may use, a distributor is unable to purchase appropriate bandwidth and service levels to satisfy all requests.

In a similar situation, if a consumer in a first location wants to make a video call to someone in a second location and pay for the call, unless their DVNS has purchased bandwidth in the LAC of the second location, a bandwidth provider cannot complete the call. This limitation has serious implications. First, many applications cannot traverse DVNS boundaries, forcing a customer to communicate only with others who share their same distributor. Second, as most distributors will probably be focused on offering a single service (e.g., DSS TV or Internet access), their customers cannot access other services on a bandwidth provider's network. The bandwidth provider can become a network dedicated to singular functions. People who wish to access multiple services may need to subscribe to more than one distributor, and may require additional CPEs.

In order to allow customers to access any location or service on bandwidth providers' networks, it is necessary for distributors to be able to buy and sell bandwidth. If a customer wants to make a video call to a location in which its distributor does not have bandwidth, the DVNS should be able to purchase bandwidth from another distributor who has excess capacity. Ideally, this could be done on a real-time basis so that customers can immediately access the location or service.

Not only does this provide a mechanism for customers to cross DVNS geographic and service boundaries, but it also provides a way for distributors to sell off their excess bandwidth. As distributors can now sell off unused bandwidth in a secondary market, they are more likely to purchase additional wholesale capacity. Like other commodities, bandwidth could be traded among distributors, ultimately resulting in an efficient market.

In addition to reducing risk for distributors, a bandwidth provider could also use the market to post excess wholesale capacity.

Efficiently Priced Bandwidth

Another major benefit of a bandwidth market is its ability to price bandwidth efficiently. As distributors buy and sell capacity, the price of the bandwidth moves towards a market equilibrium where supply meets demand.

As bandwidth is traded by service level guarantees and LAC, this eliminates some of the complex analysis that distributors need to perform in order to determine fair market prices. In addition, by analyzing sales in the bandwidth market, a bandwidth provider is able to accurately gauge demand and price bandwidth in each location. Taken a step further, a bandwidth provider could post all of its bandwidth on a wholesale market instead of negotiating directly with each distributor. Distributors could then bid for this bandwidth, resulting in efficient wholesale pricing in which the bandwidth provider maximizes its revenues.

A market for trading bandwidth virtually eliminates the difficult pricing problems faced by a bandwidth provider's marketing department. In addition, distributors have the ability to sell off excess bandwidth, reducing their risk significantly. To further reduce risk, a futures market could be established allowing distributors to hedge bandwidth purchases. This would allow distributors who are mainly interested in selling value-added services to their customers to avoid fluctuations in bandwidth prices.

Assists with Roaming

The ability of a DVNS to purchase another distributor's bandwidth has an additional impact for roaming, which is sometimes referred to as "nomadicity." One of the goals of a bandwidth provider may be to allow a customer to take their Subscriber Identifier Module, or SIM card, and plug it into another CPE when they are on the road. This would allow a business traveler to plug his or her SIM card into a hotel's CPE to access the bandwidth provider's network. However, if the hotel CPE does not have enough bandwidth available to support the business traveler's application, its DVNS could purchase the extra capacity on the bandwidth market.

Avoid Expensive Bilateral Agreements

Without the instant bandwidth market, distributors have to negotiate independent contracts with each DVNS that manages CPEs with content that their customers access. This may require hundreds of bilateral agreements among distributors. As these bilateral agreements may be difficult and costly to negotiate, many distributors would not allow their customers to access certain services unless there is enough critical mass to warrant a contract.

In addition, distributors would have to negotiate with other distributors every time that they want to resell their excess capacity. This too could result in hundreds of bilateral agreements, and could be costly to negotiate and administer. Dispute resolution could also present a major problem.

With the bandwidth market, distributors can avoid costly bilateral agreements. The market provides an efficient means of trading bandwidth among distributors. As distributors would enter into a contract with the market, they do not have to negotiate with each DVNS that they ultimately trade with. The bandwidth market also serves as an equalizer, giving small distributors the same ability to purchase bandwidth as larger providers. By allowing a smaller DVNS to purchase bandwidth, it could provide its customers with the same access as larger distributors. A bandwidth provider could benefit by selling wholesale capacity on the bandwidth market, avoiding periodic negotiations with hundreds of distributors.

Establish Bandwidth Contracts

Another benefit of the bandwidth market is its handling of contracts. To allow the market to operate efficiently, bandwidth could be packaged and traded as contracts. In order to package bandwidth, it may be necessary for the bandwidth market to define products. These products are based on a combination of bandwidth (or cell counts), location, service level guarantees, time of day, duration, and other factors. Although establishing these structures is a complex task, it is much easier for the bandwidth market to go through the steps of defining these packages once than for each distributor to worry about them every time it negotiates with another provider. This simplifies the sales process dramatically.

Once a contract has been purchased, the DVNS issues a Contract ID to its customer's CPE during call setup. In addition to defining bandwidth, service level guarantees, and duration, the contract also determines who pays for the call (e.g., calling party pays, collect call, etc.). As the contract is purchased at an agreed-upon price, this price provides rating information that can be used for billing purposes. When the contract is executed, the CPE reports usage data back to the DVNS. This usage data includes the Contract ID, allowing the DVNS or a settlements process to correlate the call back to the original transaction. As the contract is recorded at the time of the transaction, this information could be forwarded to the distributors, the bandwidth provider, and a clearinghouse for processing. This simplifies the revenue allocation process by providing clear information for rating, billing, and settling the call.

One advantage of having call setup based on contracts is that the CPE is given a well-defined call duration and total cell or bit count. As the DVNS steps out of the picture after call setup, the CPE is responsible for making sure that it does not exceed these agreed-upon thresholds. If the CPE reaches the maximum duration or cell count, it automatically terminates the call. While ATM does require the CPE to shape its traffic to conform with the Peak and Sustainable Cell Rates and the transfer capabilities agreed upon during call setup, it does not define the call duration or maximum traffic transfer. The bandwidth contract fills this gap, and is ideal for supporting pre-paid calling structures (e.g., credit or debit card).

Another benefit of the bandwidth contract is that it specifies agreed upon service levels for call setup. After the call has been completed, usage data can be analyzed to see if these service levels have been met. This allows a bandwidth provider and its distributors to provide customers with Service Level Agreements that may have penalty clauses for violations.

Bandwidth Market Structure

The following is a discussion of exemplary embodiments of the bandwidth markets.

Bandwidth Provider vs. External Secondary Market

Even if a bandwidth provider is not willing to develop and operate a bandwidth market for buying and selling its services, it is very possible that a third party may fill the gap. If a bandwidth provider is unable to accurately price bandwidth, inefficiencies in pricing result in arbitrage opportunities. Like airline ticket aggregators, speculative distributors could start buying up underpriced bandwidth and selling it to other distributors. These transactions would initially be handled by bilateral agreements. As transaction volumes increase, distributors would start to form groups of trading partners, and applications would be developed to streamline the process of trading bandwidth. Eventually, a bandwidth market for a bandwidth provider's services will evolve out of these alliances.

If a bandwidth provider decides not to develop the bandwidth market, the alternative market that develops may have some negative aspects. First, the larger the number of bilateral trading agreements, the greater the likelihood for error. If two distributors misunderstand one another, it is possible that each DVNS may assign the same bandwidth allocation to different customers. This could result in an oversubscription of services.

Second, trading alliances may exclude smaller or less political DVNSs, ending up with an "Old Boys Network" of distributors dominating the market. This could result in a small oligopoly dictating bandwidth pricing, potentially forcing other distributors out of business. As one would expect, losing control of pricing for a bandwidth provider's services could have dangerous consequences.

Rather than let another organization establish a market for trading bandwidth, a bandwidth provider could develop the bandwidth market itself. In addition to keeping control of bandwidth pricing, the bandwidth provider could earn additional revenue by charging trading transaction fees. When coupled with clearing functions, this market could prove an important selling point for attracting distributors. By offering a simple and controlled mechanism for reselling excess bandwidth, the bandwidth provider reduces the risk faced by distributors of purchasing too much bandwidth. A distributor may be more likely to offer the bandwidth provider's services if it knows that a bandwidth market is available to buy and sell excess capacity, and that it is managed and operated by the wholesale provider.

Open and Closed Markets

The most efficient way to trade bandwidth is to have one market for all participants. However, in order to encourage "Charter Customers," bandwidth providers may need to offer special rate structures and benefits to potential distributors. Once the bandwidth market is established, many of the benefits, such as bandwidth contracts and CPE traffic shaping, will be useful even for distributors who have large discount structures. Rather than create custom purchasing mechanisms for these customers, they could use the same processes and applications that the market uses for buying and selling bandwidth, even if they are dealing exclusively with only one bandwidth provider. Because of the standardized process for selling bandwidth, a bandwidth provider can avoid having to develop custom interfaces for each of its large distributors.

There may still be a need for the establishment of bilateral agreements between a bandwidth provider and its "Charter Customers." In addition, many distributors who specialize in the same value-added services but in different regions may want to establish private trading blocs. For example, as video conferencing providers have similar bandwidth needs, they may want to establish a closed market for trading among themselves. This would allow them to focus on similar types of bandwidth contracts in different regions of the world, and is somewhat analogous to cellular roaming agreements.

For these reasons, the bandwidth market may be segmented into multiple trading floors or markets. As shown in Figure 2, the top level segment would be a Pre-Sold bandwidth market 200. This would be the vehicle that a bandwidth provider could use to privately sell long term contracts to larger distributors. Bandwidth sales in this market can be pre-negotiated between the bandwidth provider and the distributor. The market would be used to record and track these transactions.

The next segment would be the Open Market Bandwidth Sales 202. This market would be used by distributors to post the excess bandwidth that they wish to sell. In addition, any bandwidth that a bandwidth provider has not sold under pre-negotiated agreements could be posted to this market. As this market segment is open to all of the distributors, it should be an efficient market in which pricing is established at the point where supply meets demand.
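The open-market mechanism described above, in which pricing settles at the point where supply meets demand, can be sketched as a simple order-matching routine. The function and field names here are illustrative assumptions, not part of the specification:

```python
def match_orders(asks, bids):
    """Match cheapest sell offers against highest buy bids.

    asks: list of (seller, price, units) sell offers
    bids: list of (buyer, price, units) buy requests
    Returns executed trades as (seller, buyer, price, units).
    """
    asks = sorted(([s, p, u] for s, p, u in asks), key=lambda a: a[1])
    bids = sorted(([b, p, u] for b, p, u in bids),
                  key=lambda b: b[1], reverse=True)
    trades = []
    ai = bi = 0
    # trade while the best remaining bid still covers the best remaining ask
    while ai < len(asks) and bi < len(bids) and bids[bi][1] >= asks[ai][1]:
        units = min(asks[ai][2], bids[bi][2])
        trades.append((asks[ai][0], bids[bi][0], asks[ai][1], units))
        asks[ai][2] -= units
        bids[bi][2] -= units
        if asks[ai][2] == 0:
            ai += 1
        if bids[bi][2] == 0:
            bi += 1
    return trades
```

The last matched ask price approximates the clearing price; a real exchange would use a more careful uniform-price or continuous-auction rule.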

Figure 3 illustrates one method of providing an open market environment in accordance with the principles set forth hereinabove. In operation 300, bandwidth is allocated on a network among a plurality of users, i.e., distributors. For example, bandwidth could be allocated based on an amount of bandwidth the users purchase. Alternatively, bandwidth could be allocated based on a contract, such as an allotment of a predetermined amount of bandwidth per period, e.g., month, year, etc.

In operation 302, an amount of unused bandwidth of a first user is identified. Figure 4 illustrates a method of automatically identifying the first user's unused bandwidth. In this example, unused bandwidth is identified by monitoring bandwidth use of the first user to determine an amount of bandwidth used by the first user in operation 400. In operation 402, the amount of bandwidth used by the first user is compared to the total amount of bandwidth the first user has been allocated. The amount of unused bandwidth is determined in operation 404 by subtracting the amount of bandwidth used by the first user from the total amount of bandwidth allocated to the first user. The first user would then be notified of the amount of unused bandwidth in operation 406 and sent a request in operation 408 asking whether the first user would like to sell or trade the unused bandwidth.
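The identification steps of Figure 4 (operations 400-406) amount to subtracting used bandwidth from the allocation and, if anything remains, asking the user whether to sell or trade it. A minimal sketch, with assumed function names and Mbps units:

```python
def unused_bandwidth(allocated_mbps, used_mbps):
    """Operation 404: unused = allocated - used (never negative)."""
    return max(allocated_mbps - used_mbps, 0)

def offer_to_sell(user, allocated_mbps, used_mbps):
    """Operations 406-408: notify the user and, if there is unused
    bandwidth, ask whether they would like to sell or trade it."""
    unused = unused_bandwidth(allocated_mbps, used_mbps)
    return {"user": user, "unused_mbps": unused,
            "ask": "sell or trade?" if unused > 0 else None}
```

For example, `offer_to_sell("D1", 100, 60)` would report 40 Mbps available and prompt the user.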

Figure 5 illustrates another exemplary method of identifying the amount of bandwidth of the first user, as set forth in operation 302 of Figure 3. In operation 500, the first user is sent a request asking whether the first user has any unused bandwidth that the first user would like to trade or sell. A response from the first user indicating an amount of unused bandwidth that the first user would like to trade or sell is received in operation 502. The availability of the amount of unused bandwidth that the first user would like to sell or trade is verified in operation 504.

Referring again to Figure 3, a request for bandwidth on the network is received from a second user in operation 304. The request may be received before or after the amount of unused bandwidth is identified in operation 302, above. The request may be received directly from the second user or from an agent of the second user. Alternatively, the second user, or all of the users, may be notified of the amount of unused bandwidth available. The second user may be notified in any of a multitude of ways. For example, a listing of available unused bandwidth that is for sale or trade by any number of users may be compiled and displayed or sent to some or all of the users of bandwidth. The listing could be complex, or as simple as a listing on a web site with the price, name, and contact information of the first user. Once the user is notified, a response from the second user as to the amount of unused bandwidth the second user would like to purchase or trade for would be awaited and received.
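The simple listing described here (price, name, and contact information of the seller) could be modeled as an in-memory table. The field names and the cheapest-first sort order are assumptions for illustration:

```python
listings = []

def post_listing(seller, contact, unused_mbps, price_per_mbps):
    """Post one user's unused bandwidth for sale or trade."""
    entry = {"seller": seller, "contact": contact,
             "unused_mbps": unused_mbps, "price_per_mbps": price_per_mbps}
    listings.append(entry)
    return entry

def available(min_mbps):
    """Listings offering at least min_mbps, cheapest first."""
    return sorted((e for e in listings if e["unused_mbps"] >= min_mbps),
                  key=lambda e: e["price_per_mbps"])
```

A second user browsing `available(20)` would see only sellers able to cover a 20 Mbps request.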

In operation 306, the unused bandwidth of the first user is reallocated to the second user. In other words, the second user is given control of the unused bandwidth to use, reserve, trade, or sell. The bandwidth provider may be contacted and told to reallocate the bandwidth by terminating the first user's access to the unused bandwidth and giving the second user access to the bandwidth. Alternatively, access codes that would have been used by the first user to access predetermined amounts of bandwidth corresponding to the unused bandwidth being reallocated may be turned over to the second user to permit the second user to access the bandwidth.

In one embodiment of the present invention, the unused bandwidth is reallocated to the second user in operation 306 of Figure 3 in exchange for money paid by the second user to the first user. Figure 6 illustrates a method of exchanging money for bandwidth. In operation 600, notification of an agreement to sell bandwidth for an amount of money is received. Information concerning the manner of payment is received in operation 602. This information includes how the second user is going to pay for the bandwidth. For example, access information for a bank account or a credit line could be received from the second user, which would be used to perform an electronic transfer of money from the second user's account to the first user. In operation 604, the transfer of money is verified, such as by receiving an electronic receipt from the bank of the first user acknowledging a deposit of the money.
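The Figure 6 sequence (operations 600-606) can be sketched as a settlement step that reallocates the bandwidth only after the money transfer has been verified. The record layout and the verification callback are assumptions; a real system would confirm an electronic receipt from the seller's bank:

```python
def settle_sale(contract, verify_transfer):
    """Reallocate bandwidth only once verify_transfer() confirms payment.

    contract: dict with at least "amount", "buyer", and "owner" fields
    verify_transfer: callable taking the amount, returning True on success
    """
    if not verify_transfer(contract["amount"]):
        # operation 604 failed: no reallocation takes place
        return {**contract, "status": "payment failed"}
    # operation 606: ownership of the unused bandwidth passes to the buyer
    return {**contract, "owner": contract["buyer"], "status": "reallocated"}
```

Ordering matters here: verification (operation 604) strictly precedes reallocation (operation 606), so a failed payment leaves the first user in control.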

In operation 606, the unused bandwidth of the first user is reallocated to the second user.

Further, a transaction fee may be charged for reallocating the unused bandwidth. The transaction fee may be a percentage of the total value of the bandwidth traded or sold, a flat fee charged per transaction, or a flat fee charged per unit of bandwidth.
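The three fee structures just named can be expressed as one hypothetical helper; the default rates are arbitrary examples:

```python
def transaction_fee(total_value, units, scheme="percent",
                    rate=0.02, flat=5.0, per_unit=0.10):
    """Fee for reallocating bandwidth, under one of three schemes."""
    if scheme == "percent":    # percentage of the total value traded
        return total_value * rate
    if scheme == "flat":       # flat fee per transaction
        return flat
    if scheme == "per_unit":   # flat fee per unit of bandwidth
        return units * per_unit
    raise ValueError(f"unknown fee scheme: {scheme}")
```

For a 1000-unit-value trade of 50 Mbps, the three schemes yield roughly 20, 5, and 5 in fees under these example rates.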

In another embodiment of the present invention, the unused bandwidth of the first user is packaged with unused bandwidth of another user and reallocated to the second user under the terms of a contract, as discussed in the "Establish Bandwidth Contracts" section hereinabove. This would allow a second user who requires more unused bandwidth than the first user has available to satisfy the second user's requirements.
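Packaging several users' unused bandwidth to cover a request larger than any single seller can satisfy might look like the following greedy sketch, filling from the cheapest offers first. All names and the greedy rule are illustrative assumptions:

```python
def package(offers, required_mbps):
    """Bundle several sellers' unused bandwidth to meet one request.

    offers: list of (seller, unused_mbps, price_per_mbps)
    Returns [(seller, mbps_taken)] covering required_mbps, or None
    if the aggregate available bandwidth is insufficient.
    """
    bundle = []
    for seller, mbps, _price in sorted(offers, key=lambda o: o[2]):
        take = min(mbps, required_mbps)
        bundle.append((seller, take))
        required_mbps -= take
        if required_mbps == 0:
            return bundle
    return None  # not enough aggregate bandwidth on offer
```

A buyer needing 60 Mbps could thus be served from a 30 Mbps offer plus part of a 50 Mbps offer.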

Looking again to Figure 2, the remaining segments at the lowest level are the Closed Markets 204. These markets would be established to allow vendors who offer similar services (e.g., DSS TV, ISPs, etc.) to trade among themselves. In some instances, a bandwidth provider may be given the right to post excess bandwidth that fits predefined contract profiles to some of these markets. The bandwidth market could be modeled either on an auction concept or on a commodities market.

All transactions in the foregoing markets can result in a bandwidth contract. These contracts provide an effective mechanism for tracking bandwidth sales, and are very useful during the rating and settlements processes.

Static vs. Real-time Bandwidth Purchases

In an exemplary model, a bandwidth provider requires distributors to purchase wholesale Priority 1 and 2 traffic at least 24 hours before the time that it is needed. This means that distributors may be forced to estimate their bandwidth requirements for the following day. As the Internet outage during the last US presidential elections suggests, bandwidth demand may fluctuate significantly for a number of external reasons. In certain instances, it may not be possible for the distributors to predict demand. While the bandwidth market may provide a good mechanism for determining price when the next day's demand is known, it does not help in situations of great uncertainty.

In addition to causing problems for distributors in tracking and estimating customer demand, the 24-hour advance bandwidth model could be problematic for the bandwidth provider. Just as distributors may not be able to predict the next day's demand, the bandwidth provider may not be able to determine the optimal price of the bandwidth. While a static bandwidth market based on contracts negotiated 24 hours in advance of their actual execution may certainly help determine pricing, the 24-hour requirement may result in some inefficiencies. On a similar note, although the bandwidth market reduces risk by providing a mechanism for reselling excess capacity, the 24-hour rule results in a one-day liability to distributors.

Another problem with the 24-hour requirement is that it does not allow a customer to transparently access irregular services or locations. If a customer decides that they want to call an unusual location or access a service that has not been pre-negotiated by their DVNS, they may have to call up their distributor to have it acquire the appropriate service for the next day. A customer in the United States may not be willing to contact its service provider 24 hours in advance to set up a video conference call to someone in Botswana.

What is needed to solve these problems is the capability to purchase bandwidth in real time. By giving the distributor the ability to buy and sell bandwidth in real time, an efficient market can be created in which revenues are maximized. The value of the bandwidth is allowed to "float" based on supply and demand. This would also be much more efficient than a static market, where the price is set or buyers are allowed to bid over time with the highest bid taking the bandwidth, because the bandwidth could be purchased immediately, and perhaps below the price that would otherwise be asked in a static market. The mechanics for negotiating real-time bandwidth contracts are outlined hereinafter.

While a real-time bandwidth market is very desirable, it does not negate the benefits of a static bandwidth market. Although a static bandwidth market does not result in the same pricing efficiencies that can be realized in a real-time market, it still offers benefits to the bandwidth provider and its distributors. As the market provides a mechanism to buy and sell excess bandwidth, distributors may be more apt to enter into large, long-term commitments. In addition, although customers may need to call a day in advance to access a service or location, this is better than not having access. Distributors are also relieved of the burden of having to negotiate hundreds of bilateral agreements. As discussed below, the bandwidth contracts that are traded in the market are very useful for rating and settlements processing.

Contract Negotiation During Call Setup

In order to support a real-time bandwidth market, it may be necessary to include contract negotiation in the call setup process. Figure 7 illustrates a contract negotiation process. In operation 700, bandwidth on a network is allocated, i.e., sold or traded in allotments, among a plurality of users. In operation 702, an amount of unused bandwidth of a first user is identified.

A request for bandwidth on the network is received from a second user in operation 704. It should be noted that operations 700-704 may be accomplished by any means including those specified hereinabove with respect to operations 300-304 of Figure 3.

Then, a negotiation between the first and second users is allowed in operation 706 to determine transaction terms for reallocation of the unused bandwidth from the first user to the second user.

In its simplest form, one embodiment of the present invention would simply receive pricing information from one user and send it to the other user, and vice versa, repeatedly until each user is satisfied with the terms of the transaction. Upon acceptance of the transaction terms by the first and second users, contract information relating to the transaction terms is sent to the first and second users in operation 708. Optionally, the terms may be set forth in a contract format to which the first and second users may agree, thereby forming a contract. Alternatively, acceptance of the terms of the transaction may be an acceptance of a contract including the terms of the transaction, in which case the contract information is a recitation of the terms of the contract.
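The simplest-form negotiation described above, relaying each side's terms back and forth until one side accepts, can be sketched as a small loop. The acceptance predicates and the fixed one-unit concession between rounds are assumptions purely for illustration:

```python
def negotiate(seller_accepts, buyer_accepts, seller_offer, buyer_offer,
              max_rounds=50):
    """Relay price offers between the two users until terms are accepted.

    seller_accepts / buyer_accepts: predicates on the counterparty's offer
    Returns the agreed price, or None if no agreement within max_rounds.
    """
    for _ in range(max_rounds):
        if buyer_accepts(seller_offer):   # buyer accepts the seller's terms
            return seller_offer
        if seller_accepts(buyer_offer):   # seller accepts the buyer's terms
            return buyer_offer
        # neither side accepted: each concedes toward the other and
        # the revised terms are relayed again
        seller_offer -= 1
        buyer_offer += 1
    return None
```

With a seller who will take 90 or more and a buyer who will pay up to 100, starting from offers of 110 and 80, the loop converges on an agreed price of 100.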

In one embodiment of the present invention, the contract information defines the amount of unused bandwidth, a duration of use of the unused bandwidth, a service level, and/or a price.
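The contract fields enumerated here can be collected into a hypothetical record type; the field names and units are assumptions:

```python
from dataclasses import dataclass

@dataclass
class BandwidthContract:
    contract_id: str        # contract identifier carried through call setup
    bandwidth_mbps: float   # amount of unused bandwidth reallocated
    duration_hours: float   # duration of use of the bandwidth
    service_level: str      # agreed service level guarantee
    price: float            # agreed transaction price
```

Such a record is what would be forwarded to both users (and, later, to rating and settlements) after the terms are accepted.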

Optionally, a transaction fee may be charged for allowing the negotiation between the first and second users. Further, the step of allowing the negotiation between the first and second users may occur in real time. In another aspect of the present invention, the contract information is sent to a third party after the third party requests bandwidth from the second user. Furthermore, the contract information may include a contract identifier.

Figure 8 outlines the exemplary contract negotiation of Figure 7 in more detail. In Step #1, a DVNS 800 that has purchased too much bandwidth packages its excess capacity and posts it to one of the segments on the bandwidth market 802. When a customer call request comes in and the distributor 804 does not have the bandwidth available (Step #2), its DVNS 806 first determines the appropriate call parameters. It then bids on and purchases bandwidth from the bandwidth market 802 (Step #3). The bandwidth market 802 completes and records the transaction (Step #4), and forwards the contract information, including bandwidth, location, service levels, and Contract ID, to each DVNS 800, 806 involved in the transaction (Step #5).

The information is also forwarded to the rating, clearing, and settlements processes in the Network Business Center (CNBC) 808. When the information is successfully received by the DVNS 806, the contract information, including the Contract ID, is forwarded to the CPE 804 along with other call setup information (Step #6). After the call is established, the CPE 804 periodically sends cumulative Raw Usage Data (RUD) information to its DVNS 806 (Step #7).

Either at the end of the call or at an appropriate interval, the DVNS 806 cuts an Event Data Record (EDR) and forwards it to the Network Business Center (CNBC) 808 (Step #8) for rating and settlements processing (Step #9).

One of the advantages of the present process is that the CPE can use standard call setup signaling, assuming that it is similar to the Q.2931 method used by ATM. When a call request arrives at the DVNS, the call parameters and bandwidth requirements are assessed. The customer is first validated by the DVNS, which also checks whether the customer is allowed to request this service. If the request is valid and the DVNS has available resources as a result of other contracts (which may be from long-term bandwidth purchases made in the Pre-Sold bandwidth market), the DVNS may complete the call and pass the Contract ID back to the CPE in a User-Defined Information Element (IE) using standard Q.2931 signaling. If the DVNS does not have the appropriate bandwidth available, it may temporarily suspend the call setup process and purchase the bandwidth using the process outlined above. Assuming the DVNS successfully purchases the resources, it may forward the Contract ID specified by the transaction to the CPE and complete the call setup process. All calls may require a Contract ID to complete. If for some reason the bandwidth is not available, the DVNS may reject the call and notify the CPE that resources were not available.
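The DVNS decision sequence just described (validate, check existing contracts, otherwise purchase, otherwise reject) can be sketched as follows. The DVNS object and its three helpers (validate_customer, available_contract, purchase_bandwidth) are illustrative assumptions, not part of any specified API; the stub's canned responses stand in for real market interactions.

```python
class StubDVNS:
    """Illustrative DVNS with canned responses; a real DVNS would query the bandwidth market."""
    def __init__(self, resources_mbps, market_has_bandwidth):
        self.resources_mbps = resources_mbps
        self.market_has_bandwidth = market_has_bandwidth

    def validate_customer(self, customer_id, service):
        return customer_id.startswith("cust-")      # stand-in validation rule

    def available_contract(self, bandwidth_mbps):
        if bandwidth_mbps <= self.resources_mbps:
            return "CID-LOCAL"                      # covered by an existing contract
        return None

    def purchase_bandwidth(self, bandwidth_mbps):
        return "CID-MKT" if self.market_has_bandwidth else None

def handle_call_request(dvns, customer_id, service, bandwidth_mbps):
    """Return a Contract ID to complete the call, or None to reject it."""
    if not dvns.validate_customer(customer_id, service):
        return None                                 # customer may not request this service
    cid = dvns.available_contract(bandwidth_mbps)
    if cid is not None:
        return cid                                  # existing resources cover the call
    # Suspend setup and attempt a real-time purchase on the bandwidth market.
    return dvns.purchase_bandwidth(bandwidth_mbps)  # None -> reject, resources unavailable
```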

It should be noted that ATM is designed to allow the customer to renegotiate call parameters, such as Peak Cell Rate and Sustainable Cell Rate, even after the call is established. If a bandwidth provider plans to fully support ATM, a different contract may be required to satisfy an upgrade request. This may require that the DVNS has the ability to renegotiate in the middle of a call. As the call may then have two or more Contract IDs, the DVNS could close out an EDR record and treat the remainder of the connection as a new call, assigning a new EDR.

In evaluating the real-time purchasing of bandwidth during call establishment, one pertinent area is the time required to complete a transaction. As many protocols such as ATM have timeout values for call setup, it may be necessary to stay within these specifications. These timeout values are typically high to accommodate network congestion. In addition, some of these values can be tuned by vendor equipment.

Setting CPE Thresholds

One of the advantages of the bandwidth market and bandwidth contracts is the ability to control CPE usage patterns. As a DVNS may need to assign a contract to complete all call setups, information in this contract can be passed to the CPE. Based on a variety of factors such as the customer's credit limit, the CPE can be instructed to terminate a call when it hits certain thresholds. These thresholds could be based on call duration or cell counts. This may be an excellent mechanism for supporting pre-paid billing.
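The threshold check the CPE might apply per call reduces to a couple of comparisons. The function name and parameters here are illustrative assumptions; a real CPE would evaluate these against counters it already maintains.

```python
def should_terminate(duration_s, cell_count, max_duration_s=None, max_cells=None):
    """Return True once a call hits a contract-derived duration or cell-count threshold."""
    if max_duration_s is not None and duration_s >= max_duration_s:
        return True
    if max_cells is not None and cell_count >= max_cells:
        return True
    return False
```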

In addition, the DVNS may be configured with certain cost thresholds for a particular customer.

When establishing an account, the customer could instruct the DVNS not to allow video conference calls if the rate is greater than $1.00 a minute. If the DVNS is unable to satisfy a call request within certain pre-defined thresholds, the CPE may be instructed that the resources are not available, possibly notifying the customer the reason that the call could not be setup (e. g. rates too high). With little effort, this could be extended to allow the customer to configure the information directly into the CPE, which in turn would pass it to the DVNS in User Defined Information Elements during call setup.

Hot Billing

Another advantage of bandwidth contracts is their ability to support hot billing. By requiring a contract in order to complete a call, the DVNS could take advantage of pricing information inherent in the agreement. If the DVNS forwards this pricing information along with the contract to the CPE, the CPE can notify the user on a real-time basis of how much has been spent during the call. In addition, because the rating information accompanies the bandwidth contract, the DVNS can calculate the cost of the service and debit the user immediately.
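Hot billing amounts to multiplying contract rates by usage as it accumulates. A minimal sketch, with all names assumed for illustration: a running charge the CPE could display, and an immediate debit the DVNS could apply against a prepaid balance.

```python
def running_charge(rate_per_cell, cells_used):
    """Real-time spend the CPE could display to the user during the call."""
    return rate_per_cell * cells_used

def debit(balance, rate_per_cell, cells_used):
    """Immediate debit by the DVNS, using rating info carried with the contract."""
    return balance - running_charge(rate_per_cell, cells_used)
```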

It is important to note that taxation may need to be evaluated if the bandwidth provider supports this model.

Clearinghouse Function

In addition to providing markets to buy and sell bandwidth, the bandwidth provider may also provide a clearinghouse function. As all usage data may be tagged with a Contract ID, the contracts generated in the bandwidth market may be excellent tools for rating calls and determining revenue allocation. These contracts may be forwarded to the rating and settlements engines, providing important information needed for each of these processes. Usage data may be correlated to the appropriate contract, which may provide rating information, service level guarantees, and revenue allocation information. This information may be used by the rating and net settlements processes.

As different bandwidth market segments may have different contract structures, it may make sense for the clearinghouse function to mimic the bandwidth market structure. Pre-Sold Bandwidth of a bandwidth provider could be cleared by a Pre-Sold Bandwidth Clearing function. Likewise, the Open and Closed markets could have their own clearing functions. Eventually these may feed into one larger clearing process, which provides net settlements functions between a bandwidth provider and its distributors.

One of the key functions of the clearinghouse is to offer a mechanism to bill back services between distributors. If a DVNS in Thailand purchases bandwidth from an American distributor in order to complete a video call to the United States, the American distributor needs some mechanism for receiving payment from the Thai DVNS. As all distributors must deal with a bandwidth provider at some level, it makes sense for the bandwidth provider to provide clearing functions between distributors. The clearing function may allow the US DVNS to bill the Thai DVNS for the bandwidth that it used. The Thai DVNS may then bill its customer for the call.

By leveraging a bandwidth provider's fiduciary relationship with each DVNS, the bandwidth market, when coupled with a clearinghouse function, provides a mechanism for one distributor to indirectly bill another distributor's customers.

Figure 9 illustrates a method of performing clearing and settlement functions in a bandwidth market environment. First, terms regarding a reallocation of bandwidth from a seller to a buyer are received in operation 900. These terms may be received from input of the seller and buyer. Alternatively, the terms may be taken from a set of guidelines concerning the transaction. In any case, the terms may set forth, for example, the purchase price, time for transfer of the bandwidth, penalties, latency requirements, etc. See the discussion with reference to Figures 10 through 14 below for more detail.

Then, in operation 902, an amount of money the buyer owes the seller for the reallocated bandwidth is determined based on the terms regarding the reallocation of bandwidth. Most often, this may be calculated as the price per unit of bandwidth times the number of units of bandwidth being sold, taking into account any penalties and discounts. If amounts of bandwidth of more than one seller are sold together, such as under a contract as discussed above, the amount of money the buyer owes each seller is calculated. More detail is provided below in the discussion referencing Figures 10 through 14.
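The arithmetic of operation 902 can be sketched as below; the term names (price_per_unit, units, penalties, discounts, seller) are assumed for illustration and are not drawn from the specification.

```python
def amount_owed(terms):
    """Price per unit times units sold, adjusted for penalties and discounts (operation 902)."""
    total = terms["price_per_unit"] * terms["units"]
    total += terms.get("penalties", 0.0)    # penalties owed by the buyer increase the total
    total -= terms.get("discounts", 0.0)
    return total

def settle_multiple_sellers(contracts):
    """When bandwidth from several sellers is sold together, compute each seller's amount."""
    return {c["seller"]: amount_owed(c) for c in contracts}
```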

Finally, in operation 904, the buyer is notified of the amount of money the buyer owes the seller. Notification may be made in a variety of ways. One is through email. Another is via facsimile. Yet another is an automated voice message sent via telephone. Also, a printout with the amount on it (i.e., a bill) may be sent to the buyer via a delivery service such as the United States Postal Service.

Optionally, the present invention may verify that the terms regarding the reallocation of bandwidth have been complied with. This could include verifying the amount of bandwidth that the seller is offering for sale. This could also include verifying that the seller has relinquished control of the bandwidth. Further, the buyer's access to the newly purchased bandwidth could be verified.

In one embodiment of the present invention, usage data may be received from the buyer and used to determine the amount of money the buyer owes the seller for the reallocated bandwidth. In such an embodiment, the buyer could be allowed to purchase bandwidth according to the buyer's requirements. The buyer would then only be liable for the amount of bandwidth actually used, plus incidental costs.

The usage data may also be used to determine the cost per unit of bandwidth. Bandwidth used during peak hours is most often more valuable than, say, bandwidth used in the middle of the night. Thus, the usage data could include times of use of the bandwidth as well as the particular amount of bandwidth used during peak hours.
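The peak/off-peak rating just described reduces to a per-record rate lookup. In this sketch, the record shape (hour_of_day, units_used) and the peak window are assumptions for illustration only.

```python
PEAK_HOURS = range(8, 20)   # assumed peak window: 08:00-19:59

def usage_cost(usage_records, peak_rate, offpeak_rate):
    """Rate each (hour_of_day, units_used) record at the peak or off-peak price."""
    total = 0.0
    for hour, units in usage_records:
        rate = peak_rate if hour in PEAK_HOURS else offpeak_rate
        total += rate * units
    return total
```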

Optionally, the usage data may be correlated with corresponding terms via a contract identifier (Contract ID as discussed above) associated with the usage data. The contract identifier would allow the DVNS or a settlements process to correlate the use of bandwidth back to the original transaction to ensure that the proper party is being billed.

A transaction fee may be charged for performing the determination of the amount of money the buyer owes the seller for the reallocated bandwidth. The transaction fee may be a percentage of the total value of the bandwidth traded or sold, or may be a flat fee charged per transaction.
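Either fee structure described above is a one-line computation; a sketch with assumed parameter names:

```python
def transaction_fee(total_value, percent=None, flat_fee=None):
    """Fee for the settlement determination: a percentage of value traded, or a flat charge."""
    if percent is not None:
        return total_value * percent
    return flat_fee
```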

Additionally, as the present invention manages both the bandwidth market and Clearinghouse functions, it is also the natural choice for arbitrating disputes between distributors.

In an exemplary embodiment of the present invention, operations 900 and 902 of Figure 9 are handled by a data processing based apparatus which makes an automated trading market for one or more amounts of bandwidth. The system retrieves the best available bid and asked prices from a remote database covering the ensemble of institutions or others making a market for the relevant amounts of bandwidth. Data characterizing each bandwidth buy/sell order requested by a customer is supplied to the system. The order is qualified for execution by comparing its specific content fields with predetermined stored parameters. The stored parameters include items such as the operative bid and asked current market prices, the amount of bandwidth available for customer purchase or sale as appropriate, and the maximum acceptable single order size.

As used herein, the terms "buy" and "sell" refer to customer and distributor purchases and sales.

It should be noted that when a customer purchases an amount of bandwidth, the market maker sells the amount of bandwidth from its position, either reducing a long position, increasing a short position, or both where the amount of bandwidth sold to the customer exceeds the initial long position. When a customer sells bandwidth, the market maker adds bandwidth to its position and/or reduces a short position in the bandwidth.

The system may be implemented by any digital data processing equipment per se well known to those skilled in the art, e.g., any common bus system interconnecting a digital processor, manual data entry terminal apparatus, one or more memories (one of which contains the controlling program), and output signaling apparatus such as a cathode ray tube and printer. The system may be coded in any program language per se well known to those skilled in the art. The process variables may be of any form which conforms to the constraints of the particular language being used, and the below listed variables are for purposes of illustration only.

In the operation of an illustrative system, the below listed process variables may be utilized:

Order Variables

BWTH - An order field identifying a particular amount of bandwidth a customer wishes to buy or sell.

AMT - Amount of bandwidth BWTH in a transaction.

CUSTID - Customer identification.

B/S - Buy vis-a-vis sell bit, identifying whether the customer wishes to buy or sell bandwidth BWTH.

PR/M - An order variable field containing a customer price for a limit order (a minimum price for a sale of bandwidth or a maximum price he will pay for a purchase), or a code designating a market order where the customer will accept the currently prevailing market price.

SP - Special instructions field (e.g., a special commission structure or the like).

ORN - Order number (usually sequential).

ORIGID - Identification of the originator of the transaction (e.g., a branch office or account executive).

Market Trade Criteria

BSTB(BWTH) - Best bid price for the bandwidth BWTH as retrieved from the Bandwidth Market, i.e., the highest price some market maker is willing to pay for the amount of bandwidth. This is an indexed variable, or array, having one element for each amount of bandwidth handled by the system proprietor. The other arrays below are similarly indexed by BWTH.

BSTA(BWTH) - Best asked price for the amount of bandwidth BWTH supplied by the Bandwidth Market, i.e., the lowest price at which a market maker is willing to sell the bandwidth BWTH.

BSZ(BWTH) - Buy size, which is the amount of bandwidth BWTH available for customer purchase at a particular price from the system proprietor.

SSZ(BWTH) - The amount of bandwidth BWTH that the market maker will accept from customer sales at a particular price (a sell size array).

ORSZ(BWTH) - The maximum acceptable order size which the system operator will accept for the bandwidth BWTH.

Profitability Variables

AVCST(BWTH) - Average cost per unit of the bandwidth BWTH held by the market maker.

POS(BWTH) - The amount of bandwidth (current position) of each type of bandwidth BWTH held by the market maker. POS(BWTH) is positive for a long position and negative for a short position.

LPOS(BWTH) - The previous (last) position of the market maker in the bandwidth BWTH before execution of a current trade in BWTH.

PR(BWTH) - Profit to date made by the system operator on purchases or sales of bandwidth BWTH.
Figure 10 illustrates in overview a system arrangement for implementing the over the counter (or other) bandwidth market making system of one embodiment of the instant invention. For specificity and without limitation, over the counter bandwidth trading is presumed and it will further be assumed that the market making institution (system proprietor) is a brokerage firm.

The market making system includes composite digital computing apparatus 1000 which includes a processor and ancillary memory. The memory constituents of processor 1000 store the system controlling program, and an appropriate scratch pad memory stores all necessary processing operands. Digital computer 1000 is connected by an output line 1002 to a customer account processor 1004, for example the brokerage firm computer which handles all of the customer account records and files including customer balances, bandwidth positions, trade records, and the like. It should be understood that CPU 1000 and customer account processor 1004 could be combined in single, integrated computing equipment.

The processor 1000 communicates over a link 1006 with a trader terminal position 1008 containing an output signaling device such as a cathode ray tube display, and data input apparatus such as a keyboard. Trader terminal 1008 has two portions. A terminal position section T1 communicates with the processor 1000; and a section T2 is connected by link 1010 to a bandwidth market system 1014. The trader terminal 1008 communicates its current bid and asked prices for amounts of bandwidth in which it makes a market to the bandwidth market via link 1010--as do other market makers bridged (1012) to link 1010. The terminal portions T1 and T2 may be one integrated smart terminal (computer) assembly, or two separate devices available to the trader at the station 1008.

The processor 1000 receives and stores the best (highest) bid (processing variable BSTB(BWTH)) for each amount of bandwidth (BWTH) in which it makes a market, and the best (lowest) asked price BSTA(BWTH) from the bandwidth market system 1014 via a communications path 1016. The best bid and best asked prices as reported by the bandwidth market form the so-called "insider market" for over the counter amounts of bandwidth. Processor 1000 communicates to the bandwidth market system 1014 via a link 1018 each reportable, executed trade for various informational and regulatory purposes. Link 1018 may also report trades to the Consolidated Tape Authority (CTA) and the NASD National Market System (NMS) for subsequent reporting to the financial industry and general public. Communications path 1018 also connects processor 1000 with the NASD small order execution system (SOES) and computer assisted execution system (CAES) which can participate in relatively small order execution.

Input/output network 1020 provides data communication with the various branch offices 1024 of the brokerage house. Line 1020 permits communication with either the branch order entry clerk or directly to the account executives at each branch. While only one branch 1024 is shown in Figure 10, it is to be understood that a multiplicity of branches 1024 are in data communication with processor 1000. Computer 1000 also communicates with third party financial houses 1026 via a two-way data link 1022 (e.g., including INSTINET).

To characterize the Figure 10 arrangement in overview, the operative (best bid, best asked inside market) prices for each amount of bandwidth in which the system proprietor makes a market are communicated over link 1016 from bandwidth market and repose in memory at processor 1000.

The market maker has a position in each amount of bandwidth in which he makes a market and the particulars of that position also repose in memory within the composite processor 1000.

Orders for trades in the relevant amounts of bandwidth are funneled to the processor 1000 in real time as they occur. Orders can be received in several ways. For example and most typically, orders may be generated by the brokerage firm's account executives at the branches 1024 and communicated to the CPU 1000 via the communication path 1020. Orders are also supplied to the processor 1000 from third party financial sources 1026 (e.g., other brokerage firms, directly from computer equipped customers, banks, or the like) over communication network 1022. Each of the orders includes appropriate data fields outlined above and more fully discussed below, such as an identification of the office and customer or other originator of the order, bandwidth identification, price particulars, and so forth.

The processor 1000 first determines whether or not each received order can be executed, i.e., "qualifies" the order. There are various reasons why an order may not be executed by the market maker. Thus, for example, the customer may seek to sell an amount of bandwidth above the current bid price or to purchase the amount of bandwidth below the current asked price. A customer may seek to trade an amount of bandwidth which exceeds the amount which the particular market maker is willing to accommodate, either in gross or for any one order. Orders not executable, i.e., orders not qualified, are either stored in memory in the processor 1000 for later execution if they become qualified (such as by a favorable change in the market price for an amount of bandwidth which can then accommodate the customer's price limits) or are forwarded to other market makers for potential execution over communication links 1018 or 1022.

Assuming that an order is executable, the processor 1000 "executes" the order, appropriately adjusting all balances. Information characterizing the executed order is sent to computer 1004 for customers of that brokerage house or reported to the appropriate other institution via links 1018 or 1022. The specifics of appropriate transactions may also be reported to the NASD for informational purposes and to the Consolidated Tape Authority and so forth and may become ticker entries.

The bandwidth market system 1014 is apprised of the current quotations from all traders making a market in the subject amounts of bandwidth via communication path 1010. The insider market (best bid and asked prices) is communicated to the market maker's processor 1000 via link 1016. When the insider market price changes (a variation in the best bid or best asked price), the processor 1000 in accordance with the instant invention signals the trader at station 1008, who is then given the opportunity to readjust his quantity or other market-characterizing criteria.

Following each price change, all non-executable orders stored in the processor 1000 memory are reviewed to determine whether they have become executable and, if so, they are in fact executed.

Processing then continues as above described to accommodate the real time order inflow.

With the above overview in mind, attention is now directed to Figure 11, which is a flow chart of data processing for qualifying for execution an order communicated from a branch order entry clerk or account executive. Proceeding from a start node 1100, the data fields comprising the next-recorded order are loaded (block 1102). The order data fields include the name of the amount of bandwidth (BWTH); the total amount of bandwidth for the transaction (AMT); customer identification (CUSTID); a buy vis-a-vis sell bit (B/S); the customer's price limit if he wants one or, if not, a market order designator (PR/M); special instructions if any (SP); an order number (ORN); and an originator (e.g., office, account executive, or third party institution) identification (ORIGID).

The computer includes a number of stored variables characterizing the market for the bandwidth BWTH which the customer wishes to trade, and the market maker's own criteria for his participation in BWTH trading. Thus, for example, the computer stores the best bid BSTB(BWTH); the best asked price BSTA(BWTH); the buy size BSZ(BWTH), i.e., the total amount of bandwidth BWTH the market maker is willing to sell for customer purchase at the current price; the market maker's sell size SSZ(BWTH); the maximum single order size for bandwidth BWTH which the market maker will accept, ORSZ(BWTH); the present amount of bandwidth BWTH long or short in the market maker's position, POS(BWTH)--long being positive and short being negative; the average cost per unit of bandwidth AVCST(BWTH) for the bandwidth BWTH long or short in the market maker's portfolio; and a running profit total PR(BWTH) of the market maker in the bandwidth BWTH. Block 1104 functioning next determines if order processing is operative in the normal, automated market mode for the particular amount of bandwidth BWTH. If not (please see below with respect to Figure 14), program flow branches to block 1106 to store the order for later retrieval or manual execution.

Program flow then returns to start node 1100 for retrieval of the next order. Assuming normal automated mode processing (YES output of test 1104), program flow continues to test 1108 to verify the incoming data (order) to assure correct reception and internal consistency. If an error occurred, an error message is produced (block 1110) and program flow returns to the start node 1100 for entry of the incoming next order. In the usual case, the order is verified at test 1108, and program flow continues to block 1112 to determine if the order is a market order or has a limit price (test of the PR/M variable).

If the order is not a market order but rather is to be executed at or better than a customer specified price (NO branch from test 1112), program flow proceeds to block 1116, which distinguishes a customer buy (B/S=B) from a sell order (B/S=S). If it is a buy order (YES (BUY) branch from test 1116), block 1118 determines if the price at which the order is to be executed (contents of PR/M) is greater than or equal to the prevailing asked price (BSTA(BWTH)) of the bandwidth. If the purchase price of the order to be executed is greater than or equal to the best asked price (YES branch of test 1118), block 1120 determines if the amount of bandwidth AMT in the trade is less than or equal to the amount of bandwidth available for purchase from the market maker, i.e., less than or equal to the buy size BSZ(BWTH). If so (YES branch of test 1120), the amount of bandwidth AMT in the transaction is compared to the maximum acceptable single order size ORSZ(BWTH)--step 1130. Assuming this final criterion is satisfied (NO exit), the order is qualified for execution, and program flow continues to block 1132 where a variable storing the last position in bandwidth BWTH, LPOS(BWTH), is set equal to POS(BWTH). The program thereafter proceeds to order execution as detailed in Figure 12 and discussed below.

If the price or buy size tests performed at blocks 1118 and 1120 fail (NO branch), or if the order size test performed at block 1130 indicates the order is too large (YES branch), the order is not qualified and will not be executed. When any of these conditions obtains, program flow branches to block 1126 to store the order for possible later execution if market conditions or market maker criteria change. An appropriate report is generated at block 1128 via terminal 1008 (Figure 10) to characterize the non-executed order. Thereafter program flow returns to node 1100 to process the next received order. The human market system controller receiving the report may of course override and complete the trade by hand or manual entry--e.g., by authorizing more bandwidth (increasing BSZ(BWTH)) if that criterion inhibited order execution.

The foregoing analysis has considered a limit buy order. Returning now to block 1116, program flow for a customer sale will next be considered. If the buy/sell flag signals a sale, program flow branches to block 1122 where the PR/M limit price is compared to the best bid price (PR/M ≤ BSTB(BWTH)). If so (YES branch), the amount of bandwidth AMT in the order is compared against the available sell size (AMT ≤ SSZ(BWTH)) at test 1124. If there is sufficient bandwidth in the sell size (YES branch), block 1130 determines if the amount of bandwidth (AMT) is greater than the maximum permissible single order size (ORSZ(BWTH)). If the amount of bandwidth AMT does not exceed ORSZ(BWTH), all criteria are satisfied and the sell order will be executed. Processing proceeds to block 1132 where the "last" position intermediate processing variable LPOS(BWTH) is set equal to POS(BWTH), and order execution proceeds as set forth in Figure 12. If any price or size test performed at blocks 1122, 1124, or 1130 fails, program flow branches to block 1126 for storage and reporting (block 1128).

The above description details order qualification for a limit price transaction. In a trade that is to be executed at market, the price tests performed at block 1118 for a buy and block 1122 for a sale are by-passed. Accordingly, when block 1112 determines that the order is to be executed at market (PR/M=market), block 1114 is reached and branches the program to size test 1124 for a customer sale and test 1120 for a customer purchase. The system then operates in the manner above described, qualifying the order for execution if the two operative size criteria are satisfied or, otherwise, storing the order and reporting (step 1128).
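The qualification tests of Figure 11 can be collected into a single function. The dict-based representation of orders and market criteria below is an assumption for illustration; the field names reuse the process variables listed earlier (BWTH, AMT, B/S, PR/M, BSTB, BSTA, BSZ, SSZ, ORSZ).

```python
def qualify_order(order, mkt):
    """Qualification tests of Figure 11 (blocks 1112-1130). Returns True if executable."""
    bwth, amt = order["BWTH"], order["AMT"]
    if order["PR/M"] != "MARKET":                 # limit order: price tests (1118 / 1122)
        if order["B/S"] == "B" and order["PR/M"] < mkt["BSTA"][bwth]:
            return False                          # buy limit below the best asked price
        if order["B/S"] == "S" and order["PR/M"] > mkt["BSTB"][bwth]:
            return False                          # sell limit above the best bid price
    if order["B/S"] == "B" and amt > mkt["BSZ"][bwth]:
        return False                              # exceeds buy size (test 1120)
    if order["B/S"] == "S" and amt > mkt["SSZ"][bwth]:
        return False                              # exceeds sell size (test 1124)
    return amt <= mkt["ORSZ"][bwth]               # maximum single order size (test 1130)
```

A market order (PR/M = "MARKET") bypasses the price tests and is checked only against the size criteria, mirroring blocks 1112 and 1114.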

Figure 12 illustrates data processing for executing and accounting for orders that have been qualified for execution by the order qualifying data processing of Figure 11. A block 1200 determines whether the order is a customer purchase or sale. If the buy/sell digit indicates a customer buy, program flow branches to block 1202 for decrementing the amount of bandwidth remaining available for customer purchase (BSZ(BWTH)) from the market maker. BSZ(BWTH) is decremented by the amount of bandwidth (AMT) purchased by the customer, i.e., BSZ(BWTH) = BSZ(BWTH) - AMT. The market maker's position in the bandwidth is algebraically decremented by the amount of bandwidth purchased, POS(BWTH) = POS(BWTH) - AMT (step 1204). If at block 1200 it is determined that the order is a sell, block 1206 decrements sell size SSZ(BWTH) by the amount of bandwidth sold by the customer, SSZ(BWTH) = SSZ(BWTH) - AMT. The market maker's position POS(BWTH) in the bandwidth is updated by algebraically incrementing it by the amount of bandwidth sold by the customer, POS(BWTH) = POS(BWTH) + AMT (step 1208).

After the position POS(BWTH), buy size BSZ(BWTH), and sell size SSZ(BWTH) variables have been updated, program flow continues to block 1210 where messages confirming execution of the trade are furnished to the customer account processor 1004, which sends out confirmations of the transaction and otherwise performs the necessary accounting functions for the customer account. The branch clerk or account executive 1024 is also notified of order execution via link 1020. The order variables CUSTID, SP, ORN, and ORIGID are used to appropriately distribute trade reporting, proper commission computation, and the like. Further, the transaction price is typically communicated to the bandwidth market system 1014 and the various tape services for reporting. The updated internal market maker variables (e.g., SSZ(BWTH), BSZ(BWTH), LPOS(BWTH), POS(BWTH)) are stored in memory for use in subsequent order transactions (step 1212). Program flow proceeds to block 1214 to update the market maker's average per unit of bandwidth inventory cost AVCST(BWTH) and profit PR(BWTH) internal management variables for the bandwidth BWTH, the data processing for which is described below in conjunction with Figures 13 and 14. After inventory updating and profit accounting, data processing exits at node 1216 ready to process the next trade.
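The execution updates of Figure 12 amount to a few arithmetic adjustments. This sketch reuses the table's variable names in an assumed dict representation; it covers only the size and position updates (blocks 1200-1208 and 1132), not the reporting steps.

```python
def execute_order(order, mkt, book):
    """Execution updates of Figure 12 for a qualified order."""
    bwth, amt = order["BWTH"], order["AMT"]
    book["LPOS"][bwth] = book["POS"][bwth]    # remember the last position (block 1132)
    if order["B/S"] == "B":                   # customer buy: market maker sells from position
        mkt["BSZ"][bwth] -= amt               # BSZ(BWTH) = BSZ(BWTH) - AMT (block 1202)
        book["POS"][bwth] -= amt              # POS(BWTH) = POS(BWTH) - AMT (block 1204)
    else:                                     # customer sell: market maker adds to position
        mkt["SSZ"][bwth] -= amt               # SSZ(BWTH) = SSZ(BWTH) - AMT (block 1206)
        book["POS"][bwth] += amt              # POS(BWTH) = POS(BWTH) + AMT (block 1208)
```

Note that a customer buy larger than the market maker's long position simply drives POS negative, i.e., into a short position, as described earlier.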

Figures 13 and 14 are the left and right portions of a flow chart for the data processing of block 1214 (Figure 12) for updating the inventory cost (average price per unit of bandwidth AVCST(BWTH)) of the bandwidth BWTH and the running profit PR(BWTH) realized from the execution of each trade. To this end, the last position of the market maker LPOS(BWTH) before the just executed trade is tested to determine whether the market maker was previously long or short in the bandwidth BWTH (step 1303). If LPOS(BWTH) ≥ 0, then the market maker's previous position was long and program flow proceeds to block 1302, where the present (post trade) position of the market maker POS(BWTH) is tested to determine if it is long (POS(BWTH) ≥ 0? = YES) or short (NO). If the market maker's present position is short (NO branch), the trade has closed out and reversed the long position, and program flow branches to block 1304 to update profit PR(BWTH) for bandwidth BWTH, as by:

PR(BWTH) = PR(BWTH) + (LPOS(BWTH) * (BSTA(BWTH) - AVCST(BWTH))). Eq. 1.

In the right side of the programming statement of Equation 1, the difference BSTA(BWTH) - AVCST(BWTH) is the profit (or loss) margin on the sale, representing the difference between the current asked price BSTA(BWTH) at which the trade occurred and the average cost per unit of bandwidth AVCST(BWTH). When multiplied by the amount of bandwidth previously in the long position (LPOS(BWTH)), the factor following the plus sign in the statement of Equation 1 is the profit (or loss) for the transaction. When added to the previous running profit total PR(BWTH), the final result stored in PR(BWTH) is an updated running total of the profit of the market maker in the bandwidth BWTH since the PR(BWTH) storage array element was last cleared.

Thereafter for the assumed event, program flow proceeds to block 1306 where the average cost per unit of bandwidth of the new short position in the bandwidth is calculated. In this instance, the average cost of the bandwidth is equal to the operative asked price, i.e., AVCST(BWTH) = BSTA(BWTH). Figure 13 programming then exits at the PROCEED node.

If at block 1302 the market maker's present position is long (POS(BWTH) >= 0? = YES), program flow continues to test 1308 where the buy/sell digit determines whether the transaction is a customer purchase or sale. If the trade is a customer sale, thus increasing the initially long LPOS(BWTH) position, it is an inventory transaction and program flow branches to block 1310 to update the average cost of the BWTH bandwidth position:

AVCST(BWTH) = ((AMT * BSTB(BWTH)) + (AVCST(BWTH) * LPOS(BWTH))) / POS(BWTH). Eq. 2.

In the statement of Equation 2, AMT * BSTB(BWTH) is the cost of the bandwidth just purchased from the customer and AVCST(BWTH) * LPOS(BWTH) is the cost of the previous LPOS(BWTH) inventory. Thus, by dividing the sum of the new and former purchases by the amount of bandwidth held POS(BWTH), the new average cost AVCST(BWTH) is determined.

If at block 1308 the transaction was determined to be a customer purchase (market maker sale), program flow proceeds to block 1312 where the market maker's profit is updated:

PR(BWTH) = PR(BWTH) + (AMT * (BSTA(BWTH) - AVCST(BWTH))). Eq. 3.

The above Figure 13 processing has reviewed the three possibilities beginning with a long (positive) market maker bandwidth position entering a transaction, as signaled by the contents of LPOS(BWTH). Comparable functioning obtains if the contents of LPOS(BWTH) in test 1300 are negative, signaling an initial short position (NO output of test 1300). Assuming such an initial short position, program flow passes to that shown in Figure 14, which is the analog of that shown in Figure 13.
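The three Figure 13 branches can be summarized in a short sketch. This is illustrative only: the function name, the calling convention, and the treatment of positions as signed quantities (negative meaning short) are assumptions and not part of the specification.

```python
def update_long_side(lpos, pos, avcst, pr, amt, bsta, bstb, customer_buys):
    """Sketch of Figure 13: profit/cost update when the maker entered long.

    lpos/pos  -- signed position before/after the trade (LPOS/POS(BWTH))
    avcst     -- average cost per unit before the trade (AVCST(BWTH))
    pr        -- running profit before the trade (PR(BWTH))
    amt       -- trade size in units of bandwidth (AMT)
    bsta/bstb -- best asked / best bid price (BSTA/BSTB(BWTH))
    customer_buys -- True for a customer purchase (market maker sale)
    """
    if pos < 0:
        # Long position flipped to short: realize P/L on the old long (Eq. 1)
        # and carry the new short at the operative asked price (block 1306).
        pr += lpos * (bsta - avcst)
        avcst = bsta
    elif not customer_buys:
        # Customer sale grows the long inventory: weighted-average cost (Eq. 2).
        avcst = (amt * bstb + avcst * lpos) / pos
    else:
        # Customer purchase from inventory: realize margin on AMT units (Eq. 3).
        pr += amt * (bsta - avcst)
    return avcst, pr
```

The Figure 14 short-side branches mirror this structure with the roles of the bid and asked prices interchanged.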

In brief, a test 1400 of Figure 14 determines whether the present position POS(BWTH) is short or long. If the present position is also short (POS(BWTH) < 0), program flow proceeds to block 1402 where the buy/sell bit is read. If the buy/sell digit indicates a customer buy, the transaction represents an inventory accumulation (the previous short position in LPOS(BWTH) being increased in POS(BWTH)) and program flow branches ("YES") to block 1404 where the average cost of the bandwidth is updated:

AVCST(BWTH) = ((AMT * BSTA(BWTH)) + (AVCST(BWTH) * LPOS(BWTH))) / POS(BWTH). Eq. 4.

If at block 1402 the transaction is determined to be a sell, block 1406 updates the profit total:

PR(BWTH) = PR(BWTH) + (AMT * (BSTB(BWTH) - AVCST(BWTH))). Eq. 5.

As a final possibility in Figure 14, if at block 1400 the market maker's present position is long (POS(BWTH) < 0? = NO), the transaction was necessarily a customer sale (market maker purchase), and program flow branches to block 1408 where the profit PR(BWTH) is updated:

PR(BWTH) = PR(BWTH) + (LPOS(BWTH) * (BSTB(BWTH) - AVCST(BWTH))). Eq. 6.

The average cost per unit of bandwidth of the new POS(BWTH) long position is the best bid (transaction) price, i.e., AVCST(BWTH) = BSTB(BWTH) (block 1410). This concludes the profit and cost updating for the transaction.

In most instances, more than one institution makes a market in a particular amount of bandwidth.

Any market maker may change its bid or asked price at any time, transmitting the change to the bandwidth market system via link 1010 as discussed above. In such an instance, it may be necessary to update the market maker's own prices, as where the change affects the inside market (best current bid and asked), to afford the customer execution at the best prevailing price.

Figure 15 is a flow chart illustrating data processing upon receipt of a new market maker quotation from the bandwidth market system 1014. Beginning at an interrupt entry node 1500, the system is placed in non-automatic execution mode (step 1502), which prevents automatic execution of any orders in the particular amount of bandwidth (BWTH) until the market maker has had a chance to respond to the new market prices. If at block 1504 it is determined that the best bid BSTB(BWTH) or best asked BSTA(BWTH) price has changed, program flow proceeds to block 1506 where the best bid BSTB(BWTH) and/or best asked price BSTA(BWTH) are updated to the new values received from the bandwidth market.

The system then interactively communicates with the trader terminal 1008 in block 1508. A prompt appears on trader T1's terminal 1008 requesting input regarding possible changes in the maximum acceptable order size (ORS(BWTH)), the amount of bandwidth available for customer purchase (BSZ(BWTH)), and the amount of bandwidth acceptable for customer sales (SSZ(BWTH)). After input of the requested parameters (or initialization to default values), any orders previously stored in memory are reprocessed (block 1510), as these orders may now qualify for execution due to the change in price or other parameters. After stored orders are reviewed and executed if possible, data processing is restored to automatic mode (block 1512), as by simply setting a variable AUTO to a predetermined state (e.g., "AUTO"), and interrupt mode is exited at node 1514. If at block 1504 it is determined that the inside market price was not changed by the new market maker quotation, program flow branches directly to block 1512 to restore automatic mode and exit interrupt mode.
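The Figure 15 interrupt handling can be sketched as a small handler. The state dictionary, callback names, and calling convention are assumptions chosen for illustration; the two callables stand in for the trader-terminal prompt (block 1508) and the stored-order review (block 1510).

```python
def on_market_maker_quote(state, new_bstb, new_bsta, prompt_trader, reprocess_stored_orders):
    """Sketch of Figure 15: handle a new quotation from the bandwidth market."""
    state["AUTO"] = False                       # block 1502: suspend automatic execution
    if new_bstb != state["BSTB"] or new_bsta != state["BSTA"]:   # test 1504
        state["BSTB"], state["BSTA"] = new_bstb, new_bsta        # block 1506
        prompt_trader(state)                    # block 1508: ORS/BSZ/SSZ updates
        reprocess_stored_orders(state)          # block 1510: re-qualify stored orders
    state["AUTO"] = True                        # block 1512: restore automatic mode
    return state
```

If the inside market is unchanged, the handler skips straight to restoring automatic mode, matching the direct branch from block 1504 to block 1512.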

The market making system of the above-described invention has thus been shown to automatically accommodate a random, real-time order flow for bandwidth purchases or sales. Incoming orders are first examined to ensure that they satisfy currently operative criteria regarding bandwidth price, bandwidth availability and bandwidth order size. Orders qualified under the existing criteria are executed, and profit and inventory price internal management storage elements are appropriately updated to reflect the several transactions experienced by the system. Orders not qualified for execution are stored and re-examined from time to time for possible later executability. The system proceeds automatically without human intervention, save to update operative market maker order qualification criteria.

In another exemplary embodiment of the present invention, payment of the amount of money that the buyer owes the seller is requested, such as by sending the buyer a bill. Further, the amount of money for the reallocated bandwidth can be received from the buyer, after which it will be processed and sent to the seller, placed in an account of the seller, and/or used to pay amounts of money the seller owes to a third party or the transaction fee.

In an alternate embodiment, an operator captures consumer payment directives using a telephone with a small text display. These consumer payment directives are sent to a central computer operated by the system, which then uses an automated teller machine network to obtain funds in the amount of the payment from the consumer's automated teller machine-accessible bank account. Once the funds are obtained into an account of the system operator, the system determines how to pay the biller, either by wire transfer, debit network using the biller's bank account number, or by check and list.

Several exemplary embodiments of the present invention for performing clearing and settlement functions include bill pay or remittance processing systems as set forth below. For brevity and clarity, the consumer's account with the biller is referred to herein as the C-B ("consumer-biller") account, thereby distinguishing that account from other accounts: the consumer's account with its bank, the biller's account with its bank, etc. In most cases, the biller uses the C-B account number to uniquely identify the consumer in its records.

Bill pay transactions, however accomplished, have several common elements, which are either explicit or can be implied by the nature of the transaction. The first is presentment: a biller presents the consumer with a bill showing the C-B account number and an amount due. The second common element is payment authorization: the consumer performs some act (e.g., signs a check or other negotiable instrument) which authorizes the consumer's bank to transfer funds from the consumer's account to the biller; this element might occur after presentment or before (as in the case of pre-authorized withdrawals), and need not be explicit (delivery of a check is implicit authorization for the amount of the check). This element is almost always accompanied by some action by the consumer's bank to ensure payment to it from the consumer, such as withdrawing the funds from the consumer's bank account, posting the amount to the consumer's credit card account or line of credit, etc. The third common element is confirmation to the consumer of the funds withdrawal. The fourth common element is the crediting of the payment to the C-B account. In some cases, the biller acknowledges the crediting with nothing more than refraining from sending a past-due bill.

Figures 16 through 18 show block diagrams of bill pay systems which implement these four common elements in different ways. In those block diagrams, the participants are shown in ovals, and the flow of material is shown by numbered arrows roughly indicating the chronological order in which the flows normally occur. The arrows embody a link, which is a physical link for paper flow, a data communications channel from one point to another, or other means for transferring material. Where several alternatives exist for a flow, the alternatives might be shown with a common number and a letter appended thereto, such as "2" and "2A".

"Material" refers to documents and/or information, whether paper-based ("postal mail"), electronic (e-mail, messages, packets, etc.), or another transfer medium. In most cases, the material which is flowing is shown near the arrow which links the material's source and destination.

Figure 16 is a block diagram of a paper bill pay system 1600, wherein billers send paper bills or coupon books to consumers and consumers return paper checks and payment coupons. The proof and capture process for these remittances is highly automated, except for the aptly-named "exception items." In bill pay system 1600, the participants are a consumer C (1602), a biller B (1604), consumer C's bank (Bank C) 1606, biller B's bank (Bank B) 1608 and, optionally, a lockbox operator 1610.

Bank C maintains consumer C's bank account 1612 and a clearing account 1614, while Bank B maintains biller B's bank account 1616 and a clearing account 1618. The material passing between the participants includes a bill 1620, a remittance 1622 comprising a check 1624 and a payment coupon 1626, an account statement 1628, an accounts receivable ("A/R") data file 1630, an encoded check 1634, which is check 1624 with MICR encoding, and possibly a non-sufficient funds ("NSF") notice 1636.

The flow of material between participants in bill pay system 1600 begins (arrow 1) when biller B sends bill 1620 through the postal mails to consumer C. Bill 1620 indicates a C-B account number and an amount due, and is typically divided into an invoice portion to be retained by consumer C and a payment coupon portion to be returned, each of which shows the C-B account number and amount due.

In response to receiving bill 1620, consumer C sends remittance 1622 to biller B (arrow 2). Remittance 1622 contains check 1624 drawn on consumer C's account 1612 at Bank C and payment coupon 1626, preferably enclosed in the return envelope provided by biller B. Biller B then MICR-encodes the amount of the remittance onto check 1624 to create encoded check 1634, deposits check 1634 (arrow 3), and credits consumer C's account in biller B's customer general ledger ("G/L") account database 1632. Alternately, remittance 1622 is mailed to lockbox operator 1610 (arrow 2A), which opens remittance 1622, MICR-encodes check 1624 to create encoded check 1634, and captures the C-B account number and amount of the check electronically to create A/R data file 1630. Lockbox operator 1610 then sends A/R data file 1630 to biller B, and sends encoded check 1634 to Bank B to be credited to biller B's account 1616 (arrow 3A).
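The lockbox path (arrows 2A and 3A) amounts to a simple capture routine: for each opened remittance, record the C-B account number and amount for the A/R data file and queue the MICR-encoded check for deposit. The record field names and function name below are assumptions for illustration.

```python
def lockbox_capture(remittances):
    """Sketch of the lockbox operator's capture step: build the A/R data
    file sent to the biller and the deposit batch sent to the biller's bank."""
    ar_file, deposit_batch = [], []
    for r in remittances:
        # A/R entry: C-B account number and remittance amount (for G/L posting).
        ar_file.append({"cb_account": r["coupon_account"], "amount": r["check_amount"]})
        # Deposit entry: the check, now MICR-encoded with its amount.
        deposit_batch.append({"check": r["check_number"], "micr_amount": r["check_amount"]})
    return ar_file, deposit_batch
```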

Because check 1634 is signed by consumer C, it authorizes Bank C to pass the amount of the check to Bank B after Bank B presents the check to Bank C. The signed check serves as the second common element of a bill pay transaction: authorization.

However encoded check 1634 reaches Bank B, Bank B then presents check 1634 to Bank C, along with other checks received by Bank B which were drawn on Bank C accounts (arrow 4). When Bank C receives check 1634, it withdraws the amount of the check from C's account 1612 and passes the funds to B's account at Bank B (arrow 5). Actually, this funds transfer occurs from C's account 1612 to clearing account 1614, to clearing account 1618, and then to B's account 1616, possibly with one or more intermediate settlement banks in the chain (omitted for clarity).

If the funds are not available in C's account 1612 to cover the amount of check 1634 or if C's account 1612 has been closed, then Bank C will return the check to Bank B, who will in turn return the check to biller B. Biller B will then have to reverse the transaction crediting consumer C's C-B account in G/L database 1632 and renegotiate payment from consumer C, all at significant cost to biller B. Even if check 1634 clears, the process of providing good funds to biller B is not instantaneous, since check 1634 must physically travel from biller B to Bank B to Bank C. Of course, if biller B has sufficient credit rating with Bank B, Bank B could move the funds from clearing account 1618 to B's account 1616 when Bank B receives check 1634.

At some time following the clearing of check 1634, biller B also updates its A/R records in G/L database 1632 to credit consumer C's C-B account, and Bank C confirms to consumer C the withdrawal of the amount of check 1634 by listing it on statement 1628 and/or by the return of cancelled check 1634. If the check doesn't clear, then biller B and other parties to the transaction unwind the payment.

One benefit of bill pay system 1600 is that, for nearly all billers, there is no need for biller enrollment (any consumer can pay a biller without prior arrangements or a waiting period).

Similar to the above system are the GIRO systems used in several countries in Northern Europe. The GIRO systems were set up there either by the government or by the postal system, which is a traditional supplier of financial services. In a GIRO system, it is mandated that each bill payer and each bill payee be assigned a GIRO number. The biller sends bills with its biller GIRO number on the payment coupons. The layout, shape, etc. of the GIRO payment coupons are also mandated, so a consumer will receive similar coupons with each bill. After reviewing the bill, the consumer simply adds their GIRO number to the payment coupon and signs it. Thus, the payment coupon also serves as a banking instrument similar to a check.

The consumers in a GIRO system are comfortable with it because the payment coupons all look the same. The consumer then mails the payment coupons to either a GIRO central processor or its own bank, which then sorts them by biller GIRO number and submits them to the biller. Since the payment coupons are all in a fixed format, they can be easily encoded in a machine-readable format, including the payment amount, which the biller pre-prints onto the coupon. If the consumer gives their GIRO number to the biller, the biller can pre-print that number on the payment coupon as well. Since all the coupons look the same, the banks can process them like a check and achieve economies of scale.
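The fixed-format property is what makes machine reading easy: a coupon can be parsed by column positions alone. The column layout below is invented purely for illustration; real GIRO layouts are nationally mandated and differ from this sketch.

```python
def parse_giro_coupon(line):
    """Sketch of machine-reading a fixed-format GIRO payment coupon.
    Assumed layout: 8-char biller GIRO number, 8-char payer GIRO number,
    10-digit amount in hundredths (pre-printed by the biller)."""
    return {
        "biller_giro": line[0:8].strip(),
        "payer_giro": line[8:16].strip(),
        "amount": int(line[16:26]) / 100,   # amount field is in cents
    }
```

Because every coupon shares one layout, a bank can sort by `biller_giro` and batch the coupons to billers exactly as the text describes.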

Figure 17 is a block diagram of an alternate bill pay system 1700, which reduces the effort required on the part of consumer C relative to bill pay system 1600, but which increases costs for billers. The difference between bill pay system 1700 and bill pay system 1600 is that consumer C initiates payment electronically (or by other non-check means).

Bill pay system 1700 includes most of the same participants as bill pay system 1600: consumer C, Bank C, Bank B, possibly a lockbox operator (not shown in Figure 17), and biller B, who is typically not a proactive or willing participant in this system. Additionally, a service bureau S (1702) and a Bank S (1704) are participants, with service bureau S maintaining a service database 1706 which is used to match bill payment orders with billers. The material passing among the participants includes bill 1620, as in the prior example, as well as a bill payment order 1708 and a related confirmation of receipt 1716 (both typically transmitted electronically), an enrollment package 1709, a biller confirmation 1710, and a bill payment 1712 ("check and list") which includes a check 1714.

In bill pay system 1700, consumer C enrolls by sending service bureau S (arrow 1) enrollment package 1709, comprising a voided check and a list of billers to be paid by S on behalf of C. S subsequently sends biller B biller confirmation 1710 (arrow 2) to verify (arrow 3) that C is indeed a customer of B.

With bill pay system 1600 (Figure 16), consumer C identifies the proper biller by the remittance envelope and the payment coupon, neither of which is available to service bureau S in bill pay system 1700. Thus, service bureau S must identify the correct biller for each bill payment order some other way. Typically, service bureau S does this by asking consumer C for biller B's name, address, telephone number and consumer C's account number with biller B (the "C-B account number"). Since neither Bank C nor service bureau S may have any account relationship with biller B, they must rely upon consumer C's accuracy in preparing enrollment package 1709, which is used to put biller B's information into service database 1706. Service bureau S typically requires this information only once, during biller enrollment, storing it to service database 1706 for use with subsequent payments directed to the same billers. Of course, if this information changes, service database 1706 would be out of date. If this information is wrong to start with, or becomes wrong after a change, service bureau S might send funds to the wrong entity. To reduce errors in biller identification, a service bureau will often disallow payments to a newly enrolled biller for a specified time period, giving service bureau S time to verify biller B and the C-B account structure with biller B via biller confirmation message 1710.

Sometime later, consumer C receives bill 1620 (arrow 4) and initiates bill payment order 1708 (arrow 5). Bill payment order 1708 includes authorization for service bureau S to withdraw funds from C's account 1612 to pay bill 1620, the amount to pay (not necessarily the amount due on bill 1620), the date on which to pay, and some indication of biller B as the payee. Service bureau S responds with confirmation of receipt 1716 indicating that bill pay order 1708 was received (arrow 6). Consumer C can send bill pay order 1708 in any number of ways, such as using a personal computer and modem (directly or through a packet or other data network), via an automated teller machine (ATM), a video touch screen, a screen phone, or a telephone Touch-Tone™ pad (TTP) interacting with a voice response unit (VRU). However this is done, service bureau S receives one or more bill pay orders from consumer C. These orders could be instructions to pay some amount for a bill or a set amount of money at periodic intervals.

Assuming that service bureau S has correctly identified and confirmed that biller B is a biller which consumer C desired to pay with bill pay order 1708, service bureau S passes the funds to biller B as biller payment 1712 (arrow 12) after securing funds to cover the remittance.

Bill payment can take several forms, as discussed below. In Figure 17, a "check and list" is depicted, which is common in the art. A check and list comprises a single payment, check 1714 drawn on service bureau S's account 1718, accompanied by a list of all consumers whose individual remittances are aggregated in the single check. The list shows C-B account numbers and payment amounts for each consumer included on the list, which should total to the amount of the single check 1714. This process brings some economies of scale to service bureau S, although at additional expense to biller B. In some cases, rather than endure the expense of checking over the list to ensure it matches the check amount, biller B will refuse to accept that form of payment.
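The biller-side reconciliation described above, that the list entries must total exactly to the single check, can be sketched as a small validation step. The function name and the decision to reject an unbalanced list outright are assumptions; a real biller might instead route it to exception handling.

```python
def validate_check_and_list(check_amount, remittance_list):
    """Sketch of check-and-list reconciliation: the per-consumer amounts on
    the list must sum to the amount of the single check before the biller
    posts them to its C-B accounts. remittance_list is (account, amount) pairs."""
    total = sum(amount for _account, amount in remittance_list)
    if round(total, 2) != round(check_amount, 2):
        raise ValueError("list does not balance to check amount")
    # Balanced: return the postings keyed by C-B account number.
    return dict(remittance_list)
```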

To secure funds, service bureau S clears check 1634 through Bank S 1704 drawn on C's account 1612 at Bank C (arrows 7-11). S then sends payment 1712 to biller B (arrow 12). Biller B must treat payment 1712 as an exception item, posting G/L database 1632 from the list instead of payment coupons as in bill pay system 1600. Biller B deposits check 1714 with Bank B (arrow 13) who clears it through Bank S and a settlement account 1720 to obtain good funds for B's account 1616 (arrows 14-17). If the bill pay transaction goes through, Bank C will confirm that it went through by sending a confirmation (typically statement 1628) to consumer C. The cycle is completed (arrow 18) when consumer C receives notice that funds were withdrawn from C's account 1612 for the amount entered in bill pay order 1708.

Several variations of the system shown in Figure 17 are used today. In one variation, S sends an individual check 1634 (unsigned, signature on file) drawn on C's account 1612 to biller B in response to bill pay order 1708. This clears as in bill pay system 1600 (Figure 16, arrows 3-7), but B must process these one at a time, since they are exception items. This reduces the possibility that B will refuse to process check 1634, since it differs from the expected payment form only by lacking a coupon. Thus, biller B is less likely to refuse this form of payment than a check and list, and is less likely to have problems with the list not balancing or with bad account numbers.

In a second variation, instead of a check from Bank C cleared through Bank S to credit S's account 1718, S has Bank S submit a debit to C's account 1612 through the Automated Clearing House ("ACH") (see Figure 18 and accompanying text). In a third variation, in place of arrows 12-17 ("check and list"), S may send A/R data and a credit to biller B through one of two paths: i) Bank S to ACH to Bank B to biller B, or ii) MasterCard's RPS (Remittance Processing System) to Bank B to biller B. As used here, the RPS is merely an alternative to the ACH. In a fourth variation, a combination of the second and third variations, S sends simultaneous ACH transactions (debit account 1612 and credit account 1616).

Figure 18 is a block diagram of yet another bill pay system 1800, which is usually used with billers who expect regular, periodic and small payments. Relative to the previously discussed bill payment systems, billers generally prefer bill pay system 1800 when they are set up to handle such transactions.

Bill pay system 1800, while providing more efficient remittance processing by biller B due to its increased control over the process, leaves consumer C with very little control over the bill pay transactions after the relationship is set up, since consumer C is typically required to give biller B an open ended authorization to withdraw funds. Furthermore, bill pay system 1800 is not appropriate for all types of billers, such as those who do not have an on-going and predictable relationship with consumers.

Figure 18 introduces a new participant, ACH 1802, and several new items which flow among the participants: a voided check 1806, a debit advice 1808, a pre-authorization message 1810, and a debit request message 1812. In bill pay system 1800, biller B is required to maintain an additional customer database 1804.

For bill pay system 1800 to work properly, there is an enrollment phase (arrows 1-4) and an operational phase (arrows 5-13). In the enrollment phase, consumer C gives biller B voided check 1806, which biller B uses to initiate pre-authorization message 1810. Biller B is not allowed by ACH 1802 to submit pre-authorization message 1810 directly, which means Bank B, an ACH Originating Financial Depository Institution (OFDI), must get involved and submit message 1810 to Bank C, an ACH Receiving Financial Depository Institution (RFDI). After pre-authorization message 1810 is accepted by Bank C, Bank C will accept Bank B-initiated automatic debits to be posted to C's account 1612. In the operational phase, biller B queries customer database 1804 to determine if consumer C is enrolled for automatic debits. If so, biller B optionally sends debit advice 1808 to consumer C, and sends debit request message 1812 to biller B's bank, Bank B, which then sends it through the ACH 1802 to Bank C, which debits C's account 1612 and transfers the funds to biller B's account 1616 via the ACH. The transaction is confirmed to consumer C on bank statement 1628 sent to consumer C from Bank C. In this system 1800, debit request message 1812 might be rejected by Bank C for, among other reasons, non-sufficient funds, resulting in the flows along arrows 10-12.
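Bank C's decision on an incoming debit request 1812 reduces to two checks: a pre-authorization on file and sufficient funds. The sketch below illustrates this; the function name, status strings, and balance handling are assumptions, and real ACH returns use standardized reason codes rather than free-text statuses.

```python
def post_ach_debit(preauth_on_file, balance, amount):
    """Sketch of Bank C handling debit request 1812 in bill pay system 1800.
    Returns the (possibly updated) balance and a disposition string."""
    if not preauth_on_file:
        # No accepted pre-authorization 1810: the debit may not post.
        return balance, "rejected: no pre-authorization"
    if amount > balance:
        # Non-sufficient funds: triggers the return flows (arrows 10-12).
        return balance, "rejected: non-sufficient funds"
    # Debit posts to C's account 1612; funds move to B's account via the ACH.
    return balance - amount, "posted"
```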

Centralized vs. De-centralized DVNS

One issue that the bandwidth market raises is the question of where to place certain DVNS functions. The current strategy of many bandwidth providers is built upon a DVNS that runs and operates completely at the distributor. However, there may be some benefit to a bandwidth provider in moving part of the DVNS functions from the distributor to a Network Business Center (NBC) or Network Operations Control Center (NOCC). In particular, by placing most of the DVNS Operations Manager functionality at a centralized location, the bandwidth provider may have a much better view of the state of the network. In an exemplary model, each DVNS is responsible for setting up calls for its CPEs. While the DVNS has a good idea of how its customers are using the network, the NOCC may not have a good handle on the overall network.

By moving call setup to a central location, the bandwidth provider's operators can get a complete overview of what is happening on the network at all times. This eliminates the possibility of a DVNS over-allocating bandwidth to its customers.

In addition to providing better network management capabilities, centralized call setup opens up some interesting possibilities. As a single system will know the state of the network at all times, it could potentially increase prices in those areas where demand is greatest. Armed with real-time call information, a centralized management system could analyze the information and automatically raise the bandwidth provider's wholesale prices in high-traffic areas. On a similar note, the bandwidth provider could also lower its prices in areas where the network is underutilized in order to stimulate demand. Assuming bandwidth demand is elastic, this would allow a bandwidth provider to price its wholesale services at the exact point where supply meets demand, optimizing its revenues.
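The utilization-driven repricing idea can be sketched in a few lines. The thresholds, step size, and function name are illustrative assumptions; a real system would derive them from demand elasticity rather than fixed constants.

```python
def reprice_area(base_price, utilization, high=0.85, low=0.40, step=0.10):
    """Sketch of centralized wholesale repricing: raise the price where
    utilization (demand) is high, lower it where the network is underused,
    and leave it unchanged in between."""
    if utilization >= high:
        return base_price * (1 + step)   # high-traffic area: raise price
    if utilization <= low:
        return base_price * (1 - step)   # underutilized area: stimulate demand
    return base_price
```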

In order to maintain a sense of autonomy, a bandwidth provider could offer its distributors a series of APIs that allow them access to Operations Management functions at the central location. In addition, a graphical user interface could be developed to permit remote configuration and management. The central application could be designed in such a way that distributors would only have access to their managed partition.

One downside of this approach is the need to split the Operations and Service Managers. As the Service Manager provides the distributors with specific functions that map their value-added services, or content, to the bandwidth provider's services, it will need to be tailored to each distributor. For this reason, the Service Manager will probably need to reside at the distributor's location. Any hooks between the Service Manager and Operations Manager that are required to map content to a bandwidth provider's services will have to traverse the bandwidth provider's network.

Another downside to a centralized call setup mechanism is that it represents a single point of failure. However, this could be solved by providing a backup system located at another site.

It is important to note that the bandwidth market will work irrespective of where the DVNS Operations Management functions are located.

Other Markets

As discussed, the bandwidth market will provide an efficient mechanism for pricing services. However, the bandwidth market does not address the problem that distributors face in figuring out ways of billing content charges to customers who are not owned by their own DVNS. A Content Market, which is outlined in the next section, provides a potential solution to this problem.

CONTENT MARKET

While a bandwidth provider will focus on providing wholesale bandwidth to distributors, many of these distributors will be content providers. In order to attract distributors, a bandwidth provider must make sure that it offers them the ability to market and sell their services to customers over the bandwidth provider's network. A Content Market could provide an excellent avenue for distributors to sell their services.

Content over Bandwidth

There is a current trend developing in which bandwidth costs are considered a negligible component that a provider absorbs in favor of content charges. Based on this trend, it is desirable for such providers to focus more on billing for content and less on bandwidth. Bandwidth will essentially be an element of their overall cost structure.

Determining Cost Structure

If, as predicted, many distributors are going to absorb bandwidth costs and make their money from content charges, they must have some mechanism for tying bandwidth utilization back to the content event that generated the traffic. This will be needed to make sure that content providers are adequately covering their bandwidth costs. As the bandwidth cost to show a long movie will be greater than that for a shorter current release, it may not be economically desirable to show the long movie, even though the distributor's rental cost for the movie is most likely less than for the new release.

The problem of tying content to bandwidth can be solved by allowing the distributor to add a Content ID as a User Defined Information Element during call setup. This Content ID will flow through the usage collection and wholesale rating processes, and eventually be used by the DVNS to correlate a content event back to a call.
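The correlation step can be sketched as a join on the Content ID between rated usage records and content events. The record shapes, field names, and function name are assumptions for illustration only.

```python
def correlate_usage(call_records, content_events):
    """Sketch of DVNS-side correlation: sum the rated bandwidth cost of all
    calls tagged with each Content ID, then attach the totals to the
    corresponding content events."""
    by_content = {}
    for rec in call_records:
        # Each call carries the Content ID set at call setup.
        by_content.setdefault(rec["content_id"], 0.0)
        by_content[rec["content_id"]] += rec["rated_bandwidth_cost"]
    # Map each content event to (title, total bandwidth cost it generated).
    return {e["content_id"]: (e["title"], by_content.get(e["content_id"], 0.0))
            for e in content_events}
```

A report built this way lets the distributor check directly whether each content event's charges cover the bandwidth it consumed.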

A Content Market and Billing Channel

One of the problems that will be faced by distributors is how to bill customers for content. Although a DVNS will have no problem billing its own customers, it may not have a mechanism to bill customers that belong to other distributors.

A couple of possible solutions exist. The content provider could require each customer that accesses its services to open an account. The customer could then be billed separately for the content that they use. However, this approach has a number of flaws. First, a lot of administrative work is required by the content provider to open an account and bill each customer. The administrative overhead may discourage lower cost content services, and those that are used infrequently, from participating in the bandwidth provider's market. Second, customers may be unwilling to open an account for a service that they only expect to use once. Third, as taxation varies across countries, content providers may not be willing to offer services in locations that do not have a critical mass of customers to justify setting up marketing and billing infrastructure. Fourth, service providers may be reluctant to open accounts for customers in countries with credit problems. Finally, customers may not be willing to wait for service providers to open their accounts before accessing the distributor's content.

A more attractive alternative to content providers opening accounts for every customer is for the distributors to provide a billing channel for content charges incurred by their customers. Although this could be achieved through bilateral agreements between distributors, it may be more practical in the form of a Content Market.

By providing a billing channel for value-added service charges, a Content Market would allow content providers to sell their services to anyone who has access to the bandwidth provider's network. Unlike the bandwidth market, the Content Market would mainly provide billing functions. The bill pay systems as set forth above may be used, as may any other suitable billing system or pay system. By allowing distributors to bill customers through another distributor's DVNS, potentially huge markets would open up to content providers. A software publishing company in a first location could now sell its products to a customer in a second location, without worrying about how it will get paid. A bill for the services would be sent by the content provider's DVNS to the customer's DVNS through the Content Market clearinghouse. The recipient DVNS would then bill its customer for the services they used. The content charges would show up on the customer's bill from the distributor. The Content Market could be designed to allow descriptive messages for content services to flow through the system in order to help the customer understand the charges.

Figure 19 illustrates such a method of providing an open market environment for electronic content. Optionally, a listing of the content available from the content provider may be displayed. In operation 1900, a request for content by a distributor is received. Such a request may be initiated by a customer, such as when a customer orders content from the distributor, or may come directly from the customer. The request for content is preferably received electronically and should include an identification of the content requested.

In operation 1902, the request for content is transmitted to a content provider. The content and a content identifier are received from the content provider in operation 1904 in response to the request for content. The content and the content identifier are sent to the distributor for delivery to the customer in operation 1906. More detail of operations 1902 through 1906 is provided below with reference to Figures 20 through 22. A bill for the content and the content identifier are received from the content provider in operation 1908 and communicated to the distributor in operation 1910, which allows the distributor to bill the customer. Alternatively, payment may be received directly from the customer. One of the bill pay systems set forth above may be used to send the bill to the distributor or customer. In operation 1912, descriptive messages relating to the bill are communicated between the content provider and the distributor to help the customer understand the bill. These messages are included with the bill. Examples of such messages may include an identification of the content sent, its price, an amount of a discount, etc.
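The Figure 19 flow can be sketched as follows. Every class and method name here is hypothetical; the sketch only reflects the ordering of operations 1900 through 1910 (request in, content plus identifier out, bill routed back to the distributor):

```python
# Hedged sketch of the Figure 19 open-market flow; all names are invented
# for illustration, and real DVNS/clearinghouse interfaces would differ.

class Provider:
    """Stub content provider."""
    def fetch(self, name):
        return f"<bytes of {name}>", f"cid-{name}"
    def bill_for(self, content_id):
        return {"content_id": content_id, "amount": 4.00,
                "message": "Feature film rental, includes loyalty discount"}

class Distributor:
    """Stub distributor-side DVNS."""
    def __init__(self):
        self.delivered, self.bills = [], []
    def deliver(self, content, content_id):
        self.delivered.append((content, content_id))
    def receive_bill(self, bill):
        self.bills.append(bill)

class ContentMarket:
    """Clearinghouse sitting between a distributor and a content provider."""
    def __init__(self, content_provider):
        self.content_provider = content_provider

    def handle_request(self, distributor, content_name):
        # 1900/1902: receive the distributor's request and pass it on.
        content, content_id = self.content_provider.fetch(content_name)
        # 1904/1906: hand content plus identifier to the distributor.
        distributor.deliver(content, content_id)
        # 1908/1910: route the provider's bill, with its descriptive
        # message, back to the distributor so it can bill its customer.
        bill = self.content_provider.bill_for(content_id)
        distributor.receive_bill(bill)
        return content_id

market = ContentMarket(Provider())
d = Distributor()
cid = market.handle_request(d, "movie-42")
print(cid)  # cid-movie-42
```

The descriptive message travels with the bill, matching the requirement in operation 1912 that the customer be able to understand the charge.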

Figure 20 illustrates another manner of performing operations 1902 through 1906 of Figure 19.

It should be noted that "customer" as used in this description includes any handler or purchaser of content. In operation 2000, the digital good, i.e., content, requested by a customer is identified. A purchase price for the digital good is negotiated in operation 2002. A request for quotation or a bid is received from the customer in operation 2004 and forwarded to the content provider. This material may include credentials identifying the customer for purposes of, for example, providing a discount. After the negotiating phase, a request for delivery of the goods is received from the customer in operation 2006. The goods may be encrypted in operation 2008 and sent to the customer in operation 2010.
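The negotiation phase (operations 2002 and 2004) could take many forms; the acceptance rule below, where a bid is accepted if it meets a list price that credentials can discount, is an invented assumption used only to make the flow concrete:

```python
# Sketch of the Figure 20 negotiation phase. The meet-or-beat rule and the
# "member" credential discount are assumptions, not taken from the text.

def negotiate(list_price, bid, credentials=None):
    """Return the agreed price, or None if the bid is rejected."""
    # Credentials supplied with the bid may entitle the customer to a
    # discounted floor price (operation 2004 forwards them to the provider).
    floor = list_price * (0.9 if credentials == "member" else 1.0)
    return bid if bid >= floor else None

print(negotiate(10.0, 10.0))                        # accepted at list price
print(negotiate(10.0, 9.0, credentials="member"))   # accepted with discount
print(negotiate(10.0, 8.0))                         # rejected
```

Once a price is agreed, the flow proceeds to the delivery request (operation 2006) and optional encryption (operation 2008).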

Figure 21 illustrates an optional encryption technique designed to ensure payment by the customer. In operation 2100, the digital good which the customer wishes to purchase is encrypted. A first cryptographic checksum is calculated for the encrypted good in operation 2102. The encrypted digital good and the first cryptographic checksum are then transmitted to the customer in operation 2104. A timestamp is generated in operation 2106 at the end of goods transmission and sent to the customer in operation 2108.

The customer receives the encrypted digital good and the first cryptographic checksum. The customer calculates a second cryptographic checksum for the received encrypted digital good. The customer then creates an electronic payment order containing information identifying the transaction, the second cryptographic checksum, credentials (for example, authorizing the customer to purchase the goods), and a timestamp. Thereafter, the electronic payment order is received in operation 2108.

The first and second cryptographic checksums are compared in operation 2110 to ensure that they match. A match indicates that the digital good has been correctly received. An electronic signature and a decryption key are added to the electronic payment order in operation 2112, which may then be sent to the customer in operation 2114.
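The whole Figure 21 exchange can be sketched end to end. The toy XOR keystream cipher and the use of SHA-256 as the "cryptographic checksum" are stand-ins for whatever real primitives a deployment would choose; the merchant signature is a placeholder string. Do not reuse this cipher in practice:

```python
# Toy end-to-end sketch of the Figure 21 payment-before-decryption exchange.
# Primitives are illustrative stand-ins, not a production design.

import hashlib
import hmac
import time

def keystream(key: bytes, n: int) -> bytes:
    """Derive n bytes of keystream by hashing key || counter."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# --- Merchant side (operations 2100-2108) ---
key = b"per-transaction-key"
good = b"the digital good"
encrypted = xor_encrypt(key, good)                  # 2100: encrypt the good
checksum_1 = hashlib.sha256(encrypted).hexdigest()  # 2102: first checksum
timestamp = time.time()                             # 2106: end-of-transmission stamp

# --- Customer side ---
checksum_2 = hashlib.sha256(encrypted).hexdigest()  # recompute on receipt
payment_order = {"txn": "txn-1", "checksum": checksum_2,
                 "credentials": "cust-7", "timestamp": timestamp}

# --- Merchant completes the sale (operations 2110-2114) ---
# 2110: matching checksums show the good was correctly received.
assert hmac.compare_digest(checksum_1, payment_order["checksum"])
payment_order["merchant_signature"] = "sig(merchant)"  # 2112: sign the order
payment_order["decryption_key"] = key                  # 2112: attach the key

# 2114: the customer decrypts with the released key.
assert xor_encrypt(payment_order["decryption_key"], encrypted) == good
```

The key point the sketch preserves is the ordering: the customer commits to payment over the ciphertext checksum before the decryption key is released.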

Figure 22 illustrates an alternative to sending the decryption key and the electronic payment order directly to the customer. In operation 2200, the merchant-signed electronic payment order and the key are submitted to an account server for review.

The account server reviews the information in the electronic payment order and sends a message in response to the review, which is received in operation 2202. The review may include verifying that the customer is authorized to make the requested purchase, verifying that the customer has the necessary funds, and ensuring that the timestamp is valid. In the event the review is not positive, the message contains an error code which explains why the electronic payment order has not been approved. If the review is positive, the message so indicates and, in operation 2204, the key is included in the message.

In operation 2206, the message is forwarded to the customer. If the message contains the key, the customer uses the key to decrypt the goods. If, for some reason, the customer does not obtain the key, the customer may contact the account server and obtain a copy of the key from the server.
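The account-server review described above can be sketched as a single check function. The error codes, the account table, and the staleness window are illustrative assumptions; the text only specifies the three checks and that a positive message carries the key:

```python
# Sketch of the Figure 22 account-server review; error codes, account data,
# and the timestamp window are invented for illustration.

import time

ACCOUNTS = {"cust-7": {"authorized": True, "funds": 25.0}}

def review(order, max_age_seconds=300):
    """Run the three checks; return an approval message or an error code."""
    acct = ACCOUNTS.get(order["customer"])
    if acct is None or not acct["authorized"]:
        return {"approved": False, "error": "NOT_AUTHORIZED"}
    if acct["funds"] < order["amount"]:
        return {"approved": False, "error": "INSUFFICIENT_FUNDS"}
    if time.time() - order["timestamp"] > max_age_seconds:
        return {"approved": False, "error": "STALE_TIMESTAMP"}
    # Operation 2204: a positive review carries the decryption key.
    return {"approved": True, "key": order["key"]}

ok = review({"customer": "cust-7", "amount": 4.0,
             "timestamp": time.time(), "key": b"k"})
bad = review({"customer": "cust-7", "amount": 99.0,
              "timestamp": time.time(), "key": b"k"})
print(ok["approved"], bad["error"])  # True INSUFFICIENT_FUNDS
```

Forwarding the resulting message to the customer (operation 2206) then either delivers the key or an explanation of the refusal.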

In an alternate embodiment of the present invention, the customer is required to open an account with the content provider before the content is sent to the distributor. This requirement helps ensure payment by the customer, in that the customer's identity may now become known to the content provider.

Optionally, the content provider may be charged a transaction fee, which may be a percentage of the billed amount or a flat fee per transaction or per item.
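The two fee models the text mentions reduce to a one-line calculation; the 2.5% rate and 30-cent flat fee below are invented example values:

```python
# Illustrative fee schedule: the percentage and flat amounts are invented,
# only the two models (percentage of bill vs. flat per transaction) come
# from the text.

def transaction_fee(bill_amount, percent=None, flat=None):
    """Fee charged to the content provider for use of the billing channel."""
    if percent is not None:
        return round(bill_amount * percent, 2)
    return flat

print(transaction_fee(40.00, percent=0.025))  # 1.0
print(transaction_fee(40.00, flat=0.30))      # 0.3
```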

By offering a billing channel and clearing capabilities for content to distributors, a bandwidth provider could open up huge markets for its providers. In addition, the bandwidth provider could get a cut of the revenues for providing the billing channel and clearinghouse services.

Call Termination Charges

The Content Market could also be used to solve the problem of call termination charges. If a CPE belonging to one distributor wants to call a video phone on a network that is connected to a bandwidth provider, some mechanism needs to be in place to handle call termination charges.

The gateway provider that owns the terminating or transit network would like to be able to bill the calling CPE for the cost of the service used on the external network. Although the bandwidth provider could implement a custom interworking solution to handle this situation, it would have to develop a separate interface for each type of networking service. This could result in a large number of interfaces.

By treating the termination charge as content, the bandwidth provider would give the gateway distributor the ability to bill the calling CPE's DVNS through the Content Market. After correlating the external termination charge with the call using the Content IE provided by the Services Manager at call setup, the gateway DVNS would send the charge for the call to the Content Market along with a message describing the event. The Content Market would then bill the calling party's DVNS for the termination charge. This DVNS could either bill its customer for the termination charge as a separate line item, or match the event with the call and calculate the total charge. Through a simple, generic interface, the Content Market could be used to provide a billing mechanism for all types of termination charges.
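The single generic interface described above can be sketched as a small event queue. All names are hypothetical; the point the sketch keeps is that the gateway DVNS posts a {Content IE, charge, message} event and the Content Market turns it into a line item billed to the calling party's DVNS:

```python
# Sketch of termination charges handled as content events; all function and
# field names are invented for illustration.

def post_termination_charge(market_queue, content_ie, charge, message):
    """Gateway DVNS side: send the charge for the call to the market."""
    market_queue.append({"content_ie": content_ie,
                         "charge": charge,
                         "message": message})

def settle(market_queue, calls_by_ie):
    """Content Market side: bill the calling party's DVNS per event."""
    bills = []
    for event in market_queue:
        # Correlate the external charge with the call via its Content IE.
        call = calls_by_ie[event["content_ie"]]
        bills.append({"dvns": call["calling_dvns"],
                      "line_item": event["message"],
                      "amount": event["charge"]})
    return bills

queue = []
post_termination_charge(queue, "ie-77", 0.45,
                        "Video call termination on external network")
bills = settle(queue, {"ie-77": {"calling_dvns": "dvns-A"}})
print(bills[0]["dvns"], bills[0]["amount"])  # dvns-A 0.45
```

The receiving DVNS is then free to show the event as a separate line item or fold it into the total charge for the call, as the text describes.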

Directory Services As a complementary offering to the Content Market, a bandwidth provider could offer directory services for content available on its network. These directory services could be extended to include price lists provided by vendors.

While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.