

Title:
CLOUD-BASED INTEROPERABILITY PLATFORM USING A SOFTWARE-DEFINED NETWORKING ARCHITECTURE
Document Type and Number:
WIPO Patent Application WO/2014/150992
Kind Code:
A1
Abstract:
Embodiments of the present disclosure relate to a system implemented using a software-defined network (SDN) interoperability architecture for performing video conferencing. Embodiments of the present disclosure provide a decision engine application, a decision engine controller, and a media processing resources (MPRs) that provide conferencing capabilities. The system implemented using a software-defined network (SDN) interoperability architecture allows for distributed media routing and contextual provisioning of a conference based upon static and dynamic variables in real-time, moderator control, and/or user control. Contextual provisioning may be employed to adjust the settings for a conference and to select the different components (e.g., hardware and software components) that are employed in the conference based upon static and dynamic variables. Additional embodiments of the present disclosure relate to distributing decoded streams between video conferencing components and performing capacity management by basing determinations upon state information and costs involved in allocating resources in a cloud environment.

Inventors:
GAGE STEVEN (US)
SETHURAMAN ARAVIND (US)
CHIORAZZI LOU (US)
Application Number:
PCT/US2014/024723
Publication Date:
September 25, 2014
Filing Date:
March 12, 2014
Assignee:
TELIRIS INC (US)
GAGE STEVEN (US)
SETHURAMAN ARAVIND (US)
CHIORAZZI LOU (US)
International Classes:
H04L12/18
Foreign References:
US20070165669A1 (2007-07-19)
US20090033739A1 (2009-02-05)
US20110279636A1 (2011-11-17)
US20080063173A1 (2008-03-13)
US20040085914A1 (2004-05-06)
Attorney, Agent or Firm:
BRUESS, Steven, C. (P.O. Box 2903, Minneapolis, MN, US)
Claims:

What is claimed is:

1. A method for performing contextual provisioning for a conference hosted on a system implemented using a software-defined network (SDN) interoperability architecture, the method comprising: receiving, at a decision engine application, information about a first call from a first endpoint of the conference; sending instructions to route the first call to a first media processing resource (MPR), wherein the first MPR is identified based upon proximity to the first endpoint; receiving, at the decision engine application, information about a second call from a second endpoint of the conference, wherein the information about the first call and the information about the second call indicate that the second endpoint supports a set of capabilities different from the first endpoint; sending instructions to route the second call to an identified second MPR, wherein the second MPR is identified based upon proximity to the second endpoint; receiving feedback data related to the conference; analyzing the feedback data; and based upon the analysis, performing contextual provisioning to increase the quality of the conference.

2. The method of claim 1, wherein the first endpoint is a single-decode endpoint and the second endpoint is a multi-decode endpoint.

3. The method of claim 1, wherein performing contextual provisioning to increase the quality of the conference comprises sending instructions to the first MPR via a decision engine controller.

4. The method of claim 3, wherein the instructions instruct the first MPR to send only audio data.

5. The method of claim 3, wherein the feedback data comprises information related to at least one of: service provider information; and one or more dynamic variables.

6. The method of claim 5, wherein service provider information comprises at least one of:

capacity by region by time of day; cost per region by time of day; feature set purchased by a service provider; and feature set purchased by an endpoint customer.

7. The method of claim 5, wherein dynamic variables comprise at least one of:

network capabilities; network capacity; moderator control; user control; conference call quality; addition of one or more new endpoints to the conference call; and removal of one or more existing endpoints from the conference call.

8. The method of claim 2, wherein performing contextual provisioning further comprises sending instructions to the first MPR via a decision engine controller, wherein the instructions instruct the first MPR to create a composite stream from two or more individual streams of data.

9. The method of claim 2, wherein performing contextual provisioning comprises sending instructions to the first MPR via a decision engine controller, wherein the instructions instruct the first MPR to provide multiple separate streams to the first endpoint.

10. A system implemented using a software-defined network (SDN) interoperability architecture for performing conferences, the system comprising: a first media processing resource (MPR) for distributing decoded streams, wherein distributing decoded streams further comprises: receiving a first data stream; determining that the first data stream is an unprocessed stream; based on the determination, decoding the unprocessed stream to produce a decoded stream; and sending the decoded stream to a second MPR.

11. The system of claim 10, further comprising the second MPR for performing the operations comprising: receiving a second data stream; determining that the second data stream is a decoded data stream; and based upon the determination, providing the second data stream to a third MPR.

12. The system of claim 10, further comprising a decision engine controller for performing capacity management, wherein performing capacity management comprises: receiving state information about the system; determining whether to modify allocated resources; and when the determination is made to modify the allocated resources, modifying the allocated resources of the system.

13. The system of claim 12, wherein determining whether to modify allocated resources further comprises: analyzing the state information and call history information.

14. The system of claim 12, wherein determining whether to modify allocated resources further comprises: analyzing the state information and time of day.

15. The system of claim 12, wherein modifying the allocated resources comprises allocating additional resources to the system.

16. The system of claim 15, wherein allocating additional resources comprises allocating one or more MPRs to the system.

17. A system for conference calls, the system comprising: a decision engine controller for performing operations comprising: receiving information about a first call from a first endpoint of the conference; sending instructions to route the first call to an identified first media processing resource (MPR), wherein the first MPR is identified based upon proximity to the first endpoint; receiving information about a second call from a second endpoint of the conference, wherein the information about the first call and the information about the second call indicate that the second endpoint supports a set of capabilities different from the first endpoint; sending instructions to route the second call to an identified second MPR, wherein the second MPR is identified based upon proximity to the second endpoint; a decision engine application for performing operations comprising: receiving feedback data related to the conference; analyzing the feedback data; and based upon the analysis, performing contextual provisioning to increase the quality of the conference and communicating a first contextual provisioning instruction and a second contextual provisioning instruction to the decision engine controller; the first MPR for performing operations comprising: receiving a first input stream from the first endpoint; providing the first input stream to the second MPR; receiving a second input stream and a third input stream from the second MPR; receiving the first contextual provisioning instruction from the decision engine controller; and providing the second input stream and the third input stream to the first endpoint according to the first contextual provisioning instruction; and the second MPR for performing operations comprising: receiving the second input stream from the second endpoint; receiving the third input stream; providing the second input stream and the third input stream to the first MPR, wherein the second input stream and the third input stream are provided as individual streams; receiving the first input stream from the first MPR; receiving the second contextual provisioning instruction from the decision engine controller; and providing the first input stream to the second endpoint according to the second contextual provisioning instruction.

18. The system of claim 17, further comprising a first network switch, wherein providing the first input stream to the second MPR comprises sending the first input stream to the first network switch.

19. The system of claim 18, further comprising a second network switch, wherein providing the second input stream to the first MPR comprises sending the second input stream to the second network switch.

20. The system of claim 19, wherein providing the second input stream to the first endpoint according to the first contextual provisioning instructions further comprises: creating a composite stream of data from the second input stream and an additional input stream; and providing the composite stream to the first endpoint.

Description:
CLOUD-BASED INTEROPERABILITY PLATFORM USING A

SOFTWARE-DEFINED NETWORKING ARCHITECTURE

Priority [0001] This application is being filed on 12 March 2014, as a PCT International application, and claims priority to U.S. Patent Application No. 13/834,295, filed March 15, 2013, the disclosure of which is hereby incorporated by reference herein in its entirety.

Background [0002] Conferencing systems generally employ specialized hardware known as multipoint control units (MCUs) to support video and/or audio conferencing. A problem with MCUs is that they are generally expensive, are capable of handling only limited types of communication protocols or codecs, and are generally limited in the number of simultaneous connections that they can support with other hardware devices in a conferencing system. The use of specialized hardware in conferencing systems makes it difficult to support new communication protocols as they are developed and to scale conferencing systems to meet client needs. It is with respect to this general environment that embodiments disclosed herein have been contemplated.

Summary [0003] Embodiments disclosed relate to a system implemented using an SDN interoperability architecture that may be dynamically scaled to support conferences of varying sizes. Such embodiments provide the flexibility to dynamically adjust to conferencing needs as they change over time. Additionally, the embodiments disclosed herein may be implemented using standard computing devices and routers rather than specialized conferencing equipment. Use of standard computing devices allows a provider to modify a system implemented using an SDN interoperability architecture without incurring the added costs of purchasing dedicated conferencing equipment. [0004] Additionally, a method is disclosed for providing distributed decoded data streams between components in a conferencing network. In embodiments, a first conferencing component receives a data stream. The first conferencing component determines that the data stream is an unprocessed data stream and, based upon the determination, decodes the unprocessed data stream to produce a decoded data stream. The first conferencing component then provides the decoded data stream to one or more additional conferencing components, thereby ensuring that the one or more additional conferencing components do not have to duplicate the decoding process. In an embodiment, the decoded data stream may be efficiently sent to the one or more additional components using data link layer forwarding. Data link layer forwarding may be performed by the data link layer without the intervention of resources (e.g., applications, devices, etc.) from a higher communication level (e.g., the network level, transport level, application level, etc.). In other embodiments, the decoded data stream may be transported to other devices using a communication protocol such as, but not limited to, the Real-time Transport Protocol (RTP) or the Scalable Video Coding (SVC) protocol.
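
As an informal illustration of the decode-once distribution described in [0004], the following Python sketch shows a component that decodes an unprocessed stream a single time and forwards the decoded result to its peers. The class and function names (MediaStream, Peer, handle_incoming) are illustrative assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class MediaStream:
    payload: bytes
    decoded: bool = False   # True once some component has decoded it

class Peer:
    """Stand-in for another conferencing component (e.g., a second MPR)."""
    def __init__(self):
        self.inbox = []

    def receive(self, stream: MediaStream):
        self.inbox.append(stream)

def decode(stream: MediaStream) -> MediaStream:
    # Placeholder for a real codec; a real component would decompress here.
    return MediaStream(payload=stream.payload, decoded=True)

def handle_incoming(stream: MediaStream, peers: list) -> MediaStream:
    # Decode an unprocessed stream exactly once, then share the decoded
    # result so peers do not duplicate the decoding work ([0004]).
    if not stream.decoded:
        stream = decode(stream)
    for peer in peers:
        peer.receive(stream)   # e.g., via data link layer forwarding or RTP
    return stream
```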

[0005] Further embodiments relate to capacity management of a distributed conferencing network or an SDN interoperability system. In embodiments, state information of the distributed conferencing network or the SDN interoperability system is monitored and compared to additional data to determine whether to allocate or deallocate resources. In cloud-based embodiments, resources may be allocated and/or deallocated dynamically based upon needs. Generally, a cloud provider charges based on the number of resources allocated for a session. As such, capacity management provides more efficient monitoring and control over resource allocation, thereby allowing for cost-efficient use of cloud networks.
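
A minimal sketch of the capacity-management idea in [0005], assuming a simple utilization-plus-headroom rule; the per-MPR capacity figure and headroom value are invented for illustration and do not come from the disclosure.

```python
def plan_capacity(active_calls: int, mpr_count: int,
                  calls_per_mpr: int = 10, headroom: float = 0.2) -> int:
    """Return the change in allocated MPRs (+n to allocate, -n to deallocate)."""
    needed = active_calls / calls_per_mpr
    target = int(needed * (1 + headroom)) + 1   # keep spare capacity
    return target - mpr_count

# e.g., 85 active calls on 7 MPRs -> plan_capacity(85, 7) == 4, i.e. add 4 MPRs
```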

[0006] This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.

Brief Description of the Drawings

[0007] The same number represents the same element or same type of element in all drawings.

[0008] FIG. 1 is an embodiment of a distributed conferencing system 100 that may be employed to create a cloud-based interoperability platform for conferencing.

[0009] FIG. 2 is an alternate embodiment of a distributed conferencing system 200 that may be employed to create a cloud-based interoperability platform for conferencing.

[0010] FIG. 3 is an embodiment of a method 300 to initiate a conference for an endpoint.

[0011] FIG. 4 is an embodiment of a method 400 for contextual provisioning.

[0012] FIG. 5 is an embodiment of a method 500 for transcoding conference information.

[0013] FIG. 6 is an embodiment of a method 600 to convert media transfer streams to one or more native format streams supported by one or more endpoint devices.

[0014] FIG. 7 illustrates an embodiment of a computer environment and computer system 700 for implementing the systems and methods disclosed herein.

[0015] FIG. 8 illustrates an example embodiment of a system implemented using an SDN interoperability architecture 800.

[0016] FIG. 9 is an embodiment of a method 900 for distributing decoded streams between multiple media processing resources.

[0017] FIG. 10 is an embodiment of a flow 1000 of data generated when the method 900 is performed by components of an SDN interoperability platform.

[0018] FIG. 11 is an embodiment of a method 1100 for performing capacity management.

Detailed Description

[0019] Embodiments of the present disclosure relate to a distributed video conference system that allows for distributed media routing and contextual provisioning of a conference based upon static and dynamic variables in real-time. In embodiments, the distributed video conference system provides a cloud-based interoperability platform that different endpoints having different capabilities can access to participate with each other in a conference. For example, endpoints employing devices with different capabilities, such as video capability, 2D/3D capability, audio capability, and support for different communication protocols, can communicate with each other using the distributed video conference system. As used throughout the disclosure, an endpoint may relate to one or more devices (e.g., a computer or a conference room comprising multiple cameras) employing one or more codecs. In other embodiments, an endpoint may be related to a customer account that provides different capabilities based upon a service provider or service provider package.

[0020] A distributed conferencing system provides many advantages over prior conferencing systems. In embodiments, the distributed conferencing system may be used to host video conferences or other types of conferences, such as, but not limited to, conferences that include multimedia data (e.g., documents, slides, etc.). Video conferences may include streams of both audio data and video data. A distributed conferencing system may utilize general hardware components rather than conference-specific hardware such as a multipoint control unit (MCU). Distributed conferencing systems provide scalability; that is, a distributed conferencing system can quickly be scaled up to support a larger number of conferences and participating devices. Additionally, a distributed conferencing system reduces the distance between an endpoint and one or more conference components. Ideally, the distance between an endpoint and a conference component is minimal. For example, providing a conferencing component in near geographic or network proximity to an endpoint provides lower latency and improved error resilience. As such, it is desirable to provide as little distance as possible between the endpoints and the conferencing components. While the conferencing components may be connected by a high-speed, high-quality network, the endpoints generally communicate with the conferencing system over a lower quality network. Thus, greater conference quality can be achieved by reducing the distance that data travels between an endpoint and a conferencing component over a low quality network.

[0021] Distributed conferencing systems provide multiple benefits. For example, MCUs generally are incapable of simultaneously transmitting multiple streams of data to multiple MCUs involved in a conference. Instead, MCUs transmit a stream to a single MCU, thereby requiring all participants in a conference to communicate with one of the MCUs and to use a single communications protocol. The present disclosure, however, provides a distributed conferencing system that creates a cloud-based interoperability platform allowing communication between any number of components. This provides flexibility when provisioning a conference by allowing a decision engine to utilize different conference components based upon their performance capability during initial conference setup. Further, the flexibility of the distributed conferencing system allows the decision engine 106 to adjust the provisioning of the conference in real-time based upon feedback information to continually provide an optimal conferencing experience. This allows the distributed conferencing system 100 to react to changes in the system during a conference such as, but not limited to, changes in network condition, load balancing and traffic on specific components, lag experienced by different endpoints, etc., by adjusting the conference provisioning (e.g., changing conference settings, migrating communications to different conference components, etc.) in real-time. Additionally, MCUs are expensive pieces of hardware, while general computing devices are not. As such, a distributed conferencing system can be scaled without incurring the cost of obtaining additional MCUs. In addition to these benefits, one of skill in the art will appreciate the benefits that the distributed conference system 100 provides over previous conferencing systems. [0022] In embodiments, contextual provisioning may be employed to adjust the settings for a conference and to select the different components (e.g., hardware and software components) that are employed in the conference. A decision engine may perform the contextual provisioning in real-time to provide an optimal conference experience based upon static and dynamic variables related to conference performance, endpoint capabilities (e.g., the capabilities of one or more user devices participating in the conference), network capabilities, level of service (e.g., feature sets provided by a service provider or purchased by a customer), and other factors. For example, static variables used in contextual provisioning may be, but are not limited to, capabilities of endpoint devices, number of cameras, number of display screens, endpoint codec support (e.g., the quality of video and audio an endpoint supports, whether the endpoint can decode multiple streams or a single stream, etc.), endpoint codec settings (e.g., resolution and audio settings), user status (e.g., whether the user purchased a business or consumer service), network capability, network quality, network variability (e.g., a wired connection, WiFi connection, cellular data or 3G/4G/LTE connections, etc.), display capabilities (e.g., 2D or 3D display), etc. While specific examples of static variables are listed and referenced in this disclosure, one of skill in the art will appreciate that the listing is not exhaustive, and other static variables may be used when performing contextual provisioning. Additional information related to initializing and conducting a conference is provided in U.S. Patent No. 8,130,256, entitled "Telepresence Conference Room Layout, Dynamic Scenario Manager, Diagnostics and Control System and Method," filed on October 20, 2008, and U.S. Patent Application Serial No. 12/252,599, entitled "Telepresence Conference Room Layout, Dynamic Scenario Manager, Diagnostics and Control System and Method," filed on October 16, 2008, both of which are hereby incorporated by reference in their entirety.
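
To make the static variables of [0022] concrete, the following hypothetical sketch collects them into a profile structure and derives a couple of initial settings from them; all field names and derivation rules are assumptions, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class StaticProfile:
    cameras: int
    displays: int
    codecs: tuple          # e.g., ("H.264 AVC", "H.264 SVC")
    multi_decode: bool     # can the endpoint decode several streams at once?
    resolution: str        # endpoint codec setting, e.g., "720p"
    service_tier: str      # "business" or "consumer"
    network: str           # "wired", "wifi", "cellular", ...
    display_3d: bool

def initial_settings(profile: StaticProfile) -> dict:
    # Derive illustrative starting settings from static variables alone.
    return {
        # A single-decode endpoint receives one composited stream.
        "composite_streams": not profile.multi_decode,
        # A variable cellular connection gets a conservative resolution.
        "resolution": "480p" if profile.network == "cellular" else profile.resolution,
    }
```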

[0023] Dynamic variables may also be employed in contextual provisioning. In embodiments, a dynamic variable may relate to data about a new endpoint that joins a conference, an endpoint leaving a conference, changes in the network, changes in the number of conferences hosted by a particular component, changes in network capacity, changes in network quality, changes to the load experienced by different modules in the distributed video conference system, etc. While specific examples of dynamic variables are listed and referenced in this disclosure, one of skill in the art will appreciate that the listing is not exhaustive, and other dynamic variables may be used when performing contextual provisioning. [0024] In embodiments, contextual provisioning is supported by making all of the data streams received by one of the conference components (e.g., a media handling resource) from an endpoint available to other conferencing components that are part of a conference (e.g., other media handling resources that are provisioned for the conference) through cascading. In embodiments, each stream may be made available by cascading the input data streams received from each endpoint to all media handling resources. In such embodiments, the distributed conferencing system differs from conferencing systems that utilize MCUs because in a conferencing system employing MCUs, each MCU sends a single composited stream of all of the endpoint data it receives to other MCUs participating in a conference. As such, unlike the embodiments disclosed herein, every separate data stream originating from an endpoint in a conference is not made available to other conference components.
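
The cascading behavior of [0024] might be sketched as follows, with each endpoint input stream forwarded, uncomposited, to every media handling resource; the class and function names are illustrative only.

```python
class MediaHandlingResource:
    def __init__(self, name: str):
        self.name = name
        self.available_streams = []   # every individual stream in the conference

    def receive(self, stream_id: str, data: bytes):
        self.available_streams.append((stream_id, data))

def cascade(stream_id: str, data: bytes, resources: list):
    """Forward one endpoint's input stream, uncomposited, to all media
    handling resources, unlike an MCU's single composited stream."""
    for mhr in resources:
        mhr.receive(stream_id, data)
```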

[0025] A decision engine that is part of a distributed video conference system may use static and dynamic variables during the initial set up of a conference in addition to performing contextual provisioning throughout the duration of the conference. For example, the decision engine may use static and dynamic variables during set up of a conference to select which modules of the distributed video conferencing system should be employed during the conference. Furthermore, the decision engine may continually monitor changes in a conference to both static variables (e.g., capabilities of devices joining or leaving the conference) and dynamic variables in real-time during the conference. The real-time monitoring provides for real-time contextual provisioning of components of the distributed conference system to provide an optimal conference experience throughout the duration of the conference.

[0026] Embodiments of the distributed conferencing system described herein may support audio and video conferencing using general computing components. Advantages are provided by utilizing general computing components to create a cloud-based interoperability platform that may easily be scaled. Previous conferencing systems generally employ specific hardware (e.g., a multipoint control unit (MCU)) to provide conferencing capabilities. MCUs are expensive pieces of hardware that are generally limited to supporting specific types of communication protocols, thereby making it difficult to scale a conferencing system and support endpoints that provide capabilities different from the capabilities of the MCU. Among other benefits, the embodiments disclosed herein provide increased scalability and capability support through a cloud-based interoperability platform that is based upon a distributed conferencing system that utilizes general hardware.

[0027] FIG. 1 is an embodiment of a distributed conferencing system 100 that may be employed to create a cloud-based interoperability platform for conferencing. The distributed conferencing system 100 may be used to provide video, audio, and/or multimedia conferencing to any number of endpoints utilizing different devices having the same or different capabilities. FIG. 1 is provided as an example of a distributed conferencing system. In other embodiments, fewer or more components may be part of a distributed conferencing system 100. For example, while FIG. 1 illustrates four different endpoints (102A-102D), three different session border controllers (104A-104C), a single decision engine 106, three different media handling resources (108A-108C), and three different media transfer components (110A-110C), one of skill in the art will appreciate that some of these components may be combined or further distributed such that fewer or more of the different components illustrated in the distributed conferencing system 100 may be employed with the embodiments disclosed herein. [0028] FIG. 1 illustrates an embodiment in which four different endpoints 102A, 102B, 102C, and 102D are connected via a distributed conferencing system 100. In the example embodiment, the four endpoints 102A-102D are participating in the same conference; however, in alternate embodiments, the endpoints 102A-102D may be participating in different conferences. The endpoints 102A-102D may employ different devices having different capabilities, or may employ similar devices having similar capabilities. In the example embodiment, endpoint 102A may employ a computing device, such as a tablet computer, a laptop computer, a desktop computer, or any other type of computing device. Endpoint 102B may be a conference room employing conferencing equipment such as one or more cameras, speakers, microphones, one or more display screens, or any other type of conferencing equipment. Endpoint 102C may be a phone device, such as a telephone, a tablet, a smartphone, a cellphone, or any other device capable of transmitting and receiving audio and/or video information. Endpoint 102D may be a laptop or other type of computing device. Although specific examples of devices are provided, one of skill in the art will appreciate that any other type of endpoint device may be utilized as part of an endpoint in the system 100. In embodiments, the different endpoint devices may have different capabilities that may not be compatible. For example, in embodiments, the endpoint devices may employ different communication protocols. However, as will be described in further detail below, the embodiments disclosed herein allow the different devices to communicate with each other using the distributed conferencing system 100.

[0029] In embodiments, the different endpoints may be in different regions or may be in the same region. A region may be a geographical region. For example, endpoint 102A may be located in Asia, endpoint 102B may be located in the United States, and endpoints 102C and 102D may be located in Europe. In other embodiments, each endpoint may have a different service provider or service package. In the example embodiment illustrated in FIG. 1, each component with a reference that ends in the same letter may be located in the same region, may be under the control of the same service provider, or may be part of the same service package. While the embodiment of the distributed conferencing system 100 generally shows communications between devices in the same region, using the same service provider, and using the same service package, one of skill in the art will appreciate that other embodiments exist where the different components may communicate across regions, service providers, etc. For example, although FIG. 1 illustrates endpoint 102A communicating with session border controller 104A, in other embodiments, endpoint 102A may communicate with session border controller 104B, 104C, or other components in different regions, under the control of different service providers, etc. [0030] Each of the endpoints 102A-102D joining a conference may call into the conference to participate in the conference call. The call may contain one or more streams of data that are sent or received by one or more devices or systems that comprise the endpoint participating in the call. For example, within a single conference call a robust endpoint with multiple cameras, such as endpoint 102B, may generate multiple video streams and one or more audio streams of input data that are sent to a media handling resource, such as media handling resource 108B. By contrast, other endpoints may provide only a single input video stream and/or audio stream for a conference call. Similarly, as discussed further herein, certain endpoints may receive and decode multiple video streams and/or audio streams within a single conference call, while others (e.g., depending on the capabilities of such endpoints) may receive only a single stream of video data and/or a single stream of audio data within a particular conference call. The endpoint may continue to generate, send, and receive the one or more data streams for the duration of the call. In embodiments, an IP address of an SBC or a media resource may be used to call into a conference. In an alternate embodiment, calling into the conference may include dialing a number to join the conference. In such embodiments, a conference may be identified by a unique number. In another embodiment, the distributed conference system 100 may be accessed generally by a unique number, and a unique extension number may be used to identify a particular conference. In another embodiment, a conference may be joined using a URL or a URI that identifies a conference. For example, the conference may be associated with a user identified by an email address or web address. An example of such a URI may be conference123@teliris.com. A conference may be joined by directing an endpoint device to the email address or web address. In such embodiments, a conference may be accessed through a web browser or an email application on an endpoint device participating in the conference. While specific examples of joining a conference have been provided, one of skill in the art will appreciate that other methods of accessing a conference may be employed without departing from the disclosure.
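
A hedged sketch of resolving the join mechanisms of [0030] (a unique conference number, a dial-in number with extension, or a URI such as conference123@teliris.com) to a conference identifier; the parsing rules here are assumptions for illustration only.

```python
def resolve_conference(address: str) -> str:
    """Map an address an endpoint dials or browses to a conference identifier."""
    if "@" in address:                 # URI-style, e.g., conference123@teliris.com
        return address.split("@")[0]
    if "x" in address:                 # dial-in with extension, e.g., "5551234x42"
        return address.split("x")[1]
    return address                     # a unique conference number

assert resolve_conference("conference123@teliris.com") == "conference123"
```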

[0031] In embodiments, each endpoint connecting to a conference may be directed to a session border controller (SBC), such as session border controllers 104A-104C. An SBC may comprise hardware components, computer-executable instructions running on a device, software, etc. In one embodiment, when attempting to initially join the conference, each device may be directed to a specific SBC. For example, endpoints in specific regions may be directed to an SBC located within their region or service provider. FIG. 1 illustrates such an embodiment where each device is directed to an SBC associated with its region or service provider. As illustrated in the example, endpoint 102A is directed to SBC 104A, endpoint 102B is directed to SBC 104B, and endpoint 102C is directed to SBC 104C. In such embodiments, the initial connection may be automatically established with a local SBC to reduce connection lag, to avoid unnecessary cost, or for convenience. However, in other embodiments, an endpoint may initially connect with an SBC in a different region, service provider, etc. For example, if the local SBC is experiencing a large amount of traffic, an optimal result may be obtained by connecting to a remote SBC that is experiencing less traffic. Such circumstances may arise based upon the time of day in a region where the SBC is located. For example, an SBC in the U.S. may be overloaded during midday, while an SBC located in Asia may experience little traffic at the same time due to the time difference. In such embodiments, the amount of traffic, or other measures such as time of day, may be used to determine which SBC receives the initial connection. Although not shown in FIG. 1, endpoints 102A-102D may communicate with SBCs 104A-104C over a network. As used throughout the disclosure, a network may be a local area network (LAN), a wide area network (WAN), a telephone connection such as the Plain Old Telephone Service (POTS), a cellular telephone or data network, a fiber optic network, a satellite network, the Internet, or any other known type of network. One of skill in the art will appreciate that any type of network may be employed to facilitate communication among the different components of the distributed conferencing system 100 without departing from the spirit of the disclosure.
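
The SBC selection logic of [0031] (prefer a local SBC, fall back to a less-loaded remote one, for example across time zones) could look roughly like the following sketch; the load threshold is an assumption.

```python
def pick_sbc(endpoint_region: str, sbcs: list) -> dict:
    """sbcs: list of dicts like {"id": "104A", "region": "US", "load": 0.9}."""
    local = [s for s in sbcs if s["region"] == endpoint_region]
    if local:
        best_local = min(local, key=lambda s: s["load"])
        if best_local["load"] < 0.8:       # local SBC has spare capacity
            return best_local
    # Local SBC overloaded (e.g., midday in the U.S.); use the least-loaded
    # remote SBC, which may be idle due to the time difference.
    return min(sbcs, key=lambda s: s["load"])
```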

[0032] In embodiments, the SBC may perform different actions upon initialization and through the duration of a conference depending on the type of endpoint that connects to the SBC. For example, in one embodiment, the SBC may continue to transport a stream of data to the various conference components. In such embodiments, the SBC may send information to a decision engine, but it may handle the streams itself. For example, if the SBC is communicating with an H.323 endpoint, the SBC may continue to receive and transmit media streams from the client while forwarding information to the decision engine. However, if the SBC is communicating with a session initiation protocol (SIP) endpoint, it may pass the media streams directly through to the media handling resource. One of skill in the art will appreciate that different protocols may be employed by the different embodiments, and that the actions performed by the SBC may change depending on the protocol without departing from the spirit of this disclosure.
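
A sketch of the protocol-dependent SBC behavior described in [0032]; the object methods (notify, relay, send) are hypothetical stand-ins for real signaling and media-path operations.

```python
def route_media(protocol: str, stream, sbc, mhr, decision_engine):
    # Call information goes to the decision engine in either case.
    decision_engine.notify(stream.call_info)
    if protocol == "H.323":
        # SBC stays in the media path, receiving and re-transmitting streams.
        sbc.relay(stream, mhr)
    elif protocol == "SIP":
        # SBC passes the media directly through to the media handling resource.
        mhr.send(stream)
    else:
        raise ValueError(f"unsupported protocol: {protocol}")
```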

[0033] Upon the establishment of a connection between an endpoint and an SBC, the SBC sends call information to a decision engine 106 over a network. In embodiments, the call information includes information about a particular call into the conference. The call information may include static and/or dynamic variable information. The information sent to the decision engine 106 may include, but is not limited to, conference information (e.g., a conference identifier identifying the conference the endpoint is joining), a participant list for the conference, details related to the conference such as type of conference (e.g., video or multimedia), duration of the conference, information about the endpoint such as the endpoint's location, a device used by the endpoint, capabilities supported by the endpoint, a service provider for the endpoint, geographic information about the endpoint, a service package purchased by the endpoint, network information, or any other type of information.
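
The call information of [0033] might be represented as a structure along the following lines; the key names are illustrative, not a schema defined by the disclosure.

```python
call_info = {
    "conference_id": "conference123",
    "participants": ["102A", "102B", "102C"],
    "conference_type": "video",            # or "multimedia"
    "duration_minutes": 60,
    "endpoint": {
        "location": "US-East",             # geographic information
        "device": "tablet",
        "capabilities": ["H.264 AVC", "audio"],
        "service_provider": "provider-1",
        "service_package": "business",
    },
    "network": {"bandwidth_kbps": 1500, "type": "wifi"},
}
```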

[0034] In embodiments, the decision engine may use the information received from an SBC to perform initial contextual provisioning for the endpoint joining the conference. The initial contextual provisioning may be based upon different decision factors, for example, static factors such as information related to the number of cameras and screens supported by an endpoint, the codecs supported by the endpoint (audio, video, and communication protocols), endpoint codec settings, whether the endpoint is a business or consumer, information related to the endpoint's network (e.g., network capacity, bandwidth, quality, variability, etc.), whether the endpoint is 2D or 3D capable, etc. The decision engine 106 may also use static information about other endpoints identified as participants to the conference. In embodiments, such information may be provided with the call information, or such information may be previously gathered and stored, for example, in instances where the conference call was previously scheduled and the participating endpoints were previously identified.

[0035] In one embodiment, the decision may be based upon the nearest point of presence to the endpoint. In embodiments, a point of presence may be any device that is part of the conference system. For example, in embodiments, the decision engine 106, media handling resources 108A-108C, and media transfer components 110A-110C may all be part of the conference system to which endpoints 102A-102D connect. In embodiments, SBCs 104A-104C may also be part of the conferencing system, or may be external components to the conferencing system. In embodiments, the components that are part of the conferencing system may be connected by a dedicated, high-speed network that is used to facilitate the transfer of data between the various different components. As such, data may be transmitted between the conference components at a higher rate. However, devices that are not part of the distributed conference system, such as endpoints 102A-102D, may have to connect to the conferencing system using an external network, such as the Internet. A low quality network results in more errors when transmitting data, such as video streams, which negatively impact the conference quality. The external network may have lower bandwidth and/or rates of data transmission, thereby resulting in lag when communicating with the conferencing system. As such, reducing use of a low quality network, such as the Internet, by directing an endpoint to the nearest conferencing component increases conference quality. [0036] In embodiments, lag due to data transmission over an external network may be reduced by connecting an external device to the nearest point of presence that is part of the dedicated conference network. For example, the nearest point of presence may be determined based on proximity, which may be geographical proximity or network proximity. In embodiments, geographic proximity may relate to the distance between the physical location of the dedicated conference component and the endpoint. Network proximity, on the other hand, may relate to the number of servers that must be used to transmit the data from the endpoint to the dedicated conference component. Reducing the number of hops it takes to establish a communication connection with the dedicated conference component provides a better conferencing experience for the endpoint by reducing lag. As such, the decision engine 106 may determine which media handling resource, or other dedicated conferencing component, to direct the endpoint to by identifying the nearest point of presence geographically or based on network proximity. Connecting an endpoint to a point of presence within close geographic or network proximity may provide for lower latency and better error resilience.
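
The nearest-point-of-presence selection of [0036] reduces to a minimization over either geographic distance or hop count, as in this sketch; the Euclidean distance is a simplified stand-in for a real geodesic computation, and the data shapes are assumptions.

```python
import math

def geo_distance(a, b):
    # Simple Euclidean stand-in for real geodesic distance; (lat, lon) tuples.
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_pop(endpoint_loc, pops, use_hops=False):
    """pops: list of dicts like {"id": "108A", "loc": (40.7, -74.0), "hops": 4}."""
    if use_hops:
        return min(pops, key=lambda p: p["hops"])   # network proximity
    return min(pops, key=lambda p: geo_distance(endpoint_loc, p["loc"]))
```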

[0037] In embodiments, service provider information may also factor into the determination performed by the decision engine 106 for the initial contextual provisioning. Examples of service provider information may include, but are not limited to, capacity of the service provider by time of day, cost per region by time of day, feature set purchased by the service provider and customer by time of day, traffic on the service provider, or any other types of service provider factors. Dynamic information may also factor into the initial contextual provisioning, such as information related to the current status of the network, the status of the different components of the distributed conferencing system 100, current bandwidth, current traffic, current number of users, etc.

[0038] Based upon the call information received from an SBC, the decision engine determines an initial provisioning for the endpoint and directs the endpoint to a particular conference component. For example, in the illustrated embodiment, endpoint 102A is directed to media handling resource 108A and media transfer component 110A, endpoint 102B is directed to media handling resource 108B and media transfer component 110B, and endpoint 102C is directed to media handling resource 108C and media transfer component 110C. In embodiments, the decision may be determined based on proximity of the nearest points of presence. For example, endpoint 102A, media handling resource 108A, and media transfer component 110A may be located in the same geographic region or have close network proximity. However, in other embodiments, the endpoints may be directed to different media handling resources and media transfer components. The media transfer and media handling components may comprise hardware, software, or a combination of hardware and software capable of performing the functionality described herein.

[0039] In embodiments, upon determining the initial provisioning, the decision engine 106 routes the call from an endpoint to a particular media handling resource. Routing the call by the decision engine 106 may include sending an instruction from the decision engine to the SBC or to the endpoint device itself to connect to a particular media handling resource, such as media handling resources 108A-108C. In such embodiments, the call is directed from a particular endpoint, such as endpoint 102A, to a particular media handling resource, such as media handling resource 108A, as illustrated in the embodiment of FIG. 1. In one embodiment, the connection to the media handling resource may be established through the SBC, as illustrated in FIG. 1. However, in an alternate embodiment, once the initial contextual provisioning is performed by the decision engine 106, the endpoint may directly communicate with the media handling resource over a network. The decision as to whether or not the endpoint communicates directly with the media handling resource may be based upon the communication protocol used by the endpoint, a load balancing algorithm, or any other static or dynamic information considered by the decision engine 106.

[0040] In embodiments, the media handling resource, such as media handling resources 108A-108C, may be employed to provide interoperability between the different devices of different endpoints, e.g., endpoints 102A-102D, that support different capabilities and/or different communication protocols. In embodiments, the media handling resource to which an endpoint device is directed is capable of supporting similar communication protocols and/or capabilities as the endpoint device. As such, the media handling resource is capable of receiving and sending streams of data (e.g., video and/or audio data) that are in a format that the endpoint device supports. In further embodiments, the decision engine 106 may communicate with a media handling resource over a network to provide instructions to the media handling resource on how to format data streams that the media handling resource provides to the endpoint device or devices.

[0041] In conferences with multiple participants or multi-camera endpoints, multiple streams of data may be transmitted, with each stream of data carrying input data (e.g., video and audio data) from each of the participants in a conference. Depending on endpoint capabilities, an endpoint device may or may not be able to decode multiple streams of data simultaneously. As used herein, a multi-decode endpoint is an endpoint that is capable of decoding multiple video streams and multiple audio streams. Components communicating with a multi-decode endpoint, such as a media handling resource, may forward multiple streams to a multi-decode endpoint. As used herein, a single-decode endpoint is an endpoint that is capable of receiving and decoding only a single stream of video data and audio data. As such, components communicating with a single-decode endpoint, such as a media handling resource, may only send a single stream of data to a single-decode endpoint. In such embodiments, if multiple streams are to be sent to the single-decode endpoint, the media handling resource may transcode the multiple streams into a single, transcoded stream and send the transcoded stream to the single-decode endpoint. [0042] For example, referring to FIG. 1, one or more devices associated with endpoint 102A may be capable of handling only a single stream of data. Under such circumstances, as part of routing a call, decision engine 106 may instruct media handling resource 108A to format data sent back to endpoint 102A into a composite stream. A composite stream may be a stream of data that is formed by compositing two or more streams of data into a single stream. For example, data received from endpoints 102B and 102C may be composited by media handling resource 108A into a single stream that includes information from the two endpoints. The composite stream may then be returned to a device at endpoint 102A. However, if the device or devices at an endpoint are capable of decoding multiple streams, the decision engine 106 may instruct the media handling resource communicating with the endpoint to return multiple streams to the one or more devices at the endpoint. This reduces the processing resources required by the media handling resource, thereby allowing the media handling resource to handle a larger load of traffic. It also permits endpoints that can decode multiple streams to make full use of the separate data streams, such as by displaying each separate video stream on a different endpoint device.
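
The single-decode versus multi-decode routing decision of [0041]-[0042] can be sketched as follows; the composite function is a placeholder for the decode, tile, and re-encode work a real media handling resource would perform.

```python
def streams_for_endpoint(multi_decode: bool, streams: list) -> list:
    """Return the stream set a media handling resource sends to an endpoint."""
    if multi_decode:
        return streams              # endpoint decodes each stream itself
    return [composite(streams)]     # one transcoded, composited stream

def composite(streams: list) -> bytes:
    # Placeholder: a real MPR would decode, tile, and re-encode the inputs.
    return b"".join(streams)
```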

[0043] In embodiments, the media handling resource receives one or more input data streams from an endpoint and normalizes the one or more input data streams into one or more media transfer streams. For example, endpoints participating in the distributed conference may send different types of data streams according to different codecs. Example codecs that may be supported by different endpoints include, but are not limited to, H.263 AVC, H.264 AVC, Microsoft's RT Video codec, Skype's VP8 codec, H.264 SVC, H.265, etc. While specific codecs are identified as being supported by endpoints, the supported codecs are provided as examples only. One of skill in the art will appreciate that other types of codecs may be employed with the systems and methods disclosed herein. In embodiments, a media transfer stream is a stream of data formatted to be compatible with a media transfer component, such as media transfer components 110A-110C. In embodiments, the media transfer stream may be a format optimized for sharing over a network. Once the one or more data streams are normalized into media transfer streams, the one or more normalized streams may be provided to a media transfer component associated with the media handling resource. The decision engine 106 may provide instructions to the media handling resource identifying the media transfer component that the media handling resource may communicate with for a particular conference. [0044] The normalization of the one or more data streams from the one or more endpoints results in multiple similarly formatted data streams for each endpoint input stream received by a media handling resource, thereby addressing incompatibility problems between the different endpoints, which may have different capabilities and support different communication protocols. For example, in FIG. 1, media handling resources 108A-108C normalize the data streams received from endpoints 102A-102D, respectively, and provide the normalized data streams to media transfer components 110A-110C. The media transfer components 110A-110C may transmit the normalized streams to other media transfer components participating in the conference via a network. For example, media transfer component 110A may transmit the normalized data stream received from media handling resource 108A to media transfer components 110B and 110C. Similarly, media transfer component 110B may transmit the normalized data stream received from media handling resource 108B to media transfer components 110A and 110C, and media transfer component 110C may transmit the normalized data stream received from media handling resource 108C to media transfer components 110A and 110B. As such, in embodiments, the media transfer components (e.g., media transfer components 110A-110C) may be employed to provide communication across the distributed conference system 100. In embodiments, media transfer components are capable of simultaneously transmitting multiple streams of media transfer component data to multiple media transfer components, and receiving multiple data streams from multiple media transfer components. Furthermore, in embodiments, a media transfer component may operate on a general purpose computer, as opposed to a dedicated piece of hardware, such as an MCU.
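
Normalization per [0043] amounts to decoding each endpoint's native codec and re-encoding into a common media transfer format, roughly as below; the codec list mirrors the examples in the text, while the transfer format and helper functions are assumptions.

```python
SUPPORTED = {"H.263 AVC", "H.264 AVC", "H.264 SVC", "H.265", "RT Video", "VP8"}

def normalize(stream: bytes, codec: str) -> bytes:
    """Convert one endpoint input stream into a media transfer stream."""
    if codec not in SUPPORTED:
        raise ValueError(f"no decoder available for {codec}")
    decoded = decode_with(codec, stream)       # hypothetical decoder call
    return encode_transfer_format(decoded)     # hypothetical re-encode step

def decode_with(codec: str, stream: bytes) -> bytes:
    return stream   # stand-in for a real codec-specific decoder

def encode_transfer_format(frames: bytes) -> bytes:
    return frames   # stand-in for the network-optimized transfer format
```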

[0045] In embodiments, multiple endpoints may rely on the same or similar conferencing components. For example, if endpoints 102C and 102D are in the same region, they may share the same SBC 104C, media handling resource 108C, and media transfer component 110C. In such embodiments, the media handling resource 108C may receive one or more individual streams from one or more devices located at endpoints 102C and 102D. In such embodiments, the media handling resource 108C may create an individual normalized stream for each stream received from devices at endpoints 102C and 102D and provide the individual normalized streams to the media transfer component 110C. This allows a media transfer component, such as media transfer component 110C, to share each stream individually with the other media transfer components (e.g., media transfer components 110A and 110B), as opposed to creating one composite stream out of streams received from endpoints 102C and 102D. This permits contextual provisioning while providing greater flexibility in providing individual, uncomposited streams to endpoints that can handle such individual streams, even if the streams originated from disparate endpoints using different communications protocols. [0046] In embodiments, each media transfer component transmits the media transfer streams received from other media transfer components participating in the conference to its respective media handling resource. In such embodiments, the media handling resource may convert the one or more received media transfer streams into a data stream format supported by the endpoint communicating with the media handling resource, and transmit the converted data stream to the endpoint. The endpoint device may then process the data stream received from the media handling resource and present the data to a user (e.g., by displaying video, displaying graphical data, playing audio data, etc.).

[0047] In embodiments, the multiple streams that make up the input from the various endpoints participating in a conference may be cascaded. Cascading the streams may comprise making each individual input stream generated by a device at an endpoint available to each conferencing component. For example, any one of the media handling resources 108A-108C or any of the media transfer components 110A-110C may receive an individual input stream from any device from endpoints 102A-102D. As such, in embodiments, every individual input stream from each endpoint may be made available to each conferencing component. Prior conferencing systems employing MCUs did not provide this ability. In MCU conferencing systems, each MCU is only able to transmit a single stream to other MCUs. In such a system, endpoints A and B may be connected to MCU 1 and endpoints C and D may be connected to MCU 2. Because MCU 1 and MCU 2 can only transmit a single stream between each other, MCU 1 would receive a single stream CD (representing a composite of streams C and D from MCU 2), and MCU 2 would receive a single stream AB (representing a composite of streams A and B from MCU 1). Transmitting composite streams between MCUs provides many drawbacks that may result in poor quality. For example, among other drawbacks, the MCUs' inability to communicate multiple streams between each other removes the ability for each MCU to modify individual streams according to specific endpoint requirements and results in poor compositions of stream data. However, by providing the ability of each conferencing component to receive individual streams of data from each endpoint in a conference, the distributed conferencing system 100 does not suffer the same drawbacks as prior conferencing systems.

[0048] Once the conference is provisioned by the decision engine 106, feedback data may be received and monitored by the decision engine 106 from one or more of the media handling resources, media transfer components, SBCs, or other devices involved in the conference. In embodiments, feedback information is received by decision engine 106 in a real-time, continuous feedback loop. The decision engine 106 monitors the data received in the feedback loop to provide continuous or periodic contextual provisioning for the duration of the conference. For example, the decision engine 106 uses the feedback loop to analyze data related to the conference. Based upon the data, the decision engine may adjust the parameters of the conference, for example, by sending instructions to one or more components to reduce video quality in response to lower bandwidth, to direct communication from an endpoint to a new media handling resource, to involve more or fewer conference components in the conference, etc., in order to continually provide an optimal conference experience. As such, in embodiments, the decision engine 106 is capable of altering the conference in real-time based upon decision criteria to provide the best end user experience to all attendees in a conference. In embodiments, any of the static and dynamic information described herein may be analyzed by the decision engine 106 in conjunction with decision criteria when performing real-time contextual provisioning during the conference. [0049] Unlike an MCU system, the distributed conferencing system 100 cascades the input streams received by each media handling resource 108A-108C from the various endpoints 102A-102D. For example, by employing cascading, the distributed conferencing system 100 may provide every input stream from the various endpoints 102A-102D participating in a conference call to every media handling resource 108A-108C. This allows the media handling resources 108A-108C to perform contextual provisioning as instructed by the decision engine 106 to tailor the one or more data streams that are returned to an endpoint.
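
The continuous feedback loop of [0048] might be structured like the following sketch; the metric names, thresholds, and instruction strings are invented for illustration and are not defined by the disclosure.

```python
def provisioning_loop(decision_engine, components):
    """Continuously re-provision the conference from component feedback."""
    while decision_engine.conference_active():
        feedback = [c.report() for c in components]   # MHRs, relays, SBCs
        for report in feedback:
            if report["bandwidth_kbps"] < 500:
                # Lower bandwidth observed: trade video quality for stability.
                decision_engine.instruct(report["component"], "reduce_video_quality")
            if report["load"] > 0.9:
                # Overloaded component: migrate its endpoints elsewhere.
                decision_engine.instruct(report["component"], "migrate_endpoints")
```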

[0050] In embodiments, providing access for all of the conferencing components to each endpoint data stream allows contextual provisioning to be performed such that the conference may be tailored to each endpoint. The tailoring may be based upon the capabilities of an endpoint, the network conditions for the endpoint, or any other type of static and/or dynamic variable evaluation. For example, a conference may include at least two endpoints. A first endpoint may support a low quality codec, such as, for example, a low quality video recorder on a smartphone. In embodiments, a low quality codec may be a codec that is used to display video on small screens, provides low quality video, etc. The second endpoint in the conference may have a large display screen, such as a display screen in a dedicated conferencing room. Displaying a low quality video on a large screen may result in a highly degraded image presented to the user. In such embodiments, the distributed conferencing system may employ contextual provisioning to instruct the second endpoint to display a smaller view of the data from the first endpoint rather than utilizing the large display, thereby providing a better image to the second endpoint.

[0051] For example, in embodiments, endpoint 102C may be a smart phone that supports a low quality codec. Endpoint 102C may be in a conference with a dedicated conference room such as endpoint 102B. Displaying video generated by endpoint 102C full screen on the one or more displays of endpoint 102B would result in a distorted or otherwise poor quality video display. To avoid this, the decision engine 106 may send contextual provisioning instructions to the media handling resource 108B that is communicating with endpoint 102B. Based upon the instructions, media handling resource 108B may format a data stream representing video recorded from endpoint 102C such that the video is not displayed in full screen at endpoint 102B. In another embodiment, rather than formatting the data stream, media handling resource 108B may include instructions with the data stream that instruct the one or more devices at endpoint 102B not to display the video in full screen.

[0052] In another embodiment, contextual provisioning may be employed to address network connection issues. For example, a first endpoint in a conference may have a poor quality network connection that is not capable of transmitting a quality video stream. The distributed conferencing system may employ contextual provisioning to instruct a conference component to send the audio input stream received from the endpoint without the video stream to avoid displaying poor quality images to other endpoints participating in the conference. In an alternate embodiment, the decision engine may instruct the media handling resource to send a static image along with the audio stream received from the endpoint instead of including the video stream. In such embodiments, the endpoint receiving data from the media handling resource may receive a data stream that allows it to play audio while displaying a still image. The still image may be an image produced based upon the removed video data or it may be a stock image provided by the video conferencing system. In such embodiments, the contextual provisioning may be based upon a quality of service, an instruction from a conference participant, or other criteria.

[0053] For example, the decision engine 106 may send instructions to a media handling resource 108C receiving an input stream from an endpoint 102C to convert the input stream from a video stream to an audio only stream. In embodiments, the decision engine 106 may send the instructions after determining that the endpoint 102C is communicating over a poor quality network or providing poor quality video. In another embodiment, rather than instructing the media handling resource 108C that receives the poor quality video input stream from the device to convert the input stream to audio only, the decision engine 106 may send instructions to a media handling resource communicating with another endpoint, such as media handling resource 108B, instructing that resource to convert the poor quality video to an audio only data stream before sending the audio only data stream to endpoint 102B. Furthermore, if the decision engine 106 makes a determination to send only audio data, the decision engine 106, in embodiments, may also provide users the ability to override the contextual provisioning decision made by the system via an access portal. For example, a user at endpoint 102B may use a conference application to access a portal on the decision engine 106 and override the decision to send an audio only input stream by selecting an option to receive the video stream. Upon receiving the selection through the portal, the decision engine 106 may instruct the media handling resource 108B to send the video input stream to the endpoint 102B.
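
The audio-only fallback and the portal override described in paragraphs [0052]-[0053] might reduce to logic like the following; the quality score and its threshold are invented for the sketch:

```python
# Illustrative sketch: downgrade a poor-quality video stream to audio only
# (optionally with a still image) unless the receiving user has overridden
# the decision via the access portal. The scoring scheme is an assumption.

MIN_ACCEPTABLE_VIDEO_SCORE = 0.4  # assumed quality metric in [0, 1]

def provision_stream(video_quality_score, user_override_video=False):
    if video_quality_score >= MIN_ACCEPTABLE_VIDEO_SCORE or user_override_video:
        return "send_audio_and_video"
    # A still image (derived from the removed video or a stock image) may
    # accompany the audio-only stream.
    return "send_audio_only_with_still_image"

print(provision_stream(0.2))                            # audio only
print(provision_stream(0.2, user_override_video=True))  # portal override
```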

[0054] In yet another embodiment, contextual provisioning may be employed to correctly display video and other visual conference data (e.g., shared electronic documents) based upon the hardware employed by different endpoints. For example, a conference may involve a high quality multiscreen endpoint with high quality networking and multiple cameras. In such embodiments, contextual provisioning may be employed to send each of the multi-codec images (images produced by the multiple cameras in the high quality multiscreen endpoint) to multi-decode endpoints (e.g., endpoints capable of decoding multiple audio and multiple video streams), while sending a single composited stream of data to single-decode endpoints (e.g., endpoints capable of decoding only a single audio and video stream). Furthermore, in such embodiments, the distributed conferencing system may employ contextual provisioning to instruct a conferencing component to encode a single composited stream that correctly groups the multiple images from the multiple data streams of the high quality multiscreen endpoint, thereby ensuring that the composited stream correctly displays the multiple images from that endpoint.

[0055] For example, endpoints 102B and 102C may be in a conference call. Endpoint 102B may be a high quality dedicated conference room that contains multiple cameras and multiple display screens. Endpoint 102C may be a single-decode endpoint that sends and receives a single stream of video data and audio data. Endpoint 102B may transmit multiple video input streams from the multiple cameras that are part of endpoint 102B. Based upon the capabilities of the device at endpoint 102C, decision engine 106 may send an instruction to media handling resource 108C to composite the multiple video input streams received from devices at endpoint 102B (through media handling resource 108B) into a single stream in a manner that correctly reconstructs the view of the conference room at endpoint 102B prior to sending a composite data stream to endpoint 102C. Because of the capabilities of the device at endpoint 102C, the device would otherwise be unable to properly reconstruct the video input streams received from endpoint 102B.
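
A minimal sketch of the capability-based delivery in paragraphs [0054]-[0055] follows; the dictionary shape and the string-based "composite" are illustrative assumptions:

```python
# Illustrative sketch: a multi-decode endpoint receives the individual
# streams, while a single-decode endpoint receives one composite stream
# that preserves the camera order of the sending room, so the view of the
# conference room is reconstructed correctly.

def streams_for_endpoint(endpoint, ordered_input_streams):
    if endpoint["multi_decode"]:
        # The endpoint can decode multiple audio/video streams itself.
        return list(ordered_input_streams)
    # Single-decode endpoint: composite in camera order.
    return ["|".join(ordered_input_streams)]

room_102B = ["cam_left", "cam_center", "cam_right"]
print(streams_for_endpoint({"multi_decode": False}, room_102B))
# -> ['cam_left|cam_center|cam_right']
print(streams_for_endpoint({"multi_decode": True}, room_102B))
# -> ['cam_left', 'cam_center', 'cam_right']
```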

[0056] In another embodiment, a low quality endpoint may join a conference. For example, endpoint 102D may join a conference already in progress among endpoints 102A-102C. Endpoint 102D may be a low quality endpoint (e.g., it may support a low quality codec or have a low quality network connection). In response to the addition of endpoint 102D, the decision engine 106 may select a different layout for the conference by provisioning different conference components (e.g., media handling resources) or by changing the formatting of the conferencing data. In embodiments, the decision engine 106 may send instructions to media handling resources 108A-108C instructing the resources to adjust the format of the conference (e.g., video quality adjustments, switching to audio only, or any other format changes) to account for the addition of the low quality endpoint 102D to the conference.

[0057] Although the distributed conference system 100 is illustrated as multiple distinct components, one of skill in the art will appreciate that the different components may be combined. For example, the media handling resource and the media transfer component may be combined into a single component that performs both functions. In embodiments, each individual component may be a module of computer-executable instructions executed by a computing device. In alternate embodiments, each component may be a dedicated hardware component. One of skill in the art will appreciate that the distributed conference system 100 may operate in a cloud-based environment that utilizes any number of different software and hardware components that perform the same and/or different functions.

[0058] In embodiments, the distributed conferencing system may provide an access portal that endpoints may use to schedule or join a conference. In one embodiment, the access portal may be an application operating on an endpoint device. In an alternate embodiment, the access portal may be a remote application run on a server accessible by an endpoint device. In embodiments, the access portal may comprise a graphical user interface that the endpoint device displays to a user. The graphical user interface may receive input from a user that allows for the scheduling of a conference, inviting attendees to a conference, joining a conference, exiting a conference, etc. In embodiments, the access portal may also be present during the conference to receive input to control the conference experience. For example, the access portal may receive commands such as muting the endpoint, changing the video quality, displaying data (e.g., displaying a document to other participants), contacting a service provider for assistance during a conference, or any other type of conference control. In further embodiments, the conferencing system may provide administrator access which can be used to change conference settings. The portal may also provide for moderator control, which allows a conference moderator to receive information about the conference and to make changes to a conference while the conference is in progress. For example, the portal may provide a moderator with the ability to add or remove endpoints, to mute endpoints, or to take any other type of action known in the art. In addition, a portal may provide a control that allows the user to make changes to the presentation of the conference at the user's endpoint or endpoint devices. In such embodiments, the input received via the moderator control and the user control may be used by the decision engine, along with other decision criteria (e.g., static and dynamic variables, endpoint capabilities, etc.), to perform contextual provisioning.

[0059] In further embodiments, the access portal may provide additional functionality such as allowing a user to view billing information, change service plans, receive and view reports pertaining to conferencing use, etc. As such, the access portal may also provide administrative options that allow for service changes and monitoring by the individual endpoints. In further embodiments, the portal may provide an administrator interface that allows for the adjustment of the decision criteria that the decision engine evaluates when performing contextual provisioning. For example, the administrator interface may provide for the selection and/or definition of the particular decision criteria that are used for contextual provisioning, the definition of preferences for certain decision criteria over others, or any other type of adjustment to the performance of the decision engine. The administrator portal may also allow the administrator to override decisions made by the decision engine 106 (e.g., to send a video stream that the decision engine 106 otherwise would not have sent to a particular endpoint). In embodiments, the portal may be an application resident on an endpoint device or it may be a web application resident on the decision engine or any other server that is part of the conferencing system.

[0060] As such, in embodiments, the access portal may provide three types of control based upon the permission level of the user accessing the portal. The first level of control may be an admin level. Admin level control may be used to adjust overall system settings and configuration. A second level of control may be moderator control. Moderator control may be used to control settings of the entire conference. For example, the moderator control allows for adjusting settings of different components in a conference and controlling how different endpoints receive conference data. A third type of control may be user control. User control may provide the ability to adjust settings only on the user device or to control what the user device displays. One of skill in the art will appreciate that other types of control may be employed without departing from the spirit of the disclosure.

[0061] While the embodiment of system 100 depicts a conferencing system with four endpoints 102A-102D, three SBCs 104A-104C, a single decision engine 106, three media handling resources 108A-108C, and three media transfer components 110A-110C, one of skill in the art will appreciate that a distributed conferencing system can support conferences between more or fewer endpoints. Additionally, a distributed conferencing system may include more or fewer conferencing components (e.g., decision engines, media handling resources, media transfer components, etc.) without departing from the spirit of this disclosure.

[0062] FIG. 2 is an embodiment of yet another distributed conferencing system 200 that may be employed to create a cloud-based interoperability platform for conferencing. FIG. 2 depicts an embodiment in which four endpoints 202A-202D are joined in a conference. The SBCs 204A-204C and the single decision engine 206 perform the functionality of the corresponding components described in FIG. 1. However, the system 200 depicts an embodiment in which one or more devices at an endpoint may communicate directly with a media handling resource after initially joining the conference, as illustrated by the communication arrow connecting endpoint 202A and media handling resource 208A. For example, during the initial provisioning of the conference, the decision engine 206 may direct endpoint 202A to media handling resource 208A. However, in embodiments, instead of establishing communication via an SBC, the endpoint may directly communicate with a media handling resource. In such embodiments, communication between the endpoint and the SBC may cease, and the SBC may no longer be a part of the conference.

[0063] FIG. 2 also illustrates an embodiment in which a conference may be conducted without use of media transfer components. In the distributed system 200, the media handling resources 208A-208C may communicate directly with one another. For example, media handling resource 208A may provide a data stream from endpoint 202A to media handling resources 208B and 208C, media handling resource 208B may provide a data stream from endpoint 202B to media handling resources 208A and 208C, and media handling resource 208C may provide a data stream from endpoint 202C to media handling resources 208A and 208B. In such embodiments, the one or more media handling resources may broadcast, unicast, or directly send data streams to other media handling resources that are part of the conference. For example, multiple unicast streams may be used for AVC to transmit the data received from an endpoint between the different media handling resources. The mode of communication (e.g., broadcast, unicast, directed streams, which codecs to apply, etc.) established between the media handling resources may be determined by the decision engine 206. When the mode of communication is determined by the decision engine 206, the decision engine 206 may send instructions to the one or more media handling resources 208A-208C related to the mode of communication. In one embodiment, the media handling resources 208A-208C may perform a conversion on the one or more streams of data (e.g., format the stream, normalize the stream, etc.) or may pass the one or more streams of data to other media handling resources unaltered. In yet another embodiment (not illustrated in FIG. 2), each endpoint 202A-202D may simultaneously broadcast or otherwise send data streams to each media handling resource that is part of the conference. FIG. 1 and FIG. 2 show two different systems employing two different methods of sharing input streams between media handling resources. However, one of skill in the art will appreciate that other system topologies and other methods may be employed to share data streams between media handling resources, or other components of a distributed conferencing system, without departing from the scope of the present disclosure.

[0064] FIG. 3 is an embodiment of a method 300, which may, in embodiments, be performed by a session border controller, such as the SBCs 104A-104C of FIG. 1 and SBCs 204A-204C of FIG. 2, to initiate a conference for an endpoint. In embodiments, the steps of method 300 may be performed by a dedicated piece of hardware or by software executed on a general computing device. In alternate embodiments, the steps of method 300 may be performed by computer-executable instructions executed by one or more processors of one or more general computing devices. Flow begins at operation 302 where a call is received from an endpoint device. Upon receiving the call, flow continues to operation 304 where call information is transmitted to a decision engine. In one embodiment, information about the call may be received in a data stream from the endpoint device. In another embodiment, data about the conference the endpoint is attempting to join is received from a datastore that contains information about a scheduled conference. In such embodiments, the data may be gathered and transmitted to the decision engine at operation 304, or the conference identifier may be transmitted to the decision engine, thereby allowing the decision engine to independently access information about the conference. In embodiments, any type of static or dynamic information that may be utilized by the decision engine to perform initial provisioning may also be transmitted to the decision engine at operation 304.
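
The flow of method 300 might be sketched as follows; the call dictionary, the provision method, and the instruction fields are hypothetical stand-ins for the SBC/decision engine exchange, not an API defined in this disclosure:

```python
# Illustrative sketch of method 300 on an SBC: receive a call (302), send
# call information to the decision engine (304), receive instructions (306),
# and perform them (308) by relaying the stream or redirecting the endpoint.

class StubDecisionEngine:
    """Hypothetical stand-in for the decision engine's provisioning API."""
    def provision(self, call_info):
        # e.g., route the call to an MPR near the calling endpoint.
        return {"mode": "relay", "mpr": "108A"}

def handle_incoming_call(call, decision_engine):
    # Operation 304: transmit call information (or a conference identifier
    # the engine can resolve itself) to the decision engine.
    info = {"endpoint": call["endpoint"],
            "conference_id": call["conference_id"]}
    instructions = decision_engine.provision(info)  # operation 306
    # Operation 308: either relay the stream for the call's duration, or
    # redirect the endpoint to connect directly, ending SBC involvement.
    if instructions["mode"] == "relay":
        return ("forward_stream_to", instructions["mpr"])
    return ("redirect_endpoint_to", instructions["mpr"])

print(handle_incoming_call(
    {"endpoint": "102A", "conference_id": "conf-1"}, StubDecisionEngine()))
```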

[0065] After transmitting call information to the decision engine, flow continues to operation 306 where instructions are received from the decision engine. In embodiments, the instructions from the decision engine may identify a media handling resource and/or media transfer component to which the call should be routed. In another embodiment, the decision engine may also provide additional instructions that may be used by other components in the distributed conferencing system. In such embodiments, the additional instructions may be passed to such other components when routing the call.

[0066] Flow continues to operation 308, where the instructions received at operation 306 are performed. In one embodiment, performing the instructions may comprise forwarding the call to a specific conference component identified in the instructions. This may be accomplished by forwarding the stream of data received from the endpoint device to a conference component, such as a media handling resource. In such embodiments, an SBC may maintain a connection with the endpoint device for the duration of the call. However, in alternate embodiments, an SBC may forward the call by providing instructions to the endpoint device to establish a connection with a specific conference component. The endpoint device may then establish a direct connection with the identified component, thereby ending the SBC's involvement in the conference. In such embodiments, any additional instructions received by the SBC from the decision engine at operation 306 may be transmitted to the endpoint device, which may then transmit them to other conference components accordingly.

[0067] Upon performing the instructions at operation 308, depending on the embodiment employed, an SBC or other device performing the method 300 may end its involvement in the conference or act as an intermediary between the endpoint device and another conferencing component, such as a media handling resource. While acting as an intermediary, the SBC may facilitate communications between the endpoint device and a conferencing component, thereby actively routing the call.

[0068] FIG. 4 is an embodiment of a method 400 for contextual provisioning. In embodiments, the method 400 may be employed by a decision engine, such as decision engine 106 and decision engine 206. In embodiments, the steps of method 400 may be performed by a dedicated piece of hardware. In alternate embodiments, the steps of method 400 may be performed by computer-executable instructions executed by one or more processors of one or more general computing devices. Flow begins at operation 402, where information is received related to one or more calls attempting to join a conference. In embodiments, the data received at operation 402 may comprise information about the endpoint(s) making the call, information about the conference, information about the conference participants, and/or any other type of static or dynamic information described herein.

[0069] Flow continues to operation 404 where the information received in operation 402 is analyzed to determine an optimal initial provisioning for an endpoint joining a conference. In embodiments, the decision engine determines and/or identifies specific components of the distributed conferencing system to direct the calls toward. In addition to analyzing the call information received at operation 402, data about the distributed conference system may be received or accessed. Data about the distributed conferencing system may include data related to the current network load and traffic, the workload of different components of the distributed conference system, or any other data about the distributed conference system. The data about the distributed conference system may be used, in embodiments, by the decision engine to determine an initial provisioning for an endpoint joining a conference.
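
The analysis at operations 402-404 might weigh call information against system data as sketched below; the scoring function, weights, and location labels are invented for illustration:

```python
# Illustrative sketch: choose a media handling resource for a joining
# endpoint using both proximity (from the call information) and current
# component workload (from data about the distributed conference system).

def choose_component(endpoint_location, components):
    def score(component):
        # Lower is better: distance to the endpoint plus weighted load.
        return (component["distance_to"][endpoint_location]
                + 10 * component["load"])
    return min(components, key=score)

components = [
    {"name": "108A", "load": 0.9, "distance_to": {"NYC": 1, "LON": 6}},
    {"name": "108B", "load": 0.2, "distance_to": {"NYC": 2, "LON": 5}},
]
print(choose_component("NYC", components)["name"])  # 108B: near and unloaded
```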

[0070] Flow continues to operation 406, where, based upon the determination made in operation 404, initial provisioning of the endpoint is performed for a conference. In embodiments, the decision engine may perform initial provisioning by routing the call to one or more specific components in the distributed conference system. In one embodiment, routing the call may be performed directly by a decision engine at operation 406. For example, in such an embodiment, the decision engine may forward a stream of data from the call to the one or more specific components identified for initial provisioning in the determine operation 404. In another embodiment, routing the call may comprise sending instructions to an SBC that initially received the call to forward the call to one or more specific components identified in the decision operation 404. In yet another embodiment, routing the call may comprise sending instructions to one or more devices associated with the endpoint that instruct the one or more devices to communicate with one or more components of the distributed conference system.

[0071] In addition to routing the call, the initial provisioning performed at operation 406 may include defining conference settings for the endpoint participating in the conference. In one embodiment, the conference settings may be determined based upon an analysis of the static and/or dynamic information performed in operation 404. In embodiments, a decision engine may send instructions to a device associated with an endpoint at operation 406 to adhere to the determined conference settings. In further embodiments, operation 406 may also comprise sending instructions to one or more distributed conference components to adhere to specific conference settings. For example, instructions may be sent to a media handling resource provisioned to interact with the endpoint at operation 406. Such instructions may direct the media handling resource to convert streams to a particular format for consumption by the one or more endpoint devices based upon the capabilities of the one or more endpoint devices.

[0072] For example, if the one or more endpoint devices are not capable of decoding multiple input streams, the media handling resource may be instructed to format multiple streams into a composite stream that may be transmitted to the one or more endpoint devices. If, however, the one or more endpoint devices are capable of decoding multiple streams, the media handling resource may be instructed to forward multiple data streams to the one or more endpoint devices at operation 406. In embodiments, one endpoint in a conference may receive a composited stream, while another, more robust endpoint may receive multiple streams in the same conference. In embodiments, the instructions may be sent to the one or more distributed components directly, or instructions may be sent to the one or more distributed components using an intermediary (e.g., via an SBC). In still further embodiments, instructions regarding conference settings or contextual provisioning may be sent to other endpoints participating in the conference at operation 406.

[0073] Upon performing the initial provisioning at operation 406, initial conference provisioning is established for each of the one or more endpoints joining the conference as identified by the call information received at operation 402. By analyzing the call information, an optimal initial contextual provisioning is provided. However, conditions may change during the call that can affect the quality of the conference for the endpoint. In order to maintain an optimal conference experience for the endpoint for the duration of the call, real-time feedback data related to the conference is monitored and the provisioning of the conference is adjusted accordingly.

[0074] Flow continues to operation 408 where feedback data is received from one or more conference components associated with the conference. In embodiments, the feedback data is received via one or more continuous feedback loop(s) and may comprise any static or dynamic data related to the conference call. The continuous feedback loop(s) may be received from one or more conference components associated with the endpoint that was initially provisioned at operation 406. However, the method 400 may be performed for every endpoint connecting to a conference. As such, data related to the other endpoints, and the distributed conferencing system components interacting with the other endpoints, may also be received in the one or more continuous feedback loop(s) at operation 408. Furthermore, the continuous feedback loop may include information related to changes in conference participants, such as endpoints joining or leaving the conference. In such embodiments, feedback data related to every component in the conference, as well as to the structure of the conference and the endpoints participating in it, may be received at operation 408.

[0075] Flow continues to operation 410, where the feedback data is analyzed to determine whether to adjust the contextual provisioning for the endpoint and/or components interacting with the endpoint in the conference to improve the quality of the conference. In embodiments, the determination may be based upon analyzing the feedback data to determine that the conference quality is being adversely affected. In such embodiments, if the quality is adversely affected, flow branches YES to operation 412 and real-time contextual provisioning is performed to address the adverse effects. In another embodiment, the conference may not be adversely affected, but, based on the feedback data, it may be determined that the conference quality may nonetheless be improved. For example, it may be determined that conference lag may be reduced by transitioning communications with an endpoint from a first conference component to a second conference component. In other embodiments, conference quality may be improved by adjusting or substituting conference components in order to optimize cost savings for the participant or the service provider. In such embodiments, flow branches YES to operation 412 and real-time contextual provisioning is performed to increase the quality of the conference. If, upon analysis of the feedback data, the quality of the conference is not adversely affected and cannot be improved, flow branches NO to operation 414.

[0076] At operation 412, real-time contextual provisioning is performed. In embodiments, real-time contextual provisioning may include instructing one or more devices at the endpoint to adjust conference settings. In another embodiment, real-time contextual provisioning may comprise instructing one or more distributed conference components to adjust conference settings. In yet another embodiment, the real-time contextual provisioning may further include migrating the call from a first conference component to a second conference component. For example, the call may be migrated for load balancing purposes, due to bandwidth or performance issues related to a particular conference component, or for any other reason. In such embodiments, the one or more endpoint devices may be instructed to establish a connection with a different conference component, or the conference component currently interacting with the one or more endpoint devices may be instructed to forward the call to a different conference component. As such, embodiments disclosed herein provide for performing real-time contextual provisioning based upon decision criteria analyzed against static and/or dynamic information related to the endpoints participating in a conference, the conference components, network performance, user service plan, conference structure, a change of participants to the conference, etc. In doing so, among other benefits, performance of the method 400 allows for an optimal conference experience for an endpoint involved in a conference.

[0077] Flow continues to operation 414 where a determination is made whether the one or more devices for the endpoint are still participating in the conference. If the endpoint is still participating in the conference, flow branches YES and the method returns to operation 410, where feedback data continues to be received and analyzed. If the endpoint is no longer participating in the conference, flow branches NO, and the method 400 ends.

[0078] FIG. 5 is an embodiment of a method 500 for transcoding conference information. In embodiments, the method 500 may be performed in order to allow communication between one or more endpoints having different capabilities and/or supporting different communication protocols. In one embodiment, the method 500 may be performed by a media handling resource. In embodiments, the steps of method 500 may be performed by a dedicated piece of hardware. In alternate embodiments, the steps of method 500 may be performed by computer-executable instructions executed by one or more processors of one or more general computing devices.

[0079] Flow begins at operation 502 where one or more input streams from one or more devices associated with an endpoint are received. In embodiments, the one or more input streams may be in a native format that is supported by the one or more devices comprising the endpoint. Flow continues to operation 504 where the one or more input streams are converted into a media transfer format. In embodiments, the media transfer format is a format that is compatible with one or more media transfer components that are part of a distributed conference system. In embodiments, the media transfer format may be optimized for transmission across a network. In embodiments where multiple input streams are received at operation 502, the media handling resource may convert the multiple input streams from a native format to a media transfer format in parallel.

[0080] Flow continues to operation 506 where the media handling resource transmits one or more streams to one or more other conference components. Transmitting the one or more media transfer formatted streams allows the one or more input streams from different endpoint devices to be shared with other conference components, such as other media handling resources, and, ultimately, other endpoints, as described with respect to the systems 100 and 200. The sharing of the streams may also be utilized for contextual provisioning. In embodiments, the method 500 may be performed continuously by a media handling resource for the duration of the endpoint's participation in the conference. In embodiments, multiple endpoints and/or multiple endpoint devices in a conference may use the same media handling resource. In such embodiments, the media handling resource may transmit a separate stream for each endpoint and/or endpoint device and provide separate streams for each device to other media transfer components, which, in turn, may transmit the streams individually. In another embodiment, the media handling resource may transmit the streams to other media handling resources without the use of a media transfer component, for example, by broadcasting, unicasting, or any other method.
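
Method 500 might be sketched as below; the stream dictionaries are placeholders, and real media payloads would be transcoded rather than re-tagged:

```python
# Illustrative sketch of method 500: convert native-format input streams to
# a media transfer format (operation 504), in parallel when there are
# several, and transmit each converted stream separately (operation 506).

from concurrent.futures import ThreadPoolExecutor

def to_transfer_format(native_stream):
    # Assumed: the transfer format is network-optimized and shared by the
    # conferencing components.
    return {"format": "media-transfer",
            "source": native_stream["source"],
            "payload": native_stream["payload"]}

def handle_input_streams(native_streams, transmit):
    with ThreadPoolExecutor() as pool:
        converted = list(pool.map(to_transfer_format, native_streams))
    for stream in converted:  # a separate stream per endpoint device
        transmit(stream)

handle_input_streams(
    [{"source": "102B-cam1", "payload": b"..."},
     {"source": "102B-cam2", "payload": b"..."}],
    transmit=print)
```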

[0081] FIG. 6 is an embodiment of a method 600 to convert media transfer streams to one or more native format streams supported by one or more endpoint devices. In embodiments, the method 600 may be performed in order to allow communication between one or more endpoints having different capabilities and/or supporting different communication protocols. In one embodiment, the method 600 may be performed by a media handling resource. In embodiments, the steps of method 600 may be performed by a dedicated piece of hardware. In alternate embodiments, the steps of method 600 may be performed by computer-executable instructions executed by one or more processors of one or more general computing devices.

[0082] Flow begins at operation 602 wherein instructions are received from a decision engine. In embodiments, the instructions from the decision engine may be used to determine the format of the native format stream for one or more endpoint devices. Flow continues to operation 604 where one or more media transfer formatted streams of data are received from one or more media transfer components. In embodiments, the one or more media transfer streams of data may be in a format compatible with the media transfer component. The one or more streams may represent input stream data from other participants (e.g., endpoints) participating in the conference. Flow continues to operation 606 where the one or more media transfer streams are converted to one or more native format streams. In embodiments, a native format stream is a format supported by one or more endpoint devices. In embodiments, the type of conversion performed at operation 606 may be determined by the instruction received at operation 602. For example, if the instruction received at operation 602 requires the creation of a composite stream, the conference component, such as a media handling resource, may convert multiple streams in a media transfer format into a single composite stream in a native format. On the other hand, if a pass-through instruction is received, multiple media transfer streams may be individually converted to a native format or, in embodiments where the one or more endpoint devices are compatible with the media transfer format, may not be converted at all in operation 606. Flow continues to operation 608 where the one or more converted streams are transmitted to one or more endpoint user devices directly or via an intermediary. In embodiments, the method 600 may be performed continuously by a media handling resource for the duration of the endpoint's participation in the conference.

[0083] With reference to FIG. 7, an embodiment of a computing environment for implementing the various embodiments described herein includes a computer system, such as computer system 700. Any and all components of the described embodiments (such as the endpoint devices, the decision engine, the media handling resource, the media transfer component, an SBC, a laptop, a mobile device, a personal computer, a smart phone, etc.) may execute as or on a client computer system, a server computer system, a combination of client and server computer systems, a handheld device, and other possible computing environments or systems described herein. As such, a basic computer system applicable to all these environments is described hereinafter.

[0084] In its most basic configuration, computer system 700 comprises at least one processing unit or processor 704 and system memory 706. The most basic configuration of the computer system 700 is illustrated in FIG. 7 by dashed line 702. In some embodiments, one or more components of the described system are loaded into system memory 706 and executed by the processing unit 704 from system memory 706. Depending on the exact configuration and type of computer system 700, system memory 706 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two.
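
Returning to method 600, the instruction-dependent conversion at operation 606 described in paragraph [0082] might be sketched as follows; the instruction names and stream representation are assumptions:

```python
# Illustrative sketch: the conversion applied to incoming media transfer
# streams depends on the decision engine's instruction -- composite the
# streams into one native stream, or pass each through individually.

def convert_for_endpoint(instruction, transfer_streams, native_format="native"):
    if instruction == "composite":
        # Combine multiple transfer-format streams into a single native one.
        return [{"format": native_format,
                 "payload": "+".join(s["payload"] for s in transfer_streams)}]
    if instruction == "pass_through":
        # Convert each stream individually (or leave it unconverted if the
        # endpoint already supports the media transfer format).
        return [{"format": native_format, "payload": s["payload"]}
                for s in transfer_streams]
    raise ValueError(f"unknown instruction: {instruction}")

streams = [{"payload": "A"}, {"payload": "B"}]
print(convert_for_endpoint("composite", streams))
print(convert_for_endpoint("pass_through", streams))
```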

[0085] Additionally, computer system 700 may also have additional features/functionality. For example, computer system 700 may include additional storage media 708, such as removable and/or non-removable storage, including, but not limited to, magnetic or optical disks or tape or solid state storage. In some embodiments, software or executable code and any data used for the described system is permanently stored in storage media 708. Storage media 708 includes volatile and non- volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.

[0086] System memory 706 and storage media 708 are examples of computer storage media. Computer storage media includes RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage, other magnetic storage devices, solid state storage or any other tangible medium which is used to store the desired information and which is accessed by computer system 700 and processor 704. Any such computer storage media may be part of computer system 700. In some embodiments, system memory 706 and/or storage media 708 may store data used to perform the methods or form the system(s) disclosed herein. In other embodiments, system memory 706 may store instructions that, when executed by the processing unit 704, perform a method for contextual provisioning 714, methods for transcoding data 716, and/or methods performed by a session border controller 718. In embodiments, a single computing device may store all of the instructions 714-718 or it may store a subset of the instructions. As described above, computer storage media is distinguished from communication media as defined below.

[0087] Computer system 700 may also contain communications connection(s) 710 that allow the device to communicate with other devices. Communication connection(s) 710 is an example of communication media. Communication media may embody a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media, which may embody computer readable instructions, data structures, program modules, or other data in a modulated data signal. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information or a message in the data signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as an acoustic, RF, infrared, and other wireless media. In an embodiment, instructions and data streams described herein may be transmitted over communications connection(s) 710.

[0088] In some embodiments, computer system 700 also includes input and output connections 712, and interfaces and peripheral devices, such as a graphical user interface. Input device(s) are also referred to as user interface selection devices and include, but are not limited to, a keyboard, a mouse, a pen, a voice input device, a touch input device, etc. Output device(s) are also referred to as displays and include, but are not limited to, cathode ray tube displays, plasma screen displays, liquid crystal screen displays, speakers, printers, etc. These devices, either individually or in combination, connected to input and output connections 712 are used to display the information as described herein.

[0089] In some embodiments, the components described herein comprise modules or instructions executable by computer system 700 that may be stored on computer storage media and other tangible media and transmitted in communication media. Computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Combinations of any of the above should also be included within the scope of computer readable media. In some embodiments, computer system 700 is part of a network that stores data in remote storage media for use by the computer system 700.

[0090] As previously described, components of the cloud-based interoperability platform, such as, for example, the exemplary system 100 of FIG. 1, may be implemented as software. As such, embodiments disclosed herein may be implemented as a software-defined network (SDN) interoperability architecture that may be dynamically scaled to support conferences of varying sizes. Such embodiments provide the flexibility to dynamically adjust to conferencing needs as they change over time. Additionally, certain embodiments disclosed herein may be implemented using standard computing devices and routers rather than specialized conferencing equipment. Use of standard computing devices allows a provider to implement an SDN interoperability architecture without incurring the added costs of purchasing dedicated conferencing equipment.

[0091] Using an SDN interoperability architecture also provides increased flexibility with respect to performing updates and adding new features. Again, because the components may be implemented as software modules executing on general computing devices, updates to the interoperability platform using an SDN architecture may be performed by pushing software updates out to the different devices that are part of the SDN interoperability architecture. Furthermore, additional features or capabilities may be added to the system using an SDN interoperability architecture without having to purchase specialized devices, thereby avoiding the limitations of conferencing networks that employ specialized devices.

[0092] FIG. 8 illustrates an example embodiment of a system implemented using an SDN interoperability architecture 800. FIG. 8 provides an exemplary embodiment of a distributed conferencing system, such as the conferencing system 100, implemented using software as an SDN. In embodiments, an SDN interoperability architecture may comprise various different planes for performing certain actions or services. In one embodiment, a system implemented using an SDN interoperability architecture may comprise an application plane 802, a control plane 804, and a media transfer plane 806. In embodiments, the application plane may comprise one or more resources (e.g., software applications, devices, etc.) that may be used to provide services. For example, in a conferencing environment, resources that are part of the application plane may be used to provide various different conferencing features. In one embodiment, contextual provisioning and other such services performed by a decision engine may be performed by applications residing in the application plane of an SDN.

[0093] For example, a decision engine application module 808 may be part of the application plane 802. A decision engine application module 808 may comprise a state manager capable of monitoring state information of one or more conferences and a decision engine that, based upon the state information and, potentially, additional factors such as historical information, time of day, number of scheduled calls, etc., is capable of making adjustments to one or more conferences resulting in better conference quality, better utilization of resources, cost savings, etc. In embodiments, the decision engine application module 808 may perform functions such as contextual provisioning. In embodiments, contextual provisioning may be employed to adjust the settings for a conference and to select the different components (e.g., hardware and software components) that are employed in the conference. A decision engine application module 808 may perform the contextual provisioning in real-time to provide an optimal conference experience based upon static and dynamic variables related to conference performance, endpoint capabilities (e.g., the capabilities of one or more user devices participating in the conference), network capabilities, level of service (e.g., feature sets provided by a service provider or purchased by a customer), and other factors. A decision engine application module 808 may use static and dynamic variables during the initial set up of a conference in addition to performing contextual provisioning throughout the duration of the conference. For example, decision engine application 808 may use static and dynamic variables during set up of a conference to select which modules of the distributed video conferencing system should be employed during the conference. Furthermore, the decision engine may continually monitor changes during a conference to both static variables (e.g., capabilities of devices joining or leaving the conference) and dynamic variables in real-time during the conference. The real-time monitoring provides for real-time contextual provisioning of components of the distributed conference system to provide an optimal conference experience throughout the duration of the conference. As such, the contextual provisioning method 400 described in FIG. 4 may be performed by the decision engine application module 808. In further embodiments, other functionality may be performed by the decision engine application module 808 to provide additional value added functionality to an audio and/or video conference. Such value added functionality includes, but is not limited to, providing a user interface and/or user customizable controls, tracking conferencing data, providing billing information, or any other type of conferencing related functionality.
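
The pairing of a state manager with a decision engine described in paragraph [0093] might be organized as sketched below; the state fields, factors, and adjustment rules are invented for the illustration:

```python
# Illustrative sketch of a decision engine application module 808: a state
# manager tracks per-conference state, and the decision engine combines
# that state with additional factors (time of day, scheduled calls) to
# choose an adjustment. All names and thresholds are assumptions.

class StateManager:
    def __init__(self):
        self.conference_state = {}  # conference_id -> latest snapshot

    def update(self, conference_id, snapshot):
        self.conference_state[conference_id] = snapshot

class DecisionEngineApplication:
    def __init__(self):
        self.state = StateManager()

    def adjust(self, conference_id, scheduled_calls, hour_of_day):
        snapshot = self.state.conference_state[conference_id]
        if snapshot["quality"] < 0.5 and scheduled_calls < 10:
            return "allocate_additional_resource"   # protect quality
        if 1 <= hour_of_day <= 5 and snapshot["participants"] == 0:
            return "deallocate_idle_resources"      # save cost off-hours
        return "no_change"

app = DecisionEngineApplication()
app.state.update("conf-1", {"quality": 0.3, "participants": 4})
print(app.adjust("conf-1", scheduled_calls=3, hour_of_day=14))
```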

[0094] In further embodiments, the decision engine application module 808 may also provide administrative options that allow for service changes and monitoring by the individual endpoints. In further embodiments, the portal may provide an administrator interface that allows for the adjustment of the decision criteria that the decision engine evaluates when performing contextual provisioning. For example, the administrator interface may provide for the selection and/or definition of the particular decision criteria that are used for contextual provisioning, the definition of preferences for certain decision criteria over others, or any other type of adjustment to the performance of the decision engine. The administrator portal may also allow the administrator to override decisions made by the decision engine application module 808. In further embodiments, the decision engine application module 808 may determine whether to allocate or deallocate resources in a cloud environment based on service provider considerations. As an example, the decision engine module 808 may decide to allocate additional resources (e.g., to maintain quality or specific features) or to deallocate resources. For example, resources may be deallocated, or additional resources may not be allocated, when the cost of maintaining the resource or allocating the additional resource outweighs the benefit of maintaining a specific level of quality for a particular user. As one of skill in the art will appreciate, the decision engine application module 808 may be updated and/or scaled to perform additional functionality, thereby allowing a conferencing provider to more easily implement value added services on its interoperability platform.
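
The cost-versus-benefit test for allocating cloud resources might reduce to a comparison like the one below; the monetized figures are purely illustrative:

```python
# Illustrative sketch: allocate (or keep) an additional cloud resource only
# when the value of the quality improvement it provides outweighs the cost
# of running it. The pricing model is an assumption.

def should_allocate(resource_cost_per_hour, quality_benefit,
                    benefit_value_per_hour):
    """quality_benefit in [0, 1]; benefit_value_per_hour monetizes it."""
    return quality_benefit * benefit_value_per_hour > resource_cost_per_hour

print(should_allocate(2.0, quality_benefit=0.8, benefit_value_per_hour=3.0))
# True: the improvement is worth more than the resource costs.
print(should_allocate(5.0, quality_benefit=0.4, benefit_value_per_hour=3.0))
# False: deallocate or decline to allocate.
```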

[0095] In embodiments, the decision engine application module 808 may be implemented as software executed on a general computing device, such as the computing system 700 illustrated in FIG. 7. In one embodiment, the decision engine application module 808 and any other applications that are part of the application plane 802 may be implemented on one or more devices or resources that are part of the application plane 802. In one embodiment, the one or more devices or resources may be dedicated to performing functions that are a part of the application plane 802; however, in other embodiments, a single device or resource may be utilized to perform functions that are part of the other planes (e.g., the control plane 804 and/or the media transfer plane 806) as well.

[0096] In embodiments, a system implemented using an SDN interoperability architecture may also comprise a control plane 804. In embodiments, the control plane 804 may be a link between an application plane 802 and a media transfer plane 806. Resources (e.g., applications, devices, etc.) that are part of the control plane 804 may control network functionality. As such, the control plane 804 comprises one or more applications capable of instructing resources (e.g., applications, devices, etc.) that are part of the media transfer plane 806, such as a network router or switch. Such instructions may relate to handling data or a data stream, forwarding of data or a data stream, where to send the data or data stream (e.g., addressing), etc. In further embodiments, resources that are part of the control plane 804 and/or the application plane 802 allow for the separation of processes from network devices, such as network switches 814. This allows for SDN chaining - that is, the dynamic inclusion of application services without reliance on hardware resources that are part of a network (e.g., functions and services that are traditionally provided in an application-specific integrated circuit (ASIC) implementation of a network device). As such, SDN chaining allows for the dynamic provisioning of services, provided by applications, based upon the needs of a particular data stream as it is transmitted by resources on the media transfer plane 806. For example, services provided by one or more MPRs 812 may be dynamically chained, e.g., inserted, into the data flow of the media transfer plane 806. As described, in embodiments the MPRs 812 may be software modules capable of performing different functions. Functionality may be added or changed by adding or modifying MPRs 812 and then executing the modified or added MPRs 812 as part of a data flow in the media transfer plane 806. Thus, functionality may be modified and performed without relying on ASIC-implemented functions of networking devices, such as routers, network switches, etc. This provides greater flexibility than systems that do not employ an SDN architecture because those systems rely on services provided by media transfer plane 806 resources (e.g., network switches, routers, etc.) that are often hard to update and/or dynamically adjust. Instead, an SDN architecture allows changes in services by providing new applications or updating applications that are not part of a network device and then dynamically executing the applications on data from a data stream that is transmitted using resources in the media transfer plane 806.
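
SDN chaining as described in paragraph [0096] might be sketched as function composition over a data flow; the service functions here are trivial placeholders for MPR software modules:

```python
# Illustrative sketch of SDN chaining: application services are inserted
# dynamically into the data flow rather than being baked into network
# hardware, so functionality changes by adding or modifying modules.

def make_chain(*services):
    def run(stream):
        for service in services:  # services applied in chain order
            stream = service(stream)
        return stream
    return run

# Hypothetical MPR software modules.
normalize = lambda s: s + "|normalized"
watermark = lambda s: s + "|watermarked"

chain = make_chain(normalize)             # initial provisioning
print(chain("stream-102A"))
chain = make_chain(normalize, watermark)  # a service chained in dynamically
print(chain("stream-102A"))
```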

[0097] In embodiments, some functionality provided by the decision engine 106 of FIG. 1 may be implemented and/or performed by one or more applications residing in the control plane of an interoperability system using an SDN architecture. In embodiments, the decision engine controller 810 may perform the functions of a decision engine, such as decision engine 106, that are related to managing network traffic flow. As such, the decision engine controller 810 may direct media processing resources and network switches 814 regarding how to conduct the flow of data on the media transfer plane 806. For example, the decision engine controller 810 may instruct endpoints that are part of a conference to connect to a particular media processing resource, such as media processing resource (MPR) 812. In further embodiments, decision engine controller 810 may communicate with MPR 812 over a network via an API, such as a southbound API or any other type of API and/or communication protocol, to provide instructions to the MPR 812 regarding how to format data streams that the MPR 812 provides to the endpoint device or devices. As such, one of skill in the art will appreciate that the interoperability between the decision engine and media handling resources described with respect to FIG. 1 may be handled by a decision engine controller 810 that is part of the control plane 804 of the SDN interoperability architecture 800.

[0098] In embodiments, the decision engine controller 810 may be implemented as software executed on a general computing device, such as the computing system 700 illustrated in FIG. 7. In one embodiment, the decision engine controller 810 and any other applications that are part of the control plane 804 may be implemented on one or more devices that are part of the control plane 804. In one embodiment, the one or more devices may be dedicated to performing functions that are a part of the control plane 804; however, in other embodiments, a single device may perform control plane 804 functions in addition to functions that are a part of the other planes (e.g., the application plane 802 and/or the media transfer plane 806). In embodiments, the decision engine controller 810 may communicate with a decision engine application 808 via an API, such as a northbound API or any other type of API and/or communication protocol. Decision engine controller 810 may also communicate with components on the media transfer plane 806, such as an MPR 812 and network switch 814, via an API, such as a southbound API.
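
The northbound/southbound split of paragraphs [0097]-[0098] might look like the following; the method names are placeholders and do not correspond to any real SDN controller's API:

```python
# Illustrative sketch: the decision engine application calls the controller
# over a northbound interface; the controller configures media transfer
# plane resources (MPRs, switches) over a southbound interface.

class DecisionEngineController:
    def __init__(self, mprs):
        self.mprs = mprs

    # Northbound: invoked by the decision engine application (808).
    def request_provisioning(self, endpoint_id, stream_format):
        mpr = self.mprs[0]  # selection logic omitted for brevity
        self.southbound_configure(mpr, endpoint_id, stream_format)
        return mpr["name"]

    # Southbound: instructions sent to media transfer plane resources.
    def southbound_configure(self, mpr, endpoint_id, stream_format):
        print(f"{mpr['name']}: serve {endpoint_id} with {stream_format}")

controller = DecisionEngineController([{"name": "MPR-812"}])
controller.request_provisioning("102C", "single_composite_stream")
```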

[0099] A system using an SDN interoperability architecture may also comprise a media transfer plane 806. In embodiments, the resources that are part of the media transfer plane 806 may be used to perform the actual transfer of data (e.g., processed streams, decoded streams, unprocessed streams, etc.) between devices in a network. In embodiments, the functionality performed by the media handling resources and the media transfer components described in FIG. 1 may be performed in the media transfer plane 806. As such, the media transfer plane 806 may be tasked with transferring media (e.g., audio data, video data, or other types of data) between devices that are part of the SDN interoperability platform 800 and, ultimately, between endpoint devices that are participating in a conference.

[0100] In embodiments, the media transfer plane 806 may comprise at least one media processing resource, such as MPR 812. In embodiments, MPR 812 may perform the operations of the media handling resources and the media transfer components described with respect to FIG. 1. One of skill in the art will appreciate that the media transfer plane 806 may comprise one or more MPRs 812 that are used to perform the capabilities of the media handling resource described in the system 100 of FIG. 1. As such, in embodiments, MPR 812 is capable of receiving and sending streams of data (e.g., video and/or audio data) that are in a format that the endpoint device supports. In further embodiments, the decision engine controller 810 may communicate with an MPR 812 over a network via an API to provide instructions to the MPR 812 on how to format data streams that the media handling resource provides to the endpoint device or devices.

[0101] The Open Systems Interconnection (OSI) model characterizes the functions of a communication system in terms of seven abstraction layers. The abstraction layers defined in the OSI model are the application layer, the presentation layer, the session layer, the transport layer, the network layer, the data link layer, and the physical layer. Data link layer forwarding may be performed by resources that are part of the data link layer in the OSI model without the intervention of resources (e.g., applications, devices, etc.) from a higher communication level (e.g., the network level, transportation level, application level, etc.). Data link layer forwarding may be performed with the embodiments disclosed herein to provide efficiencies in data transmissions. For example, resources that are a part of the media transfer plane 806 may be used to forward data without resources from the other planes (which are part of a different layer, e.g., the application plane 802, control plane 804, etc.). In other embodiments, the decoded data stream may be transported to other devices using a communication protocol such as, but not limited to, the Real-time Transport Protocol (RTP) or the Scalable Video Coding (SVC) protocol.

[0102] In further embodiments where the media transfer plane 806 comprises a plurality of media processing resources, each MPR 812 transmits the media received from an endpoint to the other MPR(s) 812 participating in the conference via a network switch 814. In one embodiment, the media transfer plane 806 comprises one or more network switches 814 to handle communications between a plurality of MPRs 812 and endpoint(s). In alternate embodiments, the media transfer plane 806 may comprise multiple network switches 814. In such embodiments, the media processing resource may convert the one or more received media transfers into a data stream format supported by the endpoint communicating with the media processing resource, and transmit the converted data stream to the endpoint. The endpoint device may then process the data stream received from the media processing resource and present the data to a user (e.g., by displaying video, displaying graphical data, playing audio data, etc.). In further embodiments, an MPR 812 may be capable of performing the methods 500 and 600 described in FIGs. 5 and 6 respectively.

[0103] As illustrated in FIG. 8, the embodiments described with respect to FIGs. 1-6 may be implemented in a system using an SDN interoperability architecture 800. The system implemented using an SDN interoperability architecture 800 is a highly flexible and scalable embodiment of the distributed network described with respect to FIG. 1. Furthermore, the various components of the system 800 may be implemented on one or more general computing devices, such as computer system 700, thereby allowing the creation of a scalable network for conferencing without having to acquire specialized devices. In embodiments, the components of the application plane 802, control plane 804, and media transfer plane 806 may be implemented on one or more devices that make up the system implemented using an SDN interoperability architecture 800. In one embodiment, a single device may be devoted to performing the functionality of a specific plane; however, in alternate embodiments, a single device may perform functionality that spans two or more planes. While the system implemented using an SDN interoperability architecture 800 is described as having three planes of operation, one of skill in the art will appreciate that the system implemented using an SDN interoperability architecture 800 may comprise more or fewer planes without departing from the scope of the present disclosure.
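
One possible mapping of the three planes onto hosts is sketched below; the host names are invented, and, as noted above, a single host may serve more than one plane.

    # Hypothetical plane-to-host deployment map.
    deployment = {
        "application_plane": ["host-a"],               # decision engine application 808
        "control_plane": ["host-a", "host-b"],         # controller 810 shares host-a
        "media_transfer_plane": ["host-c", "host-d"],  # MPRs 812 and switches 814
    }
    for plane, hosts in deployment.items():
        print(plane, "->", ", ".join(hosts))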

[0104] In embodiments, the resources described herein, e.g., one or more decision engine applications 808, one or more decision engine controllers 810, one or more MPRs 812, and/or one or more network switches, may reside in the same or different geographic locations. For example, the system implemented using an SDN interoperability architecture 800 may be located across multiple data centers and geographic locations, with different components located in different areas. In embodiments, an endpoint may be directed to resources located in the closest network or geographic proximity when joining a conference; however, endpoints may be directed to any resource in the system implemented using an SDN interoperability architecture 800. Thus, as will be apparent to one of skill in the art, the components of a system implemented using an SDN interoperability architecture 800 may be located in a single location or across multiple locations.
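
Proximity-based selection of a resource might be sketched as follows, using great-circle distance between coordinates; the MPR names and locations are invented for the example.

    import math

    def _haversine_km(a, b):
        # Great-circle distance between two (lat, lon) pairs, in kilometres.
        lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
        h = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * 6371 * math.asin(math.sqrt(h))

    def nearest_mpr(endpoint_loc, mprs):
        # Direct the endpoint to the geographically closest MPR.
        return min(mprs, key=lambda m: _haversine_km(endpoint_loc, m["loc"]))

    mprs = [{"name": "mpr-nyc", "loc": (40.7, -74.0)},
            {"name": "mpr-lon", "loc": (51.5, -0.1)}]
    print(nearest_mpr((41.0, -73.5), mprs)["name"])  # -> mpr-nyc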

[0105] FIG. 9 is an embodiment of a method 900 for distributing decoded streams between multiple media processing resources. Flow begins at operation 902 where a device, such as, but not limited to, a media processing resource, receives a stream of data. In embodiments, the stream received at operation 902 may be generated by an endpoint or another component of the interoperability system. In embodiments, the stream may be received directly from the device that generated it or via an intermediary such as a network switch. In embodiments, the stream of data may be a data stream that was generated by an endpoint device, another media processing resource, or a component in an SDN interoperability platform. Flow continues to operation 904 where a determination is made as to whether the received data stream is an unprocessed stream. In embodiments, an unprocessed stream is a stream generated by an endpoint device that is part of a conference. The unprocessed stream may be a stream of data that is in a format native to the endpoint device. In embodiments, flow branches YES to operation 906 where the unprocessed stream is decoded. The device(s) performing the method 900 may decode the unprocessed stream into a decoded stream format. A decoded stream format may be a data stream that is formatted for distribution between multiple components in a system implemented using an SDN interoperability architecture. For example, the decoded stream format may be a stream formatted for optimized transmission between multiple devices, may be formatted for optimized processing, may be formatted to include specific features, functionality, or data, or may be modified in any other manner from the unprocessed stream native to the endpoint device. Upon formatting the stream at operation 906, flow continues to operation 908 where the decoded stream is processed and/or provided to other components of the system implemented using an SDN interoperability architecture.

[0106] Returning to operation 904, if the stream is not an unprocessed stream, flow branches NO to operation 908 where the stream is processed and/or provided to other components in the SDN platform. In an embodiment, the decoded data stream may be efficiently sent to the one or more additional components using data link layer forwarding. Data link layer forwarding may be performed by the data link layer without the intervention of resources (e.g., applications, devices, etc.) from a higher communication level (e.g., the network level, transport level, application level, etc.). In other embodiments, the decoded data stream may be transported to other devices using a communication protocol such as, but not limited to, RTP or SVC.
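
The branch structure of method 900 may be sketched as follows; the decode and distribute bodies are placeholders for the real decoding and forwarding work.

    def handle_stream(stream):
        # Operation 904: branch on whether the stream is unprocessed.
        if stream["kind"] == "unprocessed":
            stream = decode(stream)   # operation 906: decode once
        distribute(stream)            # operation 908: process and/or forward

    def decode(stream):
        # Stand-in for decoding the endpoint's native format.
        return {"kind": "decoded", "data": stream["data"]}

    def distribute(stream):
        print("forwarding", stream["kind"], "stream to peer components")

    handle_stream({"kind": "unprocessed", "data": b"\x00\x01"})  # -> decoded path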

[0107] In embodiments, each media processing resource that is part of a system implemented using an SDN interoperability architecture may perform the operation 904 on data streams that it receives. As such, only decoded streams are transferred between media processing resources in a conference, thereby providing a savings in processing resources by alleviating the requirement that every media processing resource (or some other component) in an SDN interoperability platform decode the same unprocessed stream. While embodiments of the method 900 have been described with respect to a system implemented using an SDN interoperability architecture, the method 900 may be performed by components of other types of interoperability systems, such as a media handling resource of a distributed conferencing system, as described with respect to FIG. 1.

[0108] FIG. 10 is an embodiment of a flow 1000 of data when the method 900 is performed by components of a system implemented using an SDN interoperability architecture. In the example embodiment, endpoint 1002 generates a stream of unprocessed data 1004 and sends the unprocessed stream 1004 to MPR 1006. The MPR 1006 determines that it received an unprocessed data stream, decodes the data stream, and sends the decoded data stream 1008 to other media processing resources, e.g., MPR 1010 and MPR 1012, which may be allocated as resources for a conference. Because MPR 1010 and MPR 1012 receive a decoded data stream 1008, the MPRs 1010 and 1012 can process the decoded stream 1008 without duplicating the decoding process performed by MPR 1006. In embodiments, the decoded stream 1008 may be transmitted to MPR 1010 and MPR 1012 using data link layer forwarding. Data link layer forwarding may be performed by the data link layer without the intervention of resources (e.g., applications, devices, etc.) from a higher communication level (e.g., the network level, transport level, application level, etc.). For example, network switch 1014 may forward the decoded stream to MPR 1010 and MPR 1012 without requiring additional processing or handling of the data streams by resources from higher levels of the architecture, such as resources that are part of a control plane, an application plane, a network layer, etc. As such, the decoded data stream may be delivered to MPR 1010 and MPR 1012 more efficiently.

[0109] FIG. 11 is an embodiment of a method 1100 for performing capacity management. In embodiments, the method 1100 may be performed by a decision engine application, a decision engine controller, a decision engine, or by any other device or application. In embodiments, flow begins at operation 1102 where information about the state of a system is received. In embodiments, the system may be a system implemented using an SDN interoperability architecture, a distributed conferencing system, or any other type of system. In embodiments, the state information may include information such as, but not limited to, the number of conferencing calls, the number of media processing resources or media handling resources available, the workflow capacity of each media processing resource or media handling resource, the amount of used/available bandwidth, the number of sessions available, the number of used sessions, or any other type of information related to the state of the system.
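
An illustrative snapshot of the state information received at operation 1102 is sketched below; the field names and figures are assumptions chosen only to mirror the categories listed above.

    # Hypothetical state snapshot consumed by the capacity management method.
    system_state = {
        "active_calls": 12,
        "mprs_available": 4,
        "mpr_workload": {"mpr-1": 0.80, "mpr-2": 0.35},  # fraction of capacity in use
        "bandwidth_used_mbps": 620,
        "bandwidth_total_mbps": 1000,
        "sessions_used": 48,
        "sessions_available": 52,
    }
    print("session headroom:", system_state["sessions_available"])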

[0110] Flow continues to operation 1104, where a determination is made as to the amount of resources and/or processing that will be required to manage the load requirements of one or more resources. In embodiments, the determination may take into account the needs at a specific point in time. For example, the method 1100 may determine the amount of resources required to handle the load requirement in the next minute, ten minutes, half hour, hour, six hours, etc. One of skill in the art will appreciate that the time period may be based upon any interval. In further embodiments, the interval may dynamically change based upon current conditions. For example, during peak hours, the determination may select a shorter interval, such as ten minutes or a half hour. Alternatively, at non-peak hours, the need may be determined for a longer interval of time. The determination may be based on factors such as information about scheduled calls in the near future, the time of day, the maximum capacity of the individual components or the network as a whole, the history of call volumes, etc.
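
A sketch of the interval selection and the resource estimate described above follows; the peak-hour window and the per-MPR call capacity are invented figures.

    def lookahead_minutes(hour, peak=range(9, 18)):
        # Shorter forecasting window during (assumed) peak hours.
        return 10 if hour in peak else 60

    def mprs_needed(expected_calls, calls_per_mpr=20):
        # Ceiling division; the per-MPR capacity is an illustrative figure.
        return -(-expected_calls // calls_per_mpr)

    print(lookahead_minutes(10), mprs_needed(45))  # -> 10 3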

[0111] In embodiments, the determination may be based upon cost savings. For example, the decision to allocate or deallocate additional resources may be based upon the service pack (e.g., quality or set of features) purchased by a particular user. Based upon the user's service pack, the decision engine or decision engine application module may decide to allocate additional resources (e.g., to maintain quality or specific features) or to deallocate resources. For example, resources may be deallocated, or additional resources may not be allocated, when the cost of maintaining the resource or allocating the additional resource outweighs the benefit of maintaining a specific level of quality for a particular user. As such, in embodiments, the decision may be based upon a comparison of a user's service pack against the cost of allocating resources to provide specific features or maintain a specific quality level.
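
The cost comparison might reduce to something as simple as the sketch below, where the numeric "value" of each service pack and the resource cost are invented stand-ins for a real pricing model.

    SERVICE_PACK_VALUE = {"basic": 1.0, "premium": 5.0}  # illustrative weights

    def should_allocate(pack, resource_cost):
        # Allocate only when the user's tier justifies the added cost.
        return SERVICE_PACK_VALUE[pack] >= resource_cost

    print(should_allocate("premium", 3.0))  # -> True
    print(should_allocate("basic", 3.0))   # -> False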

[0112] Flow continues to operation 1106 where, based at least in part upon a comparison of the state information and the determination of resource needs, a prediction is made as to whether more (or fewer) resources are required to handle current and future conferencing requirements. In embodiments, modification of resources may either increase or decrease the resources allocated for conferencing. For example, in a cloud environment, resources may be dynamically allocated upon request. A service provider that requests resources may be charged per clock cycle that the resource is allocated to it. Thus, if a resource is not being used, the service provider may want to deallocate the resource in order to avoid unnecessary costs incurred for holding unused resources. Based upon evaluation of the current state information and additional factors (e.g., the factors used to determine resource needs), a determination is made at operation 1106 whether to modify the number of allocated resources for the system.
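
The decision at operation 1106 may be sketched as a comparison of allocated capacity against forecast need; the MPR counts below are illustrative.

    def plan_capacity(allocated_mprs, needed_mprs):
        # Compare the current allocation against the forecast need.
        if needed_mprs > allocated_mprs:
            return ("allocate", needed_mprs - allocated_mprs)
        if needed_mprs < allocated_mprs:
            # Release idle resources to stop paying for unused clock cycles.
            return ("deallocate", allocated_mprs - needed_mprs)
        return ("hold", 0)

    print(plan_capacity(4, 6))  # -> ('allocate', 2)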

[0113] Based upon the analysis, if the number of resources should not be modified, flow branches NO and returns to operation 1102, where monitoring of the state of the system continues. However, if the number of resources should be modified, flow branches YES to operation 1108 and the number of resources allocated to the system is modified. For example, extra media processing resources or media handling devices may be allocated to the system in order to handle the current and future needs of the system. Similarly, as discussed above, resources may be deallocated from the system if the analysis performed at operation 1106 determines that the system has more resources allocated than necessary to handle current needs. As such, system resources may be deallocated and used for other purposes. Because the needs of the system may constantly change, flow returns to operation 1102 and operation continues. As such, the method 1100 provides for capacity management by performing dynamic resource monitoring to ensure that the system has enough resources allocated to handle call needs without wasting resources. A loop sketch covering these operations appears after the following paragraph.

[0114] Additional embodiments of the present disclosure relate to providing the ability of a legacy conferencing device, such as an MCU, to join a distributed conferencing system, such as the distributed conferencing network 100 of FIG. 1 or the system implemented using an SDN interoperability architecture 800 of FIG. 8. In embodiments, an MCU may be updated to act similarly to a media handling resource or a media processing resource. In such embodiments, an MCU may act as an edge server capable of serving one or more endpoints. The MCU may forward data streams received from the endpoints connected to it to other media handling resources or MPRs in the network. In embodiments, the forwarded streams may be subject to limitations of the MCU. For example, the MCU may not be able to send a separate stream for each endpoint connected to it, but rather send a single composite stream of every endpoint connected to it. The MCU may also receive processed data streams from other media handling resources or MPRs comprising information from other endpoints in a conference. The MCU may format the processed data streams and provide them to the endpoints connected to the MCU.
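
Pulling operations 1102 through 1108 of method 1100 together, the monitoring loop described in paragraph [0113] above might be arranged roughly as sketched below; the bounded cycle count and helper functions are assumptions made so the example terminates.

    def capacity_loop(get_allocated, forecast_need, modify, cycles=3):
        # Method 1100 sketch, bounded to a few cycles for the example.
        for _ in range(cycles):
            allocated = get_allocated()     # operation 1102: read system state
            needed = forecast_need()        # operation 1104: estimate need
            if needed != allocated:         # operation 1106: modify resources?
                modify(needed - allocated)  # operation 1108: apply +/- delta

    allocated = [4]
    needs = iter([6, 6, 4])

    def modify(delta):
        allocated[0] += delta
        print("adjust allocation by", delta)

    capacity_loop(lambda: allocated[0], lambda: next(needs), modify)
    # -> adjust allocation by 2, then adjust allocation by -2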

[0115] In further embodiments, the MCU may also be modified to receive and interpret commands from a decision engine or decision engine controller. As such, the MCU may be capable of modifying its performance as dictated by a decision engine or decision engine controller, for example, during contextual provisioning. Thus, by making slight modifications to MCUs, legacy conferencing networks that require MCUs may function as part of a distributed conferencing network or a system implemented using an SDN interoperability architecture as disclosed herein.
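
A shim of the kind described might expose a small command handler; the command vocabulary below is invented for illustration and is not a specification of the decision engine's actual messages.

    class LegacyMCUAdapter:
        # Hypothetical adapter letting an MCU accept decision engine commands.
        def __init__(self):
            self.audio_only = False
            self.bitrate_kbps = 1024

        def handle(self, command):
            if command["type"] == "audio_only":
                self.audio_only = bool(command["value"])
            elif command["type"] == "set_bitrate":
                self.bitrate_kbps = int(command["value"])

    mcu = LegacyMCUAdapter()
    mcu.handle({"type": "audio_only", "value": True})  # e.g., during contextual provisioning
    print(mcu.audio_only)  # -> True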

[0116] This disclosure describes some embodiments of the present invention with reference to the accompanying drawings, in which only some of the possible embodiments are shown. Other aspects may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure is thorough and complete and fully conveys the scope of the possible embodiments to those skilled in the art.

[0117] Although specific embodiments are described herein, the scope of the invention is not limited to those specific embodiments. One skilled in the art will recognize other embodiments or improvements that are within the scope and spirit of the present invention. Therefore, the specific structure, acts, or media are disclosed only as illustrative embodiments. The scope of the invention is defined by the following claims and any equivalents thereof.