


Title:
USE OF WEBRTC APIS FOR IMPROVING COMMUNICATION SERVICES
Document Type and Number:
WIPO Patent Application WO/2014/142715
Kind Code:
A1
Abstract:
Device and server node and associated methods related to communication services applying WebRTC. The method performed by a device comprises creating an RTCPeerConnection object and instructing the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1, for a session containing audio and video components. The method further comprises determining which capabilities are supported by the device based on information in the SDesc_1, and further indicating information related to the determined capabilities to a first user on a user interface, thereby enabling the first user, when using the service, to select a type of user communication, based on the indicated information, that can be provided by the device.

Inventors:
ERIKSSON GÖRAN (SE)
HÅKANSSON STEFAN (SE)
Application Number:
PCT/SE2013/050219
Publication Date:
September 18, 2014
Filing Date:
March 12, 2013
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H04L29/08
Domestic Patent References:
WO2005015935A12005-02-17
Foreign References:
US20100235550A12010-09-16
Other References:
NANDAKUMAR, S. and JENNINGS, C. (Cisco): "SDP for the WebRTC", draft-nandakumar-rtcweb-sdp-00.txt, IETF Internet-Draft, 15 October 2012, pages 1-35, XP015087907
UBERTI, J. (Google) and JENNINGS, C. (Cisco Systems): "Javascript Session Establishment Protocol", draft-ietf-rtcweb-jsep-01.txt, IETF Internet-Draft, 5 June 2012, pages 1-31, XP015083173
KAPLAN, H. (Acme Packet), BURNETT, D. (Voxeo), STRATFORD, N. (Voxeo) and PANTON, T. (phonefromhere.com): "API Requirements for WebRTC-enabled Browsers", draft-kaplan-rtcweb-api-reqs-01.txt, IETF Internet-Draft, 1 November 2011, pages 1-13, XP015079331
Attorney, Agent or Firm:
EGRELIUS, Fredrik et al. (Patent Unit Kista DSM, Stockholm, SE)
Claims:
CLAIMS

1. Method performed by a device in a communication network, the device being operable to implement WebRTC and to be associated with a service applying WebRTC, the device further being operable to be associated with a first user being connected to the service, the method comprising:

-creating (202a; 202b) an RTCPeerConnection object;

-instructing (204a; 204b) the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1, for a session containing audio and video components;

-determining (205a; 205b) which capabilities are supported by the device based on information in the SDesc_1; and

-indicating (206a; 206b) information related to the determined capabilities to the first user on a user interface,

thus enabling the first user, when using the service, to select a type of user communication, based on the indicated information, that can be provided by the device.

2. Method according to claim 1, wherein the SDesc_1 comprises information on one or more of:

-codecs supported by the device;

-characteristics associated with codecs supported by the device;

-IP addresses associated with the device;

-network interfaces supported by the device.

3. Method according to claim 1 or 2, further comprising:

-creating (201 b) a media stream comprising at least one unattached media stream track by use of a WebRTC API; and

-instructing (203b) the RTCPeerConnection object to set the created media stream as source for media.

4. Method according to any of claims 1-3, further comprising:

-receiving (303) an indication from a server node, said indication being associated with capabilities of a second device associated with a second user connected to the service; and

wherein the determining (205a; 205b) of which capabilities are supported by the device comprises:

-determining (304) which capabilities are supported by both the device and the second device based on the received indication.

5. Method according to claim 4, wherein the information related to the determined capabilities supported by both devices is indicated (305) to the first user in association with an indication of the second user on the user interface, such that the first user can conclude that the indicated information is related to communication with the second user, when using the service.

6. Method according to any of the preceding claims, further comprising:

-establishing a data channel between the device and a second device;

-transmitting and/or receiving a test message over the established data channel; and

-determining a data throughput between the device and the second device based on information related to the transmitted and/or received test message.

7. Method according to claim 6, wherein the determining of which capabilities are supported by the device is further based on the determined data throughput.

8. Method according to any of the preceding claims, wherein the determining of which capabilities are supported is further based on one or more of:

-the location of the device; and

-statistics from previous service sessions.

9. Device (800) operable in a communication network, the device being operable to implement WebRTC and to be associated with a service applying WebRTC, the device further being operable to be associated with a first user being connected to the service, the device comprising:

-a WebRTC control unit (806), adapted to create an RTCPeerConnection object, and further adapted to instruct the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1;

-a determining unit (807), adapted to determine which capabilities are supported by the device based on information in the SDesc_1; and

-an indicating unit (808), adapted to indicate information related to the determined capabilities to the first user on a user interface,

thus enabling the first user, when using the service, to select a type of user communication, based on the indicated information, that can be provided by the device.

10. Device according to claim 9, wherein the SDesc_1 comprises information on one or more of:

-codecs supported by the device;

-characteristics associated with codecs supported by the device;

-IP addresses associated with the device; and

-network interfaces supported by the device.

11. Device according to claim 9 or 10, further adapted to create a media stream comprising at least one unattached media stream track by use of a WebRTC API; and further adapted to instruct the RTCPeerConnection object to set the created media stream as source for media.

12. Device according to any of claims 9-11, further comprising:

-a receiving unit (805), adapted to receive an indication from a server node, said indication being associated with capabilities of a second device associated with a second user connected to the service; and

wherein the determining of which capabilities are supported by the device comprises determining which capabilities are supported by both the device and the second device based on the received indication.

13. Device according to any of claims 9-12, wherein the information related to the determined capabilities supported by both devices is indicated to the first user in association with an indication of the second user on the user interface, such that the first user can conclude that the indicated information is related to communication with the second user, when using the service.

14. Device according to any of claims 9-13, further adapted to establish a data channel between the device and a second device; to transmit and/or receive a test message over the established data channel; and to determine a data throughput between the device and the second device based on information related to the transmitted and/or received test message.

15. Device according to claim 14, wherein the determining of which capabilities are supported by the device is further based on the determined data throughput.

16. Device according to any of claims 9-15, wherein the determining of which capabilities are supported is further based on one or more of:

-the location of the device; and

-statistics from previous service sessions.

17. Method performed by a server node in a communication network, the server node being operable to implement WebRTC and to be associated with a service applying WebRTC, the method comprising:

-receiving (501) a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service;

-receiving (502) a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service;

-determining (503) information related to capabilities supported by both the first device and the second device based on SDesc_1 and SDesc_2; and

-indicating (504) the determined information to at least one of the first and the second device.

18. Method according to claim 17, wherein the respective RTCSessionDescription from a device comprises information on one or more of:

-codecs supported by the device;

-characteristics associated with codecs supported by the device;

-IP addresses associated with the device; and

-network interfaces supported by the device.

19. Method according to claim 17 or 18, wherein the determined information related to capabilities comprises information on one or more of:

-codecs supported by both the first and the second device;

-characteristics associated with codecs supported by both the first and the second device;

-network interfaces supported by both the first and the second device; and

-a type of real-time communication supported by both the first and the second device.

20. Method according to any of claims 17-19, wherein the determining of which capabilities are supported is further based on one or more of:

-the location of the device; and

-statistics from previous service sessions.

21. Method according to any of claims 17-20, wherein, when a capability is supported only by one of the first and second device, the method further comprises:

reserving network functionality and capacity to translate a capability supported by one of the devices into a capability supported by the other device.

22. Server node operable in a communication network, the server node being operable to implement WebRTC and to be associated with a service applying WebRTC, the server node comprising:

-a receiving unit (905), adapted to receive a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service; and further adapted to receive a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service;

-a determining unit (906), adapted to determine information related to capabilities supported by both the first device and the second device based on SDesc_1 and SDesc_2; and

-an indicating unit (907), adapted to indicate the determined information to at least one of the first and the second device.

23. Server node according to claim 22, wherein the respective RTCSessionDescription from a device comprises information on one or more of:

-codecs supported by the device;

-characteristics associated with codecs supported by the device;

-IP addresses associated with the device; and

-network interfaces supported by the device.

24. Server node according to any of claims 22-23, wherein the determined information related to capabilities comprises information on one or more of:

-codecs supported by both the first and the second device;

-characteristics associated with codecs supported by both the first and the second device;

-network interfaces supported by both the first and the second device; and

-a type of real-time communication supported by both the first and the second device.

25. Server node according to any of claims 22-24, wherein the determining of which capabilities are supported is further based on one or more of:

-the location of the device; and

-statistics from previous service sessions.

26. Server node according to any of claims 22-25, further adapted to, when a capability is supported only by one of the first and second device:

reserve network functionality and capacity to translate a capability supported by one of the devices into a capability supported by the other device.

27. Computer program (1010), comprising computer readable code means, which when run in a device causes the device to perform the method according to any of claims 1-8.

28. Computer program product (1008) comprising a computer program (1010) according to claim 27.

29. Computer program (1110), comprising computer readable code means, which when run in a server node causes the server node to perform the method according to any of claims 17-21.

30. Computer program product (1108) comprising a computer program (1110) according to claim 29.

Description:
USE OF WEBRTC APIS FOR IMPROVING COMMUNICATION SERVICES

TECHNICAL FIELD

[01] The herein suggested solution relates generally to communication services, and in particular to enabling adaptation of a communication service applying WebRTC to capabilities associated with at least one device.

BACKGROUND

[02] In IETF (Internet Engineering Task Force) and W3C (World Wide Web Consortium) there is an ongoing effort aiming to enable support of conversational audio, video and data in a so-called "web browser". Within this effort, WebRTC (Web Real-Time Communication) is an API definition being drafted by the W3C to enable browser-to-browser applications for voice calling, video chat and P2P (Peer to Peer) data sharing without plugins. That is, WebRTC could be used in/by a web application in order to provide conversational services to users. By use of WebRTC and the associated APIs (Application Programming Interfaces), a web browser will be able to send and receive e.g. RTP (Real-time Transport Protocol) and SCTP (Stream Control Transmission Protocol) data, to get input samples from microphones and cameras connected to a user device, and to render media, such as audio and video. A web browser which is to provide conversational services by use of WebRTC is controlled by a web (HTML) page via JS (JavaScript) APIs, and by HTML elements for rendering.

[03] Examples of defined APIs associated with WebRTC are:

1. navigator.getUserMedia() (defined in [1]): gives a web application, after user consent, access to media generating devices such as microphones and cameras.

2. RTCPeerConnection (defined in [2]): an object that enables the web application to stream data from media generating devices to a peer.

3. RTCSessionDescription (defined in [2]): objects that are used to control the RTCPeerConnection objects.

[04] The default use of the APIs described above in a web page or web application will be described below. When an end user has initiated a communication session in a web application associated with a communication service, the following is performed:

1. Using navigator.getUserMedia() to get allowance (active consent) from the user to use the microphone and the camera of the user's device.

2. Creating an RTCPeerConnection (PC) object to handle a connection to a remote peer, and the transmission/reception of audio and video to/from the remote peer.

3. Instructing the PC to use the audio and video data created by microphone and camera, which is accessed with user consent after using navigator.getUserMedia(), as source for media to be sent to a peer using the PC.

4. Instructing the PC object to create, and use, an RTCSessionDescription object that is used to describe the intended session.

5. Signaling the RTCSessionDescription data to the remote peer, which in its turn uses navigator.getUserMedia() to get user consent, creates a PC, instructs the PC to use audio/video from microphone/camera, applies the received RTCSessionDescription and generates a new RTCSessionDescription in response to the received RTCSessionDescription.

6. Receiving the RTCSessionDescription from the remote peer and applying it locally.

7. Starting the session, in which audio and video can flow between the peers.
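The seven-step flow above can be sketched in JavaScript. Note that this sketch uses the current promise-based form of the APIs (the drafts contemporary with this application used callback-style calls and addStream()), and that `signaling.exchange` is a placeholder for the application's own signaling channel, which WebRTC deliberately leaves unspecified:

```javascript
// Sketch of the default WebRTC call setup (steps 1-7 above).
// `signaling` is a hypothetical object standing in for whatever channel
// the web application uses to exchange session descriptions with the
// remote peer (e.g. a WebSocket); it is not part of WebRTC itself.
async function startCall(signaling) {
  // 1. Ask the user for consent to use microphone and camera.
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true, video: true });

  // 2. Create an RTCPeerConnection (PC) to handle the connection to the peer.
  const pc = new RTCPeerConnection();

  // 3. Use the captured audio/video as the media source for the PC.
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // 4. Create an RTCSessionDescription describing the intended session
  //    and apply it locally.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // 5. Signal the description to the remote peer, which performs the
  //    mirror-image steps and answers with its own description.
  const answer = await signaling.exchange(pc.localDescription);

  // 6. Apply the remote peer's description locally.
  await pc.setRemoteDescription(answer);

  // 7. Audio and video can now flow between the peers.
  return pc;
}
```

The remote peer runs the same sequence with offer and answer roles swapped, as described in step 5.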

[05] In addition to these APIs having the functions described above, extensions have recently been proposed (see [3]). One of the extensions enables a web application to create unattached media stream tracks; unattached in the sense that there is no real source delivering data. Such an unattached media stream track could be made "real", e.g. in order to be the source for transmission of real media data, by using the navigator.getUserMedia() API to connect a media generating device to the unattached stream track, which then will cease to be an unattached media stream track.

SUMMARY

[06] The use of services applying WebRTC may be associated with negative user experiences, related e.g. to a user being presented with service alternatives which are, in fact, not available. It is therefore desirable to enable a correct presentation of service alternatives to a user, and thereby enable an improved user experience. According to the herein suggested solution, WebRTC APIs are utilized in a new, unexpected way which enables a more correct presentation of service alternatives to a user as compared to prior art solutions. The solution enables service providers using WebRTC as a platform for communication services to offer services giving a better user experience. The suggested solution is defined by the appended claims.

[07] According to a first aspect, a method is provided which is to be performed by a device in a communication network. The device is assumed to be operable to implement WebRTC and to be associated with a service applying WebRTC. The device is further assumed to be operable to be associated with a first user being connected to the service. The method comprises creating an RTCPeerConnection object and instructing the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1, for a session containing audio and video components (without accessing real media data). The method further comprises determining which capabilities are supported by the device based on information in the SDesc_1, and further indicating information related to the determined capabilities to the first user on a user interface. Thereby, the first user is enabled, when using the service, to select a type of user communication, based on the indicated information, that can be provided by the device.

[08] According to a second aspect, a device is provided, which is operable in a communication network. The device is further operable to implement WebRTC and to be associated with a service applying WebRTC. The device is further operable to be associated with a first user being connected to the service. The device comprises a WebRTC control unit, adapted to create an RTCPeerConnection object, and further adapted to instruct the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1. The device further comprises a determining unit, adapted to determine which capabilities are supported by the device based on information in the SDesc_1. The device further comprises an indicating unit, adapted to indicate information related to the determined capabilities to the first user on a user interface, thus enabling the first user, when using the service, to select a type of user communication, based on the indicated information, that can be provided by the device.

[09] According to a third aspect, a method is provided, which is to be performed by a server node in a communication network. The server node is assumed to be operable to implement WebRTC and to be associated with a service applying WebRTC. The method comprises receiving a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service. The method further comprises receiving a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service. The method further comprises determining information related to capabilities supported by both the first device and the second device based on SDesc_1 and SDesc_2, and indicating the determined information to at least one of the first and the second device.

[010] According to a fourth aspect, a server node is provided, which is operable in a communication network. The server node is operable to implement WebRTC and to be associated with a service applying WebRTC. The server node comprises a receiving unit, adapted to receive a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service; and further adapted to receive a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service. The server node further comprises a determining unit, adapted to determine information related to capabilities supported by both the first device and the second device based on SDesc_1 and SDesc_2. The server node further comprises an indicating unit, adapted to indicate the determined information to at least one of the first and the second device.

[011] According to a fifth aspect, a computer program is provided, which comprises computer readable code means, which when run in a device causes the device to perform the method according to the first aspect.

[012] According to a sixth aspect, a computer program product is provided, which comprises a computer program according to the fifth aspect.

[013] According to a seventh aspect, a computer program is provided, which comprises computer readable code means, which when run in a server node causes the server node to perform the method according to the third aspect.

[014] According to an eighth aspect, a computer program product is provided, which comprises a computer program according to the seventh aspect.

BRIEF DESCRIPTION OF THE DRAWINGS

[015] The above, as well as additional objects, features and advantages of the suggested solution, will be better understood through the following illustrative and non-limiting detailed description of embodiments, with reference to the appended drawings, in which:

Figure 1 illustrates a schematic communication network comprising a number of user devices and a server node, in which the suggested solution may be applied.

Figures 2a, 2b and 3-4 are flow charts illustrating exemplifying procedures performed by a device according to exemplifying embodiments.

Figures 5-7 are flow charts illustrating exemplifying procedures performed by a server node according to exemplifying embodiments.

Figure 8 is a block chart illustrating a device according to an exemplifying embodiment.

Figure 9 is a block chart illustrating a server node according to an exemplifying embodiment.

Figures 10 and 11 are block charts illustrating exemplifying computer implemented embodiments of an arrangement for use in a device and a server node, respectively.

DETAILED DESCRIPTION

[016] Figure 1 illustrates a schematic communication network in which the herein suggested solution may be applied. The illustrated communication network comprises a server node 101 and a number of devices 102:1-102:4, each associated with a user 103:1-103:4, respectively. The devices are connected to the server node e.g. via the Internet. Examples of devices may be e.g. User Equipments (UEs), tablets or computers connected to the Internet e.g. via a wireless communication network, such as an LTE-type system, or a wired alternative, e.g. via VDSL2 over copper cables. The users 103:1-103:4 are assumed to be connected to a service applying WebRTC. A user connects to the service from her/his device, e.g. in a web browser, and is assumed to have a set of selected other users e.g. in a contact list. The other users in the contact list may be e.g. friends, family or work acquaintances, with which the user wishes to communicate from time to time. The service is a real time communication service, which may provide different combinations of real time user communication, such as e.g. video (visual), audio and chat. The devices may be differently equipped for user interaction, in form of displays, loudspeakers, microphones, cameras, keyboards and touch screens, etc.

[017] An exemplifying set-up where the herein suggested solution is envisaged to be used in association with a communication service may be described as follows: Users of the service log on to the service from their respective locations and devices. The log on can be automatic, or manual, and take place e.g. when the device or browser is started, or as a deliberate action from the user. When a user logs on, the user is registered with the service as being connected and "available" for communication. The user can from this stage see, on a user interface such as a display or touch screen, which other users, e.g. in a contact list, are present, i.e. connected and available for communication via the communication service. Later, when the user either wants to start a communication session, using the service, with one or more other users, or if another user wants to start a communication session with this user, a session is set up between the users in question.

[018] It has been realized by the inventors, that when using the WebRTC solution, as described above and defined e.g. in [1] and [2], users will be given the impression that a communication session with another party (peer) involving audio, video and data can be initiated, even when this is not the case. This may lead to users being disappointed when a requested service cannot be provided, which gives a bad user experience, and thus a bad impression of the service.

[019] In order to solve this problem, a solution is suggested herein, where the existing WebRTC APIs are utilized in a clever manner, such that a user may be presented with only those alternatives for communication that can actually be provided by the user's device, and possibly also by the device of another user with which the user will want to communicate. An embodiment of the suggested solution also utilizes the existing WebRTC APIs in combination with a recently proposed API for creation of unattached media stream tracks [3] to enhance the service and the user experience of the service.

[020] The solution according to the embodiments enables a web application, applying WebRTC for providing a communication service, to determine, before indicating available communication types to the user, what kind of communication might be possible to establish, e.g. with other users of the service. For example, the solution may be used in order to present appropriate service alternatives to a user on a User Interface (UI).

[021] Alternatively, the service provider can use the suggested solution to acquire, and reserve, the resources needed to provide the communication types wanted. As an example, video transcoding resources could be reserved.

[022] This suggested solution can be used in combination with information, such as e.g. knowledge about which networks are available to different devices or nodes in the communication network, and possibly their expected capacity; knowledge of IP address assignment and/or of to which type of network different IP addresses belong. Further, the suggested solution could be used together with other data or information, such as knowledge about previous sessions using a certain IP address, e.g. monitored empirical data; or together with knowledge about previous sessions in a current location, which knowledge could be acquired using e.g. the geolocation web API. Further, changes in network conditions may be estimated using e.g. a combination of clever use of the available APIs in combination with use of information regarding networks, earlier sessions, etc. Such information could be collected and stored in a database.

[023] Below, exemplifying procedures according to the suggested solution will be described in detail. Different actions associated with the solution are presented and explained. For example, existing APIs are used in a new way, together with new functionality being non-WebRTC functionality, i.e. not belonging to the WebRTC definitions. The suggested solution aims at enabling e.g. an improved user experience in association with services applying WebRTC. A conventional use of the WebRTC APIs has been described earlier. Below, the new use of the APIs will be described. Further, in the description below, "ICE" will be mentioned. ICE (Interactive Connectivity Establishment) is a part of WebRTC, described e.g. in [5], which is used for two purposes: One is to establish connectivity, even in presence of NATs (Network Address Translators) and FWs (Firewalls), between network nodes/devices employing the WebRTC API definition, or "standard". The other is to verify that the two network nodes/devices, with or without user involvement, consent to establishing connectivity. "ICE candidate" in this document means a "CANDIDATE" in [5], which in turn is a transport address, or in other words a combination of IP address and port. In short, ICE is a two stage process. First the two end-points that want to establish connectivity gather possible transport addresses (ICE candidates), and in the second stage the two end-points try to establish connectivity using the gathered ICE candidates in a systematic order.
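Since an ICE candidate is in essence a transport address, the IP address and port can be read directly from a candidate attribute line in a session description. A minimal sketch; the helper name is illustrative and the sample line is made up for this example:

```javascript
// Extract the transport address (IP address + port) from an SDP
// candidate attribute line of the form:
// a=candidate:<foundation> <component> <transport> <priority> <ip> <port> typ <type> ...
function candidateToTransportAddress(line) {
  const fields = line.replace(/^a=/, '').split(' ');
  return {
    transport: fields[2],        // e.g. 'udp'
    ip: fields[4],               // the candidate's IP address
    port: Number(fields[5]),     // the candidate's port
    type: fields[7],             // 'host', 'srflx', 'relay', ...
  };
}

const sample = 'a=candidate:842163049 1 udp 1677729535 203.0.113.7 46416 typ srflx';
const addr = candidateToTransportAddress(sample);
// addr → { transport: 'udp', ip: '203.0.113.7', port: 46416, type: 'srflx' }
```

Each such address is one entry the end-points try during the second, connectivity-checking stage of ICE.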

[024] According to an exemplifying embodiment of the suggested solution, the WebRTC APIs mentioned earlier are used to check the capabilities of the device and/or browser of the end user in conjunction with the registration process. Such an embodiment could be described as follows:

1. Creating a media stream with unattached audio and video tracks using the API available in [3];

2. Creating an RTCPeerConnection (PC) object;

3. Instructing the PC object to use the created media stream (with unattached tracks) as source for media;

4. Instructing the PC object to create an RTCSessionDescription object that is used to describe the intended session;

Note 1: The PC object can be used in different ways, e.g. the RTCSessionDescription object can be populated with only a fraction of the eventual ICE candidates, with subsequent ICE candidates being made available as objects on their own, or it can contain the full ICE candidate list. For this document it does not matter whether the ICE candidates are present immediately or come as separate objects with some delay; it is in that case assumed that they are added to the RTCSessionDescription object.

Note 2: If step 3 is not done, the PC is instructed to create an RTCSessionDescription object indicating that audio and video are intended to be part of the session by using certain constraints.

5. Parsing the RTCSessionDescription to determine capabilities of the user device and/or browser, such as e.g. which codecs are supported, and which network interfaces are present. The inventors have realized that these are accessible due to the ICE procedure (see [5]), which is started when creating an RTCSessionDescription.

6. Utilizing the information from the RTCSessionDescription and information from an RTCSessionDescription associated with another user to determine what capabilities can be supported in a session with the other user.

[025] Note that the above sequence does not involve use of the API navigator.getUserMedia(). This way, the user is never asked to give the web application access to camera and microphone, and the camera or microphone is never switched on (and no "hot" indicator is ever lit up). Doing any of this would give the user a feeling of privacy infringement: "why is this web app trying to use my camera, I have no intention to have a communication session right now".
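The parsing step above can be sketched as plain text matching, since an RTCSessionDescription carries its session description as an SDP string in which codecs appear in `a=rtpmap:` lines and network interfaces in `a=candidate:` lines. A simplified sketch; the embedded SDP fragment is a made-up example, not output from any particular browser:

```javascript
// Pull supported codecs and available network interfaces (candidate IP
// addresses) out of the SDP text of an RTCSessionDescription.
function parseCapabilities(sdp) {
  const codecs = [];
  const interfaces = [];
  for (const line of sdp.split(/\r?\n/)) {
    // a=rtpmap:<payload> <codec>/<clock rate>[/<channels>]
    const rtpmap = line.match(/^a=rtpmap:\d+ ([A-Za-z0-9-]+)\/\d+/);
    if (rtpmap) codecs.push(rtpmap[1]);
    // a=candidate:<foundation> <component> <transport> <priority> <ip> <port> ...
    const cand = line.match(/^a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+/);
    if (cand) interfaces.push(cand[1]);
  }
  return { codecs, interfaces };
}

const sampleSdp = [
  'a=rtpmap:111 opus/48000/2',
  'a=rtpmap:100 VP8/90000',
  'a=candidate:1 1 udp 2122260223 192.0.2.10 54321 typ host',
].join('\r\n');

const caps = parseCapabilities(sampleSdp);
// caps.codecs → ['opus', 'VP8']; caps.interfaces → ['192.0.2.10']
```

In a real deployment the input would be the `sdp` member of the description generated by the RTCPeerConnection object, and the extracted capabilities would drive what is shown on the user interface.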

[026] In an alternative exemplifying embodiment, where an API for creating unattached media stream tracks is not available, the procedure could be implemented and/or described as follows:

Creating an RTCPeerConnection (PC) object.

Instructing the PC object to create an RTCSessionDescription object that is used to describe the intended session. The constraints "OfferToReceiveAudio" and "OfferToReceiveVideo" should, when appropriate, be used when doing this, in order to indicate that the intention is to have audio and video in the session.

Parsing the RTCSessionDescription to determine the capabilities of the user device and/or browser, such as which codecs are supported and which network interfaces are present. The inventors have realized that these are accessible due to the ICE procedure (see [5]), which is started when creating an RTCSessionDescription.

Utilizing the information from the RTCSessionDescription and information from an RTCSessionDescription associated with another user to determine what capabilities can be supported in a session with the other user. For example, determining which capabilities are supported by both the device and/or browser associated with the first user and the device and/or browser associated with the second user. Such overlapping capabilities can be used for communication between the users, i.e. in a session between the peers.
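As an illustration only, the parsing and overlap determination described above could be sketched as follows. The SDP fragment, helper names and object shapes are assumptions made for the sketch; in a browser, the SDP text would come from the sdp attribute of the RTCSessionDescription produced by the RTCPeerConnection API.

```javascript
// Sketch: extract supported codecs and network addresses from the SDP text
// of an RTCSessionDescription, then determine the overlap with another peer.
// The sample SDP and all helper names are illustrative assumptions.
function parseCapabilities(sdp) {
  const caps = { audioCodecs: [], videoCodecs: [], addresses: [] };
  let section = null; // current media section: 'audio' or 'video'
  for (const line of sdp.split(/\r?\n/)) {
    if (line.startsWith('m=audio')) section = 'audio';
    else if (line.startsWith('m=video')) section = 'video';
    // a=rtpmap:<payload type> <codec>/<clock rate>[/<channels>]
    const rtpmap = line.match(/^a=rtpmap:\d+ ([^\/]+)\//);
    if (rtpmap && section === 'audio') caps.audioCodecs.push(rtpmap[1]);
    else if (rtpmap && section === 'video') caps.videoCodecs.push(rtpmap[1]);
    // a=candidate lines expose the ICE candidates, and thereby the network
    // interfaces (IP addresses) available to the device.
    const cand = line.match(/^a=candidate:\S+ \d+ \S+ \d+ (\S+) \d+/);
    if (cand) caps.addresses.push(cand[1]);
  }
  return caps;
}

// Overlap between two parsed capability sets (codec-level intersection).
function commonCapabilities(a, b) {
  const intersect = (x, y) => x.filter((item) => y.includes(item));
  return {
    audioCodecs: intersect(a.audioCodecs, b.audioCodecs),
    videoCodecs: intersect(a.videoCodecs, b.videoCodecs),
  };
}

// Minimal fabricated SDP fragment, for illustration only.
const sampleSdp = [
  'm=audio 9 UDP/TLS/RTP/SAVPF 111',
  'a=rtpmap:111 opus/48000/2',
  'a=candidate:1 1 udp 2122260223 192.0.2.10 54321 typ host',
  'm=video 9 UDP/TLS/RTP/SAVPF 96',
  'a=rtpmap:96 VP8/90000',
].join('\r\n');

console.log(parseCapabilities(sampleSdp));
```

The same two functions apply to both variants of the procedure; with unattached media stream tracks the SDP simply contains more detail to parse.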

[027] The information which is derivable from an RTCSessionDescription when the procedure involves an unattached media stream track will be more detailed than when no unattached media stream track is involved.

[028] From an RTCSessionDescription generated by the RTCPeerConnection object, the following information can be derived:

• Audio codecs/formats supported - including features, levels and profiles within the codecs/formats.

• Video codecs/formats supported - including features, levels and profiles within the codecs/formats.

• What types of network interfaces are available (e.g. based on IP addresses obtained from the ICE candidates).

[029] By using the information from the RTCSessionDescription in a clever way, a communication service can present e.g. appropriate icons to users, in order to give them the right expectations with regard to achievable communication types. For example, a set of rules could comprise the following:

• If there are overlapping formats for both audio and video for two peers (users/devices), and if the network interfaces between the peers support higher bitrates: show an icon on the user interface indicating that a video session with the other peer is possible.

• If the audio formats of two peers (users/devices) are overlapping, but there is no common video format: show an icon on the user interface indicating that an audio-only session with the other peer is possible (i.e. no video).

• If common formats for audio and video are available for two peers, but the network interfaces available to one or both peers will not (based on experience or other information) support the bitrates required for video: show an icon indicating that an audio-only (i.e. not video) session is possible with the other peer.

• By examining the ICE candidates and comparing with known data, determine whether a communication is likely to be possible or not. A clarifying example: if the WebRTC node is found to be behind a firewall that is known to block communication, communication with nodes outside it is probably not possible for that node.

• Examine the RTP (Real-time Transport Protocol) features supported, and if known middleboxes will not interoperate with those features, indicate that communication will not be possible with the other peer.

[030] Further, e.g. when a communication session is being set up, the user interface can be adapted in an appropriate manner. For example, no video surface may be present if no video session will be possible; a small video surface may be present if it is determined that the video codecs and their profiles in common would only allow a low resolution video session, or if it is determined that the network connection can only support lower bitrates, etc.

Exemplifying method performed by a device, figures 2a, 2b and 3-4

[031] An exemplifying method performed by a device in a communication network will be described below with reference to figure 2a and will be further exemplified with reference to figures 2b-4. The device is operable to implement WebRTC and further operable to be associated with a service applying WebRTC. The device is further operable to be associated with a first user being connected to the service.

[032] The method comprises creating, in an action 202a, an RTCPeerConnection object. That is, using a WebRTC API for creating an RTCPeerConnection object. The method further comprises instructing, in an action 204a, the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1, for a session containing audio and video components. This could alternatively be described as the method comprising creating an RTCSessionDescription, SDesc_1. It should be noted that no user consent is needed in association with performing the method, since no real media stream is associated with the RTCPeerConnection object or SDesc_1. In this way, a user does not need to give consent to use of camera, microphone etc. in a situation where the user has neither initiated a communication session with another user, nor received a request for a communication session from another user.

[033] The method further comprises determining, in an action 205a, which capabilities are supported by the device based on information in the SDesc_1. In an implementation, the SDesc_1 is parsed for information related to capabilities supported by the device. This is possible since it is realized that SDesc_1 comprises such information due to the ICE processing started when creating an RTCSessionDescription. In another implementation, the SDesc_1 is also or alternatively provided, i.e. transmitted, to a server node for further processing, e.g. comparison with SDescs associated with other devices. In this case information is received from the server node, which is related to information comprised in SDesc_1 (and probably related to information comprised in another SDesc). Determining which capabilities are supported by the device based on such information received from a server node is still considered to be "determining which capabilities are supported by the device based on information in the SDesc_1".

[034] Information related to the determined capabilities is then indicated to the first user on a user interface in an action 206a. The indicated information may e.g. be in the form of an icon symbolizing a video camera when video communication is supported by the device, as previously described. Alternatively or in addition, the presence and size of e.g. a video area on the user interface may indicate whether video is available, and to which extent, e.g. at which resolution.
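A minimal sketch of such rule-based indication, assuming hypothetical UI hint values ('video-icon' etc.) that a service would map to actual icons, could look as follows:

```javascript
// Sketch: map the capability overlap between two peers, together with an
// estimate of whether the network supports video bitrates, to a UI hint.
// The hint strings are hypothetical assumptions for illustration.
function sessionIndicator(common, networkSupportsVideoBitrate) {
  const audioOk = common.audioCodecs.length > 0;
  const videoOk = common.videoCodecs.length > 0;
  if (audioOk && videoOk && networkSupportsVideoBitrate) return 'video-icon';
  // Covers both "no common video format" and "insufficient bitrate for video".
  if (audioOk) return 'audio-only-icon';
  return 'no-session';
}

console.log(sessionIndicator({ audioCodecs: ['opus'], videoCodecs: ['VP8'] }, true));  // 'video-icon'
console.log(sessionIndicator({ audioCodecs: ['opus'], videoCodecs: [] }, true));       // 'audio-only-icon'
console.log(sessionIndicator({ audioCodecs: ['opus'], videoCodecs: ['VP8'] }, false)); // 'audio-only-icon'
```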

[035] Figure 2b shows an alternative embodiment of the above described method, also comprising the creation and use of a so-called unattached media stream track. The actions 202b and 204b-206b may be considered to correspond to the actions 202a and 204a-206a in figure 2a.

[036] In the method illustrated in figure 2b, a media stream containing unattached media stream track(s) is created by use of a WebRTC API (described in reference [3]) in an action 201b. The unattached media stream track is not associated with access to any camera, microphone or similar equipment, and thus does not require use of the previously described API getUserMedia().

[037] The method illustrated in figure 2b further comprises an action 203b, in which the created RTCPeerConnection object is instructed to set the media stream containing unattached media stream tracks as source for media.

[038] By using the above described method, the first user, who is connected to the service, is enabled to select a type of service, i.e. user communication, e.g. video and audio or audio-only, which can actually be provided by the device. That is, a type of service for which the device supports the required capabilities, such as the required type of codec/codecs and associated features/profiles/levels, and supported network interfaces and IP addresses. In this way, the user will not be misled into trying to initiate a service which cannot be provided, and thus the disappointment associated with such an attempt is avoided. Consequently, the user experience is improved.

[039] Figure 3 illustrates a method which further comprises receiving an indication associated with another device D2 associated with a second user, the second user being present in e.g. a contact list of the first user and vice versa. The indication could e.g. be an RTCSessionDescription, SDesc_2, associated with the device D2, or parts thereof. The indication should comprise or indicate information enabling determination of which capabilities are supported by both the device D1 and the device D2, based on SDesc_1 and the indication. This is illustrated as action 304 in figure 3. A media stream containing unattached media stream tracks could be used in the method illustrated in figure 3, as described in association with figure 2b.

[040] The created SDesc_1 may also or alternatively be provided, i.e. transmitted, to a server node, which also receives RTCSessionDescriptions from other devices associated with other users. Such an implementation is illustrated in figure 4, where the SDesc_1 is transmitted to the server node in an action 403. Then, an indication is received from the server node in an action 404, which indication is associated with information comprised in SDesc_1 and an RTCSessionDescription, SDesc_2, associated with another device D2, associated with a second user present in e.g. the contact list of the first user, as above. The information may, as described above, be related to the type of codec/codecs and associated features/profiles/levels, supported network interfaces and IP addresses. The indication may provide this information explicitly, or be an indicator e.g. of a certain type of service or capability supported by both devices, meaning that it would be possible to provide the indicated type of service between the two devices.

[041] The capabilities, e.g. type of service, that are supported by both the device and the second device, D2, are determined in an action 405, based on the received indication. The determined capabilities may then be indicated to the first user on the user interface in an action 406. The determined capabilities may be indicated in association with an indication of the second user on the user interface, e.g. next to a symbol representing the second user in a contact list of the first user.

Exemplifying device, figure 8

[042] Below, an exemplifying device 800 adapted to enable the performance of at least one of the above described methods will be described with reference to figure 8. The device is operable in a communication network. For example, the device may be connected to a server node e.g. via the Internet. The device could be e.g. a User Equipment (UE), a tablet, a computer, or any other communication device available on the market, which is operable to implement WebRTC and to be associated with a service applying WebRTC, and further operable to be associated with a first user being connected to the service. The service could be accessed e.g. via a web browser run on the device. The device is assumed to have a user interface, e.g. in the form of a touch screen, on which information may be presented to the user.

[043] The device 800 is illustrated as communicating with other entities or nodes via a communication unit 802, comprising means for wireless and/or wired communication, such as a receiver 804 and a transmitter 803. The device may comprise further functional units 811, such as a camera, a microphone, loudspeakers and a display unit, and signal processing circuits, such as codecs etc. The arrangement 801 and/or device 800 may further comprise one or more storage units 810.

[044] The arrangement 801 and/or device 800, or parts thereof, could be implemented e.g. by one or more of: a processor or a microprocessor and adequate software and memory for storing thereof, a Programmable Logic Device (PLD), or other electronic component(s) or processing circuitry configured to perform the actions described above.

[045] The device 800, and arrangement 801, could be illustrated and described as comprising a WebRTC control unit 806 adapted to create an RTCPeerConnection object, and further adapted to instruct the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1. The device further comprises a determining unit 807, adapted to determine which capabilities are supported by the device based on information in the SDesc_1. The capabilities may e.g. be types of real time user communication, such as video, audio and chat or data. The term "capabilities" may also refer to the technical equipment enabling different types of user communication, such as e.g. codecs and related settings, camera, microphone, etc.

[046] The device further comprises an indicating unit 808, adapted to indicate information related to the determined capabilities to the first user on a user interface, e.g. on a display in graphical form. Thereby, the first user is enabled, when using the service, to select, based on the indicated information, a type of user communication that can be provided by the device.

[047] The device, e.g. the WebRTC control unit 806, may be further adapted to create a media stream containing unattached media stream tracks by use of a WebRTC API, such as the one described in [3], and to instruct the RTCPeerConnection object to set the created media stream (with unattached tracks) as source for media. Normally, when applying WebRTC, a real media stream, i.e. from a camera and/or microphone, or similar, is set as source for media.

[048] The device may further comprise a receiving unit 805, adapted to receive an indication from a server node, where the indication is associated with capabilities of a second device associated with a second user connected to the service. (The device previously denoted only "device" is here assumed to be the "first device".) In this case, the determining of which capabilities are supported by the first device comprises determining which capabilities are supported by both the first device and the second device based on the received indication. That is, the capabilities supported by the first and second device may be determined based on information derived directly from the SDesc_1 and on information derived from the indication, or based only on information derived from the indication, which in such a case should also indicate the capabilities supported by the first device (in addition to indicating the capabilities supported by the second device).

[049] The indicating unit may be adapted to indicate the information related to the capabilities determined to be supported by both the first and the second device to the first user in association with an indication of the second user on the user interface, such that the first user can conclude, when using the service, that the indicated information is related to communication with the second user. For example, a graphical representation of a video camera may be displayed next to an image, or other representation, of the second user, in order to indicate that a video session is possible and supported by the first and second device.

[050] Further, the device may comprise a throughput determining unit 809 adapted to, or may otherwise be adapted to, determine a data throughput capacity between the device and a second device by establishing a data channel between the device and the second device, and transmitting and/or receiving a test message over the established data channel. The data throughput between the device and the second device could then be determined based on information related to the transmitted and/or received test message. The previously described determining of which capabilities are supported by the device may then further be based on the determined data throughput. Thus, it could be determined e.g. that even though the first and second device support a video session in terms of hardware and/or software, the data throughput between the first and second device is too low to enable such a video session. Data throughput could be measured e.g. in bytes per time unit.
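The arithmetic behind such a throughput determination could, as a hedged sketch, look as follows. Establishing the WebRTC data channel itself is a browser API and is not shown; the message size and bitrate threshold are illustrative assumptions.

```javascript
// Sketch: derive data throughput from a test message sent over a data
// channel, and compare it against a required video bitrate.
function throughputBytesPerSecond(bytesTransferred, elapsedMs) {
  return (bytesTransferred / elapsedMs) * 1000;
}

function supportsVideoBitrate(throughputBps, requiredKbps) {
  // Convert bytes/s to kbit/s before comparing.
  return (throughputBps * 8) / 1000 >= requiredKbps;
}

// e.g. a 64 kB test message delivered in 500 ms → 128000 bytes/s → 1024 kbit/s
const bps = throughputBytesPerSecond(64000, 500);
console.log(supportsVideoBitrate(bps, 500)); // true: enough for a 500 kbit/s video
```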

Exemplifying method performed by a server node, figures 5-7.

[051] An exemplifying method performed by a server node in a communication network, e.g. the Internet, will be described below with reference to figure 5 and will be further exemplified with reference to figures 6 and 7. The server node is operable to implement WebRTC and further operable to be associated with a service applying WebRTC. Hence the server node provides the service applying WebRTC to devices which e.g. can access the server node via a browser and an Internet connection.

[052] The method comprises receiving, in an action 501, a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service. Further, the method comprises receiving, in an action 502, a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service. Further, in an action 503, information is determined which is related to capabilities supported by both the first device and the second device. The information is determined based on SDesc_1 and SDesc_2, e.g. by parsing of these. For example, the information may comprise the capabilities which overlap between the first and the second device, which is also used as an example in figure 5. The overlapping capabilities do not need to be explicitly stated in SDesc_1 and/or SDesc_2, but could be derived by use of e.g. a mapping table, mapping one or more characteristics stated in an RTCSessionDescription to one or more capabilities. Such a mapping table could be stored in a memory in the server node or elsewhere, and thus be accessible to the server node.
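Such a mapping table could be sketched as follows; the table contents and function names are assumptions made for illustration and not part of any WebRTC API.

```javascript
// Sketch: a server-side mapping table from characteristics found in an
// RTCSessionDescription (here: codec names) to service capabilities, used to
// determine the overlap between two devices. Table contents are illustrative.
const CAPABILITY_TABLE = {
  opus: 'audio',
  G722: 'audio',
  VP8: 'video',
  H264: 'video',
};

function overlappingServices(codecs1, codecs2) {
  // Overlap is determined at codec level first: a service type counts as
  // common only if at least one codec providing it is supported by both.
  const commonCodecs = codecs1.filter((c) => codecs2.includes(c));
  return [...new Set(commonCodecs.map((c) => CAPABILITY_TABLE[c]).filter(Boolean))];
}

console.log(overlappingServices(['opus', 'VP8'], ['opus', 'H264']));
// ['audio']: opus is common, but VP8 vs H264 gives no common video codec
```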

[053] The method further comprises indicating, in an action 504, the determined information to at least one of the first and the second device. The information could relate to one or more of: codecs supported by both the first and the second device; characteristics associated with codecs supported by both the first and the second device; network interfaces supported by both the first and the second device; and/or a type of real-time communication supported by both the first and the second device.

[054] In figure 6, the actions 601-603 correspond to the actions 501-503 in figure 5, but here an action 604 is also illustrated, where a type of real time user communication supported by both the first and second device is determined based on overlapping information in SDesc_1 and SDesc_2. An indication of the type of real time user communication supported by both devices may be transmitted to e.g. the first device, which then may indicate this type of user communication to the first user on the user interface in an action 606. That is, the action 606 is not performed by the server node, but is illustrated in figure 6 in order to clarify the purpose of indicating information to the first and/or second device.

[055] The interaction between the server node and a first device D1 and a second device D2 is further illustrated in figure 7, which is a signaling scheme.

[056] Further, when it is determined that a certain capability is not supported by one of the first and the second device, i.e. is supported by only one of the devices, network functionality and capacity may be reserved to translate a capability supported by one of the devices into a capability supported by the other device. For example, when the first device supports a certain type of codec and the second device supports another type of codec, a transcoder in the network may be used to transcode the format of encoded data provided from the first device into a format decodable by the second device, and vice versa. The network functionality and capacity related to the transcoding may then be reserved by the server node.
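The decision of whether to reserve such a transcoder could be sketched as a small function; the object shape and the choice of the first listed codec on each side are illustrative assumptions.

```javascript
// Sketch: decide whether a network transcoder needs to be reserved for a
// media type, given each peer's supported codecs for that type.
function transcodingPlan(codecs1, codecs2) {
  const common = codecs1.filter((c) => codecs2.includes(c));
  if (common.length > 0) {
    return { transcode: false, codec: common[0] }; // direct session possible
  }
  if (codecs1.length > 0 && codecs2.length > 0) {
    // No common codec, but both sides can encode/decode something:
    // reserve a transcoder bridging the two formats.
    return { transcode: true, from: codecs1[0], to: codecs2[0] };
  }
  return null; // at least one side has no usable codec for this media type
}

console.log(transcodingPlan(['VP8'], ['H264']));
// { transcode: true, from: 'VP8', to: 'H264' }
```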

Exemplifying server node, figure 9

[057] Below, an exemplifying server node 900 adapted to enable the performance of the above described methods will be described with reference to figure 9. The server node is operable in a communication network, such as the Internet, and operable to implement WebRTC and to be associated with a service applying WebRTC. Hence the server node may provide the service applying WebRTC to devices which e.g. can access the server node via a browser and an Internet connection.

[058] The server node 900 is illustrated as communicating with other entities or nodes via a communication unit 902, comprising means for wireless and/or wired communication, such as a receiver 904 and a transmitter 903. The server node may comprise further functional units 911 for providing regular server functions, and one or more storage units 910.

[059] The arrangement 901 and/or server node 900, or parts thereof, could be implemented e.g. by one or more of: a processor or a microprocessor and adequate software and memory for storing thereof, a Programmable Logic Device (PLD), or other electronic component(s) or processing circuitry configured to perform the actions described above.

[060] The server node 900, and arrangement 901, could be illustrated and described as comprising a receiving unit 905, adapted to receive a first RTCSessionDescription, SDesc_1, from a first device associated with a first user connected to the service. The receiving unit 905 is further adapted to receive a second RTCSessionDescription, SDesc_2, from a second device associated with a second user connected to the service. The receiving unit 905, as well as other units described herein, may consist of or comprise a number of different units, components and/or circuits. The server node further comprises a determining unit 906, adapted to determine information related to capabilities supported by both the first device and the second device based on SDesc_1 and SDesc_2. This information could be determined, e.g. derived, by parsing SDesc_1 and SDesc_2, comparing the information comprised in SDesc_1 and SDesc_2, and extracting certain predefined items when these items are found to overlap, i.e. be present, in both SDesc_1 and SDesc_2.

[061] The server node further comprises an indicating unit 907, adapted to indicate the determined information to at least one of the first and the second device, as previously described.

[062] The server node may further comprise functionality adapted to reserve network functionality and capacity to translate a capability supported by one of the devices into a capability supported by the other device, e.g. by transcoding between different audio and/or video formats, when it is determined that a certain capability is not supported by one of the first and the second device, i.e. is supported by only one of the devices.

Exemplifying arrangement, figure 10

[063] Figure 10 schematically shows a possible embodiment of an arrangement 1000, which can also be an alternative way of disclosing an embodiment of the arrangement 801 in the device 800 illustrated in figure 8. Comprised in the arrangement 1000 is a processing unit 1006, e.g. with a DSP (Digital Signal Processor). The processing unit 1006 may be a single unit or a plurality of units performing different actions of the procedures described herein. The arrangement 1000 may also comprise an input unit 1002 for receiving signals from other entities or nodes, and an output unit 1004 for providing signals to other entities or nodes. The input unit 1002 and the output unit 1004 may be arranged as an integrated entity.

[064] Furthermore, the arrangement 1000 comprises at least one computer program product 1008 in the form of a non-volatile or volatile memory, e.g. an EEPROM (Electrically Erasable Programmable Read-Only Memory), a flash memory or a hard drive. The computer program product 1008 comprises a computer program 1010, which comprises code means which, when executed in the processing unit 1006 in the arrangement 1000, causes the arrangement and/or a device in which the arrangement is comprised to perform the actions e.g. of the procedure described earlier in conjunction with figures 2a-b and 3-4.

[065] The computer program 1010 may be configured as computer program code structured in computer program modules. Hence, in an exemplifying embodiment, the code means in the computer program 1010 of the arrangement 1000 may comprise a WebRTC control module 1010a for creating an RTCPeerConnection object, and for instructing the RTCPeerConnection object to create an RTCSessionDescription, SDesc_1. The arrangement 1000 may further comprise a determining module 1010b for determining which capabilities are supported by the device based on information in the SDesc_1.

[066] The computer program may further comprise an indicating module 1010c for indicating information related to the determined capabilities to the first user on a user interface. The computer program 1010 may further comprise one or more additional modules 1010d for providing other functions described above.

[067] The modules 1010a-d could essentially perform the actions of the flows illustrated in figures 2a-b and 3-4, to emulate the arrangement in the device illustrated in figure 8.

[068] In a similar manner, figure 11 schematically shows a possible embodiment of an arrangement 1100, which can also be an alternative way of disclosing an embodiment of the arrangement 901 in the server node 900 illustrated in figure 9. The arrangement in figure 11 may be similar to the arrangement in figure 10, except that the computer program 1110 comprises different instructions than the computer program 1010. Hence, in an exemplifying embodiment, the code means in the computer program 1110 of the arrangement 1100 may comprise a receiving module 1110a for receiving RTCSessionDescriptions from a first and a second device connected to a service. The arrangement 1100 may further comprise a determining module 1110b for determining information related to capabilities supported by both the first device and the second device based on the received RTCSessionDescriptions.

[069] The computer program may further comprise an indicating module 1110c for indicating the determined information to at least one of the first and the second device. The computer program 1110 may further comprise one or more additional modules 1110d for providing other functions described above.

[070] Although the code means in the embodiments disclosed above in conjunction with figures 10 and 11 are implemented as computer program modules which, when executed in the processing unit, cause the arrangement to perform the actions described above in conjunction with the figures mentioned, at least one of the code means may in alternative embodiments be implemented at least partly as hardware circuits.

[071] As previously mentioned, the processor may be a single CPU (Central Processing Unit), but could also comprise two or more processing units. For example, the processor may include general purpose microprocessors, instruction set processors and/or related chip sets, and/or special purpose microprocessors such as ASICs (Application Specific Integrated Circuits). The processor may also comprise board memory for caching purposes. The computer program may be carried by a computer program product connected to the processor. The computer program product may comprise a computer readable medium on which the computer program is stored. For example, the computer program product may be a flash memory, a RAM (Random Access Memory), a ROM (Read-Only Memory) or an EEPROM, and the computer program modules described above could in alternative embodiments be distributed on different computer program products in the form of memories within the device or server node.

[072] While the methods and arrangements suggested above have been described with reference to specific embodiments provided as examples, the description is generally only intended to illustrate the suggested solution and should not be taken as limiting the scope of the suggested methods and arrangements, which are defined by the appended claims. While described in general terms, the methods and arrangements may be applicable e.g. to different types of communication networks applying different access technologies.

[073] The herein suggested solution may be used e.g. upon connecting to the communication service, at regular intervals, and/or upon adding a user to a contact list. Performing the suggested solution at regular intervals would enable keeping the service up to date with regard to e.g. available network interfaces, etc. The suggested solution could be combined with the network API being defined in W3C [4], and/or with the geolocation API defined by W3C, in order to determine the location of a device. The suggested solution may be used in association with e.g. a database, such that the information obtained from the use of the APIs as outlined here may be combined with other sources of information, e.g. statistics from previous service sessions.

[074] It is also to be understood that the choice of interacting units or modules, as well as the naming of the units, is only for exemplifying purposes, and nodes suitable to execute any of the methods described above may be configured in a plurality of alternative ways in order to be able to execute the suggested process actions. It should also be noted that the units or modules described in this disclosure are to be regarded as logical entities and not necessarily as separate physical entities.

ABBREVIATIONS

API Application Programming Interface

ICE Interactive Connectivity Establishment

IETF Internet Engineering Task Force

RTC Real-Time Communication

W3C World Wide Web Consortium

REFERENCES

[1] http://dev.w3.org/2011/webrtc/editor/getusermedia.html

[2] http://dev.w3.org/2011/webrtc/editor/webrtc.html

[3] http://dvcs.w3.org/hg/dap/raw-file/tip/media-stream-capture/proposals/SettingsAPI_proposal_v6.html

[4] http://dvcs.w3.org/hg/dap/raw-file/tip/network-api/Overview.html

[5] IETF RFC (Request for Comments) 5245, April 2010