Title:
SYSTEM AND METHOD OF CONDUCTING CONFERENCE CALLS USING A MOUNT FOR POSITIONING AND ORIENTING A MOBILE COMPUTER DEVICE
Document Type and Number:
WIPO Patent Application WO/2022/135648
Kind Code:
A1
Abstract:
A conference call system allowing participants to work together on a virtual representation of an object in real time, the system comprising: a main server configured to manage the conference call, save conference data and provide authorization for conference participants; two or more mobile computing devices associated with the main server, at least one of which is placed on a mount for positioning and orienting the mobile computing device; and an adjustable optical system attached to the mount and configured to adjust the field of view of the rear camera on the object. Each of the mobile computing devices comprises: software for conducting conference calls; a module for generating a virtual representation of the object; a communication module; a module for displaying a virtual representation of an object on a screen of a mobile computing device of each conference participant; and a control module associated with the main server and configured to control the communication module. The participants discuss and work on the physical object, represented by the virtual object to which the rear camera points, and, if necessary, make corrections to the virtual object during the conference call, for example in the text or in the design. Therefore, many participants can collaborate on a virtual representation of the object in real time during the conference call.

Inventors:
SURANCHIN ADIL (KZ)
RAKHMANBERDIYEV ISKANDER (US)
MORENO ALEXANDER (RU)
KLINKOV KONSTANTIN (RU)
Application Number:
PCT/EA2020/050004
Publication Date:
June 30, 2022
Filing Date:
December 22, 2020
Assignee:
ORBI INC (US)
SURANCHIN ADIL (KZ)
International Classes:
H04M7/00; F16M11/10; F16M11/38; G06F1/16; G06F3/14; G09B5/10; H04N1/00; H04N1/42
Domestic Patent References:
WO2013076554A12013-05-30
WO2011145539A12011-11-24
Foreign References:
US20190260966A12019-08-22
US20160282901A12016-09-29
RU2611041C22017-02-20
RU126492U12013-03-27
RU2534951C22014-12-10
Attorney, Agent or Firm:
LAW FIRM "GORODISSKY & PARTNERS" LTD. (RU)
Claims:
CLAIMS

1. A conference call system allowing participants to work together on a virtual representation of an object in real time, the system comprising: a main server configured to manage the conference call, save conference data and provide authorization for conference participants, two or more mobile computing devices associated with the main server, at least one of which is placed on a mount for positioning and orienting the mobile computing device and has a rear camera, wherein the mount comprises a support member pivotally attached to a base and configured to position the mobile computing device mounted on the holder of the mount in the horizontal and vertical planes and to set the angle of inclination of the mobile computing device relative to the base, and an adjustable optical system is attached to the support member and configured to adjust the field of view of the rear camera on the object, such that the support member and the holder are configured to direct the field of view of the rear camera to the adjustable optical system and the user can see the screen of the mobile computing device, wherein each of the two or more mobile computing devices comprises: software for conducting conference calls, a module for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background, a communication module configured to exchange information between conference call participants by connecting each mobile computing device of the two or more mobile computing devices with each of the other mobile computing devices and transmitting the virtual representation of the object from the module for generating a virtual representation of the object of each mobile computing device to the mobile computing device of each conference participant, a module for displaying a virtual representation of an object on a screen of a mobile computing device of each conference participant, associated with the communication module and with the module for generating a virtual representation of the object, and a control module associated with the main server and configured to control the communication module.

2. The system according to claim 1, wherein the communication module of each of the two or more mobile computing devices comprises a submodule for receiving audio, a submodule for receiving video, and a submodule for exchanging messages, said submodules being connected to a microphone, a front-facing camera, and a speaker, respectively, and configured to form text, audio and video real-time communication channels configured to receive a voice signal and a video signal of the conference participant, as well as sounds from other conference participants, respectively.

3. The system according to claim 1, wherein the module for generating a virtual representation of the object is connected to the rear camera to receive the image of the object.

4. The system according to claim 1, wherein each of the two or more mobile computing devices further comprises a user interface associated with the communication module and the module for displaying a virtual representation of the object, and the user interface is configured to receive a correction and addition signal from a conference participant in the virtual representation of the object.

5. The system according to claim 1, further comprising at least one personal computer (PC) associated with the main server and comprising: software for conducting conference calls, a module for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background, a communication module configured to exchange information between the conference call participants, said exchange being carried out through the communication of each personal computer with the main server, a module for displaying a virtual representation of an object on a screen of the PC, associated with the communication module, and a control module associated with the main server and configured to control the module for generating a virtual representation of the object, the communication module and the module for displaying a virtual representation of the object, a Web-Socket server associated with each of the two or more mobile computing devices and with each of the at least one personal computer, and a Janus WebRTC server associated with each of the two or more mobile computing devices and with each of the at least one personal computer.

6. The system according to claim 1, wherein the module for generating a virtual representation of the object of the at least one of the mobile computing devices which is not placed on a mount, is further configured to generate a virtual representation of an object from the image added by a user of this mobile computing device.

7. The system according to claim 1, wherein the object represents a two-dimensional or three-dimensional object.

8. The system according to claim 7, wherein a paper medium is used as a two-dimensional object.

9. The system according to claim 1, wherein the adjustable optical system comprises a reflective element configured to reflect the object without distortion, including at reflection angles of more than 45 degrees, while the adjustable optical system is attached to the support member by means of a rod, pivotally connected to a pair of rods of the support member that is attached to the base, and said rods are configured to adjust the angle of inclination of the reflective element so as to direct the field of view of the rear camera towards the object.

10. A method for conducting a conference call, allowing participants to work jointly on a virtual representation of an object in real time, using the conference call system of claim 1, the method comprising: placing the mobile computing device of the conference presenter on the holder of the mount of claim 1, and orienting the holder attached to the support member in a horizontal position above the middle part of the mount base at a distance of 20-25 cm from the base, starting the software on the mobile computing device of each conference participant after receiving a signal from the main server, placing an object, intended for joint consideration, on the base, directing the field of view of the rear camera to the adjustable optical system, and manually adjusting the adjustable optical system so that the portion of the base on which the object is placed completely falls into the field of view of the rear camera, displaying the object image transmitted from the rear camera on the display of the presenter's mobile computing device, forming a virtual representation of the object by means of a module for generating a virtual representation of an object, transmitting a virtual representation of the object by means of the communication module from the presenter to the mobile computing device of each conference call participant, and displaying a virtual representation of the object on the display of the mobile computing device of each conference call participant, including the presenter, so that the presenter can see edits that are added to the object image by other participants of the conference call in real time, wherein the software of the mobile computing device is configured to allow each of the conference call participants to insert corrections and comments to the virtual representation of the object on the screen of his mobile computing device by means of touch input and/or text input from the keyboard and/or voice input via a voice communication channel and to transmit them to all conference participants.

11. A method for remote collaboration between a teacher and students on a virtual representation of an object, using the conference call system according to claim 1, the method comprising: placing the mobile computing devices of at least a part of the students in the holders of the mounts of claim 1, and orienting the holder attached to the support member in a horizontal position above the middle part of the mount base at a distance of 20-25 cm from the base, starting the software on the mobile computing device of each student participating in the joint work with the teacher via the conference call after receiving a signal from the main server, placing an object, intended for joint work, on the base of the mount, directing the field of view of the rear camera to the adjustable optical system and manually adjusting the adjustable optical system so that the portion of the base on which the object is placed is completely within the field of view of the rear camera, generating a virtual representation of the object by means of a module for generating a virtual representation of an object for the students of the at least a part of the students using mounts, said virtual representation of the object to be considered by the teacher, outputting the virtual representation of the object, formed by the module for generating the virtual representation of the object, on the display of the student's mobile computing device of the at least a part of the students using the mount, transmitting the generated virtual representations of objects by the corresponding communication modules to the teacher's computing device, and displaying the virtual representation of the object, which is to be considered by the teacher, on the display of the teacher's computing device in real time, wherein the software of the teacher's computing device is configured to allow the teacher to insert corrections and comments to the virtual representation of the object on the screen of his computing device through touch input, and/or text input from the keyboard, and/or voice input via a voice communication channel and to transmit this correction to the student whose virtual object is being considered.

12. The method according to claim 11, wherein the software of the teacher's computing device is further configured such that the virtual representation of the object being considered by the teacher is transmitted to all students so that all students see the edits made by the teacher.

13. The method according to claim 12, wherein the software of the mobile computing device of each student is configured so that all students can edit the virtual representation of the object on the display of their mobile device and transmit the inserted edit to all the students participating in the joint work with the teacher via the conference call in real time.

14. The method according to claim 11, wherein the object represents a two-dimensional or three-dimensional object, wherein a paper carrier is used as the two-dimensional object.

15. The method of claim 11, wherein the teacher’s computing device is a personal computer or a mobile computing device.

16. The method of claim 15, wherein the teacher's mobile computing device is mounted on a mount to form a virtual representation of the object on the teacher's mobile computing device for displaying the virtual representation of the object to the students.


AMENDED CLAIMS received by the International Bureau on 21 April 2022 (21.04.2022)

1. A conference call system (1) allowing participants to work together on a virtual representation of an object in real time, the system comprising: a main server (2) configured to manage the conference call, save conference data and provide authorization for conference participants; two or more mobile computing devices (3A, 3B) associated with the main server (2), at least one of the two or more mobile computing devices comprises a front-facing camera (22), and at least one of the two or more mobile computing devices is placed on a mount (4) for positioning and orienting the mobile computing device (3) and has a rear camera (5), wherein the mount (4) comprises: a support member (6) pivotally attached to a base (7) and configured to position the mobile computing device mounted on the holder (8) of the mount in the horizontal and vertical planes and to set the angle of inclination of the mobile computing device (3) relative to the base (7); and an adjustable optical system (9) attached to the support member (6) and configured to adjust the field of view of the rear camera (5) on the object (10), such that the support member (6) and the holder (8) are configured to direct the field of view of the rear camera (5) to the adjustable optical system (9) and the user can see the screen (12) of the mobile computing device (3B); wherein each of the two or more mobile computing devices (3A, 3B) comprises: software (13) for conducting conference calls; a module (14) for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background, wherein the module (14) for generating a virtual representation of the object is connected to the rear camera (5) to receive the image of the object; a communication module (15) configured to form text, audio and video real-time communication channels in addition to transmitting a virtual representation of the object, said channels being configured to receive a voice signal and a video signal of the conference participant, as well as sounds from other conference participants, respectively, and to exchange information between conference call participants by connecting each mobile computing device of the two or more mobile computing devices (3A, 3B) with each of the other mobile computing devices and transmitting the virtual representation of the object from the module (14) for generating a virtual representation of the object of each mobile computing device to the mobile computing device of each conference participant; a module (16) for displaying a virtual representation of an object on a screen of a mobile computing device of each conference participant, associated with the communication module (15) and with the module (14) for generating a virtual representation of the object; and a control module (17) associated with the main server (2) and configured to control the communication module (15).

2. The system according to claim 1, wherein the communication module (15) of each of the two or more mobile computing devices comprises a submodule (18) for receiving audio, a submodule (19) for receiving video, and a submodule (20) for exchanging messages, said submodules (18, 19, 20) being connected to a microphone (21), the front-facing camera (22), and a speaker (23), respectively.

3. The system according to claim 1, wherein each of the two or more mobile computing devices (3A, 3B) further comprises a user interface (24) associated with the communication module (15) and the module (16) for displaying a virtual representation of the object, and the user interface is configured to receive a correction and addition signal from a conference participant in the virtual representation of the object.

4. The system according to claim 1, further comprising at least one personal computer (PC) (25) associated with the main server (2) and comprising: software (26) for conducting conference calls; a module (29) for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background; a communication module (28) configured to exchange information between the conference call participants, said exchange being carried out through the communication of each personal computer with the main server; a module (30) for displaying a virtual representation of an object on a screen of the PC, associated with the communication module (28); and a control module (27) associated with the main server (2) and configured to control the module (29) for generating a virtual representation of the object, the communication module (28) and the module (30) for displaying a virtual representation of the object; a Web-Socket server (31) associated with each of the two or more mobile computing devices (3A, 3B) and with each of the at least one personal computer (25); a Janus WebRTC server (32) associated with each of the two or more mobile computing devices (3A, 3B) and with each of the at least one personal computer (25).

5. The system according to claim 1, wherein the module (14) for generating a virtual representation of the object of the at least one of the mobile computing devices (3A) which is not placed on a mount, is further configured to generate a virtual representation of an object from the image added by a user of this mobile computing device.

6. The system according to claim 1, wherein the object (10) represents a two-dimensional or three-dimensional object.

7. The system according to claim 6, wherein a paper medium is used as a two-dimensional object.

8. The system according to claim 1, wherein the adjustable optical system (9) comprises a reflective element configured to reflect the object without distortion, including at reflection angles of more than 45 degrees, while the adjustable optical system (9) is attached to the support member (6) by means of a rod (33), pivotally connected to a pair of rods of the support member (6) that is attached to the base, and said rods are configured to adjust the angle of inclination of the reflective element so as to direct the field of view of the rear camera (5) towards the object (10).

9. A method for conducting a conference call, allowing participants to work jointly on a virtual representation of an object in real time, using the conference call system of claim 1, the method comprising: placing the mobile computing device (3B) of the conference presenter, provided with a front camera (22), on the holder of the mount (4) of claim 1, and orienting the holder (8) attached to the support member (6) in a horizontal position above the middle part of the mount base (7) at a distance of 20-25 cm from the base; starting the software on the mobile computing device of each conference participant after receiving a signal from the main server; placing an object (10), intended for joint consideration, on the base (7); directing the field of view of the rear camera (5) to the adjustable optical system (9), and manually adjusting the adjustable optical system so that the portion of the base (7) on which the object is placed completely falls into the field of view of the rear camera (5); displaying the image of the object (10) transmitted from the rear camera on the display of the presenter's mobile computing device (3B); forming a virtual representation of the object (10) by means of the module (14) for generating a virtual representation of an object from the image received by the rear camera (5), while providing image analysis and separation of the object image from the background; forming channels of text, audio and video communication in real time, in addition to transmission of a virtual representation of the object, to receive the voice signal and video signal of the conference participant and sounds from other conference participants by means of the communication module (15); transmitting a virtual representation of the object by means of the communication module (15) from the presenter to the mobile computing device (3A) of each conference call participant; and displaying a virtual representation of the object on the display of the mobile computing device (3A) of each conference call participant, including the presenter, so that the presenter can see edits that are added to the object image by other participants of the conference call in real time; wherein the software of the mobile computing device is configured to allow each of the conference call participants to insert corrections and comments to the virtual representation of the object on the screen of his mobile computing device by means of touch input and/or text input from the keyboard and/or voice input via a voice communication channel and to transmit them to all conference participants.

10. A method for remote collaboration between a teacher and students on a virtual representation of an object, using the conference call system according to claim 1, the method comprising: placing the mobile computing devices (3A, 3B) of at least a part of the students in the holders (8) of the mounts (4) of claim 1, and orienting the holder (8) attached to the support member (6) in a horizontal position above the middle part of the mount base (7) at a distance of 20-25 cm from the base; starting the software (13) on the mobile computing device (3A) of each student participating in the joint work with the teacher via the conference call after receiving a signal from the main server (2); placing an object (10), intended for joint work, on the base (7) of the mount (4); directing the field of view of the rear camera (5) to the adjustable optical system (9) and manually adjusting the adjustable optical system so that the portion of the base (7) on which the object (10) is placed is completely within the field of view of the rear camera (5); generating a virtual representation of the object by means of the module (14) for generating a virtual representation of an object for the students of the at least a part of the students using mounts, said virtual representation of the object to be considered by the teacher; outputting the virtual representation of the object, formed by the module (14) for generating the virtual representation of the object, on the display (12) of the student's mobile computing device (3A) of the at least a part of the students using the mount; transmitting the generated virtual representations of objects by the corresponding communication modules (15) to the teacher's computing device (3B); and displaying the virtual representation of the object, which is to be considered by the teacher, on the display (12) of the teacher's computing device in real time; wherein the software of the teacher's computing device (3B) is configured to allow the teacher to insert corrections and comments to the virtual representation of the object on the screen of his computing device through touch input, and/or text input from the keyboard, and/or voice input via a voice communication channel and to transmit this correction to the student whose virtual object is being considered.

11. The method according to claim 10, wherein the software of the teacher's computing device (3B) is further configured such that the virtual representation of the object being considered by the teacher is transmitted to all students so that all students see the edits made by the teacher.

12. The method according to claim 11, wherein the software of the mobile computing device (3A) of each student is configured so that all students can edit the virtual representation of the object on the display of their mobile device and transmit the inserted edit to all the students participating in the joint work with the teacher via the conference call in real time.


13. The method according to claim 10, wherein the object represents a two-dimensional or three-dimensional object, wherein a paper carrier is used as the two-dimensional object.

14. The method of claim 10, wherein the teacher's computing device (3B) is a personal computer or a mobile computing device.

15. The method of claim 14, wherein the teacher's mobile computing device (3B) is mounted on a mount (4) to form a virtual representation of the object on the teacher's mobile computing device for displaying the virtual representation of the object to the students.


Description:
SYSTEM AND METHOD OF CONDUCTING CONFERENCE CALLS USING A MOUNT FOR POSITIONING AND ORIENTING A MOBILE COMPUTER DEVICE

Technical Field

The present invention relates to a conferencing system and method for allowing participants to collaborate on a virtual representation of an object in real time using a mount to position and orient a mobile computing device. The method can be used for a conference call, during which multiple participants could collaborate on a virtual representation of an object in real time.

Background of the Invention

Conference calling is widely used throughout the world, as it links a large number of participants located at great distances from each other. In addition, it provides a wide range of possibilities for using additional tools, such as screen and presentation sharing, online broadcasts, and video call recording. Currently, conferencing technology is an important tool for making operational decisions in various fields of activity. Users can make video calls for real-time meetings and conferences, training seminars and coordination of work, as well as for distance education and even medicine. This significantly saves time and money and ensures prompt decision-making.

Known from the prior art are METHODS AND SYSTEMS FOR COLLABORATIVE APPLICATION SHARING AND CONFERENCING (see, for example, RU 2611041, published December 27, 2015, also published as WO 2013/076554). The method comprises the steps of: providing a tiered remote access framework comprising an application tier, a server tier and a client tier; providing a server remote access application in the server tier, the server remote access application being capable of modifying a state model; providing a client remote access application in either the client tier or the application tier; providing a client media sharing application in the client tier; providing a conferencing manager application to the server tier, the conferencing manager application receiving shared media; and modifying the state model to further include the shared media such that the shared media is provided in at least one of the client computing devices.

In the method, a participant may be capable of sharing various media such as video, audio, desktop screen scrapes or text messages with other participants in the collaborative session.

The participants in the sessions are identified by a Userinfo tag. Each participant is assigned a default color (DefaultColor) to represent the user's annotations within the interactive digital surface layer. Any displayable color may be selected as a default color for participants to the collaborative session. A prioritization of colors may be defined, such that a first user is assigned blue, a second user is assigned green, a third user is assigned orange, etc.

Participants in a collaborative session may be limited to interacting solely with the shared, remotely-accessed application, i.e., participants may be unable to interact with various media stored on, or accessed by, the client computing devices of other participants; however, a participant may be capable of sharing various media such as, for example, video, audio, desktop screen scrapes, text messages, libraries of images, etc., with other participants in the collaborative session.

The conferencing server machine may receive the shared media either directly from the client media sharing application or indirectly from the client remote access application. The user interface includes a floating tool bar, which provides the participant with functional controls, such as, activating the interactive digital surface layer, capturing an image of the participant's desktop, which may then be shared with the other participants in the collaborative session, etc. The interactive digital surface layer is operable to receive user input to collaboratively display annotations input by users during the sessions.

In this system, participants are ranked: there is a first user who manages the conference call, and other participants. Participants cannot interact with or access various multimedia data stored on other participants' computing devices, but they can share various multimedia data, such as video, audio, screen captures, text messages, image libraries, and so on, with other participants in the shared session.

As a disadvantage, it should be noted that the specified system does not comprise a module for generating a virtual representation of an object, by means of which the image of a physical object, captured by a rear camera of the computing device, is converted into a virtual representation of the object. The system also does not comprise a communication module for transmitting the virtual representation of the object to the mobile computing device of all conference call participants to discuss possible shortcomings of the physical object and, if necessary, make corrections, for example, in the text or in the design, during the conference call. Therefore, many participants cannot collaborate on a virtual representation of an object in real time, in particular, during the conference call. Also, it is not possible to position and orient the camera of the computing device in the required position relative to the physical object, thereby softening the blur in the image captured by the rear camera.

Known from the prior art is an integrated system of distance learning and video conferencing (see RU 126492 U1, published March 27, 2013). This utility model relates to the field of automated teaching aids and information transfer and can be used for complex group and/or individual training of applicants and students, in the process of advanced training and retraining of personnel, in teaching children in difficult life situations, during foreign internships, in training air traffic controllers and pilots, as well as for remote control of production processes and monitoring the progress of operations in medicine. The system of distance learning and video conferencing comprises a block of servers, an interactive block, an information node, and a block of users and a block of listeners connected to them through communication channels, each of which has the architecture necessary for solving distance learning problems. The use of the system provides overall integration of various components into a single technical solution, the creation of a single information space and the connection of users to it.

The server block contains a client server, a video conferencing server, a content server, a webcast server, and a power supply for the server block. The video conference terminal contains a microphone system, a camera, an interactive whiteboard, a screen, a personal computer, a simulator and a 3D scanner, a 3D printer, a document camera and additional speakers connected to a personal computer.

As a disadvantage, it should be noted that the specified system does not comprise a module for generating a virtual representation of an object, by means of which the image of a physical object, captured by a rear camera of the computing device, is converted into a virtual representation of the object. The system also does not comprise a communication module for transmitting the virtual representation of the object to the mobile computing device of all conference call participants to discuss possible shortcomings of the physical object and, if necessary, make corrections, for example, in the text or in the design, during the conference call. Therefore, many participants cannot collaborate on a virtual representation of an object in real time, in particular, during the conference call. Also, it is not possible to position and orient the camera of the computing device in the required position relative to the physical object, thereby softening the blur in the image captured by the rear camera.

The closest technical solution is DEVICE, METHOD AND SYSTEM FOR SHARING OF PLOTTED IMAGE AT MULTIPLE WORKPLACES, PROGRAMME AND RECORDABLE MEDIA (see RU 2534951 C2, published December 10, 2014, also published as WO 2011/145539), relating to conference communication facilities, which allow sharing images drawn on lecture boards (white boards), etc., which are objects for visual representation, among a variety of workspaces.

The apparatus includes an image storage unit configured to store the images drawn at the respective sites; an image synthesizing unit configured to superimpose and synthesize the images stored in the image storage unit in a manner so as not to include the images drawn at transmission destinations; and an image transmission unit configured to transmit the images synthesized by the image synthesizing unit to the respective sites.

The method is performed as follows. Each workplace (a site) is equipped with a white board, a visualization device that projects images transmitted from the sites onto the white board so as to be displayed. A photographing device captures the images of the white board as a whole. Among those displayed on the white board of site 1, "A" is the image drawn at the site 1, and "B" and "C" are the images drawn at the sites 2 and 3, respectively. At the site 1, these images are provided as display images displayed by the visualization device, for example the visualization device 112 is realized by a projector that projects image data onto the white board so as to be visualized.

At the site 1, an information processing apparatus such as a personal computer (PC) 116 is further installed. The PC controls projection by the visualization device, capturing of images to be shared with the other sites by the photographing device, such as a shooting device or a digital video camera, transmission of images drawn at the site 1 to the sites 2 and 3, and the like.

The PC 116 acquires the images of the site 1, which are drawn at the site 1, from the captured images of the white board and transmits them to the server connected via a network. Further, the PC receives display images constituted by the images of the other sites other than the site 1 from the server and causes the visualization device to project them.

The photographing device can be realized by a digital camera, a video camera, or the like. The photographing device acquires the images of the white board as moving images in, for example, a JPEG format, and sequentially transmits image files to the PC. The white boards have marks at their four corners or the like to share relative sizes of images to be shared between the remote sites.

At the site 1, the images "B" and "C" drawn at the sites 2 and 3, respectively, transmitted to the server are projected as display images via the projector onto the white board where the image "A" of the own site 1 is drawn. Thus, the images "A," "B," and "C" are displayed on the white board 110 as superimposed images.

Further, at the site 2, the images "A" and "C" drawn at the sites 1 and 3, respectively, are projected as display images via the projector onto the second white board where the image "B" of the own site 2 is drawn. Thus, the images "A", "B" and "C" are displayed on the second white board as superimposed images. Moreover, at the site 3, the images "A" and "B" drawn at the sites 1 and 2, respectively, are projected as display images via the projector onto the third white board where the image "C" of the own site 3 is drawn. Thus, the images "A", "B" and "C" are displayed on the white board as superimposed images. Consequently, the same images are shared between the sites 1, 2, and 3.

The server manages a client list for identifying the current connected PCs to perform multiple site image sharing. In order to cause images to be shared between the PCs registered in the client list or between more PCs, the server performs image processing to generate display images to be displayed at the respective sites.

As a disadvantage, it should be noted that the specified system does not comprise a module for generating a virtual representation of an object, by means of which the image of a physical object, captured by a rear camera of the computing device, is converted into a virtual representation of the object. The system also does not comprise a communication module for transmitting the virtual representation of the object to the mobile computing device of all conference call participants to discuss possible shortcomings of the physical object and, if necessary, make corrections, for example, in the text or in the design, during the conference call. Therefore, many participants cannot collaborate on a virtual representation of an object in real time, in particular, during the conference call. Also, it is not possible to position and orient the camera of the computing device in the required position relative to the physical object, thereby softening the blur in the image captured by the rear camera.

Summary of the Invention

The present invention has been made in view of the above problems and may have an object of providing a conference call system allowing participants to collaborate in real time on a virtual representation of an object placed on a base of a mount for positioning and orienting a mobile computing device such that the object is in the field of view of the rear camera of the mobile computing device, wherein the mobile computing device comprises a module for generating a virtual representation of the object and a communication module for transmitting the virtual representation of the object to the mobile computing devices of all participants of the conference call in real time.

The present invention may also have an object of providing a method for conducting a conference call, allowing participants to work jointly on a virtual representation of an object in real time, using the conference call system of claim 1.

The present invention may also have an object of providing a method for remote collaboration between a teacher and students on a virtual representation of an object, using the conference call system according to claim 1.

According to an aspect of the present invention, there is provided a conference call system allowing participants to work together on a virtual representation of an object in real time, the system comprising: a main server configured to manage the conference call, save conference data and provide authorization for conference participants, two or more mobile computing devices associated with the main server, at least one of which is placed on a mount for positioning and orienting the mobile computing device and has a rear camera, wherein the mount comprises a support member pivotally attached to a base and configured to position the mobile computing device mounted on the holder of the mount in the horizontal and vertical planes and to set the angle of inclination of the mobile computing device relative to the base, and an adjustable optical system is attached to the support member and configured to adjust the field of view of the rear camera on the object, such that the support member and the holder are configured to direct the field of view of the rear camera to the adjustable optical system and the user can see the screen of the mobile computing device, wherein each of the two or more mobile computing devices comprises: software for conducting conference calls, a module for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background, a communication module configured to exchange information between conference call participants by connecting each mobile computing device of the two or more mobile computing devices with each of the other mobile computing devices and transmitting the virtual representation of the object from the module for generating a virtual representation of the object of each mobile computing device to the mobile computing device of each conference participant, a module for displaying a virtual representation of an object on a screen of a mobile computing device of each conference participant, associated with the communication module and with the module for generating a virtual representation of the object, and a control module associated with the main server and configured to control the communication module.

Preferably, the communication module of each of the two or more mobile computing devices comprises a submodule for receiving audio, a submodule for receiving video, and a submodule for exchanging messages, said submodules being connected to a microphone, a front-facing camera, and a speaker, respectively, and configured to form text, audio and video real-time communication channels configured to receive a voice signal and a video signal of the conference participant, as well as sounds from other conference participants, respectively.

Preferably, the module for generating a virtual representation of the object is connected to the rear camera to receive the image of the object.

Preferably, each of the two or more mobile computing devices further comprises a user interface associated with the communication module and the module for displaying a virtual representation of the object, and the user interface is configured to receive a correction and addition signal from a conference participant in the virtual representation of the object.

Preferably, the system further comprises at least one personal computer (PC) associated with the main server and comprising: software for conducting conference calls, a module for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background, a communication module configured to exchange information between the conference call participants, said exchange being carried out through the communication of each personal computer with the main server, a module for displaying a virtual representation of an object on a screen of the PC, associated with the communication module, and a control module associated with the main server and configured to control the module for generating a virtual representation of the object, the communication module and the module for displaying a virtual representation of the object, a Web-Socket server associated with each of the two or more mobile computing devices and with each of the at least one personal computer, and a Janus WebRTC server associated with each of the two or more mobile computing devices and with each of the at least one personal computer.

Preferably, the module for generating a virtual representation of the object of the at least one of the mobile computing devices which is not placed on a mount, is further configured to generate a virtual representation of an object from the image added by a user of this mobile computing device.

Preferably, the object represents a two-dimensional or three-dimensional object, wherein a paper medium is used as a two-dimensional object.

Preferably, the adjustable optical system comprises a reflective element configured to reflect the object without distortion, including at reflection angles of more than 45 degrees, while the adjustable optical system is attached to the support member by means of a rod, pivotally connected to a pair of rods of the support member attached to the base, and said rods are configured to adjust the angle of inclination of the reflective element so as to direct the field of view of the rear camera towards the object.

According to another aspect of the present invention, there is provided a method for conducting a conference call, allowing participants to work jointly on a virtual representation of an object in real time, using the conference call system of claim 1, the method comprising: placing the mobile computing device of the conference presenter on the holder of the mount of claim 1, and orienting the holder attached to the support member in a horizontal position above the middle part of the mount base at a distance of 20-25 cm from the base, starting the software on the mobile computing device of each conference participant after receiving a signal from the main server, placing an object, intended for joint consideration, on the base, directing the field of view of the rear camera to the adjustable optical system, and manually adjusting the adjustable optical system so that the portion of the base on which the object is placed completely falls into the field of view of the rear camera, displaying the object image transmitted from the rear camera on the display of the presenter's mobile computing device, forming a virtual representation of the object by means of a module for generating a virtual representation of an object, transmitting a virtual representation of the object by means of the communication module from the presenter to the mobile computing device of each conference call participant, and displaying a virtual representation of the object on the display of the mobile computing device of each conference call participant, including the presenter, so that the presenter can see edits that are added to the object image by other participants of the conference call in real time, wherein the software of the mobile computing device is configured to allow each of the conference call participants to insert corrections and comments to the virtual representation of the object on the screen of his mobile computing device by means of touch input and/or text input from the keyboard and/or voice input via a voice communication channel and to transmit them to all conference participants.

According to still another aspect of the present invention, there is provided a method for remote collaboration between a teacher and students on a virtual representation of an object, using the conference call system according to claim 1, the method comprising: placing the mobile computing devices of at least a part of the students in the holders of the mounts of claim 1, and orienting the holder attached to the support member in a horizontal position above the middle part of the mount base at a distance of 20-25 cm from the base, starting the software on the mobile computing device of each student participating in the joint work with the teacher via the conference call after receiving a signal from the main server, placing an object, intended for joint work, on the base of the mount, directing the field of view of the rear camera to the adjustable optical system and manually adjusting the adjustable optical system so that the portion of the base on which the object is placed is completely within the field of view of the rear camera, generating a virtual representation of the object by means of a module for generating a virtual representation of an object for the students of the at least a part of the students using mounts, said virtual representation of the object to be considered by the teacher, outputting the virtual representation of the object, formed by the module for generating the virtual representation of the object, on the display of the student's mobile computing device of the at least a part of the students using the mount, transmitting the generated virtual representations of objects by the corresponding communication modules to the teacher's computing device, and displaying the virtual representation of the object, which is to be considered by the teacher, on the display of the teacher's computing device in real time, wherein the software of the teacher's computing device is configured to allow the teacher to insert corrections and comments to the virtual representation of the object on the screen of his computing device through touch input, and/or text input from the keyboard, and/or voice input via a voice communication channel and to transmit this correction to the student whose virtual object is being considered.

Preferably, the software of the teacher's computing device is further configured such that the virtual representation of the object being considered by the teacher is transmitted to all students so that all students see the edits made by the teacher.

Preferably, the software of the mobile computing device of each student is configured so that all students can edit the virtual representation of the object on the display of their mobile device and transmit the inserted edit to all the students participating in the joint work with the teacher via the conference call in real time.

Preferably, the object represents a two-dimensional or three-dimensional object, wherein a paper carrier is used as the two-dimensional object.

Preferably, the teacher’s computing device is a personal computer or a mobile computing device.

Preferably, the teacher's mobile computing device is mounted on a mount to form a virtual representation of the object on the teacher's mobile computing device for displaying the virtual representation of the object to the students.

The technical effect achieved by the claimed invention is that the proposed system allows participants to work together on a virtual representation of an object in real time. Each of the participants is equipped with a mobile computing device, and at least one of the mobile computing devices, for example that of the conference presenter, is placed on a mount for positioning and orienting the mobile computing device; the design of the mount ensures that the mobile computing device is fixed in the desired position relative to the object, so that the object placed on the base of the mount falls into the field of view of the rear camera, which forms a virtual representation of this object. Because the mobile computing device comprises a module for generating a virtual representation of an object and a module for transmitting the virtual representation of the object to the computing devices of all conference participants in real time, many participants can work together on a virtual representation of an object in real time during a conference call. In addition, conference call participants can add edits and comments to the virtual representation in real time.

Brief Description of Drawings

The invention is hereinafter explained by the description of preferred embodiments thereof with reference to the accompanying drawings, in which:

Fig. 1 illustrates a scheme of a conference call system allowing participants to work together on a virtual representation of an object in real time, the first embodiment, according to the invention;

Fig. 2 illustrates a general view of a mount (front view) on which a mobile computing device is installed;

Fig. 3 illustrates a general view of a mount (rear view) on which a mobile computing device is installed;

Fig. 4 illustrates a scheme of a mobile computing device, according to the invention;

Fig. 5 illustrates a scheme of a conference call system comprising mobile computing devices and personal computers, allowing participants to work together on a virtual representation of an object in real time, the second embodiment, according to the invention;

Fig. 6 illustrates a scheme of a conference call system comprising mobile computing devices and personal computers, and additionally a WebSocket server and a Janus WebRTC server.

Description of Preferred Embodiments

According to the invention, there is provided a conference call system 1 (fig.1) allowing participants to work together on a virtual representation of an object in real time. The system 1 comprises a main server 2 configured to manage the conference call, save conference data and provide authorization for conference participants.
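
By way of illustration only, the sketch below outlines one minimal way such a main server could manage conferences, persist conference data and authorize participants. The class and method names (MainServer, create_conference, join, save) are assumptions of this description, not elements of the claimed invention.

```python
# Illustrative sketch only: a minimal, hypothetical model of the main server's
# three duties named above (managing calls, saving conference data, authorizing
# participants). Names and data layout are assumptions, not the patented design.
import secrets
from dataclasses import dataclass, field

@dataclass
class Conference:
    conference_id: str
    presenter: str
    participants: set = field(default_factory=set)
    saved_data: list = field(default_factory=list)   # e.g. annotation history

class MainServer:
    def __init__(self, allowed_users):
        self.allowed_users = set(allowed_users)       # authorization list
        self.conferences = {}

    def create_conference(self, presenter):
        conference_id = secrets.token_hex(4)
        self.conferences[conference_id] = Conference(conference_id, presenter, {presenter})
        return conference_id

    def authorize(self, user):
        return user in self.allowed_users

    def join(self, conference_id, user):
        if not self.authorize(user):
            raise PermissionError(f"{user} is not authorized for this conference")
        self.conferences[conference_id].participants.add(user)

    def save(self, conference_id, record):
        # persist conference data (kept in memory in this sketch)
        self.conferences[conference_id].saved_data.append(record)

# Usage: server = MainServer(["presenter", "alice"]); cid = server.create_conference("presenter")
```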

The system comprises two or more mobile computing devices 3A, 3B associated with the main server 2, in accordance with the number of the conference call participants; fig.1 shows three mobile computing devices 3. At least one of the mobile computing devices 3B is placed on a mount 4 (fig.2) for positioning and orienting the mobile computing device and has a rear camera 5.

The mount 4 comprises a support member 6 pivotally attached to a base 7 and configured to position the mobile computing device 3B mounted on the holder 8 in the horizontal and vertical planes and to set the angle of inclination of the mobile computing device relative to the base 7. An adjustable optical system 9 (fig.3) is attached to the support member 6 of the mount 4 and is configured to adjust the field of view of the rear camera 5 on the object 10 placed on the base 11, such that the support member 6 and the holder 8 are configured to direct the field of view of the rear camera 5 to the adjustable optical system 9 and the user can see the screen 12 of the mobile computing device 3B.

Each mobile computing device 3A, 3B (fig.1) comprises: software 13 for conducting conference calls, a module 14 for generating a virtual representation of the object placed on the base 11, configured to analyze the image and separate the image of the object from the background, a communication module 15 configured to exchange information between conference call participants by connecting each mobile computing device 3A, 3B of the two or more mobile computing devices with each of the other mobile computing devices and transmitting the virtual representation of the object from the module 14 for generating a virtual representation of the object of each mobile computing device to the mobile computing device of each conference participant, and a module 16 for displaying a virtual representation of an object on a screen of a mobile computing device of each conference participant, associated with the communication module 15 and with the module 14 for generating a virtual representation of the object, as well as a control module 17 associated with the main server 2 and configured to control the communication module 15.
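
Purely as an illustration of how the modules 13-17 could be wired together on one device, the sketch below composes hypothetical generator, communication, display and control objects; none of these class or method names come from the application.

```python
# Illustrative sketch only: a hypothetical composition of the per-device modules
# named above (software 13, generator 14, communication 15, display 16, control 17).
class MobileDeviceApp:
    def __init__(self, generator, communication, display, control):
        self.generator = generator          # module 14: image -> virtual representation
        self.communication = communication  # module 15: exchange with other devices
        self.display = display              # module 16: render on this device's screen
        self.control = control              # module 17: driven by the main server

    def share_frame(self, camera_frame):
        virtual_object = self.generator.generate(camera_frame)
        self.display.show(virtual_object)             # the presenter sees it too
        self.communication.broadcast(virtual_object)  # sent to every participant
```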

The communication module 15 (fig.4) of each of the two or more mobile computing devices 3A, 3B comprises a submodule 18 for receiving audio, a submodule 19 for receiving video, and a submodule 20 for exchanging messages, said submodules being connected to a microphone 21, a front-facing camera 22, and a speaker 23, respectively, and configured to form text, audio and video real-time communication channels to receive a voice signal and a video signal of the conference participant, as well as sounds from other conference participants, respectively.

The module 14 for generating a virtual representation of the object is connected to the rear camera 5 to receive the image of the object.
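
The application does not prescribe a particular segmentation algorithm for module 14. The sketch below shows one conventional way to separate a flat (paper) object from the mount base using standard OpenCV calls; the thresholding-and-contour pipeline is an assumption chosen only to make the step concrete.

```python
# Illustrative sketch only: one conventional way module 14 could "analyze the image
# and separate the image of the object from the background" for a flat (paper) object.
# The application does not prescribe this algorithm; the OpenCV calls are standard.
import cv2

def generate_virtual_representation(frame_bgr):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu threshold: the paper object is assumed to contrast with the mount base.
    _, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # Keep only the largest contour, assumed to be the object placed on the base.
    largest = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(largest)
    return frame_bgr[y:y + h, x:x + w].copy()   # cropped "virtual representation"
```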

Each of the two or more mobile computing devices 3A, 3B further comprises a user interface 24 associated with the communication module 15 and the module 16 for displaying a virtual representation of the object, and the user interface 24 is configured to receive a correction and addition signal from a conference participant in the virtual representation of the object.
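
As an illustration of the correction and addition signal passing from the user interface 24 to the communication module 15, a minimal, hypothetical wire format might look as follows; the field names and JSON layout are assumptions, since the application does not define a message format.

```python
# Illustrative sketch only: a hypothetical "correction and addition signal" as it
# might travel from the user interface 24 through the communication module 15.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class CorrectionSignal:
    participant_id: str
    kind: str            # "stroke" (touch input), "text" (keyboard) or "voice_note"
    payload: dict        # e.g. {"points": [[x, y], ...]} or {"text": "...", "at": [x, y]}
    timestamp: float

def encode_correction(participant_id, kind, payload):
    signal = CorrectionSignal(participant_id, kind, payload, time.time())
    return json.dumps(asdict(signal))

# Example: a touch stroke drawn over the shared virtual representation
message = encode_correction("participant-2", "stroke",
                            {"points": [[10, 12], [40, 55], [90, 60]]})
```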

According to the second embodiment (fig.5), the system further comprises at least one personal computer (PC) 25 associated with the main server 2; two personal computers 25 are shown in fig.5. Each personal computer 25 comprises: a software 26 for conducting conference calls; a module 27 for generating a virtual representation of the object, configured to analyze the image and separate the image of the object from the background; a communication module 28 configured to exchange information between the conference call participants, said exchange being carried out through the communication of each personal computer with the main server; a module 29 for displaying a virtual representation of an object on a screen of the PC, associated with the communication module 28; and a control module 30 associated with the main server 2 and configured to control the module 27 for generating a virtual representation of the object, the communication module 28 and the module 29 for displaying a virtual representation of the object.

According to the second embodiment, the system further comprises a WebSocket server 31 associated with each of the two or more mobile computing devices 3A, 3B and with each of the at least one personal computer 25, and a Janus WebRTC server 32 associated with each of the two or more mobile computing devices 3A, 3B and with each of the at least one personal computer 25.

The module 14 for generating a virtual representation of the object of the at least one mobile computing device 3A, which is not placed on a mount, is further configured to generate a virtual representation of an object from an image added by the user of this mobile computing device 3A.

The object 10 (fig.2) represents a two-dimensional or three-dimensional object, wherein a paper medium is used as a two-dimensional object, for example, a sheet of paper with text or a drawing that needs to be corrected.

The adjustable optical system 9 comprises a reflective element (fig.3) configured to reflect the object without distortion, including at reflection angles of more than 45 degrees. The adjustable optical system 9 is attached to the support member 6 by means of a rod 33 pivotally connected to a pair of rods of the support member 6 attached to the base 7, and said rods are configured to adjust the angle of inclination of the reflective element so as to direct the field of view of the rear camera 5 towards the object 10.

A method for conducting a conference call, allowing participants to work jointly on a virtual representation of an object in real time, using the conference call system of claim 1, comprises the following steps.

To participate in the conference, each participant must have either a mobile computing device 3A, 3B or a personal computer 25. The person who has the physical object 10, further referred to as the presenter, must have a mobile computing device 3B and a mount 4. There may be several such participants; each of them can in turn act as the presenter.

All conference participants run the conference software on their mobile computing devices 3A, 3B or PCs 25 and log in if authorization is required for this conference call.

The presenter sets up the mobile computing device 3B on the holder 8 of the mount 4 and configures it as follows.

The presenter places a mobile computing device 3B, such as a mobile phone or tablet, in the holder 8 so that the side face of the computing device 3B is located in the recess of the holder 8, providing stability.

Then the presenter outputs the image transmitted from the rear camera 5 to the display of the mobile computing device 3B.

The presenter positions the mobile computing device 3B in the horizontal and vertical planes on the support member 6 and sets a certain angle of inclination of the mobile computing device 3B relative to the base 7 so that the field of view of the rear camera 5 is directed at the adjustable optical system 9, whereas the presenter can see the screen of the mobile computing device 3B. The presenter then manually adjusts the adjustable optical system 9 so that the portion of the base 7 where the object 10 is placed falls completely into the field of view of the rear camera 5.

The presenter moves the mobile computing device 3B to the right or left on the holder 8 and points the rear camera 5 just below the upper edge of the reflective element of the adjustable optical system 9, so that the object 10 falls completely into the field of view of the rear camera 5 of the mobile computing device 3B.

Fig.3 shows the direction of the field of view of the camera 5 to the reflecting element of the adjustable optical system 9.

The image of the object 10 falls on the reflecting element of the adjustable optical system 9, and is captured by the rear camera 5. The image captured by the camera 5 is displayed on the display of the mobile computing device 3B.

At the same time, a virtual representation of the object 10 is generated, which is transmitted to the mobile computing device of each conference participant and is displayed on the display of the mobile computing device 3A of each conference participant in real time.

The presenter uses the rear camera 5 of the mobile computing device 3B and the optical system 9 to take a picture (or a sequence of pictures if the physical object changes), and the module 14 for generating a virtual representation of the object converts the picture into a virtual representation of the object. The communication module 15 sends the virtual representation of the object to all conference participants. The display module 16 displays the virtual representation of the object on the screen of each participant's mobile computing device 3A, including the presenter's mobile computing device 3B.
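
The presenter-side flow described above could be orchestrated roughly as in the following sketch, where `generate`, `broadcast` and `render` stand in for the functions of modules 14, 15 and 16; these names, and the use of OpenCV for frame capture, are assumptions for illustration only.

```python
from typing import Callable

import cv2                  # assumed capture/processing library
import numpy as np


def presenter_loop(generate: Callable[[np.ndarray], np.ndarray],   # module 14
                   broadcast: Callable[[np.ndarray], None],        # module 15
                   render: Callable[[np.ndarray], None],           # module 16
                   camera_index: int = 0) -> None:
    """Capture pictures of the object, convert each one to a virtual
    representation, and send it to all conference participants."""
    capture = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = capture.read()
            if not ok:                    # camera no longer available
                break
            virtual = generate(frame)     # module 14: form virtual representation
            broadcast(virtual)            # module 15: send to all participants
            render(virtual)               # module 16: show on presenter's screen
    finally:
        capture.release()
```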

Each of the conference call participants can make edits and comments to the virtual representation of the object. This can be done by graphical input (using a finger via the touch screen, a finger via the touchpad, a special pen via the touch screen, or a graphics tablet, mouse or other pointing device), by text input from the keyboard, or by voice via the conference voice channel.

All these edits and comments are transmitted to all conference call participants using the communication module 15, are added to the virtual representation of the object, and are then displayed on the screen of the mobile computing device 3A by the display module 16. Audio comments are transmitted by the communication module 15 and all participants can hear them on their mobile computing devices. Text comments can be added to the virtual representation of an object or transmitted separately in a chat that is a part of the conference system.
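
As one possible way to represent such edits and comments in software, the following sketch (with hypothetical field names) serializes an edit for transmission by the communication module and orders received edits by the time they were added, as required for display.

```python
import json
import time
from dataclasses import dataclass, asdict, field
from typing import List, Tuple


@dataclass
class Edit:
    """One correction or comment to the virtual representation of the object."""
    author: str                                  # participant who made the edit
    kind: str                                    # "stroke", "text" or "audio"
    points: List[Tuple[int, int]] = field(default_factory=list)  # stroke coordinates
    text: str = ""                               # used when kind == "text"
    timestamp: float = 0.0                       # time the edit was added


def serialize_edit(edit: Edit) -> str:
    """Encode an edit as JSON so the communication module can send it
    over the data channel to every participant."""
    payload = asdict(edit)
    payload["timestamp"] = payload["timestamp"] or time.time()
    return json.dumps(payload)


def order_for_display(edits: List[Edit]) -> List[Edit]:
    """Return edits sorted by the time they were added, which is the order
    in which the display module draws them over the object image."""
    return sorted(edits, key=lambda e: e.timestamp)
```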

The presenter can allow or prohibit individual participants from making comments of any particular type, or of all the types mentioned above (audio, text).

The module 14 for generating a virtual representation of an object operates as follows.

The module input receives a photo of the object. The photo is rotated so that the image of the object is positioned in the same way as the object itself on the base in front of the presenter, if the mount was used for shooting and the optical system has rotated the image of the object. Then the object is separated from the background and from other objects that may have entered the frame.

For this purpose, both classical algorithms based on gradient analysis and edge detection in the image, and neural-network-based algorithms that can find and select certain classes of objects in the image, can be used. If there is a previous image of the object, i.e. the image fed into the module for generating a virtual representation of the object is not the first one, then the position of the object in the frame is aligned with its previous position. For this, stabilization algorithms based on optical flow analysis, of the kind usually used for video stabilization, are applied.

This is necessary so that, despite a possible displacement of the object on the base during operation or a displacement of the mobile computing device, the virtual representation of the object remains stationary. The object image can also be converted to monochrome, and the brightness and contrast of the object can be increased if they are insufficient. The output of module 14 is an edited and aligned image of the object, separated from the background, which constitutes its virtual representation.
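
A minimal sketch of the processing chain of module 14, assuming OpenCV and NumPy, is given below; the contour-based separation and Farneback optical-flow alignment shown here are only one concrete choice among the classical and neural-network algorithm families mentioned above.

```python
from typing import Optional

import cv2
import numpy as np


def generate_virtual_representation(photo: np.ndarray,
                                    previous: Optional[np.ndarray] = None,
                                    flipped_by_optics: bool = True) -> np.ndarray:
    """Rotate, separate from the background, align with the previous frame,
    and enhance the image of the object (sketch of module 14)."""
    # 1. Undo the rotation introduced by the reflecting element of the optical system.
    img = cv2.rotate(photo, cv2.ROTATE_180) if flipped_by_optics else photo

    # 2. Separate the object from the background: a classical edge/contour
    #    approach; a neural segmentation network could be used instead.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        obj = gray[y:y + h, x:x + w]
    else:
        obj = gray

    # 3. Align with the previous virtual representation (optical-flow shift),
    #    so the object stays stationary if the device or the object is nudged.
    if previous is not None and previous.shape == obj.shape:
        flow = cv2.calcOpticalFlowFarneback(previous, obj, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        dx, dy = np.median(flow[..., 0]), np.median(flow[..., 1])
        shift = np.float32([[1, 0, -dx], [0, 1, -dy]])
        obj = cv2.warpAffine(obj, shift, (obj.shape[1], obj.shape[0]))

    # 4. Boost brightness/contrast of the (already monochrome) object image.
    return cv2.convertScaleAbs(obj, alpha=1.3, beta=10)
```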

The communication module 15 is responsible for transmitting information between conference call participants. Module 15 transmits and receives all types of data exchanged by conference call participants. These can be virtual representations of objects, edits made to virtual representations, images, text messages, sound, or video.

Two protocols are used for this purpose: WebSocket (for transmitting virtual representations, edits, images, and text messages) and WebRTC (for transmitting audio and video). The communication module 15 can transmit data either directly between participants using peer-to-peer connections, or through the communication server.
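
By way of illustration of the WebSocket path only, the sketch below sends a serialized virtual representation to a conference server using the third-party `websockets` Python package; the server URL and the message envelope are assumptions, and the WebRTC audio/video path (handled by a separate WebRTC stack) is not shown.

```python
import asyncio
import base64
import json

import websockets  # third-party package: pip install websockets


async def send_virtual_representation(url: str, room: str, png_bytes: bytes) -> None:
    """Send a PNG-encoded virtual representation over the conference
    WebSocket channel; the envelope fields are hypothetical."""
    message = {
        "room": room,
        "type": "virtual_representation",
        "payload": base64.b64encode(png_bytes).decode("ascii"),
    }
    async with websockets.connect(url) as socket:
        await socket.send(json.dumps(message))


# Example call (hypothetical server address):
# asyncio.run(send_virtual_representation("wss://conference.example/ws", "room-1", data))
```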

The display module 16 is responsible for displaying virtual representations of objects and edits on the screen of the user's mobile computing device. Module 16 receives a virtual representation of the object either from the communication module 15, or from the module 14 for generating a virtual representation of the object if the conference participant is currently the presenter. The virtual representation of the object is then superimposed on a background, the color of which can either be set in advance or selected by the user, scaled proportionally to fit the screen of the user's mobile computing device or PC, and displayed on the screen. After that, on top of the object image, other users' corrections received from the communication module 15 and the user's own corrections are displayed, in the order they were added.
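
A sketch of the compositing step of module 16 is given below, assuming NumPy/OpenCV images: the virtual representation is scaled proportionally, centered on a background of the chosen color, and correction strokes are drawn over it in the order they were added; all names are illustrative.

```python
from typing import Sequence, Tuple

import cv2
import numpy as np

Stroke = Sequence[Tuple[int, int]]


def compose_screen(virtual: np.ndarray,
                   screen_size: Tuple[int, int],               # (height, width)
                   background_bgr: Tuple[int, int, int] = (255, 255, 255),
                   strokes: Sequence[Stroke] = ()) -> np.ndarray:
    """Scale the virtual representation to fit the screen, place it on a
    background of the selected color, then draw corrections on top."""
    screen_h, screen_w = screen_size
    h, w = virtual.shape[:2]
    scale = min(screen_w / w, screen_h / h)                    # proportional scaling
    new_w, new_h = int(w * scale), int(h * scale)
    resized = cv2.resize(virtual, (new_w, new_h))
    if resized.ndim == 2:                                      # monochrome -> BGR
        resized = cv2.cvtColor(resized, cv2.COLOR_GRAY2BGR)

    canvas = np.full((screen_h, screen_w, 3), background_bgr, dtype=np.uint8)
    x0, y0 = (screen_w - new_w) // 2, (screen_h - new_h) // 2
    canvas[y0:y0 + new_h, x0:x0 + new_w] = resized

    # Draw each correction stroke in the order it was added.
    for stroke in strokes:
        pts = np.array(stroke, dtype=np.int32).reshape(-1, 1, 2)
        cv2.polylines(canvas, [pts], isClosed=False, color=(0, 0, 255), thickness=2)
    return canvas
```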

The control module 17 is responsible for the interaction of the user's software with the server 2 and allows the user to perform authorization on the server 2, if necessary; to get a list of scheduled conferences; to connect to these conferences (by receiving the addresses of the specific servers through which communication occurs, if the connection is established through a server, or the addresses of individual participants, if the connection is established directly); to create conferences; to give a signal for the beginning of the conference; to invite participants; to get lists of participants; and to give a signal for the end of the conference.
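
Purely as an illustration of the control-module responsibilities listed above, the sketch below issues hypothetical HTTP requests to the main server 2 using the `requests` package; none of the endpoint paths or field names come from the application.

```python
from typing import Optional

import requests  # third-party package: pip install requests


class ControlModule:
    """Sketch of module 17: authorization, conference listing, and
    start/end signals exchanged with the main server 2 (endpoints hypothetical)."""

    def __init__(self, server_url: str):
        self.server_url = server_url.rstrip("/")
        self.token: Optional[str] = None

    def authorize(self, user: str, password: str) -> None:
        r = requests.post(f"{self.server_url}/auth",
                          json={"user": user, "password": password})
        r.raise_for_status()
        self.token = r.json()["token"]

    def list_conferences(self) -> list:
        r = requests.get(f"{self.server_url}/conferences",
                         headers={"Authorization": f"Bearer {self.token}"})
        r.raise_for_status()
        return r.json()   # may include server addresses or participant addresses

    def signal(self, conference_id: str, event: str) -> None:
        """Send a 'start' or 'end' signal for the given conference."""
        requests.post(f"{self.server_url}/conferences/{conference_id}/{event}",
                      headers={"Authorization": f"Bearer {self.token}"}).raise_for_status()
```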

In addition to the ability to collaborate on a virtual representation of an object, a conference can have channels for text, audio, and video communication in real time.

The conference can be saved for later viewing of the virtual representations of objects with corrections, as well as of the audio and video, both together and separately.

If one of the participants offers to make a correction to the object image in the conference call mode, the following steps are performed.

The field of view of the rear camera is directed, by means of the reflecting element of the optical system, down to the object located on the base.

The user can make changes, for example, to the drawing of an object on a piece of paper, and the camera of the mobile computing device will record all the user's actions sequentially. Then, as indicated above for the mode of obtaining an image of an object on the display of a mobile computing device, the corrected image will fall on the reflecting element of the optical system, be captured by the camera, and be transmitted for processing and display on the display of the mobile computing device.

The corrected image is shown on the display and sent to the other conference participants using the built-in wireless communication systems.

A method for remote collaboration between a teacher and students on a virtual representation of an object, using the conference call system according to claim 1, comprises the following steps.

The mobile computing devices 3A of at least a part of the students are placed in the holders 8 of the mounts 4, and the holder 8 attached to the support member 6 is oriented in a horizontal position above the middle part of the mount base at a distance of 20-25 cm from the base.

The software 13 starts on the mobile computing device 3A of each student participating in the joint work with the teacher via the conference call after receiving a signal from the main server 2.

An object 10, intended for joint work, is placed on the base 7 of the mount 4.

The field of view of the rear camera 5 is directed to the adjustable optical system 9 and the adjustable optical system is manually adjusted so that the portion of the base 7 on which the object 10 is placed is completely within the field of view of the rear camera 5.

A virtual representation of the object, which will be considered by the teacher, is generated by means of the module 14 for generating a virtual representation of an object on the mobile computing device of each of the at least a part of the students using the mounts 4.

The virtual representation of the object, formed by the module 14 for generating the virtual representation of the object, is output to the display 12 of the mobile computing device 3A of each of the at least a part of the students using the mount 4.

The generated virtual representations of objects are transmitted by the corresponding communication modules 15 to the teacher's computing device 3B, and the virtual representation of the object which is to be considered by the teacher is displayed on the display of the teacher's computing device 3B in real time.

The software of the teacher's computing device is configured to allow the teacher to insert corrections and comments into the virtual representation of the object on the screen of his or her computing device through touch input, and/or text input from the keyboard, and/or voice input via a voice communication channel, and to transmit these corrections to the student whose virtual object is being considered.

Because the software of the teacher's computing device is further configured such that the virtual representation of the object being considered by the teacher is transmitted to all students, all the students can see the edits made by the teacher.

As an advantage of the claimed method, it should be noted that the software of the mobile computing device of each student is configured so that any student can edit the virtual representation of the object on the display of his or her mobile device and transmit the inserted edit, in real time, to all the students participating in the joint work with the teacher via the conference call.

A paper carrier is used as the two-dimensional object. The teacher’s computing device is a personal computer or a mobile computing device. The teacher's mobile computing device 3B is mounted on a mount 4 to form a virtual representation of the object on the teacher's mobile computing device for displaying the virtual representation of the object to the students.

Industrial Applicability

The system and the methods can be used for a conference call during which multiple participants can collaborate on a virtual representation of an object in real time, making corrections or changes to the original image of the object or to the original text.