

Title:
APPARATUS, SYSTEMS, AND METHODS FOR DIRECT EYE CONTACT VIDEO CONFERENCING
Document Type and Number:
WIPO Patent Application WO/2014/194416
Kind Code:
A1
Abstract:
There is provided a system and method for video conferencing that enables direct eye contact between a user and a remote user visually displayed on a display screen during a video conference, the system comprising: a camera module located adjacent to the display screen; a camera positioning system receiving an input providing an indication of a location on the display screen proximal to the eyes of the remote user as displayed on the display screen; and, the camera positioning system configured for automatically displacing the camera module to the indicated location on the display screen to enable direct eye contact.

Inventors:
WHITEHEAD LORNE (CA)
ASSOULINE JONATHAN (CA)
Application Number:
PCT/CA2014/050449
Publication Date:
December 11, 2014
Filing Date:
May 13, 2014
Assignee:
TANDEMLAUNCH INC (CA)
International Classes:
H04N7/15; H02J7/00; H04N5/232; H05K7/14
Domestic Patent References:
WO2012037139A2 (2012-03-22)
Foreign References:
US20120257004A1 (2012-10-11)
US20070206090A1 (2007-09-06)
US20030112325A1 (2003-06-19)
US20090086041A1 (2009-04-02)
US20090207233A1 (2009-08-20)
Attorney, Agent or Firm:
ESMAILI, Shahrzad et al. (Cassels & Graydon LLP, Suite 4000, Commerce Court West, 199 Bay Street, Toronto, Ontario M5L 1A9, CA)
CLAIMS:

1. A video conferencing system configured for enabling direct eye contact between a user and a remote user visually displayed on a display screen during a video conference, the system comprising:

a camera module located adjacent to the display screen;

a camera positioning system receiving an input providing an indication of a location on the display screen proximal to the eyes of the remote user as displayed on the display screen; and,

the camera positioning system configured for automatically displacing the camera module to the indicated location on the display screen to enable direct eye contact.

2. The video conferencing system of claim 1, wherein the indication of the location is proximal to a midpoint between the eyes of the remote user.

3. The video conferencing system of claim 1, further comprising: a docking station for the camera module, the docking station placed on the bezel of the display; the camera positioning system configured to displace the camera module to the docking station at a termination of the video conference.

4. The system of claim 1, wherein the camera module is held in position on the display screen by at least one of magnets and a suction mechanism.

5. The system of claim 1, further comprising:

a. an analysis module (32) configured to calculate the midpoint;

b. a communication module (33) to communicate the midpoint to the camera positioning system; and,

c. a displacement mechanism coupled to the camera positioning system to physically move the camera module to the indicated location.

6. The system of claim 5, wherein the displacement mechanism comprises at least one motor (72 and 73) used to move the camera module to the indicated location.

7. The system of claim 5, wherein the displacement mechanism comprises a matrix of magnetic modules (71) for moving the camera module to the indicated location.

8. The system of claim 5, wherein the camera module (50) is powered with RF coupling between the camera module (50) and a back plate (74).

9. The system of claim 1, wherein the camera module (50) is powered by a rechargeable battery.

10. The system of claim 9, wherein the rechargeable battery is charged in the docking station (51).

11. The system of claim 1, wherein the camera module (50) is placed between a display layer and a transparent layer (54) of the display screen.

12. The system of claim 11, wherein the display layer (15) and the transparent layer (54) are conductive.

13. The system of claim 12, wherein the camera module is powered by the conductive layer.

14. The system of claim 1 , wherein at least one motor is integral with the camera module (50) to cause movement thereof on the display screen.

15. The system of claim 1, further comprising:

the camera positioning system receiving an updated input providing an indication of a new location on the display screen proximal to the midpoint between the eyes of the remote user as displayed on the display screen; and, the camera positioning system automatically displacing the camera module to the new indicated location on the display screen to compensate for movement of the remote user.

16. The system of claim 1 or 15, wherein:

the camera positioning system further comprises sensors for detecting an angle between a face of the user and the eyes of the remote user engaged in the video conference, the camera positioning system further automatically adjusting an angle of the camera module to direct the camera towards the face of the user being captured by the camera module.

17. The system of claim 1 further comprising a mirror coupled to the display screen and wherein the camera positioning system is configured for automatically displacing only the mirror rather than the camera module to the indicated location on the display screen to enable direct eye contact, the displaced mirror configured to be optically aligned with the camera module for providing images of the user thereto.

18. A computer readable medium comprising computer executable instructions for enabling direct eye contact between a user associated with a first camera and at least one remote user visually displayed on a display screen during a video conference, the at least one remote user captured by a second camera, the computer readable medium comprising computer executable instructions for:

obtaining a video feed associated with the at least one remote user captured during the video conference from the second camera;

analyzing the video feed to determine a location of at least one feature associated with a face of a particular one of said at least one remote user on the display screen; and,

providing the determined location to a camera positioning system for subsequently causing movement of the first camera on the display screen to the location.

19. The computer readable medium of claim 18, wherein determining a location of at least one feature further comprises determining a midpoint between the eyes of the remote user.

20. The computer readable medium of claim 18, wherein the computer executable instructions are further configured for defining the particular one of said at least one remote user as selected from a plurality of remote users, said particular remote user defined by determining movement of said at least one feature for indicating said particular remote user is speaking.

21. An electronic device comprising a processor and memory, the memory storing computer executable instructions for enabling direct eye contact between a user and a remote user visually displayed on a display screen during a video conference, the computer executable instructions for: receiving an input providing an indication of a location on the display screen proximal to the eyes of the remote user as displayed on the display screen; and, automatically displacing a camera module associated with the user to the indicated location on the display screen to enable direct eye contact during the video conference.

Description:
APPARATUS, SYSTEMS, AND METHODS FOR DIRECT EYE CONTACT VIDEO CONFERENCING

TECHNICAL FIELD

[0001] The following relates to systems and methods configured to facilitate direct eye contact during video conferencing.

SUMMARY

[0002] It is recognized that video conferencing can save a great deal of money and increase productivity by allowing users to cut travelling costs, but unnatural conferences reduce the effectiveness of the meeting. Much information is conveyed by the body language of the interlocutor, and when eye contact is not present, a great deal of that information is lost and awkward moments are introduced.

[0003] Having direct eye contact with the interlocutor is beneficial; lapses in eye contact result in an inferior communication experience.

[0004] Current attempts to address this problem focus on using multiple cameras along the bezel of the display followed by sophisticated rendering techniques to simulate the "centre view" image of the viewer as a composite of the 2-6 edge views, and presenting it to the other participant. Such solutions are intrinsically error-prone and difficult to implement due to high hardware costs.

[0005] Currently no system resolves this problem completely and effectively. Value has been recognized in systems and methods that can address the lack of direct eye contact in videoconferencing. The principle of the technology is to generate the "centre view" optically rather than computationally.

[0006] Accordingly, there exists a need for an improved system and method for providing video conferencing. In one embodiment of the present invention, the camera is placed manually on the screen. Ideally, the location of the camera on the screen would match the position of the eyes on the image of the person one is speaking to, or at least be very close to that location. To minimize any clutter in front of the screen, one way to do this would be to hold the camera in place by means of magnetic attraction from a small magnet placed in a matching location behind the LCD screen. To further reduce the size of the camera, much of the electronics could be mounted on the rear of the screen, using RF coupling to provide power and data connectivity to the camera itself. With such a magnetic support arrangement, it is also very easy to reposition the magnet on the screen, in order to ensure the camera is always placed close to the eyes of the image of the person with whom one is speaking.

[0007] In another embodiment, the camera is placed on the bezel in such a way that it captures the image reflected by a small mirror placed on the screen and held by means of magnetic attraction as described previously.

[0008] In yet another embodiment, the camera or mirror could also be mechanically actuated to follow the head of the opposing participant (user engaged in video conference) using face tracking or similar techniques. This could help to maintain a sense of eye contact even if the participant moves around.

[0009] There is provided a video conferencing system configured for enabling direct eye contact between a user and a remote user visually displayed on a display screen during a video conference, the system comprising: a camera module located adjacent to the display screen; a camera positioning system receiving an input providing an indication of a location on the display screen proximal to the eyes of the remote user as displayed on the display screen; and, the camera positioning system configured for automatically displacing the camera module to the indicated location on the display screen to enable direct eye contact.

[0010] There is also provided an electronic device comprising a processor and memory, the memory computer executable instructions for enabling direct eye contact between a user and a remote user visually displayed on a display screen during a video conference, the computer executable instructions for: receiving an input providing an indication of a location on the display screen proximal to the eyes of the remote user as displayed on the display screen; and, automatically displacing a camera module associated with the user to the indicated location on the display screen to enable direct eye contact during the video conference.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] For a better understanding, embodiments will now be described by way of example only with reference to the appended drawings wherein:

[0012] Figure 1 is a schematic showing the angular difference between the point of gaze on the screen and the actual camera that is placed on the bezel of the display.

[0013] Figure 2 is a schematic diagram illustrating the method used to maintain the camera (or mirror) at the desired location, in accordance with one embodiment.

[0014] Figure 3 is a schematic illustrating exemplary components involved in mechanically actuating the camera or mirror, in accordance with one embodiment.

[0015] Figure 4 is a schematic illustrating exemplary front components of an embodiment using a camera on the screen.

[0016] Figure 5 is a schematic illustrating exemplary internal or back components of an embodiment using a camera or mirror.

[0017] Figure 6 is a schematic illustrating the side view of an embodiment using a camera on the screen.

[0018] Figure 7 is a schematic illustrating the side view of another embodiment using a camera in-between a transparent element and the display.

[0019] Figure 8 is a schematic illustrating some of the front view components of an embodiment using a mirror on the screen.

[0020] Figure 9 is a schematic illustrating exemplary side view components of an embodiment using a mirror on the screen.

[0021] Figure 10 is a schematic illustrating some of the components of an embodiment to electro-mechanically move the camera or mirror.

DETAILED DESCRIPTION

[0022] It will be appreciated that for simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. However, it will be understood by those of ordinary skill in the art that the examples described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the examples described herein. Also, the description is not to be considered as limiting the scope of the examples described herein.

[0023] It will be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.

[0024] In one aspect, there is provided a video camera system that enables direct eye contact and perspective-correct images for video conference or related applications. Traditional video conferencing systems place the camera on the bezel of the display which creates an angular difference between the point of gaze on the screen and the actual camera. This results in an image where both participants in the conference call see each other as looking down (if the camera is placed on top of the display). This breaks eye contact and results in an inferior communication experience.

[0025] The principle of the technology described herein is to generate the "center view". This can be achieved either by placing a small camera physically onto the screen, or by placing the camera on the bezel in such a way that it captures the image reflected by a small mirror placed on the screen. Ideally, the location of the camera (or mirror) on the screen would match the position of the eyes on the image of the person one is speaking to.

[0026] In Fig. 1, we see a representation of the actual problem. A user 13 is engaged in a video call with a remote user 20. In this example, the camera 11 is placed on top of the display 10. This creates an angular difference 12 between the point of gaze on the screen and the actual camera that is placed on top of the display. This angular difference results in an unnatural user experience.

[0027] The following provides a method for placing a video capture device on a display, proximal to the eyes of (e.g. on level with), or very close to, the image of the person who is remote. The camera can also be replaced by a system composed of a mirror and camera placed on the bezel of the display. In one aspect, the camera is placed manually. In yet another aspect the camera is placed automatically in the desired location.

[0028] Turning now to Fig. 4, we see a user 20 on the display engaging in a videoconference session. To remove the angular difference of the gaze during the session, the camera module 50 is manually placed near the center of the eyes 21 of the remote user displayed on a display screen 10. Once the call is finished, the camera 50 is manually placed on the bezel of the display 51.

[0029] The camera module 50 is composed of a video capture device, a wireless data transmitter and a power source. The power source will be described later.

[0030] To hold the camera module 50 in place, a placement apparatus such as one or more small magnets are placed behind the camera module 50. A magnetic material is then placed behind the display layer (LCD/OLED or other) and illumination layer 15 to engage the magnets behind the camera module 50.

[0031] In another embodiment of this invention, the camera module 50 can be held by a different placement apparatus such as a suction mechanism for holding the camera module 50 on the desired location on the display screen 10.

[0032] The live video feed is transmitted wirelessly to a receiver that is connected to the display or connected directly to a computer. The wireless transmission could use a wireless standard (e.g. IEEE 802.11, Bluetooth or other) or a proprietary protocol.

[0033] In one embodiment, the camera module is powered with a rechargeable battery, which is charged in the docking station 51.

[0034] In another embodiment, the camera 50 is powered with RF coupling. The RF power transmitter is installed within the display housing, behind the display (LCD/OLED or other) and illumination layer 15 as shown in Fig. 5.

[0035] In yet another embodiment, the camera module 50 further comprises or is coupled to a camera positioning system (e.g. 43 shown in Fig. 3). The camera positioning system 43 receives desired positioning information for the placement of the camera module 50 and changes positioning of the camera to locate it automatically adjacent to the display and preferably, near the center of the eyes 21 of the remote user as provided on the display (e.g. display screen 10). In one aspect, the camera positioning system 43 receives information about the remote user's 20 positioning and placement on the display screen 10. Various embodiments of achieving said desired placement and positioning of the camera will be discussed with reference to Figures 2-10.

[0036] Turning now to Fig. 2, we show the flowchart of the process to automate the movement of the camera module on the display. The first step 102 is to start the video call. Using computer vision technologies, at step 104 the software then finds the eyes 21 of the remote user and calculates the position to place the camera 50. Preferably, the calculated position is proximal to the midpoint between the eyes (e.g. between points 21a and 21b shown in Fig. 4). In another aspect, the calculated position indicated is an area horizontally in line with the remote user 20. At step 106, the system sends the calculated desired coordinates of the camera 50 to the electronic camera positioning system (e.g. element 43 shown in Fig. 3). In one aspect, if the remote user 20 moves 108, the process goes back to step 104 and calculates new coordinates for placement of the camera 50 such that it is located proximal to the eyes of the user. At the end of the call 110, the camera module 50 is placed back on its dock 51.
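The flow of Fig. 2 (steps 102-110) can be sketched in code. The following Python sketch is illustrative only, not the patented implementation: the per-frame eye detections stand in for a real computer-vision detector, and `LoggingPositioner` is a hypothetical stand-in for the electronic camera positioning system 43.

```python
def midpoint(eye_left, eye_right):
    """Midpoint between the two detected eye coordinates (step 104)."""
    return ((eye_left[0] + eye_right[0]) / 2.0,
            (eye_left[1] + eye_right[1]) / 2.0)

def run_call(frames, positioning_system, dock=(0, 0)):
    """Steps 102-110: track the eyes frame by frame, send coordinates
    when the target changes (steps 106/108), and return the camera to
    its dock when the call ends (step 110)."""
    last_target = None
    for eyes in frames:               # each frame yields (left, right) or None
        if eyes is None:
            continue                  # no detection: keep the last position
        target = midpoint(*eyes)
        if target != last_target:     # remote user moved (step 108)
            positioning_system.move_to(target)   # step 106
            last_target = target
    positioning_system.move_to(dock)  # end of call: back to dock 51
    return last_target

class LoggingPositioner:
    """Hypothetical stand-in for the electronic camera positioning system 43."""
    def __init__(self):
        self.moves = []
    def move_to(self, xy):
        self.moves.append(xy)
```

For example, feeding two detected frames and one blurry frame moves the camera twice and then docks it.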

[0037] Fig. 3 represents the different modules of the embodiment. The computer 30 receives the video feed 31 (e.g. for display on the display screen 10) and then analyses the images in the Analysis Module 32. The analysis module 32 is configured to detect various features of the user's 20 face (e.g. eyes 21a, 21b, mouth 22) and to determine the location of the eyes from the user's 20 face. Preferably, the analysis module 32 locates a midpoint between the eyes of the user's 20 face (e.g. between points 21a and 21b).

[0038] In one aspect, if there are multiple users present on a display screen 10, the analysis module 32 is configured to locate a "lead" speaker among the multiple users, such as the user 20 located closest to the center of the display screen, and/or the user 20 who is speaking, and/or the user selected by an external input (e.g. a touch selection of the user's face, or manual selection of the presenter via the computer 30). In a preferred embodiment, this selection is based on predefined rules in the analysis module 32 for selecting the desired user 20 (e.g. from multiple users) to locate the camera 50 adjacent to, allowing direct eye contact between the user 13 and the desired user 20. The output of the analysis module 32 as provided to the output device 33 may be configured for approval by a user (e.g. via a user interface associated with the computer 30 for providing the output from module 32 and for receiving an input from a user via input means, such as a keyboard or touch screen input associated with the computer 30) or automatically provided to the camera positioning system 43 for positioning the camera module 50 accordingly.
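The lead-speaker rules of paragraph [0038] could be realized along the following lines. This is a hedged sketch: the face records, field names, and the exact priority order (manual selection first, then speaking users, then proximity to screen centre) are assumptions for illustration, not part of the original disclosure.

```python
def select_lead(faces, screen_center, manual_choice=None):
    """Pick the 'lead' face: manual selection wins, then a speaking face,
    otherwise the face whose eye-midpoint is nearest the screen centre.
    Each face is a dict like {"midpoint": (x, y), "speaking": bool}."""
    if manual_choice is not None:      # external input, e.g. touch selection
        return manual_choice
    speaking = [f for f in faces if f.get("speaking")]
    candidates = speaking if speaking else faces

    def dist_sq(face):                 # squared distance to screen centre
        mx, my = face["midpoint"]
        cx, cy = screen_center
        return (mx - cx) ** 2 + (my - cy) ** 2

    return min(candidates, key=dist_sq)
```

A speaking user is preferred even when another face sits closer to the centre of the display.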

[0039] In yet another embodiment, if there are multiple users 20 present on the display screen 10, the analysis module 32 is configured to use an average position that is optimized for all users (e.g. the mid-point between two speakers, and/or an optimized eye level that is the mid-point of the eye levels of all of the users 20). Similarly, in this embodiment, the output of the analysis module 32 as provided to the output device 33 may be configured for approval by a user (e.g. via a user interface associated with the computer 30 for providing the output from module 32 and for receiving an input from a user via input means, such as a keyboard or touch screen input associated with the computer 30) or automatically provided to the camera positioning system 43 for positioning the camera module 50 accordingly.
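The averaging strategy of paragraph [0039] reduces to taking the mean of the per-user eye midpoints. A minimal sketch, assuming each user contributes one (x, y) midpoint:

```python
def averaged_target(midpoints):
    """Mean of the eye midpoints of all on-screen users: one camera
    position optimized across the group, per paragraph [0039]."""
    n = len(midpoints)
    return (sum(x for x, _ in midpoints) / n,
            sum(y for _, y in midpoints) / n)
```

For two speakers, this is exactly the mid-point between their eye midpoints.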

[0040] In one aspect, when the video images provided from the video feed 31 are blurry, the analysis module 32 is configured to utilize a last known position approach to maintain the positioning of the camera module 50 at the last known/last defined position proximal to the eyes of the user 20. That is, in this embodiment, if the analysis module 32 is unable to define a new positioning for the camera module 50 based upon the eye positioning 21a, 21b of the user 20, then the analysis module 32 continues to provide instruction to the camera positioning system 43 to maintain the camera module 50 at the last position provided by the analysis module 32 until the analysis module 32 calculates a new positioning.
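The last-known-position behaviour of paragraph [0040] is a small piece of state. In this illustrative sketch (an assumption about the interface, not the patented design), a failed detection is represented as `None` and the tracker simply keeps commanding the previous target:

```python
class EyeTracker:
    """Maintain the last known eye-midpoint target for the camera
    positioning system 43 across blurry frames ([0040])."""
    def __init__(self, initial_target):
        self.last_target = initial_target

    def update(self, detection):
        """detection: new eye midpoint (x, y), or None for a blurry frame.
        Always returns a valid target, falling back to the last known one."""
        if detection is not None:
            self.last_target = detection
        return self.last_target
```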

[0041] In yet another aspect, the analysis module 32 is configured to predict subsequent movement of the user 20 and of the user's face features (e.g. eyes 21a, 21b, and mouth 22) based on previously stored patterns of movement for the user 20. The patterns could be stored in a storage database coupled to the computer 30. The patterns can be based on tracking movement of a user 20 over a period of time during the video conference so that when a positioning is lost, the analysis module 32 is configured to define subsequent movement based on previous patterns. In one aspect, the patterns may be stored on the database of the computer 30 and associated with previous video conferencing sessions for use in subsequent sessions and for determining optimal camera positioning by the analysis module 32.
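The patent does not specify a prediction model for paragraph [0041]. Purely as one possible reading, the sketch below extrapolates the next eye midpoint linearly from the last two observations; this constant-velocity assumption is the author's illustration, not the disclosed method.

```python
def predict_next(history):
    """Guess the next eye midpoint from stored movement history.
    history: list of (x, y) midpoints, most recent last."""
    if not history:
        return None
    if len(history) < 2:
        return history[-1]                 # too little data: hold position
    (x0, y0), (x1, y1) = history[-2], history[-1]
    # Continue the most recent motion (constant-velocity assumption).
    return (x1 + (x1 - x0), y1 + (y1 - y0))
```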

[0042] Once the desired positioning for optimal eye contact between the remote user 20 (e.g. displayed on the display screen 52) and the user 13 (e.g. the user engaged in the video call) is found (e.g. as XY co-ordinate information), the data is sent to the Input/Output Device 33, which sends the position to the electronic camera positioning system 43. The communication between the two devices can be USB/RS232 or any other wired or wireless communication protocol. The electronic camera positioning system 43 may be located within the camera module 50 or in electrical communication therewith (e.g. wirelessly communicating with each other) for controlling the positioning of the camera module 50 (e.g. the angle of the camera relative to the user 13 and the placement of the camera on the display screen 10).

[0043] The camera positioning system 43 comprises a processor 42, a control system 70 and an input/output device 41. The control system 70 is configured for communicating with the processor 42 and the input/output device 41, as well as an external hardware displacement apparatus 51, for controlling the movement, displacement, location, and angle changes of the camera module 50 relative to the display screen 10. Examples of the hardware displacement system 51 can include one or more tracks and pulleys for moving the camera module 50 along the display, or a series of motors and tracks for controlling the movement, and other examples described herein.

[0044] In yet another embodiment, the displacement system 51 comprises a vacuum suction mechanism and motors to move the camera module 50, similar to existing vertical cleaning robots. In yet another embodiment, the displacement system 51 comprises a mechanical system similar to a metronome mechanism, as will be understood by a person skilled in the art, using rods along which the camera moves. In this case, the camera sits where the metal weight sits on the rod. One motor moves the rod left to right in a circular arc just like a metronome, while another motor moves the camera up and down on the rod. In one aspect, such an accessory can be placed in front of the computer screen monitor to move the camera module 50 to any location on the screen 10, provided the rod is long enough.

[0045] In yet another embodiment, the displacement system 51 comprises a swinging version of the above (e.g. a clip is provided at the top of the display screen 10 that holds a string or rod on which the camera 50 hangs). In this manner, swinging the string left to right while simultaneously shortening or lengthening the string moves the camera 50 to the desired location on the screen 10 (e.g. as provided from the analysis module 32).

[0046] The operating system and the software components within the computer module 30 and/or camera positioning system are executed by one or more processors (not shown) and are typically stored in a persistent store such as a flash memory, which may alternatively be a read-only memory (ROM) or similar storage element (not shown). Those skilled in the art will appreciate that portions of the operating system and the software components within modules 30 and 43, such as specific device applications (e.g. the analysis module, control system), or parts thereof, may be temporarily loaded into a volatile store such as a RAM. Other software components can also be included, as is well known to those skilled in the art.

[0047] Various individual components of the communication network 44 shown in Fig. 3 may be implemented as hardware and/or software, comprising a processor and computer readable instructions stored in a memory or another non-transitory computer readable storage medium, for execution by one or more general purpose or specialized processors, causing the processor(s) to perform camera positioning and placement for use within a video conferencing session between two users (e.g. users 13 and 20) as will be described in further detail below.

[0048] In one aspect, the camera positioning system 43 further comprises sensors for detecting an angle between a face of the user 13 and the eyes of the remote user 20 displayed on the display screen 10 such that the camera positioning system 43 further automatically adjusts an angle of the camera module to direct the camera module 50 towards the face of the user 13. In this way, once the camera module 50 has been shifted to a position on the display screen 10 that is preferably between the eyes of the remote user 20 on the display 10 then the camera angle is further adjusted by the camera positioning system 43 to be in line with the eyes of the user 13 (e.g. such as to minimize the angular difference between a first user 13 and the remote user 20).

[0049] In one embodiment, the camera module 50 is configured to tilt proportionally to the movement or head-turning of the user 13, as provided by the analysis module 32 and the camera positioning system 43. In this way, the remote user 20 is provided with a natural perspective and viewpoint during the video conference session. Consider, for example, that the user 13 turns his head X degrees (e.g. 30 degrees) to the left. The camera positioning system 43 then directs the camera 50 to stay in position but tilt 30 degrees to the left on its axis, providing the remote viewer (e.g. remote user 20) with a view of the left side of the scene.

[0050] In one aspect, the analysis module 32 (Fig. 3) is configured to measure the distance between the user 13 and the associated screen (e.g. 10) to perform angle correction and to provide a desired angle change as output from the analysis module 32 to the camera positioning system 43 for controlling the angle of the camera 50 via the control system 70. In one example, the user turns his head by 10 degrees, but the associated screen is 1 m away from the user 13, so the tilt of the camera 50 is defined by the analysis module 32 to be more than 10 degrees, in order to capture the image that the user 13 would be seeing if he were actually in the scene on the display screen 10 (e.g. if the user were standing in the scene and the screen were just a sheet of glass in his way).

[0051] Fig. 5 illustrates an example of a mechanical solution to move the camera module on the display (e.g. the displacement apparatus 51 shown in Fig. 3). In this embodiment, two motors (72 and 73) are used to move the camera module 50 horizontally and vertically. In one aspect, the system is built using timing belts and metal shafts (75 and 76). The purpose of the system is to move the back plate 74.
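The angle correction of paragraph [0050] is described only qualitatively: a user farther from the screen needs a camera tilt larger than the head-turn angle. The patent gives no formula; purely as an illustration, the sketch below scales the head-turn angle by the ratio of the viewer's distance to an assumed reference distance, reproducing that qualitative behaviour. Both the proportional model and the 0.5 m reference distance are assumptions.

```python
def camera_tilt(head_angle_deg, viewer_distance_m, reference_distance_m=0.5):
    """Illustrative distance-scaled tilt for the angle correction in [0050]:
    at the reference distance the camera tilts exactly with the head; at
    larger distances the tilt grows proportionally (an assumed model)."""
    return head_angle_deg * (viewer_distance_m / reference_distance_m)
```

With this model, a 10 degree head turn at 1 m yields a 20 degree camera tilt, consistent with the "more than 10 degrees" behaviour described.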

[0052] In one embodiment, the back plate 74 is electrically coupled with the camera module to transmit power to the camera module. To hold the camera module 50 in place, a placement apparatus such as magnets, or equivalent, are used on the back plate 74 and on the camera module 50 to hold it in place. The back plate 74 can also be used to wirelessly receive the video feed from the camera 50. The back plate is then connected (wired or wireless) to the computer 30.

[0053] Fig. 6 illustrates the side view of the embodiment. In between the mechanical module 40 and the front plate 53 is the display and illumination layer 15. In this view, it is noted that the camera module 50 is placed on the display.

[0054] As can be seen from Fig. 6, in one aspect, the front plate 53 refers to the front portion of the display screen 10. In this aspect, the camera 50 has a magnetic base which is stuck to the display screen but held in place by the magnetic attraction coming from the back plate (e.g. mechanical module 40 providing the back plate in Fig. 6).

[0055] Fig. 7 illustrates another embodiment. In this solution, the camera module 50 is placed in between the display layers 15 and a transparent layer 54, so that the module is placed inside the display enclosure. The mechanical layer 40 can be as described above.

[0056] In another embodiment, a transparent conducting film is applied on layers 54 and 15 so that the films are in contact with the camera module 50 to power it.

[0057] In yet another embodiment, small motors are included on the camera module 50 as examples of the displacement apparatus. The motors are used to move the camera module 50, so it stays near the midpoint between the eyes of the remote user 20. In this solution the back plate 40 is not needed. Instead, an electronic module can be used to connect to the computer and to the camera module 50.

[0058] In another embodiment in Fig. 10, the back plate 71 is used to move the camera module 50. Instead of a motor, a matrix of magnetic modules is used to create a magnetic field (e.g. as an example of the displacement apparatus) to hold and move the camera module 50. The camera 50 moves by turning each small magnetic module ON and OFF.
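The magnet-matrix drive of paragraph [0058] amounts to walking the camera across a grid of magnetic modules by energising neighbours one at a time. The grid model and the horizontal-then-vertical step order below are assumptions for illustration only.

```python
def magnet_path(start, target):
    """Sequence of grid cells visited as each neighbouring magnetic module
    is switched ON (and the previous one OFF) to walk the camera module 50
    from start to target ([0058]). Cells are (col, row) indices."""
    x, y = start
    tx, ty = target
    path = [start]
    while (x, y) != (tx, ty):
        if x != tx:
            x += 1 if tx > x else -1   # energise the horizontal neighbour
        else:
            y += 1 if ty > y else -1   # then step vertically
        path.append((x, y))
    return path
```

Each entry after the first corresponds to one magnetic module being switched ON while its predecessor is switched OFF.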

[0059] Fig. 8 illustrates another embodiment. In this embodiment, in place of the camera module 50, an angled mirror 60 is used. The light is then reflected onto a camera sensor 62 that is positioned on the same vertical axis. As described above for the camera module 50, the angled mirror 60 moves to position itself near the midpoint between the eyes (e.g. between points 21a and 21b) of the remote user on the display 10. To move, the mirror uses the same techniques as described above. The camera sensor moves horizontally on a rail, using timing belts or equivalent 63 (see also Fig. 9), to position itself to receive the signal. At the end of the video call, the mirror repositions itself at position 61.

[0060] Fig. 9 represents a side view of Fig. 8. In between the mechanical module 40 and front plate 53, there is the display and illumination layer 15. In this view, it can be seen that the mirror 60 is placed on the display.

[0061] As can be seen from Figs. 8 and 9, the mirror 60 optically replaces the camera 50. Accordingly, in this embodiment, the displacement mechanism 51 (discussed in Fig. 3) is configured for moving the mirror 60 on the screen 10 (e.g. in front of the illumination layer 15) instead of moving the camera 50. Therefore, by moving the mirror 60 on the screen 10, the camera 50 captures the image reflected by the mirror 60. Preferably, the control of the movement of the mirror 60 is provided by the positioning system 43 in combination with the computer 30 as discussed in relation to Fig. 3 where the positioning system 43 controls movement of the mirror 60 rather than the camera 50. One advantage of this embodiment is that the mirror is not an electronic component and thus doesn't need power or signal connection (other than to communicate with or receive communication from the positioning system 43).

[0062] One of the embodiments could be placed on a laptop, desktop computer display, TV or SmartTV.

[0063] It will also be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.

[0064] The steps or operations in the flow charts and diagrams described herein are just for example. There may be many variations to these steps or operations without departing from the spirit of the invention or inventions. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.

[0065] Although the above has been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art.