

Title:
TERMINAL-TO-MOBILE-DEVICE SYSTEM, WHERE A TERMINAL IS CONTROLLED THROUGH A MOBILE DEVICE, AND TERMINAL REMOTE CONTROL METHOD
Document Type and Number:
WIPO Patent Application WO/2018/178748
Kind Code:
A1
Abstract:
This invention is generally related to methods and devices for server administration. A terminal-to-mobile-device system (100) comprising a terminal controlled by a mobile device (160), the system (100) further comprising: a computing device (110) comprising a computing device processor (115), a random access memory (120), a read-only memory (125), a video adapter (130), computing device display means (135) and computing device data exchange means (140) and implementing functions of the terminal. The mobile device (160) comprises a mobile device processor (165), a decoding module (170), mobile device display means (175) and control means (180) configured to be manipulated by a user (105), the mobile device (160) being configured to exchange data with the computing device data exchange means (140). The computing device (110) is configured to transfer video data from the video adapter (130) to the computing device data exchange means (140) after the video adapter (130) sends a signal to the computing device processor (115) informing it about readiness of the video data and before sending the video data from the video adapter (130) to the computing device display means (135). The computing device data exchange means (140) are configured to code the video data by using a coding parameter characterized by a number of frames to which the processed frame refers, and to transfer the coded video data to the mobile device (160). The mobile device (160) is configured to decode, by means of the decoding module (170), the video data obtained from the computing device (110), and to display, by means of the mobile device display means (175), the decoded video data in a virtual reality format. The mobile device (160) is configured to generate control signals based on signals from the control means (180) and to send the generated control signals to the computing device (110). The decoding module (170) is configured to use a minimum possible number of buffers and to decode without synchronization with the frame frequency; the decoding module (170) further being configured to use modes designed for a larger resolution or a larger frame frequency than that of an actual video stream. A method for remote control of a terminal by a user using the proposed system is also proposed.

Inventors:
DUBOV VALERIY VITALIEVICH (RU)
Application Number:
PCT/IB2017/052026
Publication Date:
October 04, 2018
Filing Date:
April 07, 2017
Assignee:
DVR LLC (RU)
International Classes:
H04N21/2343; H04N19/105; H04N19/44; H04N21/2662; H04N21/414; H04N21/426; H04N21/6379
Foreign References:
US20140173674A1, 2014-06-19
US8965460B1, 2015-02-24
US20050141620A1, 2005-06-30
US20120201308A1, 2012-08-09
US9274340B2, 2016-03-01
Other References:
FABRIZIO LAMBERTI ET AL: "A Streaming-Based Solution for Remote Visualization of 3D Graphics on Mobile Devices", IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, IEEE SERVICE CENTER, LOS ALAMITOS, CA, US, vol. 12, no. 2, 1 March 2007 (2007-03-01), pages 247 - 260, XP011157909, ISSN: 1077-2626
Attorney, Agent or Firm:
NILOVA, Maria Innokentievna (RU)
Claims:
CLAIMS

1. A terminal-to-mobile-device system (100) comprising a terminal controlled by a mobile device (160), the system (100) further comprising:

a computing device (110) comprising a computing device processor (115), a random access memory (120), a read-only memory (125), a video adapter (130), computing device display means (135) and computing device data exchange means (140) and implementing functions of the terminal,

wherein the mobile device (160) comprises a mobile device processor (165), a decoding module (170), mobile device display means (175) and control means (180) configured to be manipulated by a user (105), the mobile device (160) being configured to exchange data with the computing device data exchange means (140),

wherein the computing device (110) is configured to transfer video data from the video adapter (130) to the computing device data exchange means (140) after the video adapter (130) sends a signal to the computing device processor (115) informing it about readiness of the video data and before sending the video data from the video adapter (130) to the computing device display means (135),

wherein the computing device data exchange means (140) are configured to code the video data by using a coding parameter characterized by a number of frames to which the processed frame refers, and to transfer the coded video data to the mobile device (160),

wherein the mobile device (160) is configured to decode, by means of the decoding module (170), the video data obtained from the computing device (110), and to display, by means of the mobile device display means (175), the decoded video data in a virtual reality format,

wherein the mobile device (160) is configured to generate control signals based on signals from the control means (180) and to send the generated control signals to the computing device (110), and

wherein the decoding module (170) is configured to use a minimum possible number of buffers and to decode without synchronization with the frame frequency; the decoding module (170) further being configured to use modes designed for a larger resolution or a larger frame frequency than that of an actual video stream.

2. The system of claim 1, wherein the data exchange means (140) of the computing device (110) are configured to establish wireless and/or wired connection to the mobile device (160).

3. The system of claim 1, wherein the computing device (110) is configured to transfer audio data to the mobile device (160), and the mobile device (160) additionally contains an audio data player, while the data exchange means (140) of the computing device (110) are configured to transfer audio data to the mobile device (160) together with video data and/or separately from video data.

4. The system of claim 3, wherein the data exchange means (140) of the computing device (110) are configured to transfer encrypted video and/or audio data.

5. The system of claim 1, wherein the data exchange means (140) of the computing device (110) are configured to ensure connection to the mobile device (160) by a local area network and/or the Internet.

6. The system of claim 1, wherein the computing device (110) is remote in relation to the mobile device (160), while the data exchange means (140) of the computing device (110) are configured to exchange data with the mobile device (160) by the Internet.

7. The system of any of claims 1-6, wherein the mobile device (160) represents one of the following devices: a mobile phone, a smartphone, a PDA, a navigator, a handheld PC, a laptop, a tablet PC, a portable gaming device.

8. A method for remote control of a terminal by a user using the terminal-to-mobile-device system of claim 1, the method comprising:

generating video data by the video adapter of the computing device (210),

transferring the video data from the video adapter to the data exchange means after sending a signal to the processor of the computing device by the video adapter informing the computing device of readiness of the video data and before sending the video data from the video adapter to the display means of the computing device (220),

coding the video data using a coding parameter which is characterized by a certain number of frames to which the processed frame refers (230),

transferring the coded video data to the mobile device (240),

decoding the video data obtained from the computing device using a decoding parameter which is characterized by a lower number of frames to which the processed frame refers as compared to the coding parameter (250),

transforming the video data in the mobile device into a virtual reality format (260),

displaying the transformed video data by the mobile device display means (270),

registering control commands from the user by the control means of the mobile device (280),

generating control signals based on the control commands (285),

sending the control signals from the mobile device to the computing device (290).

9. The method of claim 8, further comprising generating audio data in the computing device, wherein the audio data is transferred to the mobile device together with the video data or separately from it.

10. The method of claim 9, wherein the video data and the audio data are transferred to the mobile device in an encrypted form.

11. The method of claim 9 or 10, further comprising transforming video and/or audio data generated in the computing device based on the control signals, and transferring the transformed video and/or audio data to the mobile device.

12. Computer software configured to implement the method for remote control of a terminal by a user according to any of claims 8-11.

Description:
TERMINAL-TO-MOBILE-DEVICE SYSTEM, WHERE A TERMINAL IS CONTROLLED BY A MOBILE DEVICE, AND TERMINAL REMOTE CONTROL METHOD

BACKGROUND OF THE INVENTION

Field of the Invention

This invention relates to terminal-to-mobile-device systems, wherein a terminal is controlled by a mobile device, and to the field of creating virtual reality. In particular, this invention relates to data transfer from a computing device to a mobile device.

Review of Prior Art

Currently, the virtual reality (VR) creation field is actively developing; it represents, for example, an imitation of the surrounding world created artificially using high-technology devices. One of the goals of virtual reality is to relay realistic sensations into the virtual space as accurately as possible using audio-visual and kinetic tools.

Virtual reality systems, where an ordinary smartphone is used instead of a special-purpose screen, have lately become more widespread with the appearance of mobile device models having a larger diagonal and Full HD or higher screen resolution. In this connection, the virtual reality (VR) creation field is an area of current interest. Not just the devices, but also software for their operation and their data content are being actively developed.

For example, US 8,965,460 discloses a mobile communication system and intelligent electronic glasses augmenting reality based on a network used by a mobile device. This mobile communication system is based on digital content including images and video that may be displayed using a plurality of mobile devices, smartphones, tablet computers, stationary computers, intelligent electronic glasses, smart glasses, headsets, watches, smart devices, vehicles and servers. Digital content can be acquired continuously by input and output devices and displayed in a social network. Images can have additional properties including types of voice, audio, data and other information. This content can be displayed and acted upon in an augmented reality or virtual reality system. The image-based network system may have the ability to learn and form intelligent associations between aspects, people and other entities, and between images and the associated data relating to both animate and inanimate entities, for intelligent image-based communication in a network.

The disadvantage of this technology is that server operation is not defined and the video stream transfer method is not specified.

There are also soft head-mounted display goggles for use with mobile computing devices (US 9,274,340), comprising a soft main body made entirely of a soft and compressible material. The main body has a retention pocket configured to accept and secure the mobile computing device. A lens assembly in the goggles comprises two lenses configured to focus vision on respective areas of the display screen of the mobile computing device, which is divided into two images. The lens assembly is held within one or more apertures formed in the main one-piece body; the two lenses are mounted independently of each other, so that a split-screen image can be viewed through the two lenses on the display screen.

That document describes a mechanism for determining the location of the user and his/her head rotations and for synchronizing this data with the server. However, the data transfer speed in the disclosed technology is low, and the 3D image in this case is formed in the mobile device, which makes the system operation slow.

In addition, numerous applications for virtual reality devices currently exist.

For example, there is the Riftcat application, which functions according to the following principle: computer games are launched on a computer and transmitted to a mobile phone connected to the same network as the computer. At the same time, the mobile phone can deliver information to the game from some of its own sensors, for example, position sensors, which allows the game to react to such information, for example, to the user's head rotations and tilting. The application comes with an embedded game library, and such games can be downloaded and tested without resorting to external sources.

The disadvantage of the Riftcat application is that it is available only to users of Oculus SDK or HTC Vive SDK virtual reality games and headsets. Another disadvantage of this application is that the network data transfer is slow and unreliable, which results in a low quality image.

There is also the Trinus VR software package making it possible to transfer streaming video from a personal computer (PC) to an Android-based device, taking into account the position of the Android-based device in space. The disadvantage of this package is that the system operation speed is low.

Thus, most of the systems known in the prior art function only for games of particular developers. The common disadvantage of the known systems is their low speed of data transfer between the computing device and the mobile device, which affects the quality of interaction with the virtual reality application and could impair the well-being and health of the users of such systems.

Due to these disadvantages of the known systems, the object of this invention is to create a universal system having a high speed of interaction between a computing device and a mobile device.

SUMMARY OF THE INVENTION

The problem set forth is resolved by a proposed terminal-to-mobile-device system comprising a terminal controlled by a mobile device, the system further comprising: a computing device comprising a computing device processor, a random access memory, a read-only memory, a video adapter, computing device display means and computing device data exchange means and implementing functions of the terminal. The mobile device comprises a mobile device processor, a decoding module, mobile device display means and control means configured to be manipulated by a user, the mobile device being configured to exchange data with the computing device data exchange means. The computing device is configured to transfer video data from the video adapter to the computing device data exchange means after the video adapter sends a signal to the computing device processor informing it about readiness of the video data and before sending the video data from the video adapter to the computing device display means. The computing device data exchange means are configured to code the video data by using a coding parameter characterized by a number of frames to which the processed frame refers, and to transfer the coded video data to the mobile device. The mobile device is configured to decode, by means of the decoding module, the video data obtained from the computing device, and to display, by means of the mobile device display means, the decoded video data in a virtual reality format. The mobile device is configured to generate control signals based on signals from the control means and to send the generated control signals to the computing device. The decoding module is configured to use a minimum possible number of buffers and to decode without synchronization with the frame frequency; the decoding module further being configured to use modes designed for a larger resolution or a larger frame frequency than that of an actual video stream.

The proposed system provides a high speed of transfer of video data and commands between the computing device and the mobile device. Additionally, the system is universal, making it possible to use on a mobile device any application requiring a high speed of data transfer and substantial resources for its processing.

The technical effect is achieved due to the fact that, in particular, the computing device is configured to transfer video data from the video adapter to the data exchange means after the video adapter sends a signal to the computing device processor informing it about readiness of the video data and before sending the video data from the video adapter to the computing device's display means, and the decoding module is configured to use a minimum possible number of buffers, while there is no synchronization with the frame frequency when decoding; furthermore, the decoding module is configured to use modes designed for a higher resolution or frame frequency than the actual video stream.

In accordance with one embodiment, the data exchange means of the computing device are configured to establish wireless and/or wired connection to the mobile device.

In accordance with another embodiment, the computing device is configured to transfer audio data to the mobile device, and the mobile device additionally contains an audio data player, while the data exchange means of the computing device are configured to transfer audio data to the mobile device together with video data and/or separately from video data.

In accordance with another embodiment, the data exchange means of the computing device are configured to transfer encrypted video and/or audio data.

In accordance with another embodiment, the data exchange means of the computing device are configured to ensure connection to the mobile device by a local area network and/or the Internet.

In accordance with another embodiment, the computing device is remote in relation to the mobile device, while the data exchange means of the computing device are configured to exchange data with the mobile device by the Internet.

In accordance with another embodiment, the mobile device represents one of the following devices: a mobile phone, a smartphone, PDA, a navigator, a handheld PC, a laptop, a tablet PC, a portable gaming device.

The embodiments improve functionality of the proposed system, contributing to the universal nature of the proposed invention.

The above-mentioned effect is also achieved with the proposed method for remote control of a terminal by a user using the proposed terminal-to-mobile-device system, the method comprising: generating video data by the video adapter of the computing device, transferring the video data from the video adapter to the data exchange means after sending a signal to the processor of the computing device by the video adapter informing the computing device of readiness of the video data and before sending the video data from the video adapter to the display means of the computing device, coding of the video data using a coding parameter which is characterized by a certain number of frames to which the processed frame refers, transferring the coded video data to the mobile device, decoding the video data obtained from the computing device using a decoding parameter which is characterized by a lower number of frames to which the processed frame refers as compared to the coding parameter, transforming the video data in the mobile device into a virtual reality format, displaying the transformed video data by the mobile device display means, registering control commands from the user by the control means of the mobile device, generating control signals based on the control commands, sending the control signals from the mobile device to the computing device.

The method provides the technical effect in the form of a high speed of transfer of video data and commands between the computing device and the mobile device due to the fact that, in particular, the method transfers video data from the video adapter to the data exchange means before the video data is sent from the video adapter to the display means of the computing device and decodes video data obtained from the computing device using a decoding parameter which is characterized by a lower number of frames to which the processed frame refers, as compared to the coding parameter.

In accordance with one embodiment, the method comprises generating audio data in the computing device, wherein the audio data is transferred to the mobile device together with the video data or separately from it.

In accordance with another embodiment, the video data and the audio data are transferred to the mobile device in an encrypted form.

Preferably, according to another embodiment, the method further comprises transforming video and/or audio data generated in the computing device based on the control signals, and transferring the transformed video and/or audio data to the mobile device.

Furthermore, computer software is proposed, which is configured to implement the method for remote control of a terminal by a user.

Thus, the proposed system, method and computer software make it possible to minimize the latency of transfer of data and commands between the computing device and the mobile device and to use in the mobile device applications requiring high-speed data transfer and substantial resources for their processing.

BRIEF DESCRIPTION OF THE DRAWINGS

The concept of the invention is described below in greater detail with reference to the drawings attached, wherein:

Fig. 1 illustrates the layout of the terminal-to-mobile-device system in accordance with one of the implementations of this invention.

Fig. 2 illustrates the method for remote terminal control by the user in accordance with one of the implementations of this invention.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

This description discloses embodiments and specific features of the terminal-to-mobile-device system, where the terminal is controlled by a mobile device, the method for remote terminal control by the user using said system, and computer software configured to implement said method. It should be noted that the disclosed specific features of said device, method and computer software in any implementation may be intrinsic to various implementations in any combination, unless stated otherwise.

In this disclosure, the term terminal refers to any general or special-purpose computing device. When used in this disclosure, a "user" in singular form means any number of users.

The layout of the proposed terminal-to-mobile-device system 100 according to one implementation of this invention is shown in Fig. 1. System 100 consists mainly of computing device 110 and mobile device 160. In this implementation the computing device 110 is a personal computer (PC) and the mobile device 160 is a mobile phone that can receive and transfer data through wired and wireless networks. Other implementations can have any other types of computing devices (for example, personal computers, laptops, tablet PCs, game consoles) and mobile devices (for example, smartphones, PDAs, navigators, handheld PCs, laptops, tablet PCs, portable gaming devices). It should be noted that the mobile device being used should preferably have a screen of a size that makes it possible to create virtual reality without causing discomfort to the user.

The computing device 110 used as a terminal includes a computing device processor 115, a random access memory 120, a read-only memory 125, a video adapter 130, computing device display means 135 and computing device data exchange means 140. Data exchange means 140 are any wireless and/or wired data exchange means known to specialists, and can ensure connection to the mobile device 160 by a local area network and/or the Internet. Other exemplary implementations may use a computing device 110 with additional or other components; in particular, such device may have no display means.

Mobile device 160 contains a mobile device processor 165, a decoding module 170, mobile device display means 175 and control means 180 configured to be manipulated by user 105. The mobile device can exchange data with computing device 110 using data exchange means 140 of computing device 110.

When system 100 is operating, computing device 110 transfers video data from video adapter 130 to data exchange means 140 after video adapter 130 sends a signal to processor 115 of the computing device informing it about readiness of the video data, and before the video data is sent from video adapter 130 to display means 135 of the computing device or to any other relevant component of computing device 110 in the absence of display means 135. This specific feature is one of the distinctions from the known systems, in which video data is transferred from a computing device only after it has been sent from the video adapter to the display means of the computing device; the proposed approach makes it possible to considerably reduce latency of data transfer from the computing device to the mobile device. As an example, this may be implemented using the fact that, in a rendering process, the function by which an object will be shown on the display is determined for each object in a standard API, and the absolute address of such function, i.e. its address in memory, is set. Knowing where the library is located, the offset address of such function within the library is calculated. Thus, if the load address of the function library for a particular application (a particular game), which becomes known only when such library is loaded on demand by the processor, and the previously calculated offset address within such library are known, the function address for each rendering object becomes known for each application. Then, immediately before calling the display function, the processor operation is interrupted, and during the interruption the required functions in the required loaded libraries are replaced with functions that make it possible to convert all objects to the format required to display them on a particular target device. In order to capture the video stream, a special-purpose library is used that makes it possible to acquire a frame right after it is rendered in the video adapter. After this procedure, the frame is coded and sent to the mobile device. A connection of a special audio device is also emulated at the software level to transfer audio from the game to the mobile device.
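By way of illustration only, the address-discovery step described above may be sketched as follows. This sketch assumes a Windows and DirectX 11 environment; the helper name findPresentOffset is hypothetical, while D3D11CreateDeviceAndSwapChain, GetModuleHandleA and vtable entry 8 for IDXGISwapChain::Present belong to the standard API:

#include <windows.h>
#include <d3d11.h>
#include <cstdint>

// Illustrative helper: create a hidden swap chain, read the absolute
// address of IDXGISwapChain::Present from its vtable and convert it into
// an offset relative to the load address of dxgi.dll.
static uintptr_t findPresentOffset(HWND hiddenWindow) {
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount = 1;
    desc.BufferDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow = hiddenWindow;
    desc.SampleDesc.Count = 1;
    desc.Windowed = TRUE;

    ID3D11Device* device = nullptr;
    ID3D11DeviceContext* context = nullptr;
    IDXGISwapChain* swapChain = nullptr;
    if (FAILED(D3D11CreateDeviceAndSwapChain(
            nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0, nullptr, 0,
            D3D11_SDK_VERSION, &desc, &swapChain, &device, nullptr, &context)))
        return 0;

    // The vtable is the first pointer-sized field of a COM object;
    // Present is entry 8 of the IDXGISwapChain vtable.
    void** vtable = *reinterpret_cast<void***>(swapChain);
    uintptr_t absolute = reinterpret_cast<uintptr_t>(vtable[8]);
    uintptr_t base = reinterpret_cast<uintptr_t>(GetModuleHandleA("dxgi.dll"));

    swapChain->Release(); context->Release(); device->Release();
    // In the game process, the same offset added to that process's
    // dxgi.dll base locates Present again.
    return absolute - base;
}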

Video data may relate, without limitation, to any executed computer program, films, television broadcasting, video clips, augmented reality videos, etc.

Prior to the transfer of video data to mobile device 160, data exchange means 140 code the video data using a compression standard with a coding parameter that sets a certain number of frames to which the currently processed frame refers, so that the coded video data can then be transferred.

In turn, when receiving video data from data exchange means 140, mobile device 160 decodes the received video data using decoding module 170, which deploys a circular buffer in its operation and uses a decoding parameter setting a smaller number of frames than the number of frames, set by the above coding parameter, to which the currently processed frame refers. This specific feature is also not found in the known systems and makes it possible to minimize latency of data processing in the mobile device, since in the known systems the decoding parameter uses the same number of frames to which the currently processed frame refers as the coding parameter does. By default, a large number of frames to be processed by the decoding module is expected for decoding in the mobile device. In the mobile device, compression standard profiles are changed during the decoding process, which makes it possible to force the decoding module to recognize them in such a way that video data decoding will require fewer buffers, less waiting time and higher priorities, due to which it is possible to decode video data earlier. Therefore, decoding module 170 can use the minimum possible number of buffers, while there is no synchronization with the frame frequency when decoding, and decoding module 170 can use the modes designed for a larger resolution or frame frequency than the actual video stream has.

In order to achieve the technical effect, it is also important that the decoding module is configured to create a circular buffer. For example, a mobile device requires 5 different spots for the video buffer for the decoding module; in this case, the buffer is set to use a maximum of 2 frames, but the full buffer still needs to be prepared. Therefore, when initializing this module, a circular buffer is created, i.e. an area in memory which is repeatedly overwritten when new data is received from the computer. Hence, this makes it possible not to allocate memory once more for new data, but to reuse the old memory.
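By way of illustration only, such a circular buffer could be sketched as follows; the class name and the fixed slot sizing are assumptions, not the actual implementation:

#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative circular frame buffer: a region allocated once at
// initialization and overwritten in place as new frames arrive, so no
// fresh memory is allocated per incoming frame.
class CircularFrameBuffer {
public:
    CircularFrameBuffer(size_t slots, size_t slotSize)
        : storage(slots * slotSize), slotSize(slotSize), slotCount(slots) {}

    // Copy an incoming coded frame into the next slot, reusing old memory,
    // and return the address that will be handed to the decoder.
    uint8_t* push(const uint8_t* data, size_t size) {
        uint8_t* slot = &storage[(next % slotCount) * slotSize];
        std::memcpy(slot, data, size < slotSize ? size : slotSize);
        ++next;
        return slot;
    }

private:
    std::vector<uint8_t> storage;
    size_t slotSize;
    size_t slotCount;
    size_t next = 0;
};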

Moreover, a texture having an appropriate format for the coder may be specially prepared in memory of the video adapter so that, at the coding stage, it saves the effort of copying data from system memory to video memory, since this takes a long time. Owing to this, the coder may be given a reference to the prepared area in memory for coding.

Decoded video data is displayed in the virtual reality format using display means 175 of the mobile device, for example, the display. Preferably, the virtual reality format should be created by dividing the display into two areas which, when viewed by the user with or without a virtual reality headset, create a virtual reality effect. However, any other known virtual reality formats, where the video-data-based image is otherwise transformed for display, can be used.
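Purely as an illustration of this two-area division (the structure and function names are hypothetical):

struct Viewport { int x, y, width, height; };

// Divide the phone display into left-eye and right-eye areas for the
// side-by-side virtual reality format described above.
static void splitForStereo(int screenW, int screenH, Viewport& left, Viewport& right) {
    left  = { 0,           0, screenW / 2, screenH };
    right = { screenW / 2, 0, screenW / 2, screenH };
}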

Mobile device 160 can send to computing device 110 control signals generated in mobile device 160 based on signals from control means 180, for example, from a gyroscope or an accelerometer available in mobile device 160, as well as from any known manipulators, joysticks and other devices connected to a mobile phone and allowing a user in any way to control or interact with video data displayed to him/her in display means 175 of mobile device 160. Therefore, computing device 110 is controlled by mobile device 160 in terminal-to-mobile-device system 100. Based on said control signals, computing device 110 can additionally transform video data generated by it and transfer the transformed video data to mobile device 160.

In certain implementations, computing device 110 can transfer to mobile device 160 not only video data, but also audio data, which is played by mobile device 160 using an audio data player, while data exchange means 140 of computing device 110 transfer audio data to mobile device 160 together with video data and/or separately from video data. Accordingly, based on the control signals, computing device 110 can additionally transform audio data generated by it and transfer the transformed video and audio data to mobile device 160.

Also, in certain implementations, data exchange means 140 of computing device 110 can transfer encrypted video and/or audio data.

In certain implementations, computing device 110 is physically remote in relation to mobile device 160, so data exchange means 140 exchange data with mobile device 160 by a local area network or the Internet.

Said terminal-to-mobile-device system 100 makes it possible to transfer data and commands from the computing device to the mobile device with a latency of about 3 ms, which allows the mobile device to use applications executed on the computing device that cannot function on the mobile device itself due to the specific features of the mobile device hardware.

The method of remote terminal control by the user using the system described above, in accordance with one of the implementations of this invention, is illustrated in Fig. 2.

According to this method, video data is generated by the video adapter of the computing device at stage 210, then video data from the video adapter is transferred to data exchange means at stage 220 after the video adapter sends the signal to the processor of the computing device informing it about readiness of video data and before sending video data from the video adapter to display means of the computing device.

Further, video data is coded using a coding parameter which is characterized by a certain number of frames to which the processed frame refers (stage 230), and coded video data is transferred to the mobile device (stage 240). Upon receipt of said video data by the mobile device, stage 250 is executed, where video data obtained from the computing device is decoded using a decoding parameter which is characterized by a lower number of frames to which the processed frame refers, as compared to the coding parameter, then stage 260, where video data in the mobile device is transformed into the virtual reality format, and then stage 270, where transformed video data is displayed by display means of the mobile device.

Finally, at stage 280 control commands from the user are registered by the control means of the mobile device, at stage 285 control signals are generated based on said control commands, and at stage 290 the control signals are sent from the mobile device to the computing device.

Therefore, the user of the mobile device can remotely control the computing device.

The method described above uses transfer of video data from the video adapter to the data exchange means before sending video data from the video adapter to the display means of the computing device and decoding video data using a decoding parameter which is characterized by a lower number of frames to which the processed frame refers, as compared to the coding parameter. These stages, inter alia, are not known from the methods used in the prior art, thus enabling a higher speed of transfer of video data and commands between the computing device and the mobile device.

In addition to video data, audio data can be generated on the computing device, which is transferred to the mobile device together with the video data or separately from it in non-encrypted or encrypted form.

Also, in certain implementations video and/or audio data generated on the computing device is additionally transformed based on control signals from the mobile device, then transformed video and/or audio data is transferred to the mobile device.

The method disclosed above may be implemented, in particular, using computer software executing stages of the method, when it is used in combination with the required hardware.

Implementation example

In the described example, Windows-based PC games can be played on Android-based and iOS-based mobile phones.

Operation scheme

Phase 1:

(M): Search and connection to PC

(C): Initialization of API for SDK and the mobile client; forming a list of games

(M): Request for a game with parameters appropriate for the mobile phone

(C): Configuring game and PC settings, initialization of integration module, launch of game

Phase 2:

(I): Determination of integration mechanisms, launch of translation server

(M): Connection to translation server, obtaining audio/video streams, transfer of control flow

Mobile application

Search and connection to PC

Streaming is registered in the network using Multicast DNS as a VrStreaming TCP service. The protocol is based on Bonjour, designed by Apple; in this case the following implementations are used:

jmDns for Android - http://jmdns.sourceforge.net/

TinySvcMDNS for PC - https://bitbucket.org/geekman/tinysvcmdns

Bonjour for iOS - https://www.apple.com/ru/support/bonjour/

This approach solves two problems:

1. Computer search can take place in any intranet, unlike general broadcast requests; it does not matter with what standard and equipment the network has been configured.

2. Speed of computer detection: mDNS services report to the network what they need and what they can provide. Search and authorization take less than 1 second even in networks with thousands of devices. A registration sketch for the PC side is given below.
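By way of illustration, registration on the PC side might look like the following sketch using tinysvcmdns; the calls (mdnsd_start, mdnsd_set_hostname, mdnsd_register_svc) follow that library's published examples as best understood, and the host name, TXT record, service type and port are assumptions:

#include <stdint.h>
#include "mdns.h"
#include "mdnsd.h"

// Illustrative announcement of the VrStreaming TCP service so that
// mobile clients can discover this PC in under a second.
void registerStreamingService(uint32_t ipv4) {
    struct mdnsd* server = mdnsd_start();
    mdnsd_set_hostname(server, "vrstreaming-host.local", ipv4);  // assumed host name
    const char* txt[] = { "version=1", NULL };                   // assumed TXT record
    mdnsd_register_svc(server, "VrStreaming",                    // instance name
                       "_vrstreaming._tcp.local",                // assumed service type
                       47001, NULL, txt);                        // assumed port
}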

Integration and processing of sensors. Input device administration.

Input devices are currently processed on Android only.

Both Bluetooth and USB devices are supported.

Bluetooth:

For most cases, connection to input devices through Bluetooth is standardized and handled by the Android system via dispatchKeyEvent.

Android reports an input device status change event, including device type, event code, entry name and value. If it is a system event (volume, home button or back button), it is passed on to the system.

Then, if connection to the control center has already been established, the event is transferred to the gamepad class; if it has not been processed, the event in simplified form is transferred to the general event system:

if (down) {
    if (!appStreamer.joystickDown(event))
        appStreamer.motionEvent(2, keyCode, 0);
} else {
    if (!appStreamer.joystickUp(event))
        appStreamer.motionEvent(2, keyCode, 1);
}

Each of them has a configuration for different joysticks and various games:

if (currentGame == SupportedGames.SKYRIM) {
    if (keyCode == KeyEvent.KEYCODE_BUTTON_R2)
        motionEvent(1, 1, flag);
    if (keyCode == KeyEvent.KEYCODE_BUTTON_L2)
        motionEvent(1, 3, flag);
    result = true;
}

Then, the processed data is transferred to the module for connection to the game.

Connection to PC control module.

Done by the HTTP protocol via port 47001.

Obtain list of games: /list

Load game: /load?name=%1&stereoMode=%2&streamMode=%3&mod=%4

The list of games is provided in JSON format. The game launch method returns a status after the game has been launched and the integration module has been initialized.
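For illustration, an exchange could look as follows; the host address, game name and JSON fields are hypothetical, since the actual response schema is not specified here:

GET http://192.168.0.10:47001/list
-> [{"name": "Skyrim", "stereoModes": [0, 1]}, ...]

GET http://192.168.0.10:47001/load?name=Skyrim&stereoMode=1&streamMode=0&mod=0
-> {"status": "launched"}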

Connection to module integrated into the game.

The speed of TCP and the reliability of UDP are insufficient for this task, so the eNet library is used.

This library makes it possible to control data integrity in the course of transfer by UDP and to make a decision as to when a packet was important and when not.

Based on such software, RTSP is replaced for configuration and video transfer. The size of a network packet is 1,292 bytes.

A buffer for a single frame is limited to 512 KB, or 406 packets, respectively.

When the buffer is full, packets are sorted in the order they were sent by the server. If any packet is lost, the buffer is rejected in full; thus, no broken frames are transferred to the decoder and no mosaic is created. A sketch of this reassembly rule is given below.

The h264 and h265 compression standards are used for data coding, where video is split into NAL units that can include video configuration, key frames and data on image change. Low latency of operation is achieved due to the use of reliable delivery of the data on image change.
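The reassembly rule described above might be sketched as follows; this is an illustration only, with structure and function names assumed and packet headers simplified to a frame id, a sequence number and a packet count:

#include <cstdint>
#include <map>
#include <vector>

// Illustrative reassembly of one coded frame from 1,292-byte packets
// (at most 406 per frame). If any packet of a frame is missing, the
// whole frame is dropped so no broken frame reaches the decoder.
struct FramePacket {
    uint32_t frameId;              // which frame this packet belongs to
    uint16_t seq;                  // position of the packet within the frame
    uint16_t total;                // total packets in the frame
    std::vector<uint8_t> payload;  // up to 1,292 bytes
};

class FrameAssembler {
public:
    // Returns true and fills 'frame' once every packet has arrived;
    // std::map keeps the packets sorted by sequence number.
    bool add(const FramePacket& p, std::vector<uint8_t>& frame) {
        auto& parts = pending[p.frameId];
        parts[p.seq] = p.payload;
        if (parts.size() != p.total) return false;
        frame.clear();
        for (auto& kv : parts)
            frame.insert(frame.end(), kv.second.begin(), kv.second.end());
        pending.erase(p.frameId);
        return true;
    }

    // A completed newer frame invalidates incomplete older ones (loss).
    void dropOlderThan(uint32_t frameId) {
        for (auto it = pending.begin(); it != pending.end();)
            it = (it->first < frameId) ? pending.erase(it) : ++it;
    }

private:
    std::map<uint32_t, std::map<uint16_t, std::vector<uint8_t>>> pending;
};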

The mechanisms for transfer of control and audio function in the same manner, but they involve less data and require much less processing time and network traffic.

It is important for the decoder in the mobile phone that there be more than 5 different spots for the video buffer; this is for smoother decoding in case of a severe load. In this case, though the buffer is actually set to use a maximum of 2 frames, the full buffer still needs to be prepared.

Therefore, when initializing this module, a circular buffer is created, i.e. an area in memory, which is to be repeatedly overwritten when new data is received from the computer.

Hence, this makes it possible not to allocate memory once more for new data, but to use the old memory.

One and the same thread deals with data receipt and transfer in the mobile phone; first, it checks whether any module has sent new data, for example, a new angle of rotation, a mouse move or a command from a game controller; second, it checks whether any data has been received from the computer.

Each packet from the mobile phone and from the computer is marked with its own flag, by which the code determines what to do with such data. If the data with a flag denoting that it is audio has been received, the audio is decoded immediately in the same thread, since this is a high-speed operation of about 2 ms, while starting a new thread for such task would require extra synchronizations and context switches in mobile phones with few cores.

If it is video, the coder is sent a message that a new frame for decoding and display is saved at a particular address in the circular buffer:

if (enet.readPacketBuffered(cb, channel, 0) > 0) {
    cb.rewind();
    //Log.i("NetworkThread", "got data["+channel[0]+"] size - "+cb.limit());
    if (channel[0] == 0)
        ParseFramePacketSimple(cb);
    else if (channel[0] == 1)
        ParseAudioPacketSimple(cb);
}

Video decoding.

This module operates based on the class dealing with analysis of NAL units.

Its task is, knowing which hardware decoder is available in the mobile phone, to form from the NAL units a buffer that will be decoded as quickly as possible.

To that effect, the system is queried for a list of possible decoders.

Software decoders are excluded from such list:

blacklistedDecoderPrefixes.add("omx.google");
blacklistedDecoderPrefixes.add("AVCDecoder");

and settings specific for a particular decoder are formed:

needsSpsBitstreamFixup = MediaCodecHelper.decoderNeedsSpsBitstreamRestrictions(selectedDecoderName);
needsBaselineSpsHack = MediaCodecHelper.decoderNeedsBaselineSpsHack(selectedDecoderName);
constrainedHighProfile = MediaCodecHelper.decoderNeedsConstrainedHighProfile(selectedDecoderName);
isExynos4 = MediaCodecHelper.isExynos4Device();

if (needsSpsBitstreamFixup) {
    LimeLog.info("Decoder "+selectedDecoderName+" needs SPS bitstream restrictions fixup");
}
if (needsBaselineSpsHack) {
    LimeLog.info("Decoder "+selectedDecoderName+" needs baseline SPS hack");
}
if (constrainedHighProfile) {
    LimeLog.info("Decoder "+selectedDecoderName+" needs constrained high profile");
}
if (isExynos4) {
    LimeLog.info("Decoder "+selectedDecoderName+" is on Exynos 4");
}

This approach is based on the understanding that mobile phone decoders are optimized for maximum power saving while retaining the smoothest playback; by default, the decoder therefore expects a large number of frames to analyze before decoding. Changing the h264 profile in the process (for example, the reframes attribute) makes it possible to control the decoder and to state that decoding such video requires fewer buffers, less waiting time and higher priorities. Due to this, it is possible to decode the image earlier. At the same time, the load is higher than during usual video viewing, but the latency is more comfortable for games.

Render

There are 3 render types in the mobile client:

Render in native SurfaceView. Makes it possible to display a fullscreen image directly from the decoder; this option is convenient since no extra thread for texture update and drawing by OpenGL is required. Thus, the video adapter and the processor are relieved, and the mobile phone heats up less. It is required that the computer form a stereo image in advance.

Render to a texture and drawing in Cardboard sdk.

Makes it possible to show the video stream in fake stereo mode and allows applying image distortions for any headset. This is the most comfortable option for 2D games.

Render to a texture and copying to Unity 3d texture.

Compatible with the Unity3d game engine; the texture is converted from OES into a regular texture2d and the identifier is transferred to Unity so that it can be further post-processed in any way.

Audio decoding and replay.

The Opus library is used for audio coding, as a library specifically optimized for the transfer of real-time audio over a network.

PC application

Setting video driver

Setting firewall

API server. For the mobile client and SDK. All logic is processed here.

Adding a game, installing extensions, installing libraries, launching the discovery service and connection to the mobile client.

Adding a game:

1. Specifying the game folder

2. The service recursively scans the entire selected directory searching for games compatible with the stream.

3. Compatible games are specified in the profiles.xml file in the streaming/cfg/ folder

4. If a compatible game is found, the application installs in the game the libraries required for integration

5. If the process is successful, it adds the game to the list.

Installing extensions:

When the game is launched, it is determined what modes are supported by streaming for such game, and if there is more than one mode, a choice of launch modes is proposed in the mobile client.

After selection, the client and the game are updated.

Launching the discovery service

1. Initializing the mDns library

2. Determining own name and address

3. Registering streaming service with such data

API server.

The game is fully interoperated through this class, primarily in direct pass-through mode to the game, except for cases when the game has not been initialized yet.

Alternatively, the SDK is used.

For SDK operation and maintaining backward compatibility, the data stream from the mobile phone is encrypted and reaches this server, then it is decompressed and transferred by a UDP loopback device to the game that uses the SDK.

Video capture

There are many methods to acquire a video frame ready for display. References to the BackBuffer in DirectX 9 and DirectX 11 are obtained and the Present method in Windows is called, which reports to the operating system that frame rendering is completed, after which the driver reads this BackBuffer and draws the image on the monitor display. Two ready-frame events are reported: one, as usual, to Windows, and the other to our own library.

Then, the required addresses are searched using a maintenance application.

The server application has an API for this purpose:

http://127.0.0.1:47000/dxHooks x64
http://127.0.0.1:47000/dxHooks x32

One of them returns system addresses for the 64-bit PC architecture and the other for the 32-bit architecture.

This maintenance application creates a hidden window for DirectX rendering, the same as the game itself. After all rendering objects are generated, a virtual address table is read where, guided by the official API, a function number is used: for example, 8 for the Present function in DXGI_Swapchain_present.

The function is obtained according to this number, and its absolute address is found in memory. Then, its offset address is calculated by subtracting the library load address from the absolute address, and such offset address is saved for further transfer via the API. Within the game, using these offset addresses and having a function for finding the dxgi.dll or d3d.dll library within the real game, the two may be summed, and the place where a particular game starts to show an image can be determined.

Then, the addresses are replaced. For this the process is interrupted, the current address is registered and overwritten with the required one, then the process is resumed.

This task is fulfilled by means of the EasyHook library, namely, using two methods:

LhInstallHook(oldfunc, newfunc, NULL, h)
LhSetExclusiveACL(thread_ids, 1, h)

Further, the obtained texture needs to be cloned to prevent the video frame from being damaged, and converted into a format that can be processed by the hardware coder in the video adapter.

For this purpose, the DXVA2 Video Processor is used for DirectX 9, converting this buffer into an NV12 texture:

CreateVideoProcessor(DXVA2_VideoProcProgressiveDevice, &vd, D3DFMT_NV12, 0, &m_pDXVA2VideoProcessor);

HRESULT hr = m_pDXVA2VideoProcessor->VideoProcessBlt(pNV12Dst, &vpblt, &vs, 1, NULL);

The following means are used for DirectX 11, ID3D11VideoDevice and ID3D11VideoContext:

HRESULT hr = d11VideoContext->VideoProcessorBlt(d11VideoProcessor, videoProcOutputView, 0, 1, &streams);

Video coding

Only hardware video coders, namely NVENC and AMD AMF, are used here; they produce a speed of 200-300 frames per second in Full HD video with adequate image quality. A texture having an appropriate format for the coder has been specially prepared in memory of the video adapter at the previous stage so that at the coding stage it would save the effort of copying data from system memory to video memory since this takes a long time.

Owing to this coder, a reference to the prepared area in memory may be given for coding.

For Nvidia and Windows 7, area mapping in video memory is used with coder memory through CUDA:

cuGraphicsD3D11RegisterResource(&m_inputSurface, d11Surf, CU_GRAPHICS_REGISTER_FLAGS_NONE);
cuGraphicsResourceSetMapFlags(m_inputSurface, CU_GRAPHICS_MAP_RESOURCE_FLAGS_READ_ONLY);

Then, the coder is informed that all is set:

m_pNvHWEncoder->NvEncMapInputResource(pEncodeBuffer->stInputBfr.nvRegisteredResource, &pEncodeBuffer->stInputBfr.hInputSurface);
m_pNvHWEncoder->NvEncEncodeFrame(pEncodeBuffer, &frameCommand);

When coding is completed, the coder copies into system memory only the coded data, which is already of a small size, and this is executed quickly.

This data is transferred to the queue for sending to the mobile phone.

In this case, the following parameters are important:

* h264 high profile
* a key frame is sent once per 600-900 frames; otherwise only frame-change data is sent
* bitrate is over 20,000 kbps
* QP - 0
* GOP length - 0
* B frames - 0
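These settings might map onto the NVENC API roughly as follows. This is a sketch under assumptions: field names follow the public nvEncodeAPI.h headers, and the listed "QP - 0" and "GOP length - 0" are interpreted here as manually driven key frames, which may differ from the actual implementation:

#include <nvEncodeAPI.h>

// Illustrative low-latency configuration approximating the listed parameters.
static NV_ENC_CONFIG makeLowLatencyConfig() {
    NV_ENC_CONFIG cfg = {};
    cfg.version = NV_ENC_CONFIG_VER;
    cfg.profileGUID = NV_ENC_H264_PROFILE_HIGH_GUID;      // h264 high profile
    cfg.gopLength = NVENC_INFINITE_GOPLENGTH;             // key frames forced manually (every 600-900 frames)
    cfg.frameIntervalP = 1;                               // B frames - 0
    cfg.rcParams.rateControlMode = NV_ENC_PARAMS_RC_CBR;  // target the listed bitrate
    cfg.rcParams.averageBitRate = 20000 * 1000;           // over 20,000 kbps
    cfg.encodeCodecConfig.h264Config.idrPeriod = NVENC_INFINITE_GOPLENGTH;
    return cfg;
}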

Audio capture

A COM object in CoreAudio is used, which allows creating an audio device and activating it in the game. Thus, raw audio from the game reaches the virtual device.
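The document relies on an emulated audio device; purely for illustration, a commonly used alternative for capturing game audio on Windows is WASAPI loopback capture, sketched below with the standard CoreAudio COM interfaces (error handling omitted; this is not the emulated-device approach itself):

#include <objbase.h>
#include <mmdeviceapi.h>
#include <audioclient.h>

// Illustration: capture whatever the game renders to the default output
// device by opening it with the standard loopback flag.
static IAudioCaptureClient* startLoopbackCapture(IAudioClient** outClient) {
    CoInitialize(nullptr);
    IMMDeviceEnumerator* enumerator = nullptr;
    CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                     __uuidof(IMMDeviceEnumerator), (void**)&enumerator);
    IMMDevice* device = nullptr;
    enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device);
    IAudioClient* client = nullptr;
    device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, (void**)&client);
    WAVEFORMATEX* mixFormat = nullptr;
    client->GetMixFormat(&mixFormat);
    // AUDCLNT_STREAMFLAGS_LOOPBACK turns a render endpoint into a capture source.
    client->Initialize(AUDCLNT_SHAREMODE_SHARED, AUDCLNT_STREAMFLAGS_LOOPBACK,
                       10 * 1000 * 1000 /* 1-second buffer in 100 ns units */,
                       0, mixFormat, nullptr);
    IAudioCaptureClient* capture = nullptr;
    client->GetService(__uuidof(IAudioCaptureClient), (void**)&capture);
    client->Start();
    *outClient = client;
    return capture;
}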

At the virtual device, the audio is converted using Libswresample into the format required for Opus:

swrctx = swr_alloc_set_opts(NULL,
    AV_CH_LAYOUT_STEREO,
    AV_SAMPLE_FMT_FLT,
    ga_samplerate,
    CA2SWR_chlayout(w->nChannels),
    CA2SWR_format(w),
    w->nSamplesPerSec, 0, NULL);

int outSamples = swr_convert(swrctx,
    dstplanes, samples,
    srcplanes, samplesIn);

Audio coding

The coder settings are optimized for 10 ms audio latency. They are designed for a 48 kHz sampling frequency and stereo sound. After conversion, data is transferred to the translation server for sending to the mobile phone:

const float *inpData = (const float *)data;
int len = opus_encode_float(encoder, inpData, samples, output, maxPacket);
if (len > 0) {
    send(output, len);
}

Injection of control

DirectInput and XInput event emulations are used to deliver joystick input.

Windows messages are used to deliver keyboard and mouse input.

Translation server

This is one of the modules most sensitive to speed and the main object in the integration module.

It is the unit dealing with connection to other modules, with the procedure for launching and configuring such modules, and with data transfer between the mobile phone, the game and the API. There is a system of settings which determines with what parameters and in what order other modules should be launched.

This depends on the following aspects:

- How the user has launched the game: using a computer launcher, in Windows or in Oculus; or whether a streaming library has been loaded into some application, i.e. through the SDK or Oculus, the game launches a server or SteamVR queries the headset;

- Operating system version, video adapter manufacturer and model, driver version

Based on this data, a check is first performed as to whether video capture in the game needs to be initialized. This is not required for Oculus and SteamVR, but is required for Windows games. If it is required, the module in turn scans process memory and determines which DirectX version should be intercepted. On this basis, it sends a request to the API, loads the required addresses and replaces the functions. Further, control in the game is captured using mouse look + xinput.

Audio is also captured, using CoreAudio.

Then, the thread which will receive commands from and send data to the mobile phone is launched. First, it endlessly waits for this mobile phone to connect. As soon as such mobile phone connects and sends a command to start streaming, the system reads the recommended quality settings and launches the coder with the appropriate parameters:

void enetThread() {
    enetInit();
    while (work) {
        if (!enetWork())
            this_thread::sleep_for(chrono::milliseconds(1));
    }
    enet_host_destroy(streamingServer);
}

The enetWork() function determines whether there has been a message from the mobile phone (for example, a new angle of rotation, a command to move a mouse) or whether there is video/audio data to be sent to the mobile phone.

If there has been such data, then the queue should be queried once more immediately after this command is processed.

If there has been no data, it is necessary to send the thread to sleep so as not to waste processor capacity.
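A possible shape of this polling pass, sketched against the public ENet API; enet_host_service and the event types are standard ENet, while the function name and the dispatch details are illustrative assumptions:

#include <enet/enet.h>

// Illustrative single polling pass of the translation server thread.
// Returns true if any event was handled so the caller loops again
// immediately; false lets the thread sleep for one millisecond.
static bool enetWorkSketch(ENetHost* server) {
    ENetEvent event;
    bool didWork = false;
    // Zero timeout: poll without blocking the streaming thread.
    while (enet_host_service(server, &event, 0) > 0) {
        switch (event.type) {
        case ENET_EVENT_TYPE_RECEIVE:
            // A real implementation dispatches on event.channelID here
            // (e.g. control commands vs. acknowledgements).
            enet_packet_destroy(event.packet);
            didWork = true;
            break;
        case ENET_EVENT_TYPE_CONNECT:      // mobile phone connected
        case ENET_EVENT_TYPE_DISCONNECT:
            didWork = true;
            break;
        default:
            break;
        }
    }
    return didWork;
}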

Overlay and post-processing

This component deals with forming a stereo image directly on the PC.

Depending on the settings, it is possible to form stereo images and show them on the PC display, or to form and send stereo images but show only 2D images on the display. The overlay also displays copyright and text description in a visible area of the monitor display, but does not transmit them to the mobile client.

The present invention is not limited by the particular implementations disclosed in the description for illustration purposes; it covers all possible modifications and alternative embodiments included in the scope of this invention determined by the claims.