

Title:
COMMUNICATION DEVICES AND METHODS
Document Type and Number:
WIPO Patent Application WO/2021/023819
Kind Code:
A1
Abstract:
To enable fast beam tracking and/or low latency, a first communication device comprises a sensing circuitry configured to sense motion of the first communication device and to generate motion information representing the sensed motion, the motion information comprising rotation information indicating rotation and/or orientation of the first communication device, and a communication circuitry configured to transmit the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer. A second communication device comprises a communication circuitry configured to receive motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer.

Inventors:
LOGHIN NABIL SVEN (DE)
FELLHAUER FELIX (DE)
CIOCHINA DANA (DE)
SCHNEIDER DANIEL (DE)
Application Number:
PCT/EP2020/072144
Publication Date:
February 11, 2021
Filing Date:
August 06, 2020
Assignee:
SONY CORP (JP)
SONY EUROPE BV (GB)
International Classes:
A63F13/211; H04B7/06; H04L29/08
Foreign References:
US20170045941A12017-02-16
US20180309500A12018-10-25
Other References:
GUPTA GAURAV ET AL: "Indoor Localisation and Navigation on Augmented Reality Devices", 2016 IEEE INTERNATIONAL SYMPOSIUM ON MIXED AND AUGMENTED REALITY (ISMAR-ADJUNCT), IEEE, 19 September 2016 (2016-09-19), pages 107 - 112, XP033055423, DOI: 10.1109/ISMAR-ADJUNCT.2016.0052
Attorney, Agent or Firm:
WITTE, WELLER & PARTNERPATENTANWÄLTE MBB (DE)
Claims:
CLAIMS

1. A first communication device comprising a sensing circuitry configured to sense motion of the first communication device and to generate motion information representing the sensed motion, the motion information comprising rotation information indicating rotation and/or orientation of the first communication device, and a communication circuitry configured to transmit the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer.

2. The first communication device as claimed in claim 1, wherein the hierarchical layer model is an Open Systems Interconnection, OSI, model.

3. The first communication device as claimed in claim 1, wherein the communication circuitry is configured to transmit the motion information within a physical layer or a MAC layer to the second communication device.

4. The first communication device as claimed in claim 1, wherein the sensing circuitry is configured to sense one or more of rotational speed, angular velocity and change of orientation of the first communication device and to generate rotation information indicating the sensed rotational speed and/or angular velocity and/or change of orientation.

5. The first communication device as claimed in claim 1, wherein the sensing circuitry is configured to additionally sense one or more of translational speed, translational acceleration and change of position of the first communication device and to generate translational information indicating the sensed translational speed and/or translational acceleration and/or change of position as part of the motion information.

6. The first communication device as claimed in claim 1, wherein the communication circuitry is configured to transmit the motion information in a motion subelement of a WLAN protocol and/or to transmit the motion information as frames aggregated with payload data frames.

7. The first communication device as claimed in claim 1, wherein the motion information includes one or more of:

- linear scale defining acceleration full-scale selection;

- rotation scale defining gyroscope full-scale selection;

- linear acceleration X-axis;

- linear acceleration Y-axis;

- linear acceleration Z-axis;

- angular rate pitch;

- angular rate roll;

- angular rate yaw;

- left and right eye horizontal position;

- left and right eye vertical position; or

- reserved bits for eye tracking, such as pupil sizes, blinks, saccades.

8. The first communication device as claimed in claim 1, wherein the communication circuitry is configured to receive multimedia data from the second communication device in response to the transmission of the motion information and wherein the first communication device further comprises an output interface configured to output a signal for perception by a user based on the received multimedia data.

9. The first communication device as claimed in claim 1, wherein the first communication device is a head-mounted display device, an unmanned vehicle, or an unmanned aerial vehicle.

10. The first communication device as claimed in claim 1, wherein the first communication device is a wireless backhaul.

11. The first communication device as claimed in claim 1, further comprising an interface configured to enable accessing motion information on the layer of the hierarchical layer model lower than the transport layer.

12. A second communication device comprising a communication circuitry configured to receive motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

13. The second communication device as claimed in claim 12, further comprising a multimedia generation circuitry configured to generate multimedia data based on the motion information, wherein the communication circuitry is configured to transmit the generated multimedia data to the first communication device.

14. The second communication device as claimed in claim 12, further comprising a rendering circuitry configured to render video data based on the motion information, wherein the communication circuitry is configured to transmit the rendered video data to the first communication device for display.

15. The second communication device as claimed in claim 12, wherein the second communication device is a virtual, augmented, or mixed reality device.

16. The second communication device as claimed in claim 12, further comprising an interface configured to enable accessing motion information on the layer of the hierarchical layer model lower than the transport layer.

17. The second communication device as claimed in claim 12, wherein the communication circuitry is configured to inform the first communication device of a schedule or a periodic allocation informing the first communication device when it can transmit updated motion information or motion information aggregated with data.

18. A first communication method comprising sensing motion of the first communication device, generating motion information representing the sensed motion, the motion information comprising rotation information indicating rotation and/or orientation of the first communication device, and transmitting the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer.

19. A second communication method comprising receiving motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

Description:
COMMUNICATION DEVICES AND METHODS

BACKGROUND

FIELD OF THE DISCLOSURE

[0001] The present disclosure relates to communication devices and methods, in particular a head-mounted display and a virtual, augmented, or mixed reality device, such as a video game console.

DESCRIPTION OF RELATED ART

[0002] Wireless communication offers many advantages. For example, communication in the mmWave bands (e.g. 60GHz for next generation WLAN systems, such as IEEE802.11ad, ay and beyond) offers large bandwidths and low latency, once the (analog) beams are established. Initial beam training and subsequent tracking for directive transmission are one of the challenges, compared with legacy bands, such as 2.4GHz and 5GHz, where omni-directional transmission and reception are possible. For fast moving applications such as communication between a virtual reality (VR) headset (also called head-mounted display (HMD)) and a base station or access point, this may be even more difficult. In case of a moving HMD, the initially selected transmit and receive beams may not be optimum anymore, which degrades performance, results in bit or packet errors, and can ultimately end up in a completely lost transmission link. These problems are not limited to communication in the mmWave bands, and the present disclosure is not limited to mmWave bands either.

[0003] Virtual reality (VR) projects video on a near-eye display, usually two displays for stereoscopic depth impression, integrated into an HMD. Besides video, also immersive 3D audio is possible via speakers or headset. VR has limited or no relation to the real physical world and can display either computer generated graphics (as in gaming or simulations) or recorded 360° videos. In contrast, augmented reality (AR) enriches the real world view, e.g., highlighting interesting locations or indicating a direction for navigation purposes. Displays are usually semi-transparent and use waveguides to overlay virtual objects. In between VR and AR, many hybrid forms exist under the name of mixed reality (MR), e.g., using a non-transparent HMD display together with a camera attached to the HMD. The user’s hand movements can be detected by the camera and virtually projected into the HMD display. Even though the present disclosure relates to VR, AR, or MR, most of the explanations will relate to the domain of VR.

[0004] A problem of VR is latency. If the VR viewer is looking into a new direction (rotation / translation), the new content needs to be transmitted with minimum latency to the HMD, such that the viewing experience is in accordance with the motion. Any offset / lag may create motion sickness, which is commonly experienced by VR viewers. Thus, a so called “motion to photon latency” (time between motion of HMD and displayed updated content) should be below 20 ms.

[0005] The “background” description provided herein is for the purpose of generally presenting the context of the disclosure. Work of the presently named inventor(s), to the extent it is described in this background section, as well as aspects of the description which may not otherwise qualify as prior art at the time of filing, are neither expressly nor impliedly admitted as prior art against the present disclosure.

SUMMARY

[0006] It is an object to provide communication devices and methods that enable low latency.

[0007] According to an aspect there is provided a first communication device comprising a sensing circuitry configured to sense motion of the first communication device and to generate motion information representing the sensed motion, the motion information comprising rotation information indicating rotation and/or orientation of the first communication device, and a communication unit configured to transmit the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer.

[0008] According to a further aspect there is provided a second communication device comprising a communication unit configured to receive motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

[0009] According to further aspects there are provided corresponding communication methods.

[0010] Embodiments are defined in the dependent claims. It shall be understood that the disclosed devices and methods have similar and/or identical further embodiments as defined in the dependent claims and/or disclosed herein.

[0011] One of the aspects of the disclosure is a new transmission scheme, in particular for a WLAN scenario, where motion information (e.g. measurement data of an inertial measurement unit (IMU)) of the first communication device (e.g. a station (STA), in particular an HMD) is transmitted from the first communication device to a second communication device (e.g. an access point (AP), in particular a VR-supporting device such as a video game console, or another station as, e.g., used in direct device-to-device communication). For the WLAN scenario, the disclosure further specifies a signaling scheme to transmit such motion information (sometimes also called “motion data”) within the MAC layer. This provides a low latency interface at the second communication device by providing the motion information directly within a low layer, such as the MAC or PHY layer. This results in reduced latency between position/orientation changes at the first device and enables reception of accordingly modified video or image information at the first communication device.

[0012] More generally, in some systems (not necessarily VR applications) the motion information may be transmitted over a low layer according to a hierarchical layer model such as the OSI model, e.g. only over MAC layer. Such systems may not consist of the complete OSI stack, e.g., they may only use PHY and MAC layer (e.g. an IMU connected to a bike wheel, transmitting motion information to a sports device, where such data is stored).

[0013] A VR scenario, as one exemplary scenario in which the present disclosure may be applied, requires motion information on upper layers, such as the application layer, where the graphic renderer is computing new video based on the user’s motion. The present disclosure is making the motion information available at a lower layer (such as the MAC layer or network layer), without having to wait for its final interpretation on a higher layer (e.g., the transport layer, where a packet can only be decoded after successful reception and assembling of multiple packets from lower layers). This allows for a fast bypass from a low layer (such as the MAC layer or network layer) to the higher layer on which the information is actually needed (e.g., the application layer). Other payload information may be transmitted on a higher layer.

[0014] In an UAV (unmanned aerial vehicle) scenario, as another exemplary scenario in which the present disclosure may be applied, the transmit and receive beams of AP, UAV, and end-user may be updated according to motion information of the end-user and of the UAV. Providing the motion information at a lower layer, as provided in a preferred embodiment, may thus be advantageous for this scenario as well.

[0015] It shall be noted that the present disclosure does not only relate to mmWave communication (and corresponding beam tracking, e.g. according to IMU data), but the proposed solution is more generic and allows for low latency transfer of data (e.g. IMU data) over a low layer, such as the MAC layer, for any frequency band.

[0016] The foregoing paragraphs have been provided by way of general introduction, and are not intended to limit the scope of the following claims. The described embodiments, together with further advantages, will be best understood by reference to the following detailed description taken in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWING

[0017] A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:

Fig. 1 shows a schematic diagram of a first embodiment of a first communication device and a second communication device according to the present disclosure;

Fig. 2 shows a schematic diagram of a second embodiment of a first communication device and a second communication device according to the present disclosure;

Fig. 3 shows a flow chart of an embodiment of a first communication method according to the present disclosure;

Fig. 4 shows a flow chart of an embodiment of a second communication method according to the present disclosure;

Figs. 5A and 5B show exemplary implementations of a motion subfield for exchange of motion information;

Fig. 6 shows diagrams illustrating beamforming in case the position of STA is unknown to AP;

Fig. 7 shows a diagram illustrating beamforming in case the position of STA is known to AP;

Fig. 8 shows an exemplary implementation of the interfaces and the management entities for PHY and MAC;

Fig. 9 shows an exemplary implementation of how to use IMU data at the VR/drone STA to infer receive beams to be used, followed by the transmit beams;

Fig. 10 shows an exemplary implementation of motion information aided beam measurement and realignment;

Fig. 11 shows an exemplary implementation of sending motion information in band;

Fig. 12 shows an exemplary implementation of multiple AP training triggered by motion information;

Fig. 13 shows an exemplary implementation of reliable transmission after RX beam change;

Fig. 14 shows an exemplary implementation of VR/drone motion track initiated by the VR/drone STA; and

Fig. 15 shows an exemplary implementation of VR/drone motion track initiated by AP.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0018] Virtual reality (VR) applications normally use head-mounted displays (HMD). Typically, high data rates are transmitted from, e.g., a gaming console to the HMD via cable, such as an HDMI cable. Cables however limit the range of motion for the VR user as well as the immersive user experience. Some VR applications offer 6DOFs (6 degrees of freedom), meaning that the video (and audio) content does not only depend on the direction the user is looking at (3 dimensions = degrees of freedom), including pitch, yaw, and roll, but also on the position of the HMD (additional 3 dimensions), i.e., location and height in the room (x-, y-, and z-coordinates). To account for user motions, feedback of data from an integrated IMU (inertial measurement unit) is usually provided via the same cable from the HMD to the console. An IMU usually incorporates both an accelerometer sensor (to detect motion in translation) and a gyroscope sensor (to detect rotation changes). If the video (or audio) content changes only with respect to rotational changes, the device is denoted as 3DOF.

[0019] Common 3DOF HMDs for VR are often cell-phone based solutions (e.g., a smartphone acts as display and is placed into an HMD, which includes some additional optics, such as lenses), and provide 360° videos / games, wherein the content is changed according to the 3 rotational DOFs. 6DOF HMDs on the other hand often provide new video content, not only based on rotational changes, but also according to the 3-dimensional movements (additional 3DOFs of translations). Usually, no captured 360° videos are displayed, but computer-generated imagery (CGI), such as in games or in visualizations.

[0020] Additional motion tracking information could be obtained via camera (e.g. from inside the HMD, using visual SLAM (simultaneous localization and mapping), or from outside the HMD, capturing from some distance the location and orientation of the HMD) or via other localization technologies. Such information could also be fed back via (HDMI) cable, or via a wireless link, usually embedded into the uplink data.

[0021] One problem is latency: if the VR viewer is looking into a new direction (rotation / translation), the new content needs to be transmitted with minimum latency to the HMD, such that the viewing experience is in accordance with the motion. Any offset / lag will create motion sickness, which is commonly experienced by VR viewers. Thus, so called “motion to photon latency” (time between motion change of HMD and display of updated content) should be as low as possible, in particular below 20 ms.

[0022] Most existing wireless VR solutions currently use mmWave communications, due to the vast amount of bandwidth, allowing several Gbps of data rate for streaming of high quality video. However, such mmWave links are sensitive to fast movements of the communication device or station (STA) which is integrated into the HMD. When the narrow beams used in directive mmWave communication are not aligned properly anymore between access point (AP; e.g., built into the gaming console) and the STA, the link budget may not be sufficient to further support the required data rate. Deployment of multiple antenna arrays on both STA and AP side (so called hybrid MIMO architecture) further worsens the effect of STA movements in case high throughput spatial multiplexing is used (in case spatial diversity is exploited instead, MIMO could however offer more robustness). Even if the misalignment of the beams with respect to the new STA position would allow for sufficient link budget, the digital beamforming settings may still result in a mismatch between predicted and observed channel information. Thus, high speed tracking of the analog beams is desirable for mobile mmWave communication (and potentially tracking and correcting also digital precoder settings). Mobility may comprise two components: a translational component (i.e., the STA is moving in x-, y- and/or z-direction) and a rotational component (i.e., the STA is rotating with respect to the x-, y- and/or z-axis, called roll, yaw and pitch). For normal movements of human users, translation may be less detrimental (due to limited speed / acceleration capabilities) than rotation (resulting e.g. in self blockage, when the HMD is rotated by 180°). Other sources of mobility, such as moving objects in the surrounding environment, acting as moving reflectors/scatterers, however, will not be addressed by the present disclosure.

[0023] As a result of insufficient analog beam tracking, the video may not be properly transmitted to the HMD (error free and with low latency). Retransmissions for real-time applications cannot solve the issue of data loss, due to low latency constraints.

[0024] In common approaches, motion information is sent via an uplink to the console / PC over some protocol, e.g. embedded into a video codec or as payload of IP (internet protocol) packets. The more layers (e.g., with respect to the Open Systems Interconnection model (OSI model; generally known in the art, e.g. from ISO/IEC standard 7498-1:1994)) the information has to pass, the longer the latency, until the data can be interpreted, e.g. after interface output of Layer 6 (presentation layer) or Layer 7 (application layer). Hence, to reduce the time for a communication device to obtain the motion data so that it can be used more quickly for setting one or more communication parameters of the communication, it is in some embodiments proposed to reduce the number of involved layers. This can also be useful to reduce the motion to photon latency.

[0025] Using a proprietary protocol for exchange of motion information further makes it difficult to achieve interoperability among HMD vendors. In an extreme case of such a (closed) system, it may not be specified publicly how the transmitted motion data has to be interpreted.

[0026] It is thus proposed to establish a link between a motion sensory device (also called sensing unit herein; including e.g. an IMU and/or other sensors) and a transceiver (also called communication unit herein), especially a wireless transceiver for directive mmWave communications. In an embodiment it is further proposed to transmit the motion information (particularly comprising rotation information indicating rotation and/or orientation; optionally further comprising translational information indicating translation) of the (preferably mobile) STA (generally called first communication device herein) to an AP (generally called second communication device herein) directly within the WLAN protocol, which defines Layer 1 and Layer 2 of the protocol stack in this case. It should be noted that any other wireless protocol, such as a cellular 3GPP system protocol or Bluetooth (which may be sufficient for uplink with small data rate), or any wired protocol, such as Ethernet or USB, may generally also be used.

[0027] Referring now to the drawings, wherein like reference numerals designate identical or corresponding parts throughout the several views, Fig. 1 shows a first embodiment of a first communication device 1 and a second communication device 2 according to the present disclosure.

[0028] The first communication device 1, e.g. representing a STA integrated in an HMD, comprises a motion information unit 10 that obtains motion information 30 representing motion of the first communication device 1. The motion information unit 10 may comprise a sensing unit (e.g. an IMU sensor) that senses motion of the first communication device 1 and generates motion information 30 representing the sensed motion. In other embodiments the motion information unit 10 may comprise an interface (e.g. a Bluetooth or WLAN interface, preferably operating in non-mmWave bands such as 2.4 or 5 GHz) that receives or retrieves motion information from an external sensor. For instance, an HMD may track its motion not directly, but via an external device, e.g. an external 3D camera or other motion tracking means.

[0029] A communication unit 11 (e.g. a WLAN or Bluetooth transceiver) communicates with a second communication device 2. For instance, the motion information 30 may be transmitted to the second communication device 2, which, in an embodiment, may be done within a layer of a hierarchical layer model (in particular an OSI model) lower than a transport layer, in particular in a physical layer or a MAC layer (i.e., the motion information 30 may be transmitted in this low layer directly end-to-end from the first communication device 1 to the second communication device 2). Generally, however, the motion information 30 may be transmitted in any other way to the second communication device 2.

[0030] As another example, repetition of PHY protocol data units (PPDUs) may be applied. When the VR STA detects that it is in motion of a level larger than a certain threshold, it can start a mode in which packet transmission is duplicated and sent over (subsequently or in parallel (spatial multiplexing MIMO)) several beams spanning a predicted sector / neighboring sector. Similarly, based on motion information the access point or Wireless Network Controller (also called controller hereafter) can initiate a mode in which transmission is sent over several beams spanning a predicted sector or sectors.
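As a hedged sketch only, the duplication decision described above could be modelled as follows; the threshold value, the notion of a scalar motion level, and the neighboring-sector policy are illustrative assumptions made here and are not values defined by the disclosure.

```python
# Illustrative sketch of motion-triggered PPDU duplication over several beams.
# The motion threshold and the beam-selection policy are assumptions made here
# for illustration only; they are not taken from the disclosure.

def beams_for_transmission(motion_level: float, predicted_sector: int,
                           duplication_threshold: float = 0.5) -> list:
    """Return the sector indices over which copies of a PPDU would be sent."""
    if motion_level <= duplication_threshold:
        return [predicted_sector]                     # normal single-beam mode
    # Duplication mode: also cover the sectors neighboring the predicted one
    return [predicted_sector - 1, predicted_sector, predicted_sector + 1]

# Example: a fast-moving VR STA duplicates the PPDU over three sectors.
print(beams_for_transmission(motion_level=0.8, predicted_sector=5))  # [4, 5, 6]
```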

[0031] In a further example (used in addition or as alternative), a VR/drone STA might change the used MCS (modulation and coding scheme) to a lower number (i.e., a more robust MCS) if it detects its own motion, thus allowing the other STA/AP to decode the PPDU despite misalignment of beams. This PPDU mode adaptation can be used as an alternative to PPDU repetition or in addition to it.

[0032] In a further example antenna weight vector (AWV) changes within the preamble may be provided. For instance, motion information may be used by the service management entity (SME) to trigger a passive RX beam measurement and realignment, which if successful may lead to changing the RX beam. This beam (or beams in case of MIMO) can be further used in transmission towards the other STA. However, since the parameters of the link may have significantly changed, several precautions can be taken for ensuring reliability.

[0033] For example, the transmission with the new beam can be performed with more stable estimation sequences, which allow the receiver to change and select the best RX AWV. The channel estimation sequence can be formed of several segments being repeated, and within each segment the STA is allowed to change its beam, such that the RX beam is realigned directly before packet decoding and not at the end of the previous frame. The best RX beam is used for the rest of the packet. An alternative is to precede the transmission of the first packet after a beam change by a directional beam training frame, e.g. a BRP frame with several training fields, to allow for finding the best RX AWV to receive the packet. This is then followed within a short interval by the data packet.

[0034] In another embodiment, based on motion information, location tracking can be enabled in a multiple AP scenario. This would allow for possibly fast handovers or allow for neighboring APs to allocate possible resource units for the VR STA (or for a UAV / drone moving between infrastructure APs) to associate. This may require one or more of the following changes.

[0035] In the VR/drone case, the InMotion (flag from location tracking) should be activated not only by translation motions but also by rotation (pitch, roll, yaw).

[0036] To enable location tracking or fast beam adaptation, a burst of Location Notification Frames should be sent at regular intervals, as indicated by the AP or by a master AP, in several directions, preferably with training units attached, to allow neighboring APs to train their beams on the various TRNs. Additionally, they may contain the motion information to allow the APs to reduce the training overhead, i.e., number of beams / sectors to be trained, number of timeslots to be used / type of training required (transmit and/or receive).

[0037] In a practical implementation, a master AP may be required to schedule the intervals in which frames for tracking with motion information are sent to the one or more APs, or individual APs may indicate time intervals in which they may receive these types of frames. This may be used to reduce the overhead of control/beam training messages over the air.

[0038] If beacons from multiple APs are heard, the RSSI information for the best beams should be included in the location track notification frames.

[0039] Upon request of VRMotionTrk from a VR STA or Drone STA, a VR/Drone STA may receive an indication of the schedule of the neighboring APs' beacon transmissions which are relevant for the current position, given the motion information.

[0040] Based on motion information, an AP can further send VR Motion Notification frames to neighboring APs, having as parameters the location and motion information, in order to increase the probability that the VR STA is received by at least one of them. Additionally, such a frame can request neighboring APs to schedule a beacon transmission (or transmission of several control frames, containing at least BSS information), covering at least the angle sector corresponding to the new VR STA and schedule listening times corresponding to the VRNotificationFrames.

[0041] Motion information can be sent over a different band (out-of-band) than the one in which the communication takes place (e.g., can be sent over sub 7GHz band).

[0042] In an embodiment the first communication device 1 may further comprise an output interface 13 that is able to receive any kind of multimedia data 31 comprising information for perception by a user, such as video data, image data, audio signals, force feedback (e.g., vibrations in controller or game suits (such as smart gloves)), artificial smell, odor, etc. The output interface may thus be configured to use the received multimedia data and convert it into a corresponding “signal” (e.g., visual signal, sound, smell, etc.) that can be perceived by a user’s sense. The output interface 13 may thus e.g. comprise a display that displays video data, e.g. a display integrated into an HMD device. The multimedia data 31 may thus comprise video data that is received from the second communication device 2 as payload, for instance in response to the transmission of the motion information 30. In other words, the payload information contained within a frame sent in response to the one with the motion subelement may contain video data that e.g. depends on the position inferred from the motion information.

[0043] The second communication device 2, e.g. representing an AP integrated into a gaming console (or, more generally, a virtual, augmented, or mixed reality-supporting device) or representing another STA in direct device-to-device communication with the first communication device 1, comprises a communication unit 20 (e.g. a WLAN or Bluetooth transceiver) that receives the motion information 30 from the first communication device 1 within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device 1 and comprising the rotation information.

[0044] The motion information 30 is preferably not embedded in the payload of a signal, which can only be interpreted at receiver side (i.e., at the second communication device 2) after assembling the packets on a higher layer, but rather the motion information 30 is preferably received on a lower layer (below transport layer), where it can be readily interpreted by the second communication device 2. Generally, however, the motion information 30 may be received in any other way from the first communication device 1.

[0045] The second communication device 2 may further comprise a multimedia generation unit 21 configured to generate multimedia data (in the sense as described above) based on the motion information. In an embodiment the multimedia generation unit 21 may comprise a rendering unit (e.g. an image processor or video codec) that renders video and/or audio and/or other data (such as force feedback for haptic feedback) based on the motion information 30. The generated multimedia data 31 (e.g. the rendered video data) are then transmitted (as payload data) to the first communication device 1 for output by the output interface 13 (e.g. for display on a display).

[0046] More generally, the first communication device 1 and/or the second communication device 2 may adapt one or more communication parameters or content according to the motion information 30 received from the respective other communication device. For instance, an AP (not necessarily operating in mmWave bands) may request a channel estimation if it realizes that the STA moves too fast. Hence, the second communication device 2 generally reacts to the transmission of motion information 30 from the first communication device 1 by generating and transmitting back adapted information to the first communication device 1. In particular, based on the motion information, an AP or STA may request from its communication counterpart PHY protocol data units (PPDUs) sent with channel estimation sequences, repeated within the packet to allow the other communication device to track the channel variations. Based on the level of channel instability, which can be inferred from the motion information, the number of such repetitions of the channel estimation within the packet can be decided.

[0047] In an embodiment both translation and rotation are signaled as motion information. Translational information may be signalled as acceleration as opposed to speed (acceleration being the derivative of speed), because IMUs output acceleration due to measurements of inertia forces. Often, IMUs output translation with respect to the acceleration of gravity g (g ≈ 9.8 m/s² on earth), e.g. acceleration in x-direction being 2.4g. As for rotation, an IMU usually outputs angular velocities (angular speed) in degrees per second (dps) (or in radians per second) from a gyroscope.
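As an illustration of how raw IMU words relate to these physical units, the following minimal sketch assumes the common convention that a signed 16-bit word spans the selected full-scale range; the conversion rule itself is an assumption for illustration, not part of the claimed signaling.

```python
# Minimal sketch (illustrative assumption): map raw 16-bit two's-complement
# IMU words onto the selected full-scale range, yielding acceleration in g
# and angular rate in degrees per second (dps).

def raw_to_physical(raw_word: int, full_scale: float) -> float:
    """Map a signed 16-bit sensor word onto [-full_scale, +full_scale)."""
    if not -32768 <= raw_word <= 32767:
        raise ValueError("expected a 16-bit two's-complement value")
    return raw_word * full_scale / 32768.0

# Example: word 19661 at the +/-2g setting is about 1.2 g of acceleration;
# word -8192 at the 250 dps setting is about -62.5 dps of angular rate.
accel_g = raw_to_physical(19661, 2.0)
rate_dps = raw_to_physical(-8192, 250.0)
```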

[0048] Fig. 2 shows a second embodiment of a first communication device 1 and a second communication device 2 according to the present disclosure. Different from the first embodiment illustrated in Fig. 1, the first communication device 1 additionally comprises a beamforming unit 12 and the second communication device 2 additionally comprises a beamforming unit 22.

[0049] The beamforming unit 12 performs beamforming and/or beam tracking based on the motion information 30. Further, the beamforming unit 12 may predict a future position of the first communication device based on the motion information 30 and adapt the direction and/or width of the beam based on the predicted future position. Similarly, the beamforming unit 22 performs beamforming and/or beam tracking based on the motion information 30 to form a beam for use by the communication unit 20 to receive the motion information and/or to transmit payload data (the double arrow indicating that the image information is “encapsulated” and can no longer be interpreted directly). The beamforming unit 22 may further predict a future position of the first communication device based on the motion information and adapt the direction and/or width of the beam based on the predicted future position. Further, for mmWave STAs, the request and negotiations of beamforming and beam-tracking parameters may be done based on the motion information.

[0050] It shall be noted that the beamforming unit may be part of a parametrization unit (not shown) or may be generalized into a parametrization unit that is configured to set one or more communication parameters of the communication performed by the communication unit 11 based on the motion information 30. The parametrization unit (e.g. a processor or other computing means implemented in hard- and/or software) may be configured to set one or more of a physical layer parameter, a precoder setting, an error correction parameter, a modulation parameter, a constellation parameter of a constellation used for modulation, an interleaving parameter, a training sequence frequency, a medium access control, MAC, resource, a beam direction of a beam used for communication, and a beam width of the beam used for communication.

[0051] Fig. 3 shows a flow chart of an embodiment of a first communication method according to the present disclosure, which method is particularly configured to be performed by the first communication device 1. In a first step S10 motion of the first communication device 1 is sensed. In a second step S11 motion information representing the sensed motion is generated, wherein the motion information comprises rotation information indicating rotation and/or orientation of the first communication device 1. In a third step S12 the motion information is transmitted to the second communication device 2 within a layer of a hierarchical layer model lower than a transport layer.

[0052] Fig. 4 shows a flow chart of an embodiment of a second communication method according to the present disclosure, which method is particularly configured to be performed by the second communication device 2. This method comprises the step S20 of receiving motion information from the first communication device 1 within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

[0053] In an embodiment, instead of or in addition to the motion information the second communication device 2 may receive from the first communication device 1 one or more communication parameters and/or a beamforming request requesting the second communication device to perform beamforming training and/or beam tracking and/or beam switching and/or to provide beamforming parameters. These beamforming parameters may include one or more of receive training units on which a responder during the beamforming procedure trains or measures one or more receive beams, or transmit beamforming units in which an initiator of the beamforming training trains one or more transmit beams, or mixed transmit-receive units for mixed training. For TDD type beamforming, beamforming parameters may include one or more of specific transmit sectors, or an angle of departure range and resolution, or receive sectors, or angle of arrival range and resolution that should be tested and time intervals at which the measurements should be made.

[0054] Figs. 5A and 5B show exemplary implementations of the signaling using a motion subelement that may be transmitted from the first communication device 1 to the second communication device 2. In this implementation the following information may be included as motion information (not all of this information must be included; in other embodiments only one or more of these elements are provided as motion information; a purely illustrative packing sketch is given after the list):

• Subelement ID is the ID of this subelement (Optional subelement IDs of Location Parameters element from WLAN specification)

• Length is the length of this subelement in octets

• Linear Scale defines the acceleration full-scale selection, 2bit (00: ±2g; 01: ±16g; 10: ±4g; 11: ±8g)

• Rotation Scale defines gyroscope full-scale selection, 2bit (00: 250dps; 01: 500dps; 10: 1000dps; 11: 2000dps)

• Linear acceleration X-axis 16bit, two’s complement

• Linear acceleration Y-axis 16bit, two’s complement

• Linear acceleration Z-axis 16bit, two’s complement

• Angular rate pitch (transverse axis) 16bit, two’s complement

• Angular rate roll (longitudinal axis) 16bit, two’s complement

• Angular rate yaw (vertical axis) 16bit, two’s complement

• Left eye horizontal position 16bit, two’s complement

• Left eye vertical position 16bit, two’s complement

• Right eye horizontal position 16bit, two’s complement

• Right eye vertical position 16bit, two’s complement

• Reserved bits for eye tracking, such as pupil sizes, blinks, saccades

• Reserved bits are for future use and to guarantee that the length of the subelement is a multiple of 8 bits.
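The following sketch packs such a subelement into octets purely by way of illustration. The field order follows the list above, but the subelement ID value, the byte order, and the placement of the two scale fields and the reserved bits are assumptions made here; in practice they would be fixed by the WLAN specification.

```python
import struct

def pack_motion_subelement(subelement_id: int,
                           linear_scale: int, rotation_scale: int,
                           accel_xyz: tuple, rate_pry: tuple,
                           eye_positions: tuple = (0, 0, 0, 0)) -> bytes:
    """Hypothetical octet packing of the 6DOF motion subelement sketched above.

    accel_xyz, rate_pry (pitch, roll, yaw) and eye_positions hold signed
    16-bit integers; the scale arguments hold the 2-bit full-scale selections.
    """
    # Pack the two 2-bit scale selections into one octet (remaining bits reserved)
    scales = (linear_scale & 0b11) | ((rotation_scale & 0b11) << 2)
    body = struct.pack('<B', scales)
    body += struct.pack('<3h', *accel_xyz)       # linear acceleration X, Y, Z
    body += struct.pack('<3h', *rate_pry)        # angular rate pitch, roll, yaw
    body += struct.pack('<4h', *eye_positions)   # left/right eye horiz./vert. position
    # Subelement ID octet, Length octet (length of the body in octets), then the body
    return struct.pack('<BB', subelement_id, len(body)) + body

# Example: +/-2g and 250 dps full scale, 1.2 g on the X-axis, -62.5 dps yaw,
# no eye-tracking data (the subelement ID 0x0A is a placeholder value).
frame = pack_motion_subelement(0x0A, 0b00, 0b00, (19661, 0, 0), (0, 0, -8192))
```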

[0055] Generally, the motion information can be included into a MAC frame, in which case it may contain the information explained above, but it can also be a primitive residing in a management entity which controls the MAC and/or PHY layer.

[0056] Some IMUs have an integrated processor, which allows filtering. Such processors may also estimate the z-direction by detecting the center of gravity towards the earth. But usually, only (unfiltered) raw values are output on a periodic time base (even if no movement was detected). Another unit may trigger if motion was detected, and only then transmit the IMU data. Further, some sensor fusion may be implemented, in which additional sensors are incorporated, such as a pressure sensor (for absolute height detection) or a magnetic sensor (for (outdoor) detection of the North direction).

[0057] This new subelement could become one new option of the location parameters. The name for this subelement could be “6DOF motion subelement”, referring to the 6 degrees of freedom (DOFs), described by translation (three DOFs) and rotation (another three DOFs).

[0058] Further implementations for this subelement may include a Motion Indicator Field to signal when movement was detected. Feedback of IMU data may either be based on periodic transmissions or may be event-triggered (if a certain acceleration / rotation speed threshold is exceeded). If this flag signals that no motion was detected, then there is no need to signal translation and rotation at all. This flag could also signal if only rotation or if only translation is detected, allowing to skip the other information (translation or rotation, respectively). If translation velocities remain constant, then another flag could signal this as well (since acceleration would be 0), again allowing to skip the translation information.
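A minimal sketch of such flag-controlled, event-triggered feedback is given below; the threshold values and the exact semantics of the flags are assumptions chosen for illustration only.

```python
# Sketch of a Motion Indicator decision (illustrative thresholds and flag
# semantics assumed here; they are not taken from the disclosure).

ACCEL_THRESHOLD_G = 0.05   # deviation from 1 g treated as translation
RATE_THRESHOLD_DPS = 1.0   # angular rate treated as rotation

def motion_indicator(accel_magnitude_g: float, rate_magnitude_dps: float) -> dict:
    """Decide whether to send an update and which fields may be skipped."""
    translating = abs(accel_magnitude_g - 1.0) > ACCEL_THRESHOLD_G
    rotating = rate_magnitude_dps > RATE_THRESHOLD_DPS
    return {
        "send_update": translating or rotating,   # no motion detected: nothing to send
        "include_translation": translating,       # otherwise translation fields skipped
        "include_rotation": rotating,             # otherwise rotation fields skipped
    }
```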

[0059] IMU or other motion data could be transmitted either in its raw form or filtered. Filtering can be applied on several sides, e.g., inside the IMU processor (often optional), some dedicated filtering block (between the IMU and the STA), at the AP side (at some dedicated filtering block) or at an upper processing layer. Data could be fed back either periodically or event triggered. The uplink of such 6DOF data should follow some Quality of Service / Experience (QoS/QoE) levels, such that a maximum (worst-case) latency is not exceeded.

[0060] It should be noted that the fed back 6DOF motion data could further include information about the viewer’s eye position: several VR applications consider so called “foveated rendering”, in which high resolution video (at high bit rate) content is only displayed at the focused field of view (center of the eye = fovea with high density of cones, about 200 times higher compared with the peripheral view). Eye tracking is required to detect the eye position, e.g., via infrared imaging (invisible IR transmitted towards the eye, and captured in a camera, all located inside the HMD). The position of the eye inside the 3D video frame could be fed back exactly (in terms of pixel location) or only relatively (e.g., movement to upper left, ...), which would require some prior normalization / calibration.

[0061] The eyes’ position can be exploited in the rendering process, e.g., for calculating new images, considering the intended focus / depth inside the scene. In addition, the concept of foveated rendering allows low resolution video transfer outside of the fovea (where the eye is focusing), while only in a small area around the focus point, high resolution video is transmitted. When it comes to eye movements, it has to be differentiated between intended movements of the focus point and unintended ones. A prominent uncontrollable movement of the eyes is called saccade, which is a rapid movement between two fixation points. Further, eye tracking devices may also feed back the pupil’s diameter, which is an indication of the state of the user (arousal, boredom, ...).

[0062] As illustrated above, conventionally motion information is transmitted end-to-end over a higher layer such as the application layer. At the transmitter side (the first communication device 1), the motion information is traversing all lower layers of the hierarchical model, usually resulting in different segmentation of the packets. As an example, IMU data may be split into several MAC protocol data units (MPDUs) and further sent into one or more PHY protocol data units (PPDUs). At the receiver side (the second communication device 2), the operations are reversed. As a consequence of packet segmentation, the higher layer can only interpret the motion information once all segments have been received, thus introducing latency. Further, while conventionally the motion information 30 is embedded in payload on a higher layer, according to the present disclosure the motion information 30 is embedded in a layer below the transport layer such as in the MAC layer or physical layer and the first and second communication devices 1 and 2 each comprise a corresponding low layer interface. At this interface, the motion information can be readily interpreted.

[0063] Embodiments of the disclosure provide one or more advantages: One advantage is that the motion information of the STA can be made accessible by connected units (e.g. graphic card via further connections) of the AP in such a way that faster access to such information is made possible for the AP. Further, the motion information of the STA can be used at the AP to allow for fast and reliable adjustment of communication parameters, such as fast beam tracking, e.g., within the WLAN protocol, which only covers PHY and MAC layer by embedding the motion information in the MAC interface of the 802.11 protocol to make it accessible at the communication unit of the AP.

[0064] Especially for mmWave communication, fast beam tracking is preferred to maintain high throughput with high reliability. Information about the movement (e.g., in 6DOFs) of the STA can be exploited at the AP to correct the beams (analog and/or digital) either directly (exact calculation of the new position based on some prior position knowledge and based on motion information) or indirectly (e.g., by estimating the new search space for beam refinement based on the magnitudes of the motion information, i.e., if there are large values a larger search space is used and vice versa).

[0065] A simple and straightforward way to indirectly exploit motion information is as follows: if both rotation and acceleration indicate a stationary situation (rotation = 0 in all 3 dimensions and acceleration = g, towards the center of gravity of the earth; assuming that absolutely constant speeds cannot be realized over a long time in practice (with friction and required re-acceleration)), then in such a static case no beam tracking is required, once the established beams allow for sufficient data rate and reliability. On the contrary, if rotation and/or acceleration exceed high magnitudes (i.e., some predefined threshold values), the AP is alarmed to update beams. The higher the magnitudes, the larger the search space should be (e.g., only checking for neighboring beam sectors as new alternatives, in case the magnitudes are large, but not too large; corresponding thresholds are subject to optimizations).
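This indirect use of the motion magnitudes can be illustrated by the following sketch; the normalization, threshold values, and sector counts are assumptions for illustration only and, as stated above, are subject to optimization.

```python
# Illustrative mapping from motion magnitude to beam-refinement search space.
# Normalization constants, thresholds and sector counts are assumed values.

def select_search_space(rotation_dps: float, accel_deviation_g: float) -> int:
    """Return how many neighboring sectors to include in beam refinement."""
    # Normalize against assumed full-scale values (250 dps, 2 g)
    magnitude = max(rotation_dps / 250.0, accel_deviation_g / 2.0)
    if magnitude < 0.05:
        return 0    # static case: keep the established beams, no tracking needed
    if magnitude < 0.3:
        return 1    # moderate motion: check only the adjacent sectors
    if magnitude < 0.7:
        return 3    # faster motion: extend the search to a wider neighborhood
    return 8        # very fast motion: fall back to a broad (re)training sweep
```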

[0066] So far, it has been assumed that the AP has no information about the absolute position of the STA (or even no relative position with respect to its own position and orientation). If such additional information is available at the AP (e.g., by applying fine time measurements (FTM), with or without additional exchange of angular information), more refined beam tracking would be possible, as will be described below.

[0067] Usually, motion information is piggybacked onto other uplink data, such as game controller commands. It will be transmitted over the wireless LAN interface consisting of PHY and MAC layer, by traversing from a higher layer down to the MAC and finally the PHY layer for physical transmission. The output from the MAC layer is then processed over the remaining protocol stack, and the final content (i.e., the motion information) could be made accessible after decomposition at the higher layer. If the 802.11 MAC interface allows accessing the motion information (in standardized form) already on Layer 2, it could allow for faster access (low latency). The motion information is needed at the console (or PC) to compute the new content for upcoming frames. Ideally, the rate at which the motion information is sensed is higher than the frame rate of the video, e.g. 90 or 120 (video) frames per second (fps).

[0068] Further, a standardized interface would ease interoperability with other HMD vendors and allows for use of the information for other applications such as beamforming and/or training.

[0069] In the following, different cases will be explained that can appear for mmWave beam tracking, when the AP is integrated into a stationary device (console, PC, ...) and the STA is allowed to be moving.

[0070] In a first main case, schematically illustrated in Fig. 6, the position of the first communication device (simply called STA 1 in the following) is unknown to the second communication device (simply called AP 2 in the following).

[0071] In a first sub-case of the first main case only rotation of STA 1 shall be assumed as schematically illustrated in Fig. 6A. It is assumed that at a particular time t₀, mmWave beams 3, 4 (one for transmission, the other for reception) have already been aligned properly to ensure maximum data rate for communication. If the STA 1 is rotating (via any axis) by an angle α, the AP 2 does not need to know the new orientation of the STA 1, but the STA 1 itself needs to correct its beam 4, if the rotation was large (α large enough such that a better beam would be more suitable). Without beam tracking at STA 1, the effective communication channel could possibly not allow for the required data rates (at time t₁). The proposed solution implies that there is a link between IMU and communication unit, both located within STA 1, such that IMU data (i.e., the motion information), in a standardized form, can be made accessible to the communication unit of STA 1 for beam correction (even when no signaling to AP 2 is required).

[0072] Ideally, the new beam 4' from the STA 1 (to be used in both uplink and downlink in case of reciprocal channel and hardware components) is simply rotated by the inverse angle −α. Feeding the motion information into the WLAN chipset could allow the STA 1 to exploit such knowledge directly. This corrected beam 4' is shown in Fig. 6A at time t₂, when the VR experience can continue at a similar quality level.

[0073] For 3DOF content tracking, the rotation needs to be signaled to the game console / PC. This rotation information could be signaled inside the proposed 6DOF motion subelement (as shown in Figs. 5A and 5B), even though the AP 2 does not need to know this information for the purpose of beam tracking.

[0075] In case the STA 1 is near the AP 2, the movement may be more detrimental than in case the STA 1 is far away from the AP 2 (thus, possibly remaining inside the same beam sector). Since the STA position is not known at AP side, no exact adjustment is possible. However, the acceleration magnitude already hints towards the potential search space for refined beam training: small magnitudes require only searches within the neighboring sectors, while large magnitude of accelerations would indicate that the search space should be extended. After finding the best new beams (e.g. during a beam refinement phase (BRP)), the AP 2 may interpret the new beam direction (if no adjustment was needed or adjustment only to a neighboring sector, even though the acceleration was relatively high, then most likely, the distance between AP and STA was/is large; this could be stored as prior information for next refinement processes, also considering the accel eration direction (e.g., moving towards or away from the AP)).

[0076] Translation only requires update of video / audio content, in case of 6DOF applications. In 3DOF applications, the experience will remain unchanged (still watching towards the same direction in the 360° video).

[0077] In a third sub-case of the first main case rotation and translation of STA 1 shall be assumed. This case is not illustrated in a separate figure as it is simply a superposition of the previous two sub-cases illustrated in Figs. 6A and 6B.

[0078] In a second main case, schematically illustrated in Fig. 7, the position of STA 1 is known to AP 2. It is particularly assumed that the AP 2 knows the position of the STA 1 either in earth coordinates (and its own position in the same coordinate system) or at least in relative terms (relative to its own position and orientation). This can be accomplished by prior positioning approaches, such as fine time measurement (FTM), angle of arrival (AOA), angle of departure (AOD) measurements or other methods.

[0079] In a first sub-case of the second main case only rotation of STA 1 shall be assumed. This is not illustrated in a particular figure. Substantially the same applies that has been explained above with reference to Fig. 6A. The only difference may be that the video content also changes according to the relative position of the STA 1 with respect to the AP 2 (e.g. displaying objects far away (i.e. small), if STA is far from AP and vice versa).

[0080] In a second sub-case of the second main case only translation of STA 1 shall be assumed as schematically illustrated in Fig. 7. Now, the new position at time t₁ can even be calculated at the AP 2, assuming that the origin of the coordinate system is at the STA position at time t₀ (and aligned as shown in Fig. 7). This means, the STA 1 is positioned at time t₀ in the origin at (x, y) = (0, 0), and the stationary AP 2 is located at (x, y) = (0, y_AP), i.e. at a distance d = y_AP (the z-component being assumed to be zero in the example). If the STA 1 starts moving directly after time t₀ with uniform acceleration a = (a_x, a_y, a_z), e.g. only in x-direction (a = (a_x, 0, 0)), the new position at time t₁ is x₁ = ½ a_x (t₁ − t₀)². The situation at time t₂ is shown on the right hand side of Fig. 7: the “correction angle” β for the AP 2 to steer the new beam 4' can be computed from its distance d = y_AP and the new position x₁ as β = arctan(x₁ / d).
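A short numerical illustration of this geometry (all values are arbitrary sample values, not taken from the disclosure): with d = y_AP = 2 m, a uniform acceleration a_x = 2 m/s² and t₁ − t₀ = 0.5 s, the displacement is x₁ = ½ · 2 · 0.5² = 0.25 m and the correction angle becomes β = arctan(0.25 / 2) ≈ 7.1°, as the sketch below reproduces.

```python
import math

# Worked example of the correction-angle computation (arbitrary sample values).
d = 2.0    # distance y_AP between AP 2 and STA 1 at time t0, in meters
a_x = 2.0  # uniform acceleration of STA 1 in x-direction, in m/s^2
dt = 0.5   # elapsed time t1 - t0, in seconds

x1 = 0.5 * a_x * dt ** 2                 # new STA position: 0.25 m
beta = math.degrees(math.atan2(x1, d))   # correction angle: about 7.1 degrees
```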

[0081] In the first main case, where the STA position is unknown, the new location x_1 can also be computed by the AP 2 once the direction of the acceleration is known. Usually, only the z-component (perpendicular to the horizon) can be estimated by the IMU from the static case, where only gravitation is applying a force on the STA 1. Still, the magnitude of any acceleration allows estimating the spatial deviation after a certain time span, indicating how large the search space for beam refinement should be, as explained above.

[0082] Further, feedback of the motion information (in particular the acceleration information) also allows predicting the new position, even before the new motion information update is fed back. Observing a uniform (i.e. constant) acceleration in one direction over a longer period (e.g. several 100 ms) allows assuming that the acceleration remains the same for the next time interval. Thus, the beams may already be steered towards the predicted position and only updated in case of a mismatch (when the new motion information is fed back and contradicts the assumption).
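
A minimal sketch of this prediction step, under the stated constant-acceleration assumption (the function names and the mismatch tolerance are illustrative, not part of the specification):

```python
# Sketch: extrapolate the STA position between motion-information updates by assuming
# the last reported acceleration stays constant, and re-steer only on a mismatch.
def predict_position(pos, vel, accel, dt):
    """Constant-acceleration extrapolation over dt seconds (per axis)."""
    new_pos = tuple(p + v * dt + 0.5 * a * dt * dt for p, v, a in zip(pos, vel, accel))
    new_vel = tuple(v + a * dt for v, a in zip(vel, accel))
    return new_pos, new_vel

def on_motion_update(predicted_pos, reported_pos, tolerance=0.2):
    """Keep the steered beam if the report matches the prediction, else re-steer."""
    error = max(abs(p - r) for p, r in zip(predicted_pos, reported_pos))
    return "keep current beam" if error < tolerance else "re-steer / trigger refinement"

# Example usage with illustrative numbers
pos, vel = predict_position((0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (1.0, 0.0, 0.0), 0.1)
print(on_motion_update(pos, (0.02, 0.0, 0.0)))
```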

[0083] Regarding beamforming, further options exist. In an embodiment, a beamforming or beam tracking protocol can be triggered by the motion information, if this motion information is transmitted from a station management entity (SME) (of the first communication device 1) to the MAC (of the first communication device 1). Further, beamforming parameters such as start, number of sectors, sector list, number of angles and angle list to be tested can be negotiated between the management entities of the two communication devices before a transmission is started. The first communication device 1 may request the amount of receive training that it requires from the second communication device 2 (i.e. the training units in which the first communication device 1 will change its beam) based on the indication inside the motion information (without requiring communication between the SMEs). Still further, the first communication device 1 may transmit the motion information together with training requests (the information as explained above) to the second communication device 2 to allow the second communication device 2 to decide which transmit beams to use during the training/tracking. The motion information may then be sent in the beam refinement packets; SME involvement is not required in this case. Even further, the first communication device 1 may transmit an estimate of its position (based on the IMU data) to the second communication device together with the training/tracking request to allow the second communication device to decide which beams to use during training/tracking, which may be sent in regular beam training packets.
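
To illustrate the option of the first device requesting an amount of receive training based on its own motion indication, here is a hedged sketch; the MotionInfo/TrainingRequest structures and the yaw-rate thresholds are assumptions for illustration, not fields defined by the specification or by any WLAN standard.

```python
# Hypothetical message structures (names are assumptions): a tracking/training request
# in which the STA asks for a number of receive training units derived from its motion.
from dataclasses import dataclass

@dataclass
class MotionInfo:
    angular_rate: tuple   # (pitch, roll, yaw) in deg/s
    acceleration: tuple   # (ax, ay, az) in m/s^2

@dataclass
class TrainingRequest:
    motion: MotionInfo
    rx_trn_units: int     # receive training units in which the STA will change its beam

def build_training_request(motion: MotionInfo) -> TrainingRequest:
    # Faster rotation -> more receive beams to test -> more training units requested.
    yaw_rate = abs(motion.angular_rate[2])
    units = 2 if yaw_rate < 30 else 4 if yaw_rate < 120 else 8
    return TrainingRequest(motion=motion, rx_trn_units=units)

# Example: a quickly rotating headset requests more receive training
req = build_training_request(MotionInfo((0.0, 0.0, 150.0), (0.1, 0.0, 9.8)))
```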

[0084] The beam refinement packets may be BRP (beam refinement phase) frames, or announce frames, or a control trailer attached to the packet and carrying control information, or a control frame aggregated with the data frame. Alternatively, the motion information can be sent in packets over a different band than the one in which the beamformed communication takes place.

[0085] In this context it is noted that beam refinement packets are directional frames that are used to request and enable beam training. They contain the necessary parameters for the training, e.g. the number of sectors to be used, type of training to be performed (e.g., transmit / receive / mixed), type of feedback required and also the training units to allow the stations to measure the signal quality on different beam combinations. Announce frames are management frames sending control information. Control trailers are short control frames that are appended to packets and carry information to allow e.g., fast link adaptation.

[0086] It should be noted that, even though the preceding description mainly relates to VR, the above can be applied to AR or MR in analogy. More generally, the disclosure also relates to wireless communication with mobile devices, whose location can be exploited. As an example, a wireless backhaul may be established with unmanned aerial vehicles (UAVs), which bridge the distance between an end-user (using an STA) and an AP. The UAV may change its position to optimize coverage for the end-user. Also, balloons with WLAN equipment could create a wireless backhaul or a local hotspot. Another scenario includes only a mobile UAV, which is controlled via a wireless link using WLAN or other standards.

[0087] In a UAV scenario, as another exemplary scenario in which the present disclosure may be applied, the transmit and receive beams of the AP, the UAV and the end-user may be updated according to the motion information of the end-user and of the UAV. Providing the motion information at a lower layer may thus be advantageous for this scenario as well.

[0088] The motion information may make tracking or training unnecessary since the second communication device can adapt parameters perfectly based on motion information and can hence determine which transmit beams to use during beamforming and/or beam tracking and/or regular transmission.

[0089] In the following, exemplary implementation details of the motion-based beamforming and directional communication will be described.

[0090] Fig. 8 shows an embodiment of the disclosed solution, in which the relation between the PHY, the MAC and the corresponding interfaces, in particular the MAC sublayer management entity (MLME), is shown. The interfaces (bold lines), in particular the management entities for PHY and MAC (PLME and MLME, respectively), are controlled by the Station Management Entity (SME), which is a separate management plane allowing it to set, get or modify primitives.

[0091] Based on IMU information, RX beam measurement for specific sectors corresponding to the rotation/translation sensor information is triggered. Fig. 9 shows how the IMU data at a VR/drone STA are used to infer receive beamforming parameters to be used to maintain alignment to an AP. More specifically, the IMU data (together with the currently used beam information) can be mapped to angle or sector information, which will be sent as parameters by the management entity (specifically the MAC sublayer management entity (MLME)) to the MAC. The Mapping Unit can be seen as one embodiment of a parametrization unit for modifying one or more beam parameters. Upon receipt of this primitive, the MAC triggers the measurement of directional beams/beacons from an AP and instructs the PHY to use, for the measurement, RX beams corresponding to the RX sector information which is part of the primitive parameters. Additionally, time stamp information can be correlated with the RX sector information, based on both the IMU motion information available from the local sensors and information from an AP schedule. In this way, particular RX beams will be used at given time stamps, as instructed by the parameters of the primitive, for the measurement of the directional frames from the AP.
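
A hedged sketch of such a mapping unit follows; the sector geometry, the parameter names and the simple yaw-to-sector rule are illustrative assumptions, not the format defined by the specification.

```python
# Sketch of a "mapping unit": IMU yaw plus the currently used RX sector is turned into
# the RX-sector / timestamp parameters of an MLME primitive. All names are assumptions.
from dataclasses import dataclass

@dataclass
class RxMeasurementParams:
    rx_sectors: list        # sectors to use for measuring the AP's directional frames
    timestamps_us: list     # when each sector should be applied (from the AP schedule)

def map_imu_to_rx_sectors(current_sector, yaw_deg, num_sectors, schedule_us):
    sector_width = 360.0 / num_sectors
    offset = round(yaw_deg / sector_width)            # how far the rotation moved us
    candidates = sorted({(current_sector + d) % num_sectors
                         for d in range(offset - 1, offset + 2)})
    return RxMeasurementParams(rx_sectors=candidates,
                               timestamps_us=schedule_us[:len(candidates)])

# Example: 30 degrees of yaw with a 16-sector codebook, three scheduled measurement slots
params = map_imu_to_rx_sectors(5, 30.0, 16, [1000, 2000, 3000])
print(params)
```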

[0092] This primitive and the interworking of the management entity and the MAC of the VR/drone STA are depicted in Fig. 10. Measurement reports for the requested sectors will be reported back to the management entity, which will decide the necessary further actions, e.g. a beam change or a trigger for a beam realignment with the AP. Furthermore, if better receive beams have been found, these can be further used in a follow-up transmission to the AP.

[0093] When the AP has received the motion information (either from in-band or out-of-band messages), it can use this to schedule directional beam transmissions, which allows for training and data transmission with minimum training overhead. For example, upon knowledge of the motion information of the VR/drone STA, the AP may send directional frames, e.g. beacons, in the predicted directions (e.g. a subset of the directions that would be used without that knowledge), to allow the recipient to constantly train its reception beams. The parameters of the directional frame transmission, e.g. the transmit directions, the frequency with which this transmission occurs, and the type of information contained, are adapted based on the motion information, such that the VR/drone STA can perform tracking and, when necessary, receive updated data (based on the new position) without occupying the medium with too much control information. The transmit sectors in which directional beams should be sent can be inferred from the motion information. Based on the reported speed/acceleration parameters of the VR/drone STA as well as the local schedule information at the AP (when next transmissions are possible), a list of TX sectors/antennas/AWVs is created together with timestamps at which each of them should be tested. Based on the speed as well as the uncertainty of measurement, the number of slots for each sector, in which a directional frame is repeated to ensure the reception/receive beam training at the VR/drone STA, can also be determined. These parameters are sent as primitives by the management entity of an AP to the MAC, which will then construct directional frames and instruct the PHY to transmit with the required beams at the required timestamps. This primitive is referred to as MLME-VRBTrkReq in Fig. 10.
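
The AP-side counterpart could be sketched as follows; the primitive name MLME-VRBTrkReq comes from the text above, but the parameter structure and the numeric rules for widening the sector list and choosing repetition slots are illustrative assumptions only.

```python
# Illustrative AP-side parameter construction: reported speed and measurement uncertainty
# are turned into a list of TX sectors with timestamps and per-sector repetitions.
from dataclasses import dataclass

@dataclass
class TxTrackingParams:
    tx_sectors: list        # sectors in which directional frames are sent
    timestamps_us: list     # when each sector is tested, taken from the local schedule
    slots_per_sector: int   # repetitions so the STA can train its receive beams

def build_vrb_trk_req(predicted_sector, speed_mps, uncertainty, schedule_us, num_sectors=16):
    # Widen the tested sector range with higher speed and higher measurement uncertainty.
    spread = 1 + int(speed_mps // 2) + int(uncertainty > 0.5)
    sectors = [(predicted_sector + d) % num_sectors for d in range(-spread, spread + 1)]
    slots = 2 if uncertainty > 0.5 else 1
    return TxTrackingParams(tx_sectors=sectors,
                            timestamps_us=schedule_us[:len(sectors)],
                            slots_per_sector=slots)

# Example: STA predicted around sector 7, moving at 3 m/s with high uncertainty
print(build_vrb_trk_req(7, 3.0, 0.8, [1000, 2000, 3000, 4000, 5000, 6000, 7000]))
```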

[0094] In Fig. 11, the case of in-band transmission of motion information is shown. In this case, the management entity of the VR STA requests the MAC to send directional frames (DirFrame), e.g. a BRP frame or an announce frame containing this information, possibly together with training information or requests. Alternatively, the primitive may instruct the MAC to aggregate a control frame carrying the motion information with data frames and send these together in a PPDU with a tracking request and corresponding training units attached. If beamforming training has occurred before, the motion information can be sent together with beamforming feedback information or as part of a control trailer, which is attached to the data packet to give an indication of the future beams to use. This information is forwarded, upon receipt by the MAC of the AP, to its management entity.
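
Purely as an illustration of what an in-band motion payload could look like when aggregated with data or appended as a control trailer, here is a hypothetical fixed-point byte layout; the field order, widths and scale factors are assumptions, and the actual format is the one defined in Figs. 3, 5A and 5B.

```python
# Hypothetical byte layout for an in-band motion payload: three 16-bit accelerations
# and three 16-bit angular rates in fixed-point units. Not the format of the spec.
import struct

def pack_motion_payload(accel, gyro, accel_scale=100, gyro_scale=10):
    """accel in m/s^2, gyro (pitch, roll, yaw) in deg/s -> bytes for a control frame."""
    fields = [int(a * accel_scale) for a in accel] + [int(g * gyro_scale) for g in gyro]
    return struct.pack("<6h", *fields)          # little-endian, 6 signed 16-bit values

def unpack_motion_payload(payload, accel_scale=100, gyro_scale=10):
    raw = struct.unpack("<6h", payload)
    return ([v / accel_scale for v in raw[:3]], [v / gyro_scale for v in raw[3:]])

blob = pack_motion_payload((0.5, -0.1, 9.8), (1.0, 0.0, 45.0))
print(unpack_motion_payload(blob))
```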

[0095] Based on the motion information received by one of the methods described above, the AP will decide upon the action to take, e.g. request a beam track, a beam change or a reliable transmission with PPDU repetition over several directions, and instruct the MAC accordingly. The beam directions to be used are decided by the management entity based on the motion information, i.e. the motion information is transformed into sectors in which the beamformed transmission or training should be performed at particular times. The directions and times are then forwarded as parameters to the MAC to create the corresponding frames and instruct the PHY to start the transmission according to the given parameters.

[0096] Further, the SME might provide other entities that are not part of the communication unit included in a second communication device with direct access to the received motion information, allowing e.g. for adaptation of image and/or rendering data as a result of the VR device's/drone's motion/movement or change of position.

[0097] The MLME-VRTrkResponse primitive from the management entity instructs the MAC to create frames informing the VR/drone STA of the employed transmit beams. Additionally, the MAC frames that are created upon receipt of the primitive indicate the time stamps at which a response from the STA can be received at the PCP/AP and with which receive beams. The response from the VR/drone STA should contain updated motion information and directional measurements for one or more transmit beams, as indicated in the request.

[0098] Fig. 12 shows a managed network, in which either a Wireless Network Controller (not shown) or one of the APs is responsible for mobility management. Within a managed network, information regarding transmit and receive periods of multiple APs can be contained within the frames issued by the MAC in response to MLME-VRTrkResponse. Which APs are participating can be decided, based on the motion information, by a master AP or by a controller. In this case the VR/drone STA may be informed, e.g., at which time a beacon transmission from a different AP is expected, such that it can train beams with this AP, and at which time it may send a response to it, e.g. an association request. This is shown in Fig. 12.

[0099] If a VR/drone STA has detected its own motion (InMotion flag), or if receive beams have been changed after a directional measurement, the parameters for the subsequent transmission to the AP should be adapted accordingly, to allow the AP to realign its beams. This is shown in Fig. 13. Thus, in this case, data packets can be sent with a repetition of training units to allow the AP to realign its beam. Alternatively, a training packet, sent in several directions indicated by the mapping unit (mapping IMU data to TX beams), can precede the data transmission.
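
A small, assumption-laden sketch of this decision (the flag and field names are illustrative, not taken from the specification): when motion has been detected or the receive beam has changed, training units are appended to the next uplink transmission so the AP can realign its beam.

```python
# Sketch: decide whether to append training units to the next uplink PPDU.
def build_uplink_ppdu(data, in_motion, rx_beam_changed, trn_units_when_moving=4):
    trn = trn_units_when_moving if (in_motion or rx_beam_changed) else 0
    return {"payload": data, "appended_trn_units": trn}

# Example: STA in motion -> PPDU carries training units for AP-side realignment
print(build_uplink_ppdu(b"video-feedback", in_motion=True, rx_beam_changed=False))
```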

[00100] Figs. 14 and 15 depict possible flows regarding the transmission and use of motion information from a VR/drone STA to an AP, in order to achieve fast (position) tracking as well as fast adaptation of the AV (audio/video) content according to the motion.

[00101] According to the embodiment shown in Fig. 14, based on IMU data, the management entity of a VR/drone STA instructs the MAC to form frames containing requests towards the AP to perform motion tracking. The frames sent by the MAC contain motion information, which has been adapted from the IMU data. Upon receipt of the information, the management entity of the AP sends an acknowledgement containing timing information indicating when a possible transmission from the VR STA may take place. Alternatively, a periodic schedule can be announced in these ACK (acknowledgement) frames. After a successful setup, the VR STA sends, at the timestamps indicated by the AP, updated motion information, possibly together with other data/control messages.
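
A toy sequence mirroring this Fig. 14 flow (the message types, field names and schedule values are illustrative assumptions, not frame formats from the specification):

```python
# Toy sequence: the STA requests motion tracking, the AP acknowledges with a schedule,
# and the STA then reports updated motion information at the indicated timestamps.
def motion_tracking_setup(sta_motion, ap_schedule_us):
    request = {"type": "MotionTrackingRequest", "motion": sta_motion}
    ack = {"type": "ACK", "schedule_us": ap_schedule_us}     # AP answer with timing info
    updates = [{"type": "MotionUpdate", "t_us": t, "motion": sta_motion}
               for t in ack["schedule_us"]]
    return request, ack, updates

req, ack, updates = motion_tracking_setup({"yaw_rate": 20.0}, [10_000, 20_000, 30_000])
print(len(updates), "scheduled motion updates")
```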

[00102] According to the embodiment shown in Fig. 15, the AP sends a VR Motion Tracking Request to the VR STA, in which information about motion or the level of motion is requested. Upon receipt of the Configuration Frame, the MAC of the VR STA forwards it to the management entity. In response, the management entity of the VR STA maps IMU data to motion information and instructs the MAC to embed this information in a response frame (e.g. the motion subelement shown in Fig. 5A or 5B) to be sent back towards the AP. The AP, after parsing the motion information from the VR STA, decides upon a possible periodic scheduling or a periodic allocation, in which it can transmit data frames containing updated content to the VR STA and in which it can receive updated motion information frames back from the VR STA.

[00103] In the foregoing, various embodiments have been described for applying the disclosure in various scenarios and applications. The described mobile STA may hereby e.g. be an AR/VR headset (also called HMD) or a drone (also called UAV) or any other arbitrary mobile device, such as a robot or robot arm that is movable in several degrees of freedom (e.g. 6 DOF).

[00104] Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present disclosure. As will be understood by those skilled in the art, the present disclosure may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present disclosure is intended to be illustrative, but not limiting of the scope of the disclosure, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.

[00105] In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

[00106] In so far as embodiments of the disclosure have been described as being implemented, at least in part, by software-controlled data processing apparatus, it will be appreciated that a non-transitory machine-readable medium carrying such software, such as an optical disk, a magnetic disk, semiconductor memory or the like, is also considered to represent an embodiment of the present disclosure. Further, such software may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

[00107] The elements of the disclosed devices, apparatus and systems may be implemented by corresponding hardware and/or software elements, for instance appropriate circuits or circuitry. A circuit is a structural assemblage of electronic components including conventional circuit elements, integrated circuits including application specific integrated circuits, standard integrated circuits, application specific standard products, and field programmable gate arrays. Further, a circuit includes central processing units, graphics processing units, and microprocessors which are programmed or configured according to software code. A circuit does not include pure software, although a circuit includes the above-described hardware executing software. A circuit or circuitry may be implemented by a single device or unit or multiple devices or units, or chipset(s), or processor(s).

[00108] It follows a list of further embodiments of the disclosed subject matter:

1. A first communication device comprising a sensing circuitry configured to sense motion of the first communication device and to generate motion information representing the sensed motion, the motion information comprising rotation information indicating rotation and/or orientation of the first communication device, and a communication circuitry configured to transmit the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer.

2. The first communication device as defined in any preceding embodiment, wherein the hierarchical layer model is an Open Systems Interconnection, OSI, model.

3. The first communication device as defined in any preceding embodiment, wherein the communication circuitry is configured to transmit the motion information within a physical layer or a MAC layer to the second communication device.

4. The first communication device as defined in any preceding embodiment, wherein the sensing circuitry is configured to sense one or more of rotational speed, angular velocity and change of orientation of the first communication device and to generate rotation information indicating the sensed rotational speed and/or angular velocity and/or change of orientation.

5. The first communication device as defined in any preceding embodiment, wherein the sensing circuitry is configured to additionally sense one or more of translational speed, translational acceleration and change of position of the first communication device and to generate translational information indicating the sensed translational speed and/or translational acceleration and/or change of position as part of the motion information.

6. The first communication device as defined in any preceding embodiment, wherein the communication circuitry is configured to transmit the motion information in a motion subelement of a WLAN protocol and/or to transmit the motion information alongside payload data.

7. The first communication device as defined in any preceding embodiment, wherein the motion information includes one or more of:

- linear scale defining acceleration full-scale selection;

- rotation scale defining gyroscope full-scale selection;

- linear acceleration X-axis;

- linear acceleration Y-axis;

- linear acceleration Z-axis;

- angular rate pitch;

- angular rate roll;

- angular rate yaw;

- left and right eye horizontal position;

- left and right eye vertical position; or

- reserved bits for eye tracking, such as pupil sizes, blinks, saccades.

8. The first communication device as defined in any preceding embodiment, wherein the communication circuitry is configured to receive multimedia data from the second communication device in response to the transmission of the motion information and wherein the first communication device further comprises an output interface configured to output a signal for perception by a user based on the received multimedia data.

9. The first communication device as defined in any preceding embodiment, wherein the first communication device is a head-mounted display device, an unmanned vehicle, or an unmanned aerial vehicle.

10. The first communication device as defined in any preceding embodiment, wherein the first communication device is a wireless backhaul.

11. The first communication device as defined in any preceding embodiment, further comprising an interface configured to enable accessing motion information on the layer of the hierarchical layer model lower than the transport layer.

12. A second communication device comprising a communication circuitry configured to receive motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

13. The second communication device as defined in embodiment 12, further comprising a multimedia generation circuitry configured to generate multimedia data based on the motion information, wherein the communication circuitry is configured to transmit the generated multimedia data to the first communication device.

14. The second communication device as defined in embodiment 12 or 13, further comprising a rendering circuitry configured to render video data based on the motion information, wherein the communication circuitry is configured to transmit the rendered video data to the first communication device for display.

15. The second communication device as defined in any one of embodiments 12 to 14, wherein the second communication device is a virtual, augmented, or mixed reality device.

16. The second communication device as defined in any one of embodiments 12 to 15, further comprising an interface configured to enable accessing motion information on the layer of the hierarchical layer model lower than the transport layer.

17. The second communication device as defined in any one of embodiments 12 to 16, wherein the communication circuitry is configured to inform the first communication device of a schedule or a periodic allocation informing the first communication device when it can transmit updated motion information or motion information aggregated with data.

18. A first communication method comprising sensing motion of the first communication device, generating motion information representing the sensed motion, the motion infor mation comprising rotation information indicating rotation and/or orientation of the first communication device, and transmitting the motion information to a second communication device within a layer of a hierarchical layer model lower than a transport layer.

19. A second communication method comprising receiving motion information from a first communication device within a layer of a hierarchical layer model lower than a transport layer, the motion information representing motion of the first communication device and comprising rotation information indicating rotation and/or orientation of the first communication device.

20. The first or second communication device or method as defined in any one of the preceding embodiments, wherein the communication relates to wireless communication, in particular mmWave communication or communication in accordance with an IEEE 802.11 standard.