

Title:
METHOD AND SYSTEM FOR PROCESSING DISPLAY DATA
Document Type and Number:
WIPO Patent Application WO/2019/092392
Kind Code:
A1
Abstract:
There is a method of processing display data for a system, comprising: determining a first colour signature for a first frame, or a portion of a first frame, of generated display data; storing the first colour signature in a memory; determining a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame or portion of the second frame is for consecutively displaying to the user following the first frame, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame; comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and comparing the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the method further comprises identifying the second frame, or portion of the second frame, as a candidate for dropping.

Inventors:
JOVELURO, Prince (25 Short Road, Stretham CB6 3LS, GB)
COOPER, Patrick David (6 Shirley Close, Milton, Cambridge CB24 6BG, GB)
Application Number:
GB2018/052966
Publication Date:
May 16, 2019
Filing Date:
October 15, 2018
Assignee:
DISPLAYLINK (UK) LIMITED (140 Cambridge Science Park, Milton Road, Cambridge CB4 0GF, GB)
International Classes:
H04N19/132; H04N19/137; H04N19/172; H04N19/18; H04N19/507; H04N19/587
Foreign References:
US20130266073A12013-10-10
Other References:
ZHU WEIJIA ET AL: "Hash-Based Block Matching for Screen Content Coding", IEEE TRANSACTIONS ON MULTIMEDIA, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 17, no. 7, 1 July 2015 (2015-07-01), pages 935 - 944, XP011584980, ISSN: 1520-9210, [retrieved on 20150615], DOI: 10.1109/TMM.2015.2428171
JUNG WOO LEE ET AL: "Hash aided Motion Estimation: Various Approaches", 11 March 2005 (2005-03-11), XP055300213, Retrieved from the Internet
Attorney, Agent or Firm:
HIRSZ, Christopher (Mathys & Squire LLP, The Shard, 32 London Bridge Street, London SE1 9SG, GB)
Claims:
Claims

1. A method of processing display data for a system, the system comprising a host device for generating display data, and a display device for displaying the generated display data to a user, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the method comprising:

determining a first colour signature for a first frame, or a portion of a first frame, of generated display data;

storing the first colour signature in a memory;

determining a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame or portion of the second frame is for consecutively displaying to the user following the first frame, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;

comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and

comparing the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the method further comprises identifying the second frame, or portion of the second frame, as a candidate for dropping.

2. A method as claimed in claim 1, wherein determining a colour signature for a frame, or portion of a frame, comprises compressing the frame, or portion of the frame, respectively.

3. A method as claimed in either of claims 1 or 2, wherein determining a colour signature for a frame, or portion of a frame, comprises determining an average colour value for the frame, or portion of the frame, respectively.

4. A method as claimed in any preceding claim, wherein determining a colour signature for a frame, or portion of a frame, comprises applying a transform to the frame, or portion of the frame, respectively, and wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform.

5. A method as claimed in any preceding claim, wherein the colour signature for a frame, or portion of a frame, corresponds to any one or more of a red, green, blue, luma and chroma signature.

6. A method as claimed in any preceding claim, wherein a frame, or portion of a frame, comprises at least 1000 pixels.

7. A method as claimed in any preceding claim, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises dropping the second frame, or portion of the second frame.

8. A method as claimed in any of claims 1 to 6, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises determining whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.

9. A method as claimed in any preceding claim, wherein the memory is configured to store a predetermined number of colour signatures, and wherein the method further comprises tagging the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.

10. A method as claimed in any preceding claim, wherein a counter is increased when a frame, or portion of a frame, is dropped, and wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises comparing the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the method further comprises dropping the second frame, or portion of the second frame.

11. A method as claimed in any preceding claim, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method further comprises determining a number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and comparing this number to a third threshold value, wherein if this number does not exceed this threshold value, the method comprises dropping the second frame, or portion of the second frame, and wherein if this number of previous frames, or portions of frames, does exceed this third threshold value, the method comprises sending the second frame, or portion of the second frame, to the display device for displaying to the user.

12. A method as claimed in claim 11, wherein the third threshold value is any one of 5, 6, 7, 8, 9 or 10.

13. A method as claimed in any preceding claim, wherein the method further comprises detecting movement of the display device, and if movement of the display device is detected, the first threshold is reduced to provide for a higher similarity requirement.

14. A method as claimed in any preceding claim, wherein the generated display data is video data and a frame of the video data corresponds to image data.


15. A method as claimed in any preceding claim, wherein the frames of display data are generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.


16. A method as claimed in any preceding claim, wherein the system is a virtual reality system.

17. A method as claimed in any preceding claim, wherein the host device and the display device are wirelessly connected.

18. A method as claimed in any preceding claim, wherein the display device is a head mounted display.

19. A method as claimed in any preceding claim, wherein the host device and the display device are contained within a housing.

20. A method as claimed in claim 19, wherein the housing is the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.

21. A system for processing display data, the system comprising:

a host device for generating and processing display data, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the host device comprising a processor and a memory; and

a display device connected to the host device configured to receive generated display data from the host device and to display the generated display data to a user; wherein the processor is configured to:

determine a first colour signature for a first frame, or portion of a first frame, of generated display data and to store the first colour signature in the memory;

determine a second colour signature for a second frame, or portion of a second frame, of generated display data, wherein the second frame, or portion of the second frame, is for consecutively displaying to the user following the first frame of generated display data, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;

compare the second colour signature to the first colour signature to determine a difference in the colour signatures; and

compare the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the processor is further configured to identify the second frame, or portion of the second frame, as a candidate for dropping.

22. A system as claimed in claim 21, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to compress the frame or portion of the frame, respectively.

23. A system as claimed in claim 21 or 22, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to determine an average colour value for the frame, or portion of the frame, respectively.

24. A system as claimed in any of claims 21 to 23, wherein to determine a colour signature for a frame or portion of a frame, the processor is configured to apply a transform to the frame, or portion of the frame, respectively, and wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform.

25. A system as claimed in any of claims 21 to 24, wherein the colour signature for a frame, or portion of a frame, corresponds to any one or more of a red, green, blue, luma and chroma signature.

26. A system as claimed in any of claims 21 to 25, wherein a frame, or portion of a frame, comprises at least 1000 pixels.

27. A system as claimed in any of claims 21 to 26, wherein on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to drop the second frame, or portion of the second frame.


28. A system as claimed in any of claims 21 to 26, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to determine whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.

29. A system as claimed in any of claims 21 to 28, wherein the memory is configured to store a predetermined number of colour signatures, and wherein the processor is configured to tag the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.

30. A system as claimed in any of claims 21 to 29, wherein the system includes a counter, and wherein the processor is configured to increase the counter when a frame, or portion of a frame, is dropped, and wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is further configured to compare the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the processor is further configured to drop the second frame, or portion of the second frame.

31. A system as claimed in any of claims 21 to 30, wherein, on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor is configured to determine the number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and to compare this number to a third threshold value, wherein if this number does not exceed this threshold value, the processor is configured to drop the second frame, or portion of the second frame, and wherein if this number does exceed this third threshold value, the processor is configured to send the second frame, or portion of the second frame, to the display device for displaying to the user.

32. A system as claimed in claim 31, wherein the third threshold value is any one of 5, 6, 7, 8, 9 or 10.


33. A system as claimed in any of claims 21 to 32, wherein the system comprises means for detecting movement of the display device, wherein the means are in communication with the processor, and wherein when movement of the display device is detected, the processor is configured to reduce the first threshold to provide for a higher similarity requirement.

34. A system as claimed in any of claims 21 to 33, wherein the generated display data is video data and a frame of the video data corresponds to image data.

35. A system as claimed in any of claims 21 to 34, wherein the frames of display data are generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.

36. A system as claimed in any of claims 21 to 35, wherein the system is a virtual reality system.

37. A system as claimed in any of claims 21 to 36, wherein the host device and the display device are wirelessly connected.

38. A system as claimed in any of claims 21 to 37, wherein the display device is a head mounted display.

39. A system as claimed in any of claims 21 to 38, wherein the host device and the display device are contained within a housing.

40. A system as claimed in claim 39, wherein the housing is the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.

Description:
Method and system for processing display data

The present invention relates to a method and system for processing display data, and in particular to a method and system for processing display data for use in virtual reality systems.

Virtual reality systems typically comprise a host computer for generating display data, and a display device, such as a virtual reality headset, for displaying the display data to a user. The host computer may compress and encode the display data for sending to the display device. For such systems to be effective, and able to convincingly simulate a real-life situation, the resolution of the display data must be high and the refresh and update rates must be fast. For this reason, virtual reality is currently one of the most bandwidth-intensive technologies, typically requiring around 90 frames per second to be sent between the host computer and display device. Attempts have been made to reduce the bandwidth requirements, but typically only by compressing and/or encoding the display data at the host computer before the display data is sent to the display device. Any way in which the bandwidth requirements could be reduced would clearly be beneficial in allowing virtual reality technology to develop further.

The applicant has identified a method of processing display data rapidly, and so a method suitable for such virtual reality systems, that identifies when frames are similar enough to be dropped, thereby reducing bandwidth requirements without compromising the quality of the user experience.

According to an aspect of the present invention, there is provided a method of processing display data for a system, the system comprising a host device for generating display data, and a display device for displaying the generated display data to a user, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the method comprising:

determining a first colour signature for a first frame, or a portion of a first frame, of generated display data;

storing the first colour signature in a memory;

determining a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame or portion of the second frame is for consecutively displaying to the user following the first frame, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;

comparing the second colour signature to the first colour signature to determine a difference in the colour signatures; and

comparing the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the method further comprises identifying the second frame, or portion of the second frame, as a candidate for dropping.

The applicant has found that when a colour signature for the second frame (or portion) is determined to be the same as, or within a predetermined range of, the colour signature for the first frame (or portion), it can be assumed with reasonable probability that the second frame (or portion) is unchanged from the first frame (or portion), at least for the purposes of perception by the human eye, i.e. a user would see no change between the first frame and second frame if they were displayed consecutively to the user. This may allow for the second frame (or portion) to be dropped, and so not sent to the display device for displaying to a user. Processing data rapidly and dropping frames, or portions of frames, in this way can greatly reduce the bandwidth requirements of the system, which is clearly beneficial. This method may be particularly efficient since the frames typically need to be compressed or processed before being sent to the display device in any case, and this method can take advantage of values that would be generated during that compression/processing. This method therefore requires very little additional processing, but has the potential to vastly reduce the bandwidth requirements of the system.
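By way of illustration only, the comparison described above can be sketched in a few lines of code. This is not the patented implementation; the function names, the use of a mean pixel value as the signature, and the threshold value are assumptions made for the example.

```python
# Illustrative sketch of the drop-candidate decision: compare the new
# frame's colour signature against the stored one and flag the frame as
# a drop candidate if the difference is below a first threshold.
# colour_signature, FIRST_THRESHOLD and the flat-list frame format are
# hypothetical choices, not taken from the patent.

def colour_signature(frame):
    """Hypothetical signature: the mean pixel value of the frame."""
    return sum(frame) / len(frame)

FIRST_THRESHOLD = 2.0  # assumed tuning value

def is_drop_candidate(prev_signature, frame):
    """Return (candidate?, new signature) for the incoming frame."""
    new_signature = colour_signature(frame)
    difference = abs(new_signature - prev_signature)
    # Below the first threshold: the frames are perceptually the same,
    # so the new frame is a candidate for dropping.
    return difference < FIRST_THRESHOLD, new_signature
```

A caller would store the returned signature in memory and pass it back in when the next frame (or the corresponding portion of the next frame) arrives.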

Each frame or portion of a frame will typically have thousands or millions of pixels and will always have at least 1000 pixels. Corresponding portions of frames may be located at the same position, or location, within a frame.

Determining a colour signature for a frame, or portion of a frame, may comprise compressing the frame, or portion of the frame, respectively. Determining a colour signature for a frame, or portion of a frame, may comprise determining an average colour value for the frame, or portion of the frame, respectively. Determining a colour signature for a frame, or portion of a frame, may comprise applying a transform to the frame, or portion of the frame, respectively, wherein the colour signature corresponds to a generated DC value for the frame, or portion of the frame, respectively, following the transform. The transform may comprise Haar encoding. The transform may comprise a discrete cosine transform (DCT). The colour signature for a frame, or portion of a frame, may correspond to any one or more of a red, green, blue, luma and chroma signature. Use of certain colour signatures may be more suitable for certain applications.
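As a rough sketch, two of the signature options above can be illustrated as follows: (a) a plain average colour value, and (b) the DC value left after repeated Haar averaging. The function names and the one-dimensional, averaging form of the Haar pass are illustrative assumptions; a real encoder would work on 2-D tiles and also keep the difference coefficients.

```python
# Two hypothetical colour-signature computations, consistent with the
# options described in the text but not taken from the patent.

def average_signature(pixels):
    """Option (a): mean colour value of the frame or portion."""
    return sum(pixels) / len(pixels)

def haar_dc(pixels):
    """Option (b): DC term of a 1-D Haar-style decomposition.

    Each pass replaces adjacent pixel pairs with their average; the
    single value remaining at the end is the DC (overall average).
    Assumes the input length is a power of two.
    """
    values = list(pixels)
    while len(values) > 1:
        values = [(values[i] + values[i + 1]) / 2
                  for i in range(0, len(values), 2)]
    return values[0]
```

For a power-of-two block the Haar DC equals the plain average, which is why a compression pipeline that already produces DC values can supply the signature essentially for free.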

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise dropping the second frame, or portion of the second frame. Given that it has been identified that the frames are unchanged for the purpose of perception by the human eye, it may be more efficient to just drop the frame or portion at this stage. This may be beneficial where it would be particularly helpful to reduce the bandwidth.

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise determining whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device. This second check may be performed to prevent longer term divergence, and error propagation in the system.

The memory may be configured to store a predetermined number of colour signatures, and wherein the method may further comprise tagging the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.

The memory may be configured to store just the previous colour signature.

A counter may be increased when a frame, or portion of a frame, is dropped, and on identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise comparing the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the method further comprises dropping the second frame, or portion of the second frame.

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the method may further comprise determining a number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and comparing this number to a third threshold value, wherein if this number does not exceed this threshold value, the method comprises dropping the second frame, or portion of the second frame, and wherein if this number of previous frames, or portions of frames, does exceed this third threshold value, the method comprises sending the second frame, or portion of the second frame, to the display device for displaying to the user. The method thereby reduces the chance of any longer-term divergence, and error propagation in the system. When it is determined that a certain number of consecutive frames have been dropped by the system, and so have not been sent to the display device for displaying to a user, the method ensures that the current frame is then sent to the display device for display to the user.

The third threshold value may be any one of 5, 6, 7, 8, 9 or 10. The third threshold may also be below 5 or above 10. It may be possible to alter the third threshold depending on the application. Particular thresholds may be more suited to particular applications.
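The third-threshold check above amounts to a simple cap on consecutive drops. The sketch below is illustrative only; `decide`, the threshold value and the returned action strings are assumptions, not part of the patented method.

```python
# Hypothetical sketch of the third-threshold rule: drop a candidate
# frame only while the run of consecutively dropped frames has not
# exceeded the threshold; otherwise send it and reset the run.

THIRD_THRESHOLD = 8  # an assumed value in the 5..10 range given above

def decide(candidate, consecutive_drops):
    """Return ('drop' | 'send', updated consecutive-drop count)."""
    if candidate and consecutive_drops <= THIRD_THRESHOLD:
        # Similar enough, and not too many drops in a row: drop it.
        return "drop", consecutive_drops + 1
    # Either the frame changed, or the drop run is too long: send the
    # frame to the display device and reset the run length.
    return "send", 0
```

Resetting the count on every send is what guarantees that, however similar the frames remain, a real frame reaches the display device at least once per threshold-length run.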

The method may further comprise detecting movement of the display device, and if movement of the display device is detected, the first threshold may be reduced to provide for a higher similarity requirement. This may help reduce the chance of error propagation in the system. The first threshold may be reduced further where detected movement is faster.
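A minimal sketch of the movement-dependent threshold might look as follows. The base value, the halving factor and the speed-dependent scaling are illustrative assumptions; the text only requires that the threshold be reduced during movement, and reduced further for faster movement.

```python
# Hypothetical adjustment of the first threshold based on display-device
# movement: a lower threshold means a stricter similarity requirement,
# so fewer frames qualify for dropping while the device is moving.

BASE_THRESHOLD = 2.0  # assumed stationary first threshold

def adjusted_threshold(moving, speed=0.0):
    """Return the first threshold, reduced when the device moves."""
    if not moving:
        return BASE_THRESHOLD
    # Halve the threshold on any movement, and shrink it further as
    # the detected movement speed increases.
    return BASE_THRESHOLD * 0.5 / (1.0 + speed)
```

An accelerometer (as suggested later in the text) could supply both the movement flag and the speed estimate.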

The generated display data may be video data and a frame of the video data may correspond to image data.

The frames of display data may be generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.

The system may be a virtual reality system. This method may be particularly advantageous in such systems.

The host device and the display device may be wirelessly connected. A method that reduces bandwidth requirements may make wireless display devices easier to use, and provide greater scope for further developing such systems. The wireless connection may use any one or more of Wi-Fi, radio, the Internet or any other suitable technology.

The display device may be a head mounted display, and/or may comprise augmented reality glasses.

The host device and the display device may be contained within a housing. The housing may be the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.

The memory may be a buffer. This may be particularly suitable. Reduced storage requirements may free up the processing capacity of the system for other uses. Any other suitable memory or memory unit could be used.

According to an aspect of the present invention, there is provided a system for processing display data, the system comprising:

a host device for generating and processing display data, wherein the generated display data comprises frames of display data for displaying consecutively to a user, the host device comprising a processor and a memory; and

a display device connected to the host device configured to receive generated display data from the host device and to display the generated display data to a user; wherein the processor is configured to:

determine a first colour signature for a first frame, or a portion of a first frame, of generated display data and to store the first colour signature in the memory;

determine a second colour signature for a second frame, or a portion of a second frame, of generated display data, wherein the second frame, or portion of the second frame, is for consecutively displaying to the user following the first frame of generated display data, or portion of the first frame respectively, and wherein the portion of the second frame corresponds to the portion of the first frame;

compare the second colour signature to the first colour signature to determine a difference in the colour signatures; and

compare the difference in the colour signatures to a first threshold, wherein if the difference in the colour signatures is below the first threshold, the processor is further configured to identify the second frame, or portion of the second frame, as a candidate for dropping.

The applicant has found that when a colour signature for the second frame (or portion) is determined to be the same as, or within a predetermined range of, the colour signature for the first frame (or portion), it can be assumed with reasonable probability that the second frame (or portion) is unchanged from the first frame (or portion), at least for the purposes of perception by the human eye, i.e. a user would see no change between the first frame and second frame if they were displayed consecutively to the user. This may allow for the second frame (or portion) to be dropped, and so not sent to the display device for displaying to a user. Processing data rapidly and dropping frames, or portions of frames, in this way can greatly reduce the bandwidth requirements of the system, which is clearly beneficial. This system may be particularly efficient since the frames typically need to be compressed or processed before being sent to the display device in any case, and this system can take advantage of values that would be generated during that compression/processing. This system therefore requires very little additional processing, but has the potential to vastly reduce the bandwidth requirements of the system.

To determine a colour signature for a frame or portion of a frame, the processor may be configured to compress the frame or portion of the frame, respectively.

To determine a colour signature for a frame or portion of a frame, the processor may be configured to determine an average colour value for the frame, or portion of the frame, respectively.

To determine a colour signature for a frame or portion of a frame, the processor may be configured to apply a transform to the frame, or portion of the frame, respectively, and the colour signature may correspond to a generated DC value for the frame, or portion of the frame, respectively, following the transform. The transform may comprise Haar encoding. The transform may comprise a discrete cosine transformation (DCT).

The colour signature for a frame, or portion of a frame, may correspond to any one or more of a red, green, blue, luma and chroma signature.

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be further configured to drop the second frame, or portion of the second frame.

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be further configured to determine whether to drop the second frame, or portion of the second frame, or whether to instead send the second frame, or portion of the second frame, to the display device.

The memory may be configured to store a predetermined number of colour signatures, and the processor may be configured to tag the colour signatures stored in the memory to indicate whether the corresponding frames, or portions of frames, were dropped.

The memory may be configured to store just the previous colour signature.

The system may include a counter, and the processor may be configured to increase the counter when a frame, or portion of a frame, is dropped, and on identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be further configured to compare the counter value to a second threshold value to determine whether to drop the second frame, or portion of the second frame, wherein if the counter value exceeds the second threshold value, the processor is further configured to drop the second frame, or portion of the second frame.

On identifying the second frame, or portion of the second frame, as a candidate for dropping, the processor may be configured to determine the number of previous frames, or portions of frames, immediately preceding the second frame, or portion of the second frame, that were dropped, and to compare this number to a third threshold value, wherein if this number does not exceed this threshold value, the processor may be configured to drop the second frame, or portion of the second frame, and wherein if this number does exceed this third threshold value, the processor may be configured to send the second frame, or portion of the second frame, to the display device for displaying to the user. The third threshold value may be any one of 5, 6, 7, 8, 9 or 10. The third threshold may also be below 5 or above 10. It may be possible to alter the third threshold depending on the application.

The system may comprise means for detecting movement of the display device, wherein the means are in communication with the processor, and wherein when movement of the display device is detected, the processor is configured to reduce the first threshold to provide for a higher similarity requirement. The means may comprise an accelerometer or any other suitable device. The first threshold may be reduced further where detected movement is faster.

The generated display data may be video data and a frame of the video data may correspond to image data.

The frames of display data may be generated, processed and sent to the display device at a rate of approximately at least 50 frames per second, or at least 60 frames per second, or at least 90 frames per second, or at least 120 frames per second for displaying to a user.

The system may be a virtual reality system. The host device and the display device may be wirelessly connected. The display device may be a head mounted display, and/or may comprise augmented reality glasses.

The host device and the display device may be contained within a housing. The housing may be the casing for any one of a mobile phone, a PDA, a tablet or any other handheld portable device.

The memory may be a buffer, which may be particularly suitable: the reduced storage requirements may free up the processing capacity of the system for other uses. Any other suitable memory or memory unit could be used.

Any one or more features from one embodiment or aspect of the present invention as described herein may be incorporated into any other embodiment or aspect of the present invention, as appropriate and applicable.

Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 shows a block diagram overview of a system for processing display data;

Figure 2 shows a block diagram overview of a system for processing display data wherein the display device is a headset;

Figure 3 shows a block diagram overview of a system for processing display data, wherein the host device and display device are contained within a single casing, for example in a smartphone or other such mobile computing device;

Figure 4 shows a method for processing display data in accordance with an embodiment of the present invention;

Figure 5 shows an exemplary Haar encoding process.

Figure 1 shows a block diagram overview of a system according to the current art.

A host computer [11] is connected to a display control device [12], which is in turn connected to a display device [13]. The host [11] contains an application [14], which produces display data. The display data may be produced and sent for compression either as complete frames or as canvasses, which may, for example, be separate application windows. In either case, they are made up of tiles of pixels, where each tile is a geometrically-shaped collection of one or more pixels.

The display data is sent to a compression engine [15], which may comprise software running in a processor or an appropriate hardware engine. The compression engine [15] may perform an encoding of the data, to convert the data into a format that may then be further compressed, minimising data loss.

The compression engine [15] may then further compress the data and thereafter send the compressed data to an output engine [16]. The output engine [16] manages the connection with the display control device [12] and may, for example, include a socket for a cable to be plugged into for a wired connection or a radio transmitter for a wireless connection. In either case, it is connected to a corresponding input engine [17] on the display control device [12]. The input engine [17] is connected to a decompression engine [18]. When the input engine [17] receives compressed data, it sends the data to the decompression engine [18] or to a memory from which the decompression engine [18] can fetch it according to the operation of a decompression algorithm. In any case, the decompression engine [18] may decompress the data, if necessary, and perform a decoding operation. In the illustrated system, the decompressed data is then sent to a scaler [19]. In the case where the display data was produced and compressed as multiple canvasses, it may be composed into a frame at this point.

If scaling is necessary, it is preferable for it to be carried out on a display control device [12], as this minimises the volume of data to be transmitted from the host [11] to the display control device [12]; the scaler [19] operates to convert the received display data to the correct dimensions for display on the display device [13]. In some embodiments, the scaler may be omitted or may be implemented as part of the decompression engine. The data is then sent to an output engine [110] for transmission to the display device [13]. This may include, for example, converting the display data to a display-specific format such as VGA, HDMI, etc.

In one example, the display device is a virtual reality headset [21], as illustrated in Figure 2, connected to a host device [22], which may be a computing device, gaming station, etc. The virtual reality headset [21] incorporates two display panels [23], which may be embodied as a single panel split by optical elements. In use, one display is presented to each of a viewer's eyes. The host device [22] generates image data for display on these panels [23] and transmits the image data to the virtual reality headset [21].

In another example, the headset is a set of augmented reality glasses. As in the virtual reality headset [21] shown in Figure 2, there are two display panels, each associated with one of the user's eyes, but in this example the display panels are translucent.

The host device [22] may be a static computing device such as a computer, gaming console, etc., or may be a mobile computing device such as a smartphone or smartwatch. As previously described, it generates image data and transmits it to the augmented reality glasses or virtual reality headset [21] for display.

The display device may be connected to the host device [11, 22] or display control device [12], if one is present, by a wired or wireless connection. While a wired connection minimises latency in transmission of data from the host to the display, wireless connections give the user much greater freedom of movement within range of the wireless connection and are therefore preferable. A balance must be struck between high compression of data, in particular video data, which can be used to enable larger amounts of data (e.g. higher resolution video) to be transmitted between the host and display, and the latency that will be introduced by processing of the data.

Ideally, the end-to-end latency between sensing a user's head movement, generating the pixels in the next frame of the VR (virtual reality) scene and streaming the video should be kept below 20ms, preferably below 10ms, further preferably below 5ms.

The wireless link should be implemented as a high-bandwidth short-range wireless link, for example at least 1 Gbit/s, preferably at least 2 Gbit/s, preferably at least 3 Gbit/s. An "extremely high frequency" (EHF) radio connection, such as a 60 GHz radio connection, is suitable for providing such high-bandwidth connections over short-range links. Such a radio connection can implement the WiFi standard IEEE 802.11ad. The 71-76, 81-86 and 92-95 GHz bands may also be used in some implementations.

The wireless links described above can provide transmission between the host and the display of more than 50 frames per second, preferably more than 60 frames per second, further preferably more than 90 frames per second, or even as high as 120 frames per second.

Figure 3 shows a system which is similar in operation to the example shown in Figure 2. In this case, however, there is no separate host device [22]. The entire system is contained in a single casing [31], for example in a smartphone or other such mobile computing device. The device contains a processor [33], which generates display data for display on the integral display panel [32]. The mobile computing device may be mounted such that the screen is held in front of the user's eyes as if it were the screen of a virtual reality headset.

In accordance with an embodiment of the present invention, for example with reference to Figure 4, there is a system and method for processing display data. This system and method may comprise any or all of the features of the systems described above in relation to Figures 1-3. In this system, a computing device, or host, contains an application, which generates image, or display, data, for playing back to a user as video.

When such display data is generated by the host, it comprises frames of display data for consecutive viewing by the user, and each frame comprises a plurality of pixels, wherein each pixel may comprise values for the levels of red (R), green (G), and blue (B) therein. This is known as RGB. The pixel data can be processed as separate, or as a combination of, R, G, and B values, and/or the pixel data can be converted to luma (Y) and chroma (α, β) values, where luma indicates the luminance of the pixel and chroma indicates its colour. Luma may be calculated as follows: Y = k(R+G+B), where k is a constant to appropriately scale the Y value. The two chroma values may comprise parts of the original RGB value as follows: α = aR+bG+cB; and β = a'R+b'G+c'B, where a, b, c and a', b' and c' are constants. A simple transform often used makes a=0, b=1, c=1 and a'=1, b'=1, c'=0, resulting in: α = G+B; and β = G+R. These constants are generally pre-programmed and are not changed to adapt to different circumstances. Where there is a preponderance of a colour, corrections can be made before the display data is displayed to a user.
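The conversion above can be sketched as follows. This is a minimal illustration of the formulae in the text (Y = k(R+G+B), with the simple chroma constants a=0, b=1, c=1 and a'=1, b'=1, c'=0); the function name and the choice of k = 1/3 are illustrative assumptions, not part of the described system.

```python
def rgb_to_luma_chroma(r, g, b, k=1.0 / 3.0):
    """Convert one RGB pixel to (luma, alpha-chroma, beta-chroma).

    k is an assumed scaling constant; the text only requires that k
    appropriately scales the Y value.
    """
    y = k * (r + g + b)  # luma: Y = k(R+G+B)
    alpha = g + b        # chroma: alpha = G+B (a=0, b=1, c=1)
    beta = g + r         # chroma: beta  = G+R (a'=1, b'=1, c'=0)
    return y, alpha, beta
```

With k = 1/3 the luma is simply the mean of the three channel values, which keeps Y in the same numeric range as the inputs.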

The display data is sent for processing/compression as complete frames, or as tiles of pixels; the frames are made up of tiles of pixels, where each tile is a geometrically-shaped collection of one or more pixels. Processing/compression of a frame (or tile) in one embodiment involves performing a transform on the pixel values for that frame. This is preferably a Haar encoding process, but may also be a Discrete Cosine Transform (DCT). This results in a single DC value being generated for a frame (or tile), associated with a particular pixel for that frame (or tile), which provides a colour signature for that frame (or tile) and represents the average colour of the pixels in that frame (or tile). It will be appreciated that this could also be performed on individual tiles in the frames, in addition to or instead of on whole frames. For a frame size of N pixels, performing such a transform also results in generation of N-1 AC values, one for each of the remaining pixels in the frame, these AC values representing the colour change between the pixels, across the frame. This may be performed on the display data in the RGB colour space and/or in a colour space related to luma and chroma.

These and/or other display data values may then be used in further processing of the image data, and for sending to an output engine in the host, which manages the connection with a display control device, and thus with a display device. As such, the output engine is connected to a corresponding input engine on the display control device, and thus to the display device.

Before being sent to the output engine, the DC value for a frame, and in some cases the corresponding AC values, are stored in a memory unit of the host. The memory unit may be a short term or temporary memory, for instance a buffer.

This allows the DC value of a frame to be compared to the DC value of the previous frame (and in some cases the AC values to be compared similarly). Whilst the foregoing describes the process in relation to use of the DC values for frames, the process could also use the DC values for tiles of frames, and/or the corresponding AC values for frames or tiles of frames, in a similar manner.

The DC value of the current frame being processed is compared to the DC value of the previous frame that is stored in the buffer. This could be a DC value for each colour component, or for a combination. Either way, when the current DC value is within a predetermined range of DC (or colour) values of the corresponding previous DC value, wherein the predetermined range of DC (or colour) values is centred on the previous DC value for the previous frame, it can be assumed that the frame is unchanged (at least according to the perception of the human eye), and so the frame is identified as a candidate for dropping. In some embodiments, the current frame can just be dropped at this stage. If the frame is dropped, it is not forwarded to the display control device or display device for displaying to a user. The DC value corresponding to the dropped frame may be tagged to indicate that the frame was dropped. This current DC value is then stored in the buffer for comparison with the corresponding DC value for the next frame. In some embodiments, a further check is performed on the candidate before deciding whether the frame should be dropped.
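The comparison described above can be sketched as a single predicate: a frame is a candidate for dropping when its DC value lies within a predetermined range centred on the previous frame's DC value. The function name and the use of an absolute difference against the threshold are illustrative assumptions.

```python
def is_drop_candidate(current_dc, previous_dc, first_threshold):
    """Return True if the current frame is a candidate for dropping.

    The predetermined range of DC (or colour) values is centred on the
    previous frame's DC value; a difference below the first threshold
    means the frame can be assumed unchanged, at least according to the
    perception of the human eye.
    """
    return abs(current_dc - previous_dc) < first_threshold
```

For per-component DC values the same predicate would be applied to each colour component, or to a combination of them, as the text notes.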

The buffer may store a predetermined number of the DC values, for instance x DC values (where x may for instance be 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or more). As mentioned, the DC values may be tagged or marked with whether the associated frames were dropped. Therefore, once a candidate for dropping is determined, before dropping the frame associated with the current DC value, it is determined whether a predetermined number of the previous frames preceding the current frame, for instance the previous y frames (where y may for instance be 5, 6, 7, 8, 9, 10 or more), were sent for further processing and to the display device, using the previous y DC values stored in the buffer. If the previous y frames were not sent for further processing and to the display device, then the current frame is nonetheless sent for further processing for sending to the display device. This may be carried out by determining the number of DC values in the buffer, preceding the DC value associated with the current frame, that are tagged to indicate that their associated frame was dropped; if the number of previous frames that were dropped exceeds a predetermined threshold value, then the current frame is nonetheless sent for further processing and/or for sending to the display device. This prevents the propagation of errors in the system. Otherwise, the current frame is dropped, not sent for further processing, and not sent to the display device. Being able to process the data and to determine whether to drop frames quickly in this way enables a reduction in the bandwidth required, while still providing the user of the display device with an acceptable level of visual quality.
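The tagged-buffer check can be sketched as follows: count the run of immediately preceding dropped frames recorded in the buffer, and only allow the candidate to be dropped if that run does not exceed the threshold. The function name, and representing the tags as a simple list of booleans (oldest first), are illustrative assumptions.

```python
def may_drop_candidate(drop_flags, third_threshold):
    """Decide whether a candidate frame may actually be dropped.

    drop_flags: tags for the frames stored in the buffer, oldest first,
    where True means the associated frame was dropped. The candidate may
    be dropped only if the run of immediately preceding dropped frames
    does not exceed the threshold, preventing error propagation.
    """
    consecutive = 0
    for was_dropped in reversed(drop_flags):
        if not was_dropped:
            break  # the run of consecutive drops ends here
        consecutive += 1
    return consecutive <= third_threshold
```

Counting only the consecutive run (rather than all dropped frames in the buffer) matches the text's requirement on the frames "immediately preceding" the candidate.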

Rather than tagging the DC values in the buffer with whether the associated frames were dropped, it may be preferable to make use of a counter. There could be a counter that counts the frames as they are dropped, and then re-sets when a frame is not dropped. This counter could therefore instead be compared to a threshold value to determine whether a candidate for dropping should be dropped. The memory may only need to store a single colour signature, or colour value or DC value.
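The counter-based alternative can be sketched as a small stateful gate: the counter increments on each drop and resets whenever a frame is sent, so the memory need only hold one colour signature. The class and method names are illustrative assumptions.

```python
class FrameDropGate:
    """Counter-based alternative to tagging DC values in the buffer."""

    def __init__(self, max_consecutive_drops):
        self.max_consecutive_drops = max_consecutive_drops
        self.counter = 0  # counts frames dropped since the last sent frame

    def decide(self, is_candidate):
        """Return True if the frame should be dropped, False if sent."""
        # Drop only if the frame is a candidate AND we have not already
        # dropped too many frames in a row.
        if is_candidate and self.counter < self.max_consecutive_drops:
            self.counter += 1
            return True
        self.counter = 0  # frame is sent; the run of drops re-sets
        return False
```

For example, with a limit of 2, a stream of candidate frames is dropped in runs of two with every third frame forced through: the decisions alternate drop, drop, send, drop, and so on.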

In some embodiments, the display device may comprise an accelerometer, or other means for detecting and/or measuring motion of the display device. The host device may be configured to monitor movement of the display device, in particular whether the display device is moving and how fast the display device may be moving. In some embodiments, when the display device is moving, the frame may be sent regardless. In some embodiments, the similarity requirements for the DC values, or other colour values, may be stricter when it is determined that the display device is moving and/or when it is determined that the display device is moving at a velocity above a particular threshold. For instance, when the DC value of the current frame is compared to the DC value of the previous frame that is stored in the buffer, the predetermined range of colour values may be narrower when the display device is moving. This is to ensure that it can still be assumed that the frame is unchanged, at least as perceived by the human eye, and so by the user.
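The motion-adaptive tightening of the threshold can be sketched as follows. The text only requires that the range narrows when the device moves, and narrows further for faster movement; the function name and the specific scale factors here are hypothetical.

```python
def effective_threshold(base_threshold, is_moving, speed=0.0,
                        moving_scale=0.5, speed_scale=0.1):
    """Return the similarity threshold to use for the DC comparison.

    While the display device moves, the threshold is reduced (stricter
    similarity requirement), and reduced further for faster movement.
    moving_scale and speed_scale are hypothetical tuning constants.
    """
    if not is_moving:
        return base_threshold
    t = base_threshold * moving_scale      # stricter while moving
    return max(0.0, t - speed_scale * speed)  # stricter still when fast
```

A headset-style implementation would feed `is_moving` and `speed` from the accelerometer readings mentioned in the text.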

The above method and system of an embodiment of the present invention has been described in relation to the use of a transform generally, but particular methods of compression and/or encoding may be preferable in certain situations. It may be preferable to use a Haar encoding/transformation process, which may produce suitable colour signatures or DC/AC values for use as described above.

A Haar transformation process that may be implemented in conjunction with the systems described herein will now be explained with reference to Figure 5, and Figures 1-3. The Haar transform takes place on the host [11], specifically in the compression engine [15]. Decompression takes place on the display control device [12], specifically in the decompression engine [18], where the data is put through an inverse Haar transform to return it to its original form.

In the example shown, a group of four tiles [41] has been produced by the application [14] and passed to the compression engine [15]. In this example, each tile [41] comprises one pixel, but tiles may of course be much larger. Each pixel [41] has a value indicating its colour, here represented by the pattern of hatching. The first pixel [41A] is marked with dots and considered to have the lightest colour. The second pixel [41B] is marked with diagonal hatching and is considered to have the darkest colour. The third pixel [41C] is marked with vertical hatching and is considered to have a light colour, and the fourth pixel [41D] is marked with horizontal hatching and is considered to have a dark colour. The values of the four pixels [41] are combined using the formulae [44] shown to the right of Figure 5 to produce a single pixel value [42], referred to as "W", which is shaded in grey to indicate that its value is derived from the original four pixels [41], as well as a set of coefficients [43] referred to in Figure 5 as "x, y, z". The pixel value [42] is generated from a sum of the values of all four pixels: ((A+B)+(C+D)). The three coefficients [43] are generated using the other three formulae [44] as follows:

x: (A-B)+(C-D)

y: (A+B)-(C+D)

z: (A-B)-(C-D)

Any or all of these values may then be quantised: divided by a constant and rounded, in order to produce a smaller number which is less accurate but can be more effectively compressed. "W" may be used as a colour value for the tile, or frame where appropriate, alone and/or in combination with any or all of the coefficients.
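The formulae above, together with the inverse transform performed on the display control device, can be sketched as follows. The forward formulae are taken directly from the text; the function names and the division by four in the inverse (which undoes the unnormalised sums and differences) are illustrative assumptions.

```python
def haar_2x2(a, b, c, d):
    """Single-level Haar transform of four pixel values (Figure 5 formulae)."""
    w = (a + b) + (c + d)  # W: sum of all four pixels (the tile's DC-like value)
    x = (a - b) + (c - d)
    y = (a + b) - (c + d)
    z = (a - b) - (c - d)
    return w, x, y, z

def inverse_haar_2x2(w, x, y, z):
    """Invert the transform to recover the original four pixel values."""
    a = (w + x + y + z) / 4
    b = (w - x + y - z) / 4
    c = (w + x - y - z) / 4
    d = (w - x - y + z) / 4
    return a, b, c, d
```

Because the forward transform uses plain sums and differences, the inverse is exact (up to the division), which is what allows the decompression engine to return the data to its original form when no quantisation has been applied.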

The above embodiments and examples are described by way of example only, and are in no way intended to limit the scope of the present invention as defined by the appended claims.