


Title:
A SYSTEM AND A METHOD OF PROCESSING DATA, A PROGRAM ELEMENT AND A COMPUTER-READABLE MEDIUM
Document Type and Number:
WIPO Patent Application WO/2006/046203
Kind Code:
A1
Abstract:
A system (100) of processing data comprises a first processing unit (101a) adapted to convert data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format, and a second processing unit (101b) adapted to convert data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data.

Inventors:
VAN DER HEIJDEN HENRICUS (NL)
Application Number:
PCT/IB2005/053490
Publication Date:
May 04, 2006
Filing Date:
October 25, 2005
Assignee:
KONINKL PHILIPS ELECTRONICS NV (NL)
VAN DER HEIJDEN HENRICUS (NL)
International Classes:
H04N9/79; H04N5/76
Domestic Patent References:
WO2001013625A12001-02-22
Foreign References:
EP0735748A21996-10-02
US5287420A1994-02-15
US20040252232A12004-12-16
Other References:
HAGHIRI M ET AL: "A LOW BIT RATE CODING ALGORITHM FOR FULL MOTION VIDEO SIGNAL", SIGNAL PROCESSING. IMAGE COMMUNICATION, ELSEVIER SCIENCE PUBLISHERS, AMSTERDAM, NL, vol. 2, no. 2, 1 August 1990 (1990-08-01), pages 187 - 199, XP000243477, ISSN: 0923-5965
Attorney, Agent or Firm:
Röggla, Harald (Triester Strasse 64, Vienna, AT)
Claims:
CLAIMS
1. A system (100) of processing data, comprising a first processing unit (101a) adapted to convert data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format; and a second processing unit (101b) adapted to convert data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data.
2. The system (100) according to claim 1, being adapted to process video data or audio data.
3. The system (100) according to claim 1, comprising a recording unit (102) adapted to record data in the recording format and adapted to provide recorded data to the first processing unit (101a).
4. The system (100) according to claim 1, comprising a replay unit (103) adapted to replay data in the replay format and adapted to be provided with data to be replayed by the second processing unit (101b).
5. The system (100) according to claim 1, comprising a storage unit (104) adapted to store recorded data in the recording format and adapted to store partially processed intermediate data in the partially processed intermediate format.
6. The system (100) according to claim 1, wherein data in the recording format correspond to a first quality level, and wherein data in the replay format correspond to a second quality level, the second quality level indicating a higher quality than the first quality level.
7. The system (100) according to claim 1, comprising a determining unit (105) adapted to determine, based on a user-defined time interval between recording data and replaying data or based on available system resources or based on system resources expected for the future, whether the first processing unit (101a) is controlled to convert recorded data from the recording format to the intermediate format before replaying data in the replay format, or whether the first processing unit (101a) is controlled to convert recorded data from the recording format directly to the replay format.
8. The system (100) according to claim 1, comprising a determining unit (105) adapted to determine, at a time of replaying data, whether data in the intermediate format have already been generated by the first processing unit (101a) and are thus available so that the second processing unit (101b) is controlled to convert data from the intermediate format to the replay format, or whether data in the intermediate format are not available so that the first processing unit (101a) is controlled to convert recorded data from the recording format directly to the replay format.
9. The system (100) according to claim 1, wherein converting data to a partially processed intermediate format includes calculating motion vector data.
10. The system (100) according to claim 9, wherein converting data to a fully processed replay format includes a temporal up-conversion.
11. The system (100) according to claim 9, wherein converting data to a fully processed replay format includes a motion-compensated de-interlacing.
12. The system (100) according to claim 1, wherein converting data to a partially processed intermediate format includes a color analysis.
13. The system (100) according to claim 1, wherein converting data to a partially processed intermediate format includes creating and analyzing a color histogram.
14. The system (100) according to claim 13, wherein converting data to a fully processed replay format includes an enhancement using a modification of a color histogram.
15. The system (100) according to claim 1, wherein the recording format is a compressed data format.
16. The system (100) according to claim 1, wherein converting data to a partially processed intermediate format includes decompressing data at least partially.
17. The system (100) according to claim 1, realized as an integrated circuit.
18. The system (100) according to claim 1, realized as a personal video recorder or as a personal computer television system or as a portable audio player or as a DVD player or as an MP3 player.
19. The system (100) according to claim 1, wherein the first processing unit (101a) and the second processing unit (101b) are combined to one common processing unit.
20. A method of processing data, comprising the steps of: converting data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format; and converting data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data.
21. A program element, which, when being executed by a processor (101a, 101b), is adapted to carry out a method of processing data comprising the steps of: converting data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format; and converting data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data.
22. A computer-readable medium, in which a computer program is stored which, when being executed by a processor (101a, 101b), is adapted to carry out a method of processing data comprising the steps of: converting data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format; and converting data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data.
Description:
A system and a method of processing data, a program element and a computer-readable medium

FIELD OF THE INVENTION

The invention relates to a system of processing data.

The invention further relates to a method of processing data.

Moreover, the invention relates to a program element.

Further, the invention relates to a computer-readable medium.

BACKGROUND OF THE INVENTION

Video is the technology of processing electronic signals representing moving pictures. A major application of video technique is television, but it is also widely used in engineering, scientific, manufacturing and security applications. A personal video recorder (PVR) is an electronic device that records television shows to a hard disk in digital format. A PVR makes a "time shifting" feature more convenient, wherein time shifting is the recording of television shows to a storage medium to be viewed at a time convenient to a user. A digital PVR brings new freedom for time shifting, as it is possible to start watching the recorded show from the beginning even if the recording is not yet complete. PVR technology also allows for trick modes, such as pausing live TV, instant replay of interesting scenes, skipping advertising, and the like. Many PVR recorders use the MPEG format for encoding analog video signals.

Another upcoming technology is the PC-TV technology which allows watching TV on a personal computer. US 5,287,420 discloses a method of image compression for personal computer applications, which compresses and stores data in two steps. An image is captured in real-time and compressed and stored to a hard disk. At some later time, the data is further compressed in non-real-time.

However, it is a shortcoming of US 5,287,420 that, in case video input data have a relatively poor quality, a replay of the video data in sufficient quality is not possible.

OBJECT AND SUMMARY OF THE INVENTION

It is an object of the invention to allow playing back data in a sufficient quality, even when recorded data suffer from a relatively poor quality. In order to achieve the object defined above, a system and a method of processing data, a program element and a computer-readable medium according to the independent claims are provided.

The system of processing data according to the invention comprises a first processing unit adapted to convert data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format. Further, a second processing unit converts data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data. Moreover, a method of processing data is provided comprising the steps of converting data from a recording format, in which data are recordable, to a partially processed intermediate format before replaying data, in such a manner that the data in the partially processed intermediate format contain additional data compared to the data in the recording format, and converting data from the intermediate format to a fully processed replay format, in which data are replayable, during replaying data. "Replayable" data particularly refer to data which are ready to be sent to a non-processing ("dumb") digital or analog display or recording device.

Beyond this, a program element is provided, which, when being executed by a processor, is adapted to carry out the steps according to the above-mentioned method of processing data.

Further, a computer-readable medium is provided, in which a computer program is stored which, when being executed by a processor, is adapted to carry out the steps of the above-mentioned method of processing data.

The processing of data of the invention can be realized by a computer program, i.e. by software, or by using one or more special electronic optimization circuits, i.e. in hardware, or in hybrid form, i.e. by means of software components and hardware components.

The characteristic features according to the invention particularly have the advantage that recorded video (or audio) data are partially processed to be converted from a recording format to an intermediate format before being played back. Thus, an idle time of the system can be used efficiently to start the processing of data before playing back these data. Such a pre-processing or partial processing of input data may be performed in such a manner that particularly numerically expensive algorithms may be carried out before playing back the data, which would not be possible in real-time, since the time needed for performing such a quality-improving algorithm is in many cases larger than the maximum available processing time in the frame of a real-time processing and playback scheme. The partially processed data may then be stored in the intermediate format. Starting from such an intermediate format, only the remaining part of the processing has to be completed during real-time playback to achieve data in the replay format, allowing a reproduction of data in real-time and simultaneously with an improved quality.

The pre-processing of the data according to the invention slightly increases the amount of data to be stored since, in addition to the recorded data, the data resulting from the pre-processing are stored as well, to allow the quality of the playback data to be improved compared to the recorded data. For instance, the pre-processing may include the estimation of motion vectors, which, in combination with the recorded data, allow a reproduction with an improved quality by enabling sophisticated features like motion compensated temporal up-conversion or de-interlacing. Motion vectors are calculated by a relatively expensive and time-consuming algorithm carried out between data recording and data reproduction, wherein the resulting motion vectors only require a very small amount of additional memory space but allow the quality of data playback to be significantly improved.

Thus, according to an important aspect of the invention, an ahead-of-playback calculation and storage of intermediate video processing data is enabled, particularly for PC-TV and PVR applications.

According to one aspect, the invention teaches leveraging storage capacity to overcome a video processing power shortage by recording a video, doing part of the video processing, such as motion vector estimation, during and after recording, and storing the intermediate results - not as fully processed video data, but only the minimally needed data. Then, during a subsequent playback of the video, the final steps of the algorithms may be completed, preferably in a real-time manner. Thus, preferably the more expensive parts of the processing algorithm, and in particular everything which cannot be done in real-time, may be performed during an idle time of the system, so that a very efficient usage of the system resources is combined with a very small additional amount of memory required for storing the partially processed intermediate format data. Completing the final steps of the algorithm during the playback of the video may include calculating the proper de-interlaced lines based on previously estimated motion vector data, or calculating missing frames for temporal up-conversion.

The invention is based on the recognition that future home entertainment TV recording and TV viewing applications (particularly video applications) such as personal video recorders (PVRs) and hybrid PC-TV systems can combine two factors. First, practically limitless frame buffer storage of broadcast video is enabled through automatic (i.e. no user action required) MPEG-2 encoding (or any other video compression method) and storage on hard disk. In contrast, the video processing facilities in traditional TVs typically only have access to the past two or three frames. Second, video processing applications involving many operations per pixel have traditionally required dedicated hardware, since general-purpose processors are often unsuitable for such tasks. However, it will be far more economical in many cases to use off-the-shelf processors rather than develop and produce dedicated ICs. The first aspect makes it possible to overcome disadvantages due to the lack of dedicated hardware (see second aspect), provided that a part of the video processing can be done during and after recording the program, not in real-time.

Using sophisticated video processing algorithms (for example for de-interlacing or temporal up-conversion - also known as motion judder removal) that cannot be run in real-time on the available computing hardware becomes a possibility by calculating and storing intermediate video processing results between recording and playback of a program. Another disadvantage overcome by the invention has to do with the marketing and testing of new video processing algorithms. By introducing new algorithms first in an offline software form as described in this application, it becomes possible to test how an algorithm is received in the market before an - often much more expensive - real-time solution in hardware is implemented. Further, since the video processing can essentially be done in software and is not restricted to a specific hardware implementation (for example, without real-time requirements, there is no critical threshold for processing speed), upgrading the video processing algorithms in devices is possible, in contrast to conventional TVs and VCRs. During the recording of a program, the video application according to a preferred embodiment of the invention will start the preliminary video processing work needed for high quality playback, using whatever processing power is not devoted to the recording task. Typically, the video processing task will not be finished by the time the program is ended. Two scenarios are possible:

1. The user watches during the recording of the program, or so soon thereafter that processing is not completed. In this case, the system will use "medium" quality processing that can be done in real-time, since the determining unit determines that it is not possible or suitable to carry out the pre-processing step.

2. The user does not request playback of the program until processing is finished. In this scenario, playback will use high quality processing under consideration of the cached data.

The user thus has a simple choice: to view the recording immediately with "medium" quality video processing, or to give the system time to calculate the needed data and then view the recording with high quality video processing.
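The decision between the two scenarios can be sketched as follows. This is an illustrative Python sketch added for clarity only; the function name and return values are invented and do not appear in the application:

```python
def choose_playback_path(intermediate_ready: bool) -> str:
    """Select the processing path as described in the two scenarios.

    If the pre-computed intermediate data (e.g. motion vectors) are
    already available, the second processing unit finishes the high
    quality conversion in real time; otherwise the recorded data are
    converted directly with "medium" quality real-time processing.
    """
    if intermediate_ready:
        return "high-quality"   # intermediate format -> replay format
    return "medium-quality"     # recording format -> replay format directly
```

A determining unit as claimed would evaluate `intermediate_ready` at the moment the user requests playback.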

In a preferred embodiment the calculation of motion vectors is performed during idle time, for example using the 3DRS ("three-dimensional recursive search") algorithm.
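The 3DRS algorithm itself is recursive and more elaborate than can be reproduced here; as a hypothetical simplified stand-in, a brute-force full-search block matcher conveys the kind of computation that would run during idle time (all names invented, not from the application):

```python
import numpy as np

def estimate_motion(prev, curr, block=8, search=4):
    """Estimate one (dy, dx) displacement per block of `curr` by
    minimizing the sum of absolute differences (SAD) against `prev`
    within a +/- `search` pixel window."""
    h, w = curr.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            ref = curr[by:by + block, bx:bx + block].astype(int)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if 0 <= y <= h - block and 0 <= x <= w - block:
                        cand = prev[y:y + block, x:x + block].astype(int)
                        sad = int(np.abs(ref - cand).sum())
                        if best_sad is None or sad < best_sad:
                            best, best_sad = (dy, dx), sad
            vectors[(by, bx)] = best
    return vectors
```

The cost is quadratic in the search range per block, which illustrates why such estimation is done offline rather than during real-time playback; 3DRS reduces this cost by recursively reusing neighbouring vectors as candidates.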

An advantageous application in the context of implementing motion vectors is de-interlacing (or "interlaced to progressive scan format conversion"), which can be done in a cheap and efficient way (for example, using line doubling) or in a more sophisticated motion compensated way. The latter option yields better quality, but requires motion vectors. It is possible to estimate the motion vectors offline, and use these motion vectors to de-interlace the video during playback of the video.
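The cheap line-doubling variant mentioned above is simple enough to sketch directly (an illustrative, invented helper, not code from the application):

```python
def line_double(field):
    """Cheap de-interlacing by line repetition: each line of the input
    field is written to two adjacent lines of the progressive frame,
    restoring the full line count without motion compensation."""
    frame = []
    for line in field:
        frame.append(list(line))
        frame.append(list(line))
    return frame
```

The motion-compensated alternative would instead synthesize the missing lines from the previous field, shifted along the stored motion vectors.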

"De-interlacing" is the process of converting interlaced images of video into non¬ interlaced form. Interlaced video draws only half of the lines on the screen for each frame (alternatively drawing the odd and even lines for each frame), taking advantage of the time it takes for an image to fade on a CRR ("cathode ray tube") to give the impression of double the actual refresh rate, helping to prevent flicker. Basic methods of de-interlacing include "combination", where the even and odd frames are combined into one image and then displayed, and "extension", wherein each frame (with only half the lines) is extended to the entire screen. Another application in the frame of calculating motion vectors is a so called

"temporal up-conversion" which also requires motion vector information. Having motion vectors already available during playback, the real time processing requirements are greatly reduced. Thus, a video or audio playback device can be manufactured to output enhanced or high definition signals from standard definition signals. Such devices may include an integrated sealer to up-convert the standard definition video to high definition video. This up- conversion process may improve the perceived picture quality of standard definition video.

Preferred applications of the invention are personal video recorders (PVR), especially with progressive output, which requires de-interlacing. Newly developed algorithms that have not yet been implemented in hardware can be deployed (and therefore tested in the market) earlier and cheaper in an offline scenario as outlined in this description. Another preferred application of the invention is television sets with a built-in hard disk. A further application is a PC-TV application. While US 5,287,420 merely discloses compression of data, so that compressed image data use less memory than primary data, the invention goes exactly against this teaching by describing a system that stores more data than initially recorded. According to the invention, the initially recorded data are stored together with additional data that are generated offline. Thus, the invention improves the reproduction quality at the cost of a slightly increased storage space. In contrast to this, US 5,287,420 improves the compression ratio at the cost of the picture quality. Particularly, it is in the frame of the present invention to go from a standard definition video signal (for instance 720x576i@50Hz for PAL) to something suitable for displaying on, for example, a 1280x1024p@60Hz LCD panel. In a sense, the recorded signal is treated as a "compressed" signal from which the invention intends to "decompress" a high resolution signal using video or audio processing techniques. By doing some of this "decompression" not in real-time (i.e. not during playing back), somewhat more space will be used than the original recording of a standard definition interlaced signal requires. Thus, the invention provides a system that stores a video data stream and then generates and stores some more data.

The invention introduces ahead-of-time non-real-time calculation and storage of intermediate video processing data. An important idea is that a part of the data that is needed for high quality playback is calculated (and temporarily stored) ahead of time, rather than at the time of playback. The invention improves the playback quality, and not the compression properties. The invention involves optional video processing that is done during system idle time, not in real-time.

In other words, the invention does a part of the video processing already during recording or directly after recording, and performs the final processing step during playback of the video.

Referring to the dependent claims, further preferred embodiments of the invention will be described in the following.

Next, preferred embodiments of the system of processing data will be described. These embodiments may also be applied for the method of processing data, for the program element and for the computer-readable medium.

The system of the invention may be applied to process video data or audio data. In general, the invention may be applied to any kind of data which may be processed before being replayed, and whose quality can be improved by such processing before being replayed. For instance, video data may be processed by calculating motion vectors. For instance, audio data may be processed by calculating reverberation to be added to the primary audio data to improve the subjective quality of audio replay as perceived by a human listener.

The system of the invention may comprise a recording unit which may be adapted to record data in the recording format and which may be adapted to provide recorded data to the first processing unit. The system may further comprise a replay unit adapted to replay data in the replay format and adapted to be provided with data to be replayed by the second processing unit. Such a replay unit may comprise a personal computer, an LCD, an audio player, or the like. Further, a storage unit may be provided which may be adapted to store recorded data in the recording format and to store partially processed intermediate data in the partially processed intermediate format. Such a storage unit may be, for instance, a hard disk, a RAM memory, a flash memory or an optical storage medium like a DVD.

Data in the recording format may have a first quality level, and data in the replay format may have a second quality level, the second quality level indicating a higher quality than the first quality level. In other words, by processing data starting from the recording format into the replay format via calculating data in the intermediate format, the quality of the replay data may be improved. For instance, incoming video data may have a replay rate of 24 Hz. However, modern LCD panels are able to replay visual data with a frequency of 60 Hz or even 75 Hz. In order to close this gap and thus to improve the quality of the replayed video, motion vectors may be calculated. On the basis of these motion vectors, temporal up-conversion may be performed to generate intermediate pictures. Alternatively, incoming video may have a frequency of 60 Hz and be interlaced. In this case, motion vector information can be used to perform motion compensated de-interlacing for displaying the video on a progressive scan display (such as an LCD). Temporal up-conversion and de-interlacing may be performed alternatively or additionally.

The system of the invention may comprise a determining unit adapted to determine, based on a user-defined time interval between recording data and replaying data, or based on available system resources, or based on system resources expected for the future, whether the first processing unit is controlled to convert recorded data from the recording format to the intermediate format before replaying data in the replay format, or whether the first processing unit is controlled to convert recorded data from the recording format directly to the replay format. According to this embodiment, the system can be flexibly controlled depending on whether the time between recording and playing back is sufficient to carry out the pre-processing of the invention. If yes, pre-processing is carried out and the quality of the reproduced data may be increased compared to the recorded data. If no, it is not possible to improve the quality by pre-processing the data, since the time is not sufficient to finish the algorithm. Therefore, the system flexibly decides whether a video signal quality improvement is possible or not. The decision whether intermediate data should be generated or not may also be taken on the basis of whether sufficient system resources (e.g. CPU capacity, memory space) are presently available or are expected to become available in the near future.

Alternatively to the previously described embodiment, a determining unit may be implemented which is adapted to determine, at a time of replaying data, whether data in the intermediate format have already been generated by the first processing unit and are thus available, so that the second processing unit is controlled to convert data from the intermediate format to the replay format, or whether data in the intermediate format are not available, so that the first processing unit is controlled to convert recorded data from the recording format directly to the replay format. In other words, according to this embodiment, the first processing unit starts to calculate intermediate data in any case during and directly after recording. However, in the event of a trigger to start a playback (e.g. when a user presses a "play" button on a replay device), the determining unit checks whether the first processing unit has already finished the calculation of intermediate data, i.e. whether intermediate data (like motion vectors) are already available. If yes, the determining unit controls the second processing unit to generate replay data under consideration of the previously estimated intermediate data. If no, the determining unit controls the first processing unit to generate replay data directly from the recorded data. The decision is thus based on whether intermediate data are available or not. In case the first processing unit is controlled to convert recorded data from the recording format directly to the replay format, data in the replay format may be replayed, wherein any mismatch between the frame rates of the recorded format and the replay format is overcome by using frame repetition. For example, if 24 Hz input signals shall be reproduced on a 60 Hz display device, a particular frame is simply repeated several times. Alternatively, in case the first processing unit is controlled to convert recorded (interlaced) data from the recording format directly to the (progressive, non-interlaced) replay format, data in the replay format may be replayed using a line repetition method, wherein an interlaced recorded format is converted to a progressive replay format using line doubling. A line repetition method simply repeatedly displays a particular line to convert interlaced video to a progressive scan format.
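The frame repetition fallback for a frame-rate mismatch amounts to a fixed hold schedule; as an invented illustrative helper (not code from the application):

```python
def repetition_schedule(in_hz, out_hz, n_frames):
    """For each output display tick, pick the input frame index to show,
    holding each input frame until the next one is due."""
    n_out = n_frames * out_hz // in_hz
    return [min(i * in_hz // out_hz, n_frames - 1) for i in range(n_out)]
```

For 24 Hz material on a 60 Hz display this produces the familiar 3:2-like pattern in which frames are alternately shown three times and twice, without any motion-compensated interpolation.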

Converting data to a partially processed intermediate format may include calculating motion vector data. The calculation of motion vector data is in many cases computationally expensive and thus needs a considerable time to be carried out. The system of the invention may use an idle time, i.e. a time in which resources of the system are free, to calculate the motion vector data. Motion vector data can be stored using a very small amount of memory space, yet allow the quality of the displayed data to be significantly improved. In case motion vector data have been previously calculated and stored, it is then possible to replay video data, based on the recorded data and the additionally calculated motion vector data, in a significantly improved manner, without the necessity to store huge amounts of additional data.
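The claim that the vector field is small is easy to make concrete with a back-of-the-envelope calculation; the block size and per-vector byte count below are illustrative assumptions, not figures from the application:

```python
def motion_vector_overhead_bytes(width, height, block=16, bytes_per_vector=2):
    """Per-frame storage cost of one compact (dy, dx) motion vector per
    block, assuming `bytes_per_vector` bytes for the pair."""
    return (width // block) * (height // block) * bytes_per_vector

# A PAL standard definition frame (720x576) with 16x16 blocks yields
# 45 * 36 = 1620 vectors, i.e. 3240 bytes -- roughly 3 KB per frame,
# tiny compared to the compressed frame data itself.
```
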

Converting data to a fully processed replay format may include a temporal up-conversion. Such a temporal up-conversion may include using previously estimated motion vector data to improve the picture quality.

Alternatively or additionally, converting data to a fully processed replay format may include a motion-compensated de-interlacing. Again, such a de-interlacing may be based on a previously performed motion vector analysis and allows a real-time playback of video data in a significantly improved quality.

Further, converting data to a partially processed intermediate format may include a color analysis, in particular the creation and the analysis of a color histogram. Converting data to a fully processed replay format may include an enhancement using a modification of a color histogram, using previously created and analyzed color histogram information.
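A classic example of histogram-based enhancement is histogram equalization, shown here as an illustrative sketch of the kind of processing meant above (an invented helper on a flat list of grey levels; the application does not specify this particular algorithm):

```python
def equalize(pixels, levels=256):
    """Contrast enhancement via histogram equalization: map each grey
    level through the normalized cumulative histogram, stretching the
    value distribution over the full output range."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, running = [], 0
    for count in hist:
        running += count
        cdf.append(running)
    total = len(pixels)
    return [round(cdf[p] / total * (levels - 1)) for p in pixels]
```

In the two-phase scheme of the invention, building and analyzing the histogram (the expensive scan) would happen offline, while the cheap per-pixel lookup through the stored mapping would run during real-time playback.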

Data in the recording format may be data in a compressed data format. A preferred compression format is the MPEG-2 format used for encoding audio and video data. However, alternative compression schemes may be applied, for example MPEG-4.

The system of the invention may be realized as an integrated circuit, particularly as a semiconductor integrated circuit. In particular, the system can be realized as a monolithic IC which may be fabricated in silicon technology.

The system of the invention may be realized as a personal video recorder (PVR) or as a personal computer television system (PC-TV), or as a portable audio player or as a DVD player or as an MP3 player.

In the system of the invention, the first processing unit and the second processing unit may be combined into one common processing unit. In other words, both processing units may be combined to or integrated in a single common processing unit, e.g. one CPU (central processing unit).

The aspects defined above and further aspects of the invention are apparent from the examples of embodiment to be described hereinafter and are explained with reference to these examples of embodiment.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described in more detail hereinafter with reference to examples of embodiment, to which, however, the invention is not limited.

Fig. 1 shows a system of processing data according to a preferred embodiment of the invention.

Fig. 2 to Fig. 12 show flow charts of methods of processing data according to preferred embodiments of the invention.

DESCRIPTION OF EMBODIMENTS

The illustration in the drawings is schematic. In different drawings, similar or identical elements are provided with the same reference signs.

In the following, referring to Fig. 1, a video processing system 100 according to a preferred embodiment of the invention will be described.

The video processing system 100 comprises a recording unit 102 for recording video data. An output of the recording unit 102 is coupled to an input of a determining unit 105 for determining whether a provided time interval between recording video data and reproducing video data is sufficient for pre-processing the data before replaying them. More specifically, the determining unit 105 is adapted to determine, based on a user-defined time interval between recording data and replaying data, whether a first processing unit 101a is controlled to convert the recorded data from the recording format (denoted as A) to an intermediate format (denoted as B) before replaying data in the replay format (denoted as C), or whether the first processing unit 101a (or alternatively the second processing unit 101b) is controlled to convert the recorded data from the recording format directly to the replay format. In case that such a pre-processing is not possible, since a user of the device 100 wishes to replay the recorded data promptly after the recording, the determining unit 105 controls the first processing unit 101a to directly convert the recorded data into replayable data and to forward the replayable data to a replay unit 103. In case that pre-processing is possible, since the user of the device 100 does not control the device 100 to replay the recorded data promptly after the recording, the determining unit 105 provides the first processing unit 101a with the recorded data for a pre-processing.
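The decision taken by the determining unit 105 can be sketched in a few lines of Python. This is an illustrative fragment only; the patent does not prescribe an implementation, and the function and parameter names are assumptions.

```python
def choose_path(time_until_replay_s, preprocess_time_s):
    """Sketch of the determining unit 105: given the user-defined interval
    between recording and replay, decide whether the first processing unit
    may pre-process (A -> B, with B -> C later during replay) or must
    convert directly (A -> C). Names are illustrative, not from the patent."""
    if time_until_replay_s >= preprocess_time_s:
        return "pre-process"   # enough time: convert A -> B before replay
    return "direct"            # replay requested promptly: convert A -> C
```

For example, with an hour until replay and ten minutes of estimated pre-processing, the pre-processing path is chosen; with only seconds available, the direct path is chosen.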

The first processing unit 101a pre-processes the recorded data and may use a storage unit 104 for storing and accessing data. When a user later wishes to replay the data, the pre-processed data are further processed by a second processing unit 101b to generate replayable data which are then provided to the replay unit 103 for replaying these data.

Thus, Fig. 1 shows the video processing system 100 with the first processing unit 101a being adapted to convert data from a recording format, in which the data are recorded by the recording unit 102, to a partially processed intermediate format before the data are replayed by the replay unit 103. The data are converted, by the first processing unit 101a, in such a manner that the data in the partially processed intermediate format contain additional data compared to the recorded data, namely the original data plus data appropriate to improve the playback quality. Later, the second processing unit 101b converts data from the intermediate format to a fully processed replay format in which the data are replayed by the replay unit 103. The latter processing step is carried out during replaying the data, i.e. in a real-time manner. The storage unit 104 is adapted to store recorded data in the recording format and to store partially processed intermediate data in the partially processed intermediate format.

The data recorded by the recording unit 102 in the recording format have a first quality level (namely data with a frame rate of 24 Hz), and the data in the replay format to be replayed by the replay unit 103 have a second quality level (namely data with a frame rate of 60 Hz), so that the second quality level indicates a higher quality than the first quality level. The components of the video processing system 100 are realized as an integrated circuit in silicon technology. The video processing system 100 is a personal video recorder.

As an alternative to the embodiment of Fig. 1, the determining unit 105 may be adapted to determine, at a time when a replay of the data is requested, whether data in the intermediate format (i.e. motion vectors) have already been generated by the first processing unit 101a and are thus available. If such a calculation is already finished at the time at which data shall be replayed, the second processing unit 101b is controlled to convert data from the intermediate format to the replay format. In contrast to this, if such a calculation is not yet finished at the time at which data shall be replayed, so that data in the intermediate format are not available, the first processing unit 101a is controlled to convert recorded data from the recording format directly to the replay format. In other words, according to this embodiment, the first processing unit 101a starts to calculate intermediate data in any case during and directly after recording. However, in the event of a trigger to start a playback (e.g. when a user presses a "play" button to activate the replay unit 103), the determining unit 105 checks whether the first processing unit 101a has already finished the calculation of intermediate data, i.e. whether intermediate data (like motion vectors) are already available. If yes, the determining unit 105 controls the second processing unit 101b to generate replay data under consideration of the previously estimated intermediate data. If no, the determining unit 105 controls the first processing unit 101a (or alternatively the second processing unit 101b) to generate replay data directly from the recorded data. The decision is thus based on whether intermediate data are available or not.

In the following, referring to Fig. 2 to Fig. 12, preferred embodiments of methods of processing data according to the invention will be described. The processing units 101a, 101b are adapted to carry out the corresponding method steps. In the following, referring to Fig. 2, Fig. 3 and Fig. 4, a first embodiment related to motion compensated up-conversion will be described.

Fig. 2 shows a flow chart 200 which illustrates receiving and storing data. As can be seen from Fig. 2, a TV signal is received and converted into video data by receiving means 201. These video data are provided to a compression means 202 to generate compressed video data which are then stored in a data storage means 203. The data storage means 203 may be a hard disk, a RAM memory, a flash memory or an optical storage medium like a DVD. Thus, Fig. 2 shows receiving and storing a TV signal as compressed data (usually MPEG-2, an MPEG-4 variant, or another format like Digital Video (DV)). An alternative embodiment is to bypass this compression step and to use a DVD or another digital carrier signal directly as the data storage means 203.

In the following, referring to Fig. 3, a flow chart 300 will be described. The flow chart 300 illustrates an offline processing scheme, i.e. a method of generating motion vectors, which method can be carried out in the first processing unit 101a. As can be seen in Fig. 3, a decompression unit 301 is provided with the compressed video data stored in the data storage means 203. The decompression means 301 partially decompress the compressed video data to generate video data. In other words, the decompression means 301 may be adapted to only partially, not fully, decompress the data; for instance, in case that the video data are in the MPEG format, the decompression unit 301 may decompress only the luminance data and disregard the color data. A motion vector estimation unit 302 estimates motion vector data from the video data. A compression means 303 generates compressed motion vector data from the motion vector data using a lossless compression algorithm. In an additional data storage means 304, the compressed motion vector data are stored.

Fig. 3 illustrates an embodiment for an offline processing job. The motion vector estimation can be a numerically expensive process. The estimation process can be started as soon as the data storage means 203 is (partially) filled and the system has spare computing resources. The resulting data, typically one (vx, vy) data pair for each 8x8 pixel block in each video frame, can be stored in the additional data storage means 304. To reduce the data size, an additional lossless compression step is carried out in the lossless compression unit 303.
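The small memory footprint can be made concrete: for, say, a 704x480 frame (an assumed example resolution), one signed (vx, vy) byte pair per 8x8 block amounts to 60 x 88 blocks, i.e. 10560 bytes per frame before compression. The sketch below uses zlib as a stand-in for the unspecified lossless codec of compression unit 303.

```python
import zlib
import numpy as np

# One (vx, vy) pair per 8x8 block of an assumed 704x480 frame:
# 60 rows x 88 columns of blocks, two signed bytes each.
rng = np.random.default_rng(1)
vectors = rng.integers(-8, 9, size=(60, 88, 2), dtype=np.int8)

raw = vectors.tobytes()                 # 10560 bytes per frame, uncompressed
packed = zlib.compress(raw)             # zlib stands in for the lossless codec
restored = np.frombuffer(zlib.decompress(packed), dtype=np.int8).reshape(60, 88, 2)

assert np.array_equal(restored, vectors)   # round trip is exactly lossless
```

Because the vector values span a small range, the lossless step typically shrinks the data further, which is the point of unit 303.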

In the following, referring to Fig. 4, a flow chart 400 will be described which illustrates the playback of film material (provided with a frame rate of 24 Hz) on a 60 Hz display, i.e. a replay with a frame rate of 60 Hz.

As can be seen in Fig. 4, the compressed video data stored in the data storage means 203 are provided to a decompression means 401a which may be an MPEG-2 decoder. The decompression means 401a generate 24 Hz video data from the compressed video data and provide these 24 Hz video data to a decision means 402. In the decision means 402, it is decided whether motion vector data are available, i.e. have been previously calculated by the motion vector estimation unit 302. In case that no motion vector data are available, a frame repetition means 403 generates 60 Hz video data from the 24 Hz video data using a simple repetition of frames. In other words, the frame repetition means 403 generates video data with a frequency of 60 Hz from video data having a frequency of 24 Hz by simply repeating the individual frames a plurality of times. The video data with the frequency of 60 Hz are provided to a 60 Hz display device 405, for instance a liquid crystal display (LCD), for displaying the processed video data at a rate of 60 Hz.
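Since 60/24 = 2.5, one common repetition pattern for this fallback is the classic 3:2 cadence, in which frames are shown alternately three and two times. The function below is purely illustrative of the frame repetition means 403; the patent only says that frames are repeated a plurality of times.

```python
def repeat_frames(frames_24hz):
    """Fallback up-conversion by frame repetition: each 24 Hz input frame
    is emitted three or two times in alternation (3:2 cadence), yielding
    exactly 60 output frames per 24 input frames."""
    out = []
    for i, frame in enumerate(frames_24hz):
        out.extend([frame] * (3 if i % 2 == 0 else 2))
    return out
```

One second of film (24 frames) thus becomes 12 x 3 + 12 x 2 = 60 display frames, at the cost of visible motion judder, which is exactly what the motion-compensated path avoids.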

However, in case the decision means 402 decides that motion vector data are available, as a consequence of a motion vector estimation which has been carried out by the motion vector estimation means 302, the video data with a frequency of 24 Hz are provided to a temporal up-conversion means 404 to generate video data with a frequency of 60 Hz with a quality which is improved with respect to the video data having a frequency of 24 Hz. These pre-processed video data with a frequency of 60 Hz are provided to the display device 405 to be displayed.

As can be further seen from Fig. 4, a decompression means 401b (which may implement a Huffman-like lossless decompression) is provided with the compressed motion vector data which are delivered from the additional data storage unit 304. The decompressed motion vector data are provided to the temporal up-conversion unit 404, which uses these motion vector data to provide the video data with a frequency of 60 Hz having an improved quality compared to the video data having a frequency of 24 Hz.

Thus, Fig. 4 shows an example of how the computed motion vectors can be used in a film rate up-conversion. As input, a 24 Hz film sequence is used (this is usually determined from an analysis of the 60 Hz interlaced TV signals by the "film detector") that shall be displayed on a 60 Hz display 405. With the help of the pre-computed motion vectors, the video can be up-converted in a simple way to 60 Hz. If such motion vectors are not available (for instance since the time interval between recording the data and replaying the data is too small for calculating the motion vectors), frame repetition is carried out.
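One possible form of such a motion-compensated up-conversion is block-wise temporal interpolation: an intermediate frame is assembled from the two surrounding originals, with each block fetched along its motion vector. The patent does not specify the interpolation, so the sketch below, including its sign convention for the vectors, is a simplified illustration of what the temporal up-conversion means 404 might do.

```python
import numpy as np

def interpolate_frame(prev, nxt, vectors, alpha, block=8):
    """Build an intermediate frame at temporal position alpha in (0, 1)
    between two originals, shifting each 8x8 block along its pre-computed
    motion vector and blending the two displaced source blocks."""
    out = np.zeros_like(prev, dtype=np.float64)
    h, w = prev.shape
    for by in range(h // block):
        for bx in range(w // block):
            vx, vy = vectors[by, bx]
            y0, x0 = by * block, bx * block
            # Fetch the block from each original, displaced proportionally
            # to its temporal distance from the interpolated position.
            yp = int(np.clip(y0 + round(vy * alpha), 0, h - block))
            xp = int(np.clip(x0 + round(vx * alpha), 0, w - block))
            yn = int(np.clip(y0 - round(vy * (1 - alpha)), 0, h - block))
            xn = int(np.clip(x0 - round(vx * (1 - alpha)), 0, w - block))
            out[y0:y0 + block, x0:x0 + block] = (
                (1 - alpha) * prev[yp:yp + block, xp:xp + block]
                + alpha * nxt[yn:yn + block, xn:xn + block])
    return out
```

With zero motion vectors this degenerates to a plain cross-fade; with correct vectors, moving objects are placed where they should be at the intermediate instant, which is what removes the judder of simple frame repetition.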

In the following, referring to Fig. 2, Fig. 3 and Fig. 5, a second embodiment of the method of processing video data according to the invention will be described, which is related to motion compensated de-interlacing. The method steps as described above referring to Fig. 2 and Fig. 3 are also carried out in the case of the second embodiment. The results of these calculations, denoted as "a" and "b", are provided, as shown in the flow chart 500, for de-interlacing using motion compensation according to the second embodiment.

The compressed video data which had been generated previously are provided to a decompression means 401a to generate video data 480i (as in NTSC (National Television System Committee) TV signals; similar examples exist for PAL and HDTV interlaced modes). The decision means 402 decides whether motion vector data are available, i.e. whether motion vectors have been generated by the motion vector estimation means 302. If no, a simple line repetition is carried out by a line repetition means 501 converting the video data 480i to video data 480p for display on a progressive display 503. If motion vector data are available, the video data 480i are provided to a motion compensated de-interlacing means 502 for generating video data 480p using previously estimated motion vector data. These video data 480p are then displayed on the progressive display device 503.
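The fallback path of the line repetition means 501 simply shows every field line twice to fill in the missing lines of the progressive frame. A minimal sketch (which, as an admitted simplification, ignores the half-line vertical offset of bottom fields):

```python
import numpy as np

def line_repeat(field):
    """Basic de-interlacing by line repetition (means 501): each line of a
    240-line field is duplicated, yielding a 480-line progressive frame."""
    return np.repeat(field, 2, axis=0)
```

This halves the effective vertical resolution, which is why the motion compensated path of means 502 is preferred whenever the pre-computed vectors are available.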

Thus, Fig. 5 shows a second example of using pre-calculated motion vectors to improve the quality of displaying video data. In this case, a 480i 60 Hz (NTSC, National Television System Committee) signal shall be de-interlaced for display on the 480p (progressive) display 503. For this purpose, a motion compensated de-interlacing algorithm (for example "majority select") is used. If no motion vectors are available, the system falls back on a very basic de-interlacing algorithm by just repeating all lines once.

In the following, referring to Fig. 2, Fig. 6, Fig. 7 and Fig. 8, a method of processing video data according to a third embodiment of the invention will be described which introduces motion adaptive de-interlacing.

The method steps as indicated in Fig. 2 are carried out in the case of the third embodiment like in the case of the first embodiment.

The algorithm shown in Fig. 6 is similar to the algorithm shown in Fig. 3 but differs in that the motion vector estimation means 302 are replaced by de-interlacing analysis means 601. The de-interlacing analysis means 601 convert video data into de-interlacing data, which de-interlacing data are provided to the lossless compression unit 303. Thus, Fig. 6 shows another embodiment of offline processing, introducing de-interlacing parameters, i.e. another way of de-interlacing which is still cheaper than motion-compensated de-interlacing, namely motion adaptive de-interlacing.

A flow chart 700 shown in Fig. 7 illustrates details of the functionality of the de-interlacing analysis means 601. Image pixels are provided to a moving decision means 701 for deciding whether a particular image is (locally) moving. If yes, a parameter "use_previous" is set to a value of "0" in a set means 702, and a determination means 703 determines an optimum interpolation direction L. If the moving decision means 701 has decided that the image is not locally moving, the parameter "use_previous" is set, by a set means 704, to a value of "1", and L is set to "0". A storing means 705 is connected with the outputs of the units 703, 704 and stores the values of "use_previous" and "L".
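The per-pixel decision of Fig. 7 can be summarized in a few lines of Python. This is only a sketch of the control flow; `find_direction` stands in for the unspecified search for the optimum interpolation direction L.

```python
def analyse_pixel(moving, find_direction):
    """Sketch of the de-interlacing analysis of Fig. 7: for a static pixel
    the previous field is reused (use_previous = 1, L = 0); for a moving
    pixel the best interpolation direction L is determined instead
    (use_previous = 0). `find_direction` is a placeholder callable."""
    if moving:
        use_previous = 0
        L = find_direction()   # optimum interpolation direction
    else:
        use_previous = 1
        L = 0                  # direction irrelevant when reusing the field
    return use_previous, L
```

The pair (use_previous, L) per pixel is exactly what the storing means 705 records as the de-interlacing parameters for later playback.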

According to the algorithm shown in Fig. 7, it is first determined whether the image is (locally) moving. If it is not, the pixel from the previous field is inserted. If it is, the system tries to determine the best interpolation direction. The motion detection and interpolation direction values are stored.

Fig. 8 shows a flow chart 800 illustrating a playback diagram belonging to the procedure described in Fig. 6.

A decompression means 401a is provided with data stored in the data storage means 203. After decompressing these data, the thus generated video data 480i are provided to a decision means 801 for deciding whether de-interlacing parameters (as determined according to the procedure shown in Fig. 7) are available. If not, the video data 480i are provided to a line repetition means 501 to generate video data 480p to be displayed on a progressive display device 503. If the decision means 801 decides that de-interlacing parameters are available, the video data 480i are converted to video data 480p using motion adaptive de-interlacing as performed by a motion adaptive de-interlacing means 802. The motion adaptive de-interlacing means 802 are supplied with de-interlacing parameters from a decompression unit 401b which receives data stored in the additional data storage means 304, see "c".

In the following, referring to Fig. 2 and Fig. 9 to Fig. 12, a method of processing video data according to a fourth embodiment of the invention will be described which introduces histogram modification enhancement.

The flow chart 900 shown in Fig. 9 is similar to the flow chart 300 shown in Fig. 3 and the flow chart 600 shown in Fig. 6 and illustrates how the compressed video data stored in the data storage means 203 (see Fig. 2) are processed to generate color histogram data. For this purpose, the compressed video data are partially decompressed in a decompression unit 301, wherein the decompressed data are provided as video data to a color histogram creation and analysis means 901. The color histogram creation and analysis means 901 is adapted to generate color histogram data by applying a corresponding algorithm. The results of this calculation step are provided as color histogram data to a lossless compression unit 303 which generates compressed color histogram data which are stored in the additional data storage means 304.

The method steps shown in Fig. 9 implement image enhancement using histogram modification, a well-known technique for increasing color contrast in cases where not all available colors are used at the same time. This technique is also used in TV systems, but there it suffers from the fact that the algorithm needs to "look into the future": the color use must be known well in advance to prevent temporal inconsistencies.

In the following, a flow chart 1000 as shown in Fig. 10 will be described which shows details of the method for determining and storing the color usage per scene, i.e. a histogram analysis. Firstly, image color pixels are provided to a measure means 1001 for measuring color histograms. In a decision means 1002, it is detected whether a scene change has occurred. If yes, a storing means 1003 stores the frame number as a scene boundary and allocates a new empty scene average histogram. The results of the storing means 1003 are provided to an adding means 1004 for adding the histogram data to the scene average. In case that no scene change is detected by the decision means 1002, the histogram data are provided to the adding means 1004 directly.
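A possible implementation of this per-scene histogram accumulation is sketched below. Fig. 10 does not fix the details, so the bin count, the scene-change criterion (L1 distance between consecutive normalized histograms) and the threshold are illustrative assumptions.

```python
import numpy as np

def scene_histograms(frames, bins=16, change_threshold=0.5):
    """Accumulate one average histogram per scene, as in Fig. 10: a large
    jump between consecutive frame histograms is treated as a scene change
    and starts a new scene. Returns (boundary frame numbers, averages)."""
    boundaries, averages = [], []
    prev_hist, acc, count = None, None, 0
    for idx, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=bins, range=(0, 256))
        hist = hist / hist.sum()                    # normalise per frame
        if prev_hist is None or np.abs(hist - prev_hist).sum() > change_threshold:
            if acc is not None:
                averages.append(acc / count)        # close the previous scene
            boundaries.append(idx)                  # frame number of boundary
            acc, count = np.zeros(bins), 0
        acc += hist
        count += 1
        prev_hist = hist
    if acc is not None:
        averages.append(acc / count)                # close the final scene
    return boundaries, averages
```

Because the whole recording is available offline, every scene's complete color use is known before playback starts, which resolves the "look into the future" problem mentioned above.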

Fig. 11 shows a flow chart 1100 illustrating, for different frame references, corresponding histogram blocks, i.e. (uncompressed) histogram data stored in a memory. Thus, Fig. 11 gives an overview of the storage of color data. Per scene, one histogram is stored. The frame numbers that signify boundaries between scenes are also stored, so the whole provides a mapping of frame numbers to average histogram data.
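The mapping of Fig. 11 can be sketched as a simple boundary look-up; the data layout and names below are illustrative only.

```python
import bisect

def scene_histogram_for(frame_number, boundaries, histograms):
    """Mapping of Fig. 11: sorted scene-boundary frame numbers plus one
    average histogram per scene give a frame-number -> histogram look-up."""
    scene = bisect.bisect_right(boundaries, frame_number) - 1
    return histograms[scene]
```

For example, with boundaries at frames 0, 120 and 300, frame 150 falls into the second scene and retrieves that scene's average histogram.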

In the following, referring to Fig. 12, a flow chart 1200 will be described illustrating a playback with color and brightness enhancement using histogram modification. Similarly as shown in Fig. 3, Fig. 4 and Fig. 8, Fig. 12 shows that a decompression unit 401a is provided with data stored in the data storage means 203 shown in Fig. 2. Further, data stored in the additional data storage means 304, shown in Fig. 9, are provided to the decompression unit 401b, as shown in Fig. 12.

Data provided by the data storage means 203 are decompressed in the decompression unit 401a and are provided to a decision means 1201 checking whether histogram data are available (i.e. whether histogram data have been produced by the color histogram creation and analysis means 901). If no, the video data are displayed on a display device 1202 (a liquid crystal display, LCD). If yes, the video data are processed by an enhancement means 1204 using histogram modification, which is supplied with current scene histogram data from a look-up means 1203 that looks up the histogram for the current scene. This look-up means 1203 contains color histogram data provided by the decompression unit 401b. The enhancement means 1204 enhance the provided video data and display the video data on the display device 1202. Thus, color and brightness enhancement using histogram modification is carried out.
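As one example of such a histogram modification, a look-up table for a whole scene may be derived from the pre-computed scene histogram, so that every frame of the scene is remapped consistently and temporal flicker is avoided. Classic histogram equalization is used below as a stand-in; the patent does not specify which modification the enhancement means 1204 applies.

```python
import numpy as np

def equalise_with_scene_histogram(frame, scene_hist):
    """Enhancement by histogram modification: build one look-up table from
    the pre-computed per-scene histogram (classic equalisation via the
    cumulative distribution) and remap the frame's pixel values with it."""
    cdf = np.cumsum(scene_hist)
    cdf = cdf / cdf[-1]                              # normalise to [0, 1]
    bins = len(scene_hist)
    lut = np.round(cdf * 255).astype(np.uint8)       # one entry per bin
    bin_index = np.clip(frame // (256 // bins), 0, bins - 1)
    return lut[bin_index]
```

Because the table depends only on the scene-wide histogram, all frames of a scene receive the same remapping, which is precisely the temporal consistency the offline analysis of Fig. 9 and Fig. 10 makes possible.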

It should be noted that the term "comprising" does not exclude other elements or steps and that "a" or "an" does not exclude a plurality. Also, elements described in association with different embodiments may be combined.

It should also be noted that reference signs in the claims shall not be construed as limiting the scope of the claims.