Title:
METHODS AND APPARATUSES FOR MULTI-PASS ADAPTIVE QUANTIZATION
Document Type and Number:
WIPO Patent Application WO/2015/065669
Kind Code:
A1
Abstract:
A video encoding method for encoding a stream of baseband video data. The stream of baseband video data is received as a plurality of coding units. Statistics of each coding unit in the plurality of coding units are gathered. A quantization parameter (QP) for each coding unit is determined from the corresponding statistics. The coding unit is trial encoded using the QP to generate a trial encoded coding unit; and the QP is updated based on the trial encoded coding unit. Trial encoding the coding unit and updating the QP are repeated until the trial encoded coding unit meets a predetermined criterion. Then the coding unit is final encoded using the updated QP to generate a final encoded coding unit.

Inventors:
NOVOTNY PAVEL (CA)
Application Number:
PCT/US2014/059661
Publication Date:
May 07, 2015
Filing Date:
October 08, 2014
Assignee:
MAGNUM SEMICONDUCTOR INC (US)
International Classes:
H04N19/124; H04N19/50
Foreign References:
US20130121403A12013-05-16
US20110051806A12011-03-03
US20120230400A12012-09-13
US20120076202A12012-03-29
US20090257489A12009-10-15
Attorney, Agent or Firm:
SWETT, Michael et al. (IP Docket - SE, 701 5th Ave, Suite 610, Seattle, Washington, US)
Claims:
CLAIMS

1. A video encoding method for encoding a stream of baseband video data, the method comprising:

receiving the stream of baseband video data as a plurality of coding units; and for each coding unit in the plurality of coding units:

gathering statistics of the coding unit;

determining a quantization parameter (QP) for the coding unit from the corresponding statistics;

trial encoding the coding unit using the QP to generate a trial encoded coding unit;

updating the QP based on the trial encoded coding unit;

repeating trial encoding the coding unit and updating the QP until the trial encoded coding unit meets a predetermined criterion; and

final encoding the coding unit using the updated QP to generate a final encoded coding unit.

2. The method of claim 1, wherein each coding unit of the plurality of coding units is one of

a frame of the stream of baseband video data;

a slice of the stream of baseband video data;

a macroblock of the stream of baseband video data; or

a combination or sub-combination thereof.

3. The method of claim 1, wherein:

the stream of baseband video data includes baseband video data including a plurality of frames;

each frame of baseband video data includes a plurality of coding units; and

for each coding unit in the plurality of coding units, trial encoding the coding units includes intra frame encoding the coding unit using the QP to generate the trial encoded coding unit.

4. The method of claim 3, wherein for each coding unit in the plurality of coding units, final encoding the coding unit includes one of: predictive frame encoding the coding unit using the updated QP to generate a final encoded coding unit; or

bidirectional predictive frame encoding the coding unit using the updated QP to generate a final encoded coding unit.

5. The method of claim 3, wherein for each coding unit in the plurality of coding units:

trial encoding the coding unit includes one of:

MPEG-2 encoding the coding unit using the QP to generate the trial encoded coding unit;

MPEG-4 encoding the coding unit using the QP to generate the trial encoded coding unit; or

H.264 encoding the coding unit using the QP to generate the trial encoded coding unit; and

final encoding the coding unit includes one of:

MPEG-2 encoding the coding unit using the updated QP to generate a final encoded coding unit;

MPEG-4 encoding the coding unit using the updated QP to generate a final encoded coding unit; or

H.264 encoding the coding unit using the updated QP to generate a final encoded coding unit.

6. The method of claim 3, wherein:

for each coding unit in the plurality of coding units, updating the QP includes:

calculating an average bit size of the plurality of trial encoded coding units of a trial encoded frame including the trial encoded coding unit;

determining a target bit size of the trial encoded coding unit based on:

the average bit size of the plurality of trial encoded coding units of the trial encoded frame including the trial encoded coding unit;

comparing a bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit; and

determining the updated QP for the coding unit based on the corresponding trial statistics and the current QP by: reducing the QP when the bit size of the trial encoded coding unit is less than the target bit size; or

increasing the QP when the bit size of the trial encoded coding unit is greater than the target bit size.

7. The method of claim 3, wherein, for each coding unit in the plurality of coding units, updating the QP includes:

calculating an average bit size of the plurality of trial encoded coding units of a trial encoded frame including the trial encoded coding unit;

calculating a distortion between:

the trial encoded coding unit; and

the corresponding coding unit of the stream of baseband video data;

determining a target bit size of the trial encoded coding unit based on:

the average bit size of the plurality of trial encoded coding units of the trial encoded frame including the trial encoded coding unit; and

the distortion between the trial encoded coding unit and the corresponding data packet of the stream of baseband video data;

comparing a bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit; and

determining the updated QP for the data packet based on the corresponding trial statistics and the current QP by:

reducing the QP when the bit size of the trial encoded coding unit is less than the target bit size; or

increasing the QP when the bit size of the trial encoded coding unit is greater than the target bit size.

9. The method of claim 8, wherein calculating the distortion between the trial encoded coding unit and the corresponding coding unit of the stream of baseband video data uses at least one of:

a sum of absolute differences between the trial encoded coding unit and the corresponding coding unit of the stream of baseband video data;

a sum of squared differences between the trial encoded coding unit and the corresponding coding unit of the stream of baseband video data; or

a structural similarity index between the trial encoded coding unit and the corresponding coding unit of the stream of baseband video data.

10. The method of claim 1, wherein, for each coding unit in the plurality of coding units, updating the QP includes:

gathering trial statistics of the trial encoded coding unit; and

determining the updated QP for the coding unit based on the corresponding trial statistics and the current QP.

11. The method of claim 10, wherein, for each coding unit in the plurality of coding units:

gathering trial statistics of the trial encoded coding unit includes comparing a bit size of the trial encoded coding unit to a target bit size; and

determining the updated QP for the coding unit based on the corresponding trial statistics and the current QP includes:

reducing the QP when the bit size of the trial encoded coding unit is less than the target bit size; or

increasing the QP when the bit size of the trial encoded coding unit is greater than the target bit size.

12. The method of claim 1, wherein the predetermined criterion is one of:

an absolute difference between a bit size of the trial encoded coding unit and a target bit size of the trial encoded coding unit is less than a predetermined value;

a ratio of the bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit is within a predetermined range; or

the QP of the coding unit has been updated a predetermined number of times.

13. A video encoder comprising:

an input buffer to receive a stream of baseband video data one coding unit at a time;

a first processor module coupled to the input buffer, the first processor module configured to gather statistics of the received coding unit and determine a quantization parameter (QP) for the received coding unit from the gathered statistics;

a trial encoding module coupled to the input buffer and the first processor module, the trial encoding module adapted to encode the coding unit using the QP to generate a trial encoded coding unit;

a second processor module coupled to the first processor module and the trial encoding module, the second processor module adapted to:

determine if the trial encoded coding unit meets a predetermined criterion; and

if the trial encoded coding unit meets the predetermined criterion, set a final QP equal to the current QP, otherwise:

update the QP based on the trial encoded coding unit; and

instruct the trial encoding module to repeat trial encoding the coding unit using the updated QP; and

a final encoding module coupled to the input buffer and the second processor module, the final encoding module adapted to encode the coding unit using the updated QP to generate a final encoded coding unit.

14. The video encoder of claim 13, wherein:

the stream of baseband video data includes baseband video data including a plurality of frames;

each frame of the plurality of frames includes a plurality of coding units; and the trial encoding module is an intra frame encoding module.

15. The method of claim 14, wherein the final encoding module includes one of:

a predictive frame encoding module; or

a bidirectional predictive frame encoding module.

16. The video encoder of claim 14, wherein:

the trial encoding module includes one of:

an MPEG-4 encoding module; or

an H.264 encoding module; and

the final encoding module includes one of:

an MPEG-4 encoding module; or

an H.264 encoding module.

17. The video encoder of claim 14, wherein:

the second processor module is further adapted to:

calculate an average bit size of the plurality of trial encoded coding units of a trial encoded frame including the trial encoded coding unit;

determine a target bit size of the trial encoded coding unit based on:

the average bit size of the plurality of trial encoded coding units of the trial encoded frame including the trial encoded coding unit; and

compare a bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit;

the predetermined criterion is one of:

an absolute difference between the bit size of the trial encoded coding unit and the target bit size of the trial encoded coding unit is less than a predetermined value; or

a ratio of the bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit is within a predetermined range; and

updating the QP includes

reducing the QP when the bit size of the trial encoded coding unit is less than the target bit size; and

increasing the QP when the bit size of the trial encoded coding unit is greater than the target bit size.

18. The video encoder of claim 14, wherein:

the second processor module is further adapted to:

calculate an average bit size of the plurality of trial encoded coding units of a trial encoded frame including the trial encoded coding unit;

calculate a distortion between:

the trial encoded coding unit; and

the corresponding coding unit of the stream of baseband video data;

determine a target bit size of the trial encoded coding unit based on:

the average bit size of the plurality of trial encoded coding units of the trial encoded frame including the trial encoded coding unit; and

the distortion between the trial encoded coding unit and the corresponding coding unit of the stream of baseband video data; and

compare a bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit;

the predetermined criterion is one of:

an absolute difference between the bit size of the trial encoded coding unit and the target bit size of the trial encoded coding unit is less than a predetermined value; or

a ratio of the bit size of the trial encoded coding unit to the target bit size of the trial encoded coding unit is within a predetermined range; and

updating the QP includes:

reducing the QP when the bit size of the trial encoded coding unit is less than the target bit size; and

increasing the QP when the bit size of the trial encoded coding unit is greater than the target bit size.

19. A video encoding method for encoding a macroblock (MB) of baseband video data, the method comprising:

receiving the MB;

gathering MB statistics of the MB;

determining a first quantization parameter (QP) for the MB based on the MB statistics;

trial encoding the MB using the first QP to generate a first encoded MB;

determining a first visual quality (VQ) of the first encoded MB;

determining a second QP for the MB from the first QP and the first VQ;

trial encoding the MB using the second QP to generate a second encoded MB;

determining a second VQ of the second encoded MB;

if the first VQ is better than the second VQ, setting a final QP equal to the first QP, otherwise determining the final QP for the MB from the second QP and the second VQ; and

final encoding the MB using the final QP to generate a final encoded MB.

20. The method of claim 19, wherein:

trial encoding the MB using the first QP includes intra frame encoding the MB using the first QP to generate the first encoded MB;

trial encoding the MB using the second QP includes intra frame encoding the MB using the second QP to generate the second encoded MB; and

final encoding the MB includes one of: predictive frame encoding the MB using the final QP to generate the final encoded MB; or

bidirectional predictive frame encoding the MB using the final QP to generate the final encoded MB.

21. The method of claim 19, wherein:

trial encoding the MB using the first QP includes one of:

MPEG-4 encoding the MB using the first QP to generate the first encoded MB; or

H.264 encoding the MB using the first QP to generate the first encoded MB;

trial encoding the MB using the second QP includes one of:

MPEG-4 encoding the MB using the second QP to generate the second encoded MB; or

H.264 encoding the MB using the second QP to generate the second encoded MB; and

final encoding the MB includes one of:

MPEG-4 encoding the MB using the final QP to generate the final encoded MB; or

H.264 encoding the MB using the final QP to generate the final encoded MB.

22. The method of claim 19, wherein:

determining the first VQ of the first encoded MB includes:

determining a bit size of the first encoded MB; and

calculating a ratio of a target bit size to the bit size of the first encoded MB;

determining the second QP for the MB includes:

calculating a first delta QP, the first delta QP being proportional to a logarithm of the ratio of the target bit size to the bit size of the first encoded MB;

comparing the first delta QP to a first delta QP range:

if the first delta QP is less than the first delta QP range, setting the first delta QP equal to a minimum first delta QP value; and

if the first delta QP is greater than the first delta QP range, setting the first delta QP equal to a maximum first delta QP value; and

subtracting the first delta QP from the first QP to determine the second QP;

determining the second VQ of the second encoded MB includes:

determining a bit size of the second encoded MB; and

calculating a ratio of the target bit size to the bit size of the second encoded MB; and

determining the final QP for the MB includes:

calculating a second delta QP, the second delta QP being proportional to a logarithm of the ratio of the target bit size to the bit size of the second encoded MB;

comparing the second delta QP to a second delta QP range:

if the second delta QP is less than the second delta QP range, setting the second delta QP equal to a minimum second delta QP value; and

if the second delta QP is greater than the second delta QP range, setting the second delta QP equal to a maximum second delta QP value; and

subtracting the second delta QP from the second QP to determine the final QP.

Description:
METHODS AND APPARATUSES FOR MULTI-PASS ADAPTIVE QUANTIZATION

CROSS-REFERENCE

[001] This application claims priority to U.S. Non-Provisional Application No. 14/071 41 filed November 4, 2013, which application is incorporated herein by reference, in its entirety, for any purpose.

TECHNICAL FIELD

[002] Embodiments of the present invention relate generally to video encoding, and examples of adaptive quantization for encoding are described herein. Examples include methods of and apparatuses for adaptive quantization utilizing feedback to ensure adequate visual quality.

BACKGROUND

[003] Video encoders are often used to encode baseband video data, thereby reducing the number of bits used to store and transmit the video. In most cases the video data is arranged in coding units representing portions of the overall baseband video data, for example: a frame; a slice; or a macroblock (MB). A typical video encoder may include a macroblock-based block encoder outputting a compressed bitstream. This encoder may be based on a number of standard codecs, such as MPEG-2, MPEG-4, or H.264. A main bitrate and visual quality (VQ) driving factor in such example video encoders is typically the MB level quantization parameter (QP). A number of standard techniques may be used to select the QP for each MB.

[004] In example video encoders, the QP determines a scale for encoding the video data. Generally, smaller QPs lead to larger amounts of data being retained during quantization processes and larger QPs lead to smaller amounts of data being retained during quantization processes.

SUMMARY

[005] Example methods for encoding a stream of baseband video data are disclosed herein. An example method may include receiving the stream of baseband video data as a plurality of coding units, and for each coding unit in the plurality of coding units: gathering statistics of the coding unit, determining a quantization parameter (QP) for the coding unit from the corresponding statistics, trial encoding the coding unit using the QP to generate a trial encoded coding unit, updating the QP based on the trial encoded coding unit, repeating trial encoding the coding unit and updating the QP until the trial encoded coding unit meets a predetermined criterion, and final encoding the coding unit using the updated QP to generate a final encoded coding unit.

[006] Example video encoders are disclosed herein. An example video encoder may include an input buffer to receive a stream of baseband video data one coding unit at a time. The example video encoder may further include a first processor module coupled to the input buffer. The first processor module may be configured to gather statistics of the received coding unit and determine a quantization parameter (QP) for the received coding unit from the gathered statistics. The example video encoder may further include a trial encoding module coupled to the input buffer and the first processor module. The trial encoding module may be adapted to encode the coding unit using the QP to generate a trial encoded coding unit. The example video encoder may further include a second processor module coupled to the first processor module and the trial encoding module. The second processor module may be adapted to determine if the trial encoded coding unit meets a predetermined criterion and, if the trial encoded coding unit meets the predetermined criterion, set a final QP equal to the current QP, otherwise: update the QP based on the trial encoded coding unit, and instruct the trial encoding module to repeat trial encoding the coding unit using the updated QP. The example video encoder may further include a final encoding module coupled to the input buffer and the second processor module. The final encoding module may be adapted to encode the coding unit using the updated QP to generate a final encoded coding unit.

[007] Example video encoding methods for encoding a macroblock (MB) of baseband video data are disclosed herein. Example methods may include receiving the MB, gathering MB statistics of the MB, determining a first quantization parameter (QP) for the MB based on the MB statistics, trial encoding the MB using the first QP to generate a first encoded MB, determining a first visual quality (VQ) of the first encoded MB, determining a second QP for the MB from the first QP and the first VQ, trial encoding the MB using the second QP to generate a second encoded MB, determining a second VQ of the second encoded MB, if the first VQ is better than the second VQ, setting a final QP equal to the first QP, otherwise determining the final QP for the MB from the second QP and the second VQ, and final encoding the MB using the final QP to generate a final encoded MB.

BRIEF DESCRIPTION OF THE FIGURES

[008] Fig. 1 is a flowchart of an example video encoding method including an example of a multi-pass adaptive quantization process according to the present disclosure;

[009] Fig. 2A is a schematic block diagram of an example video encoding system incorporating a multi-pass adaptive quantization module according to the present disclosure;

[0010] Fig. 2B is a schematic block diagram of an example multi-pass adaptive quantization module according to the present disclosure.

[0011] Fig. 3 is a flowchart illustrating an example video encoding method including a multi-pass adaptive quantization process according to the present disclosure.

[0012] Fig. 4 is a schematic illustration of a media delivery system according to the present disclosure.

[0013] Fig. 5 is a schematic illustration of a video distribution system that may make use of video encoding systems described herein.

DETAILED DESCRIPTION

[0014] Various example embodiments described herein include multi-pass adaptive quantization techniques. Examples of multi-pass adaptive quantization techniques described herein may advantageously support the provision (e.g., generation) of encoded bitstreams that have a more uniform visual quality. Example multi-pass adaptive quantization techniques may advantageously allow the properties of the codec (e.g. encoder) to be taken into account during the encoding procedure by providing feedback into the adaptive quantization process that may calibrate the adaptive quantization at an operating state of the encoder. This may result in a robust way of delivering the expected VQ throughout the encoded video data. In this way, uniform VQ may be achieved by encoding each coding unit (e.g. macroblock) with a suitable number of bits.

[0015] Baseband video streams typically include a plurality of pictures (e.g., fields, frames) of video data. Video encoding systems often separate these coding units further into smaller coding units such as macroblocks. Coding units include, but are not limited to, sequences, slices, macroblocks, pictures, groups of pictures, and blocks.

[0016] Video encoders generally perform bit distribution (e.g. determine a number of bits to be used to encode respective portions of a video stream). The bit distribution may be designed to achieve a balanced visual quality. Typical approaches to bit distribution may utilize adaptive quantization methods operating on statistics extracted from the video while not accounting for the properties of the encoder itself. Typically, the baseband video is analyzed and statistics about the video are gathered. These statistics may be used to calculate the QP for each coding unit (e.g. MB). Once the QP for each MB is determined, the MB may be encoded. However, this approach may result in a less than reliable VQ. For example, areas of high texture or particular significance to the viewer, such as faces, may be encoded with too little information to meet a desired VQ level.

[0017] Example methods and video encoders described herein include feedback in example adaptive quantization procedures. The feedback may advantageously improve VQ and/or produce a more even VQ across all or a portion of a bitstream.

[0018] Fig. 1 is a flowchart illustrating an example video encoding method described herein. In this example method a stream of baseband video data is received at act 100. The stream of baseband video data may be received directly from the video source (e.g., a video camera, etc.) or it may be accessed from a data storage unit or streamed from a network, such as the Internet. As noted above, the stream of baseband video data may include baseband video data representing a plurality of frames of video data and each of these frames may include a number of coding units, such as macroblocks.

[0019] For each coding unit, statistics of the coding unit are gathered at act 102, and an initial quantization parameter (QP) for the coding unit is determined from the corresponding statistics at act 104. Acts 102 and 104 may be performed using a number of methods known in the art.
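
By way of a non-limiting illustration only (this listing is not part of the original disclosure; the function name, the base QP, the 0-51 clamp, and the variance-based normalization are assumptions), one well-known way to map gathered statistics to an initial QP is to modulate a base QP by a local activity measure, in the spirit of MPEG-2 Test Model 5 adaptive quantization:

#include <math.h>

/* Illustrative only: derive an initial QP for one coding unit from its pixel
 * variance ("activity"), scaling a base QP so that flat regions receive a finer
 * quantizer and busy regions a coarser one.  base_qp, the 0-51 range, and the
 * normalization are assumed values, not taken from the disclosure. */
static int initial_qp_from_activity(double unit_variance,
                                    double avg_frame_variance,
                                    int base_qp)
{
    double act     = 1.0 + unit_variance;        /* local activity           */
    double avg_act = 1.0 + avg_frame_variance;   /* frame-average activity   */
    /* Normalized activity lies in [0.5, 2.0], as in TM5-style adaptive quantization. */
    double n_act   = (2.0 * act + avg_act) / (act + 2.0 * avg_act);
    int qp = (int)lround(base_qp * n_act);

    if (qp < 0)  qp = 0;                         /* clamp to an H.264-style QP range */
    if (qp > 51) qp = 51;
    return qp;
}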

[0020] The coding unit is trial encoded, at act 106, using the initial QP to provide (e.g., generate) a trial encoded coding unit. This trial encoding may be performed using any standard video encoding codec, such as MPEG-2, MPEG-4, or H.264. It may be useful to use the same codec for trial encoding the coding units as is to be used for the final encoding of the coding units; however, this may not be necessary in all examples. For example, it may be useful to use a less processor-intensive codec for the trial encoding process to increase throughput of the example video encoding method.

[0021] The resulting trial encoded coding unit is analyzed to update the QP at act 108. Updating the QP may include decoding and gathering trial statistics of the trial encoded coding unit similar to the statistics gathered for the baseband coding unit at act 102. These statistics may be compared to the baseband video data statistics to determine how well the trial encoded coding unit matches the baseband coding unit. The QP may then be updated depending on whether the statistics from the trial encoded coding unit match the statistics from the baseband coding unit closely enough. If the statistics indicate that the VQ is likely to have been significantly degraded by the trial encoding process, then the QP for that coding unit may be decreased to increase the number of bits for that coding unit.

[0022] Another example embodiment is based on the number of bits of each coding unit within the encoded video data. As discussed above, the number of bits may scale inversely with the QP. It is noted, however, that for many video encoding codecs, the number of bits depends not only on the QP, but also on the complexity of the image being represented by the encoded coding unit. This may lead to a situation in which, for many codecs, encoded coding units with a similar bit rate may have a similarly perceived VQ. Thus, another example approach to ensuring adequate VQ may include attempting to approximately normalize the number of bits for each coding unit within a frame or other coding unit of encoded video data.

[0023] In an example embodiment, gathering trial statistics of the trial encoded coding unit may include comparing a bit size of a trial encoded coding unit to a target bit size. The updated QP may then be determined by reducing the QP when the bit size is less than the target bit size, or increasing the QP when the bit size is greater than the target bit size.
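
A minimal sketch of this update rule follows (illustrative only; the one-unit step per update and the 0-51 range are assumptions, as the disclosure leaves the magnitude of the adjustment open):

/* Sketch: nudge the QP toward the target bit size.  A smaller QP retains more
 * data (larger coded size), a larger QP less, so the QP moves in the direction
 * that shrinks the size error.  The +/-1 step is an assumed value. */
static int update_qp_from_bit_size(int current_qp, long trial_bits, long target_bits)
{
    if (trial_bits < target_bits && current_qp > 0)
        current_qp -= 1;      /* under target: spend more bits, reduce the QP */
    else if (trial_bits > target_bits && current_qp < 51)
        current_qp += 1;      /* over target: spend fewer bits, raise the QP  */
    return current_qp;        /* bounded to an assumed H.264-style 0-51 range */
}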

[0024] The target bit size for the trial encoded coding units may be determined in a number of ways. One example approach to determining the target bit size may be to use the same target bit size for all of the trial encoded coding units in the frame. This common target bit size may be a preselected number based on experience, or it may be the average bit size for all of the trial encoded coding units within a frame or other coding unit.

[0025] Alternatively, the target bit size may vary for each trial encoded coding unit. These variable target bit sizes may be preselected based on one or more properties of the corresponding baseband coding unit, such as a common target bit size (e.g., frame average bit size) scaled using a weighting factor. The weighting factor may be calculated based on a property of the corresponding baseband coding unit, or may be based on a comparison of the respective images generated by the corresponding baseband coding unit and the trial encoded coding unit. One example of performing this comparison is to determine the distortion between the baseband coding unit and the trial encoded coding unit. Numerous methods known in the art may be used to calculate distortion, including, but not limited to: calculating a sum of absolute differences (SAD); calculating a sum of the squared differences (SSD); determining a structural similarity index (SSIM); or combinations thereof.
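
For example, the SAD and SSD measures named above could be computed over a reconstructed (decoded) trial coding unit as in the following sketch; the 16x16 macroblock size and 8-bit samples are assumptions introduced for the illustration:

#include <stdint.h>
#include <stdlib.h>

/* Sum of absolute differences between the baseband samples and the reconstructed
 * trial-encoded samples of one 16x16 macroblock. */
static long mb_sad(const uint8_t *src, const uint8_t *rec, int stride)
{
    long sad = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++)
            sad += labs((long)src[y * stride + x] - (long)rec[y * stride + x]);
    return sad;
}

/* Sum of squared differences over the same macroblock. */
static long mb_ssd(const uint8_t *src, const uint8_t *rec, int stride)
{
    long ssd = 0;
    for (int y = 0; y < 16; y++)
        for (int x = 0; x < 16; x++) {
            long d = (long)src[y * stride + x] - (long)rec[y * stride + x];
            ssd += d * d;
        }
    return ssd;
}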

[0026] The trial encoded coding unit may be evaluated to determine whether it meets a predetermined criterion at act 110. This evaluation is illustrated in Fig. 1 as occurring following updating of the QP at act 108. This ordering is illustrative only, based on examples in which the predetermined criterion may be based on information gathered while updating the QP and where the updated QP may be useful to improve perceived VQ of the resulting final encoded video even when the predetermined criterion has been met; however, it is contemplated that the act 110 may occur before the QP is updated in some examples and that in such an embodiment the QP may not be updated once the predetermined criterion has been met. As illustrated in Fig. 1, if the trial encoded coding unit meets the predetermined criterion, the coding unit is final encoded, at act 112, and if there is another coding unit, as determined at act 114, the example process is repeated for that coding unit. This process may be iteratively performed until the predetermined criterion is met.

[0027] The predetermined criterion may involve a comparison between the baseband coding unit and the trial encoded coding unit, such as a distortion calculation being less than a predetermined value. Alternatively, the predetermined criterion may involve a comparison between the bit size of the trial encoded coding unit and the target bit size of the trial encoded coding unit, such as their absolute difference being less than a predetermined value or their ratio being within a predetermined range. The predetermined criterion may also or instead include that the QP of the coding unit has been updated a predetermined number of times. Further, it is contemplated that rather than a single predetermined criterion, there may be several criteria, any of which may cause the trial encoding and QP updating cycle (acts 106, 108, and 110) to end and the coding unit to be finally encoded using the most recently updated QP, e.g. the final QP. For example, in one example embodiment, the coding unit may be trial encoded and the QP updated until either the bit size of the trial encoded coding unit is close enough to its target bit size or the QP has been updated a predetermined number of times (e.g. four times).
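
The trial-encode/update cycle of acts 106, 108, and 110 may then be sketched as the following control flow (illustrative only; trial_encode() stands in for whatever codec is used, the tolerance is an assumed criterion, and the four-pass cap echoes the example given above):

#include <stdlib.h>

/* Sketch of the multi-pass loop for one coding unit.  trial_encode() returns
 * the coded bit size of the unit at the given QP; it is a placeholder for the
 * trial encoding step described in the text. */
typedef long (*trial_encode_fn)(const void *unit, int qp);

static int final_qp_for_unit(const void *unit, int initial_qp, long target_bits,
                             trial_encode_fn trial_encode)
{
    const int  max_updates = 4;                /* assumed cap on QP updates         */
    const long tolerance   = target_bits / 10; /* assumed absolute-difference bound */
    int qp = initial_qp;

    for (int pass = 0; pass < max_updates; pass++) {
        long bits = trial_encode(unit, qp);    /* act 106: trial encode the unit    */
        if (bits < target_bits && qp > 0)      /* act 108: update the QP            */
            qp--;                              /*   under budget: spend more bits   */
        else if (bits > target_bits && qp < 51)
            qp++;                              /*   over budget: spend fewer bits   */
        if (labs(bits - target_bits) < tolerance)
            break;                             /* act 110: criterion met            */
    }
    return qp;                                 /* used by final encoding, act 112   */
}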

[0028] Final encoding of the coding unit using the updated QP, at act 112, generates a final encoded coding unit. This final encoding may employ any coding standard, such as MPEG-2, MPEG-4, or H.264.

[0029] It is noted that many codecs not only utilize information within a frame of video data to reduce the size of the encoded video data (compared to the baseband video data), but may also utilize information about frames (or other coding units) that come either temporally before or after a current frame to further compress the video data, or frames (or other coding units) having a particular spatial relationship to the current coding unit. In video encoders using such codecs, encoded frames of video data may be intra (I) frames, predictive (P) frames, or bidirectional predictive (B) frames. However, perceived VQ of a frame may be more affected by information within the frame itself. Additionally, I-frame encoding is often less processor-intensive than B- or P-frame encoding. Therefore, it may be useful to restrict the iterative trial encoding of the coding unit, at act 106, to I-frame encoding. Then any of I-frame, B-frame, or P-frame encoding may be used for final encoding of the coding unit, at act 112.

[0030] Figs. 2A and 2B illustrate an example video encoding system. As illustrated in Fig. 2A, the example video encoder includes: input buffer 202 which receives a stream of baseband video data 200; a first processor module 204 coupled to input buffer 202; multi-pass adaptive quantization module 206 coupled to input buffer 202 and first processor module 204; and final encoding module 208 coupled to input buffer 202 and multi-pass adaptive quantization module 206. Fig. 2B illustrates multi-pass adaptive quantization module 206 in greater detail. Various arrows in these figures illustrate transfer of data between elements.

[0031] It is noted that the various elements of the example video encoder of Figs. 2A and 2B may be built from electronic circuitry components and may include one or more application specific integrated circuits (ASICs). Alternatively, one or more of these elements may be implemented using one or more computing systems programmed to perform the functions of the element. The computing systems may include one or more processing units (e.g. processors) and electronic media encoded with executable instructions for performing the functions of one or more elements.

[0032] Input buffer 202 receives baseband video data 200 and transfers it to first processor module 204, multi-pass adaptive quantization module 206, and final encoding module 208. The transfer may proceed one coding unit at a time in some examples. In various example embodiments, these coding units may be frames, slices, or MBs of video data. It is noted that in some example embodiments, input buffer 202 may be a frame grabber, which receives baseband video data one frame at a time. An example frame grabber may transfer baseband video data one frame at a time to first processor module 204 and multi-pass adaptive quantization module 206, but only transfer one slice or MB at a time to final encoding module 208. This may allow the first processor module to determine frame wide statistics and the multi-pass adaptive quantization module to use those frame wide statistics and/or generate post encoding frame wide statistics, even though the encoding is being done at a slice or MB level.

[0033] First processor module 204 is adapted to gather statistics of the received coding unit and determine a QP for the received coding unit from the gathered statistics. Examples of statistics that may be gathered and methods of determining a QP for the received coding unit from the gathered statistics are described in detail above with reference to the example method of Fig. 1.

[0034] Multi-pass adaptive quantization module 206 includes: trial encoding module 212 coupled to input buffer 202 and first processor module 204; and second processor module 214 coupled to first processor module 204 and trial encoding module 212. Transfer of data, such as baseband coding unit statistics and initial QPs, from first processor module 204 to both trial encoding module 212 and second processor module 214 is illustrated by data arrows 204'. Data arrow 206' illustrates transfer of final QPs from multi-pass adaptive quantization module 206 to final encoding module 208.

[0035] Trial encoding module 212 may encode the coding unit transferred from input buffer 202 using the current QP (either the initial QP transferred from first processor module 204 or the updated QP transferred from second processor module 214) to generate a trial encoded coding unit. As described in detail above, with reference to the example method of Fig. 1, trial encoding module 212 may include any standard codec, such as MPEG-2, MPEG-4, or H.264. Although not necessary, it may be useful for trial encoding module 212 to only be adapted to perform I-frame video encoding.

[0036] The trial encoded coding unit is passed to second processor module 214, which is adapted to determine if the trial encoded coding unit meets a predetermined criterion. If the trial encoded coding unit meets the predetermined criterion, second processor module 214 sets the final QP to be equal to the current QP; otherwise, second processor module 214 updates the QP based on the trial encoded coding unit. If the predetermined criterion was met, second processor module 214 transfers the final QP to final encoding module 208. If the predetermined criterion was not met, second processor module 214 transfers the updated QP to trial encoding module 212 and instructs it to repeat trial encoding the coding unit using the updated QP. Examples of these operations that may be performed by second processor module 214 are also discussed above with reference to the example method of Fig. 1.

[0037] Final encoding module 208 may encode the coding unit using the updated QP to generate a final encoded coding unit. This encoding module may be implemented using any standard video encoding module, such as an MPEG-2, MPEG-4, or H.264 encoding module. As noted above, whereas it may be useful in some examples for trial encoding module 212 to be an I-frame encoding module, final encoding module 208 may be an I-frame, P-frame, and/or B-frame encoding module.

[0038] Fig. 3 is a flowchart illustrating an example video encoding method for encoding a macroblock (MB) of baseband video data. While a macroblock is used as an example coding unit in Fig. 3, in other examples, other coding units may be used. Many portions of this example method are similar to those of the example embodiments of Figs. 1, 2A, and 2B. The example method applies a two-pass adaptive quantization method for encoding an MB that may provide a streamlined approach to ensuring adequate VQ for the resulting encoded video stream. Although this example method is described in terms of encoding an MB, one skilled in the art may understand that it may be used for other sized or structured coding units of video data as well, and that this example method may be generalized to larger numbers of passes through the adaptive quantization procedure.

[0039] The MB is received, at act 300, and statistics of the MB are gathered, at act 302. These statistics are used to determine a first QP for the MB, at act 304, and the MB is trial encoded using the first QP, at act 306. These acts of the example method may be accomplished using any of the example procedures described herein with reference to the embodiments of Figs. 1, 2A, and 2B.

[0040] A first VQ of the first encoded MB is determined, at act 308. As discussed herein, a number of approaches may be used to determine the first VQ, such as calculating distortion.

[0041] One example approach to determining the first VQ is to determine the bit size of the first encoded MB and then calculate a ratio of a target bit size to the bit size of the first encoded MB. This ratio is the first VQ. The target bit size for the MB may be determined using any of the procedures described herein with reference to the example embodiments of Figs. 1, 2A, and 2B. In this example embodiment, the closer the first VQ is to 1, the closer the bit size of the first encoded MB is to its target bit size and, thus, the better the first QP.

[0042] A second QP for the MB is determined from the first QP and the first VQ, at act 310. As noted above, increasing the QP reduces the number of bits used to encode the MB and, thus, typically lowers the resulting VQ. Likewise, decreasing the QP increases the number of bits used to encode the MB and typically increases the resulting VQ. Following the example approach for calculating the first VQ at act 308 above, determining the second QP for the MB at act 310 may include calculating a first delta QP, the first delta QP being proportional to the logarithm of the first VQ (e.g. the ratio of the target bit size to the bit size of the first encoded MB). The base of the logarithm and the proportionality constant are dependent on the specific codec used; for example, if an H.264 codec is used, the logarithmic base is 2 and the proportionality constant is 6. This first delta QP is then subtracted from the first QP to calculate the second QP.
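
As a purely numerical illustration of this relationship (the bit counts are invented for the example): with a target bit size of 600 bits and a first encoded MB of 1200 bits, the first VQ is 600/1200 = 0.5, the first delta QP is 6*log2(0.5) = -6, and the second QP is the first QP - (-6), i.e. the first QP increased by 6, which for an H.264-style codec is expected to roughly halve the MB bit size on the next trial encoding.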

[0043] It is noted that in this example approach the first delta QP may sometimes have an artificially large absolute value. Therefore, it may be useful to bound the range of the first delta QP. This may be accomplished by comparing the first delta QP to a first delta QP range. If the first delta QP is less than the first delta QP range, the first delta QP may be set to a minimum first delta QP value; and if the first delta QP is greater than the first delta QP range, the first delta QP may be set to a maximum first delta QP value.
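
A compact sketch of acts 308 and 310, including the bounding just described, might take the following form for an H.264-style codec (illustrative only; the symmetric -6..+6 first delta QP range and the function name are assumptions):

#include <math.h>

/* Sketch: derive the second QP from the first trial encoding result.
 * first_vq is the ratio target_bits / coded_bits; the delta is 6*log2(first_vq)
 * for an H.264-style codec and is clamped to an assumed [-6, +6] range before
 * being subtracted from the first QP. */
static int second_qp_from_first_pass(int first_qp, long target_bits, long coded_bits)
{
    double first_vq = (double)target_bits / (double)coded_bits;
    double delta    = 6.0 * log2(first_vq);

    if (delta < -6.0) delta = -6.0;   /* minimum first delta QP value (assumed) */
    if (delta >  6.0) delta =  6.0;   /* maximum first delta QP value (assumed) */

    return first_qp - (int)lround(delta);  /* subtract the first delta QP */
}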

[0044] Once the second QP is calculated, the MB is trial encoded again using the second QP, in act 312, to generate a second encoded MB. As with the first trial encoding, in act 306, it may be useful for this second trial encoding to be an I-frame encoding.

[0045] A second VQ of the second encoded MB is determined, in act 314. The determination of the second VQ may be performed in the same manner as the determination of the first VQ in act 308. The second VQ is then compared to the first VQ, in act 316. If the first VQ is better than the second VQ, this indicates that the encoding of the MB is not well behaved. In this situation, it may be useful to set the final QP to be equal to the first QP, in act 318. If the first VQ is not better than the second VQ (which may be the most likely situation), the final QP for the MB may be determined from the second QP and the second VQ, in act 320.

[0046] Following the example approach for calculating the first VQ from act 308 and for determining the second QP for the MB in act 310, determining the second VQ of the second encoded MB in act 314 may include: determining a bit size of the second encoded MB; and calculating the ratio of the target bit size to the bit size of the second encoded MB, e.g. the second VQ. And determining the final QP for the MB in act 320 (when the first VQ is not better than the second VQ) may include: calculating a second delta QP to be proportional to the logarithm of the second VQ and subtracting the second delta QP from the second QP to calculate the final QP.

[0047] As noted above with reference to the first delta QP, the second delta QP may have an artificially large absolute value in some examples. Therefore, it may be useful to bound the range of the second delta QP as well. This may be accomplished by comparing the second delta QP to a second delta QP range. (As it is expected in some examples that the bit size of the second trial encoded MB should be closer to the target bit size than the bit size of the first trial encoded MB, the second delta QP range may be smaller than the first delta QP range.) If the second delta QP is less than the second delta QP range, the second delta QP may be set to a minimum second delta QP value; and if the second delta QP is greater than the second delta QP range, the second delta QP may be set to a maximum second delta QP value.

[0048] Once the final QP has been determined in either act 318 or 320, the MB is final encoded using the final QP, in act 322, to generate the final encoded MB. As discussed herein with reference to the example embodiments of Figs. 1, 2A, and 2B, this final encoding may utilize I-frame, P-frame, or B-frame encoding.

[0049] Below is a specific example embodiment including pseudocode which may be used to implement the example embodiment of Fig. 3 using an H.264 codec based video encoder. This specific example is provided for demonstrative purposes and is not intended as limiting.

[0050] After the first trial encoding, the MB coded sizes may be collected. The MB target bits may be calculated by averaging the MB coded size of the entire frame. The rest of the example process as described herein involves adjusting the QPs for individual MBs such that the target bit budget may be achieved.
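
For instance, the per-MB target may be obtained as a simple frame average of the first-pass coded sizes; the array layout in this sketch is an assumption:

/* Average the pass-1 coded sizes of all MBs in a frame to obtain the common
 * per-MB target bit budget used by the QP adjustment passes below. */
static long mb_target_size_from_frame(const long *mb_coded_size_p1, int mb_count)
{
    long total = 0;
    for (int i = 0; i < mb_count; i++)
        total += mb_coded_size_p1[i];
    return (mb_count > 0) ? total / mb_count : 0;
}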

Pass 1 QP adjustment:

dqp1_N = 6*log2(mb_target_size / mb_coded_size_P1_N);

if (mb_coded_size_P1_N < mb_target_size/4)
    dqp_limit = 9;
else
    dqp_limit = 6;

dqp1_N = CLIP(dqp_limit, dqp1_N);

qp_pass1_N = qp_initial_N - dqp1_N;                    Equation 1

[0051] As shown in Equation 1, for all MBs within the coding unit (e.g. frame), which uses the initial QP values from the initial adaptive quantization as a starting point, the coded MB size is compared with the target bit size and a delta QP is derived (dqp1_N). This assumes a standard quantization curve as assumed for a given encoding format. For H.264, the encoded MB bit size is expected to drop to one half every time the QP is increased by 6. Next, a maximum delta QP limit is set (dqp_limit). If the encoded MB bit size is less than a quarter of the target, the QP limit is set to 9. It is set to 6 otherwise. The delta QP is clipped using the QP limit in both the positive and negative range. The QP delta is then applied to the starting QP value to seed the next trial encoding (qp_pass1_N). It is to be understood that the thresholds and setpoint values described in Pass 1 may be other values in other examples.
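
The CLIP() operation used in Equation 1 (and again in Equation 2 below) is described only as limiting the delta QP in both the positive and negative directions; one minimal rendering of that reading in C (the symmetric interpretation is an assumption) is:

/* Clip a delta QP to the symmetric range [-limit, +limit]. */
static double CLIP(double limit, double dqp)
{
    if (dqp >  limit) return  limit;
    if (dqp < -limit) return -limit;
    return dqp;
}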

Pass 2 QP adjustment:

s1 = mb_coded_size_P1_N;
s2 = mb_coded_size_P2_N;

if ((s2 > s1 && dqp1_N < 0) || (s2 < s1 && dqp1_N > 0))
{
    if (abs(s2 - mb_target_size) < abs(s1 - mb_target_size))
        dqp2 = 0;
    else
        dqp2 = -dqp1_N;
}
else
{
    gain = dqp1_N / log2(s2/s1);
    gain = max(2, min(6, gain));
    dqp2 = gain*log2(mb_target_size/s2);

    if (s2 < mb_target_size/2)
        dqp_limit = 4;
    else
        dqp_limit = 6;

    dqp2 = CLIP(dqp_limit, dqp2);
}

qp_pass2_N = qp_initial_N - dqp1_N - dqp2;             Equation 2

[0052] After the second trial encoding, two points on the quantization curve are known: [qp_pass1_N, mb_coded_size_P1_N] and [qp_pass2_N, mb_coded_size_P2_N]. In some situations, due to some non-linearity within the encoding process, the point on the curve moves in the opposite direction than expected. In this case, the QP of the first or second trial encoding may be used depending on which pass produced an encoded MB bit size closer to the target bit size. For all other cases, those two points may be used to calculate the slope of the quantization curve (gain).
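
As a worked instance of the slope calculation (the numbers are invented for the example): if the first delta QP was dqp1_N = -6 and the coded size halved between the two trial encodings (s2/s1 = 0.5), then gain = dqp1_N / log2(s2/s1) = -6 / (-1) = 6 QP units per halving of the bit size, matching the nominal H.264 curve assumed in Pass 1.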

[0053] Once the slope is known, it may be used to adjust the QP for the final pass. If the MB coded size after the second trial encoding (pass 2) is less than half of the target, a maximum delta QP (dqp_limit) after pass 2 is set to 4; otherwise it is set to 6. The delta QP is clipped using the QP limit in both the positive and negative range.

[0054] The final QP (qp_pass2_N) is then applied to produce the final encoded video. Again, it is to be understood that the threshold and setpoint values used in the pseudocode for Pass 2 herein may be set to other values in other examples.

[0055] Figure 4 is a schematic illustration of a media delivery system 400 in accordance with embodiments of the present invention. The media delivery system 400 may provide a mechanism for delivering a media source 402 to one or more of a variety of media output(s) 404. Although only one media source 402 and media output 404 are illustrated in Figure 4, it is to be understood that any number may be used, and examples of the present invention may be used to broadcast and/or otherwise deliver media content to any number of media outputs.

[0056] The media source data 402 may be any source of media content, including but not limited to, video, audio, data, or combinations thereof. The media source data 402 may be, for example, audio and/or video data that may be captured using a camera, microphone, and/or other capturing devices, or may be generated or provided by a processing device. Media source data 402 may be analog and/or digital. When the media source data 402 is analog data, the media source data 402 may be converted to digital data using, for example, an analog-to-digital converter (ADC). Typically, to transmit the media source data 402, some mechanism for compression and/or encryption may be desirable. Accordingly, a video encoding system 410 may be provided that may filter and/or encode the media source data 402 using any methodologies in the art, known now or in the future, including encoding methods in accordance with video standards such as, but not limited to, H.264, HEVC, VC-1, VP8, or combinations of these or other encoding standards. The video encoding system 410 may be implemented with embodiments of the present invention described herein. For example, the video encoding system 410 may be implemented using the video encoding system 200 of Fig. 2A.

[0057] The encoded data 412 may be provided to a communications link, such as a satellite 414, an antenna 416, and/or a network 418. The network 418 may be wired or wireless, and further may communicate using electrical and/or optical transmission. The antenna 416 may be a terrestrial antenna, and may, for example, receive and transmit conventional AM and FM signals, satellite signals, or other signals known in the art. The communications link may broadcast the encoded data 412, and in some examples may alter the encoded data 412 and broadcast the altered encoded data 412 (e.g. by re-encoding, adding to, or subtracting from the encoded data 412). The encoded data 420 provided from the communications link may be received by a receiver 422 that may include or be coupled to a decoder. The decoder may decode the encoded data 420 to provide one or more media outputs, with the media output 404 shown in Figure 4. The receiver 422 may be included in or in communication with any number of devices, including but not limited to a modem, router, server, set-top box, laptop, desktop, computer, tablet, mobile phone, etc.

[0058] The media delivery system 400 of Figure 4 and/or the video encoding system 410 may be utilized in a variety of segments of a content distribution industry.

[0059] Figure 5 is a schematic illustration of a video distribution system 500 that may make use of video encoding systems described herein. The video distribution system 500 includes video contributors 505. The video contributors 505 may include, but are not limited to, digital satellite news gathering systems 506, event broadcasts 507, and remote studios 508. Each or any of these video contributors 505 may utilize a video encoding system described herein, such as the video encoding system 200 of Figure 2A, to encode media source data and provide encoded data to a communications link. The digital satellite news gathering system 506 may provide encoded data to a satellite 502. The event broadcast 507 may provide encoded data to an antenna 501. The remote studio 508 may provide encoded data over a network 503.

[0060] A production segment 510 may include a content originator 512. The content originator 512 may receive encoded data from any or combinations of the video contributors 505. The content originator 512 may make the received content available, and may edit, combine, and/or manipulate any of the received content to make the content available. The content originator 512 may utilize video encoding systems described herein, such as the video encoding system 200 of Figure 2A, to provide encoded data to the satellite 514 (or another communications link). The content originator 512 may provide encoded data to a digital terrestrial television system 516 over a network or other communication link. In some examples, the content originator 512 may utilize a decoder to decode the content received from the contributor(s) 505. The content originator 512 may then re-encode data and provide the encoded data to the satellite 514. In other examples, the content originator 512 may not decode the received data, and may utilize a transcoder to change a coding format of the received data.

[0061] A primary distribution segment 520 may include a digital broadcast system 521, the digital terrestrial television system 516, and/or a cable system 523. The digital broadcasting system 521 may include a receiver, such as the receiver 422 described with reference to Figure 4, to receive encoded data from the satellite 514. The digital terrestrial television system 516 may include a receiver, such as the receiver 422 described with reference to Figure 4, to receive encoded data from the content originator 512. The cable system 523 may host its own content which may or may not have been received from the production segment 510 and/or the contributor segment 505. For example, the cable system 523 may provide its own media source data 402 as that which was described with reference to Figure 4.

[0062] The digital broadcast system 521 may include a video encoding system, such as the video encoding system 200 of Figure 2A, to provide encoded data to the satellite 525. The cable system 523 may include a video encoding system, such as the video encoding system 200 of Figure 2A, to provide encoded data over a network or other communications link to a cable local headend 532. A secondary distribution segment 530 may include, for example, the satellite 525 and/or the cable local headend 532.

[0063] The cable local headend 532 may include a video encoding system, such as the video encoding system 200 of Figure 2A, to provide encoded data to clients in a client segment 540 over a network or other communications link. The satellite 525 may broadcast signals to clients in the client segment 540. The client segment 540 may include any number of devices that may include receivers, such as the receiver 422 and associated decoder described with reference to Figure 4, for decoding content and, ultimately, making content available to users. The client segment 540 may include devices such as set-top boxes, tablets, computers, servers, laptops, desktops, cell phones, etc.

[0064] Accordingly, filtering, encoding, and/or decoding may be utilized at any of a number of points in a video distribution system. Embodiments of the present invention may find use within any, or in some examples all, of these segments.

[0065] While the present disclosure has been described with reference to various embodiments, it will be understood that these embodiments are illustrative and that the scope of the disclosure is not limited to them. Many variations, modifications, additions, and improvements are possible. More generally, embodiments in accordance with the present disclosure have been described in the context of particular embodiments. Functionality may be separated or combined in procedures differently in various embodiments of the disclosure or described with different terminology. These and other variations, modifications, additions, and improvements may fall within the scope of the disclosure as defined in the claims that follow.