

Title:
DETECTING SALIENCY IN AN IMAGE
Document Type and Number:
WIPO Patent Application WO/2011/121563
Kind Code:
A1
Abstract:
Saliency in an image (IM) is detected by determining a one-dimensional saliency profile (SP) along an axis of the image on the basis of at least one one-dimensional saliency detection (SBG) applied along the axis of the image. The one-dimensional saliency profile provides saliency information that is obtained with modest computational effort. An image can satisfactorily be processed in dependence on the one-dimensional saliency profile. Aspect ratio conversion is an example.

Inventors:
DE HAAN GERARD (NL)
DAMKAT CHRIS (NL)
Application Number:
PCT/IB2011/051378
Publication Date:
October 06, 2011
Filing Date:
March 31, 2011
Assignee:
KONINKL PHILIPS ELECTRONICS NV (NL)
DE HAAN GERARD (NL)
DAMKAT CHRIS (NL)
International Classes:
G06K9/46; G06T3/00
Domestic Patent References:
WO2009070449A12009-06-04
Foreign References:
EP1968008A22008-09-10
EP2034439A12009-03-11
Other References:
JIN-HWAN KIM ET AL: "IMAGE AND VIDEO RETARGETING USING ADAPTIVE SCALING FUNCTION", PROCEEDINGS OF THE 17TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO),, 24 August 2009 (2009-08-24), pages 819 - 823, XP007915218
NIEBUR E ET AL: "CONTROLLING THE FOCUS OF VISUAL SELECTIVE ATTENTION", MODELS OF NEURAL NETWORKS, SPRINGER, BERLIN, DE, vol. 4, 1 January 2002 (2002-01-01), pages 247 - 276, XP008069441
D. GAO: "Content Aware Video Resizing", PAPER, MASTER GRADUATION IN ELECTRICAL ENGINEERING, EINDHOVEN UNIVERSITY OF TECHNOLOGY, 25 August 2009 (2009-08-25), pages 1 - 10, XP002646536, Retrieved from the Internet [retrieved on 20110628]
AVIDAN S ET AL: "Seam carving for content-aware image resizing", ACM TRANSACTIONS ON GRAPHICS: TOG, ACM, US, vol. 26, no. 3, 1 July 2007 (2007-07-01), pages 10 - 1, XP007904203, ISSN: 0730-0301, DOI: DOI:10.1145/1276377.1276390
D. VAQUERO ET AL.: "A survey of image retargeting techniques", PROC. SPIE, vol. 7798, 779814, 2 August 2010 (2010-08-02), pages 779814-1 - 779814-15, XP002646537
Attorney, Agent or Firm:
COOPS, Peter et al. (Building 44, AE Eindhoven, NL)
Claims:
CLAIMS:

1. A method of detecting saliency in an image (IM), the method comprising:

a one-dimensional saliency profile determining step (SPG) in which a one-dimensional saliency profile (SP) along an axis of the image is determined on the basis of at least one one-dimensional saliency detection (SD, SDL) applied along the axis of the image.

2. A method of detecting saliency in an image according to claim 1, the one-dimensional saliency profile determining step (SPG) comprising:

a metric determining sub-step (MG) in which respective metrics (ME) relating to an image property are determined for respective image areas that extend along an orthogonal axis, which is orthogonal to the axis to which the one-dimensional saliency profile applies, a metric that is determined for an image area representing the image property in that image area,

a one-dimensional saliency detection sub-step (AF, SD) in which a one- dimensional saliency detection (SD) is applied to the respective metrics, which are aligned in accordance with the respective image areas that the respective metrics represent.

3. A method of detecting saliency in an image according to claim 2, in which, in the one-dimensional saliency profile determining step (SPG):

respective metric determining sub-steps (MG) are carried out for respective image properties; and in which

respective one-dimensional saliency detection sub-steps (AF, SD) are carried out for the respective metrics obtained from the respective metric determining sub-steps.

4. A method of detecting saliency in an image according to any of the claims 2 and 3, in which, in the metric determining sub-step (MG), the image property to which the respective metrics relate is one of the following image properties: luminance, chrominance, hue, saturation, edge intensity, edge density, edge orientation, local dynamic range, local variance, magnitude of motion, and direction of motion.

5. A method of detecting saliency in an image according to any of the claims 2-4, in which, in the metric determining sub-step (MG), a metric is determined by accumulating respective values for the image property belonging to respective pixels in the image area.

6. A method of detecting saliency in an image according to any of the claims 2-4, in which, in the metric determining sub-step (MG), a metric is determined by applying a statistical function to respective values for the image property belonging to respective pixels in the image area.

7. A method of detecting saliency in an image according to claim 1, in which:

respective one-dimensional saliency detections (SDL) are applied to respective arrays of pixels (L1, L2, .., LM) aligned along the axis of the image (IM), a one-dimensional detection applied to an array of pixels (Li) providing an array of saliency values (LSi); and the one-dimensional saliency profile (SP) is determined on the basis of respective arrays of saliency values (LS1, LS2, .., LSM), whereby a saliency value (ASi) for the one-dimensional saliency profile is determined on the basis of respective saliency values that are aligned (Ci) along an orthogonal axis, which is orthogonal to the axis to which the one-dimensional saliency profile applies.

8. A method of detecting saliency in an image according to claim 1, in which a one-dimensional saliency detection involves at least one difference of Gaussian filtering (DOG).

9. A method of detecting saliency in an image according to claim 1, in which a one-dimensional saliency detection involves various differential filtering operations that differ from each other in terms of scale (Fig. 8).

10. A method of detecting saliency in an image according to claim 1, the method comprising:

- a further one-dimensional saliency profile determining step in which a further one-dimensional saliency profile along the orthogonal axis of the image is determined on the basis of the at least one further one-dimensional saliency detection applied along the orthogonal axis of the image.

11. A method of processing an image (IM), which encompasses the method of detecting saliency in the image according to claim 1, the method comprising:

an image processing step (ARC) in which the image is processed in dependence on the one-dimensional saliency profile (SP).

12. A method of processing an image according to claim 11, in which, in the image processing step (ARC), the image is scaled along the axis in accordance with a scaling factor (Sc) that varies as a function of the one-dimensional saliency profile (SP).

13. A processor for detecting saliency in an image (IM), the processor comprising:

a one-dimensional saliency profile generator (SPG) adapted to generate a one- dimensional saliency profile (SP) along an axis of the image on the basis of the at least one one-dimensional saliency detection (SD, SDL) applied along the axis of the image.

14. A computer program product comprising a set of instructions that enables a processor, which is capable of executing the set of instructions, to carry out the method according to claim 1.

Description:
Detecting saliency in an image

FIELD OF THE INVENTION

An aspect of the invention relates to a method of detecting saliency in an image. The method may be used to achieve, for example, an aspect ratio conversion. Other aspects of the invention relate to a method of processing an image, a processor for detecting saliency in the image, and a computer program product for such processing.

BACKGROUND OF THE INVENTION

An image can be processed in dependence on detected saliency for the purpose of aspect ratio conversion. Aspect ratio conversion typically entails spatial deformation. However, image areas that comprise relatively many details should preferably not undergo any substantial spatial deformation. In case information about saliency in the image is available, it is possible to selectively apply spatial deformation in image areas that comprise relatively few salient details. Accordingly, it is possible to achieve aspect ratio conversion with a relatively modest loss in perceptual image quality.

Saliency in an image is typically detected by generating a two-dimensional saliency map. Generating a two-dimensional saliency map is computationally intensive because this typically involves two-dimensional filtering operations and two-dimensional scaling operations. Consequently, it is generally difficult, or even impossible, to implement a relatively fast two-dimensional saliency map generation at modest cost. This constitutes an obstacle for implementing saliency-based video processing in consumer applications, where images succeed one another at a relatively high rate.

European patent application published under number EP 2 034 439 A1 describes using graphics processor units for computing saliency maps, in order to accelerate the computation of the saliency maps.

SUMMARY OF THE INVENTION

There is a need for a solution that allows relatively fast saliency detection at modest cost. In order to better address this need, the following points have been taken into consideration. Two-dimensional saliency detection provides a saliency map with relatively precise and detailed saliency information. However, there are image processing applications that do not require saliency information with a level of precision and detail that two-dimensional saliency detection offers. An example is aspect ratio conversion, wherein the image is scaled along a particular axis of the image. In such an application, it is sufficient to have an indication of where salient details are located along the axis of interest.

In accordance with an aspect of the invention, a method of detecting saliency in an image comprises a one-dimensional saliency profile determining step in which a one-dimensional saliency profile along an axis of the image is determined on the basis of at least one one-dimensional saliency detection applied along the axis of the image.

In accordance with the invention, a one-dimensional saliency profile is thus generated on the basis of one-dimensional saliency detection. One-dimensional saliency detection is significantly less computationally intensive than two-dimensional saliency detection. A reduction of computational effort of several orders of magnitude can be achieved. Nonetheless, a one-dimensional saliency profile that is generated in accordance with the invention can provide saliency information that is sufficiently precise for many image processing applications, such as, for example, aspect ratio conversion.

An implementation of the invention advantageously comprises one or more of the following additional features, which are described in separate paragraphs. These additional features each contribute to achieving relatively fast saliency detection at modest cost.

The one-dimensional saliency profile determining step can comprise a metric determining sub-step followed by a one-dimensional saliency detection sub-step. In the metric determining sub-step, respective metrics relating to an image property are determined for respective image areas that extend along an orthogonal axis, which is orthogonal to the axis to which the one-dimensional saliency profile applies. A metric that is determined for an image area represents the image property in that image area. In the one-dimensional saliency detection sub-step, a one-dimensional saliency detection is applied to the respective metrics, which are aligned in accordance with the respective image areas that the respective metrics represent.

In the one-dimensional saliency profile determining step, respective metric determining sub-steps can be carried out for respective image properties; and respective one-dimensional saliency detection sub-steps can be carried out for the respective metrics obtained from the respective metric determining sub-steps. In the metric determining sub-step, the image property to which the respective metrics relate can be one of the following image properties: luminance, chrominance, hue, saturation, edge intensity, edge density, edge orientation, local dynamic range, local variance, magnitude of motion, and direction of motion.

In the metric determining sub-step, a metric can be determined by accumulating respective values for the image property belonging to respective pixels in the image area.

In the metric determining sub-step, a metric can be determined by applying a statistical function to respective values for the image property belonging to respective pixels in the image area.

In an alternative implementation, saliency in an image can be detected by applying respective one-dimensional saliency detections to respective arrays of pixels aligned along the axis of the image. A one-dimensional detection that is applied to an array of pixels provides an array of saliency values. The one-dimensional saliency profile is then determined on the basis of respective arrays of saliency values, whereby a saliency value for the one-dimensional saliency profile is determined on the basis of respective saliency values that are aligned along an orthogonal axis, which is orthogonal to the axis to which the one-dimensional saliency profile applies.

A one-dimensional saliency detection can involve at least one difference of Gaussian filtering.

A one-dimensional saliency detection can involve various differential filtering operations that differ from each other in terms of scale.

A method of detecting saliency in an image can comprise a further one-dimensional saliency profile determining step in which a further one-dimensional saliency profile along the orthogonal axis of the image is determined on the basis of the at least one further one-dimensional saliency detection applied along the orthogonal axis of the image.

A method of processing an image can encompass the method of detecting saliency as defined hereinbefore and comprise an image processing step in which the image is processed in dependence on the one-dimensional saliency profile.

In the image processing step, the image can be scaled along the axis in accordance with a scaling factor that varies as a function of the one-dimensional saliency profile.

A detailed description, with reference to drawings, illustrates the invention summarized hereinbefore as well as the additional features.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 is a block diagram that illustrates an image display assembly, which comprises an aspect ratio converter.

Fig. 2 is a graph that illustrates a saliency profile in dependence on which the aspect ratio converter can scale an image.

Fig. 3 is a graph that illustrates a scaling function, which can be determined on the basis of a saliency profile.

Fig. 4 is a graph that illustrates a position mapping function, which is an integral of the scaling function.

Fig. 5 is a block diagram that illustrates a saliency profile generator, which forms part of the image display assembly.

Fig. 6 is a block diagram that illustrates a saliency sub-profile generation branch, which may be applied in the saliency profile generator.

Fig. 7 is a conceptual diagram that illustrates various operations that the saliency sub-profile generation branch carries out.

Fig. 8 is a block diagram that illustrates a one-dimensional saliency detector, which forms part of the saliency sub-profile generation branch.

Fig. 9 is a block diagram that illustrates an alternative saliency sub-profile generation branch, which may also be applied in the saliency profile generator.

Fig. 10 is a conceptual diagram that illustrates various operations that the alternative saliency sub-profile generation branch carries out.

DETAILED DESCRIPTION

Fig. 1 illustrates an image display assembly IDA, which comprises an image source SRC and an image display device IDD. The image source SRC and the image display device IDD may be stand-alone devices. In that case, the image source SRC may be, for example, a Blu-ray disk player, a multimedia hard disk, or a set-top box. The image display device IDD may be, for example, a flat screen display device. The image display assembly IDA may also form part of a self-contained apparatus, such as, for example, a multimedia apparatus or a personal communication apparatus.

The image display device IDD comprises an input interface INP, an image memory MEM, an aspect ratio converter ARC, a saliency profile generator SPG, and a display screen DPL. The input interface INP may comprise, for example, one or more buffers for temporarily storing data. The input interface INP may further comprise one or more decoders. The image memory MEM may comprise, for example, one or more volatile memory circuits.

The aspect ratio converter ARC and the saliency profile generator SPG are modules that may each be implemented by means of, for example, a set of instructions that has been loaded into an instruction execution device. In such a software-based implementation, the set of instructions defines operations that the module concerned carries out, which will be described hereinafter. In this respect, Fig. 1 can be regarded as representing a method, at least partially, whereby the aspect ratio converter ARC represents an aspect ratio conversion step, and the saliency profile generator SPG represents a saliency profile generation step.

The display screen DPL of the image display device IDD has a width and a height. The ratio of the width to the height defines a display aspect ratio. The display aspect ratio of the display screen DPL may be, for example, 16:9.

The image display device IDD basically operates as follows. The image display device IDD receives source image data IS from the image source. The source image data IS represents at least one input image IM in a non-coded format or in a coded format. Such an input image IM may have an original aspect ratio that is different from the display aspect ratio. The input image IM can be regarded as a matrix of pixels, which comprises several lines of pixels and several columns of pixels. The ratio of the number of columns to the number of lines typically corresponds to the original aspect ratio.

Assuming that the input image IM is two-dimensional, the input image IM has a horizontal axis and a vertical axis, which correspond to the lines and the columns, respectively. Respective columns have respective positions on the horizontal axis of the image. Respective lines have respective positions on the vertical axis of the image.

An input image IM that is comprised in the source image data IS is, at least partially, temporarily stored in the image memory MEM as internal image data IN. The internal image data IN, which represents the input image IM, typically comprises respective components relating to respective image properties, such as, for example, luminance and chrominance.

The saliency profile generator SPG generates a one-dimensional saliency profile SP along the horizontal axis of the input image IM on the basis of the internal image data IN that is stored in the image memory MEM. The one-dimensional saliency profile SP indicates respective degrees of saliency for respective vertically elongated image portions that have respective positions on the horizontal axis. A vertically elongated image portion may correspond with a column of pixels. In case a vertically elongated image portion represents, at least partially, salient image details, the one-dimensional saliency profile SP indicates a relatively high degree of saliency for that image portion. Conversely, in case a vertically elongated image portion does not comprise any salient image details, the one-dimensional saliency profile SP indicates a relatively low degree of saliency for that image portion.

Fig. 2 illustrates an example of a one-dimensional saliency profile SP. Fig. 2 is a graph that comprises a horizontal axis, which corresponds to the horizontal axis xi of the input image. The graph comprises a vertical axis, which represents a degree of saliency Sa. A curve in relatively thick lines represents the one-dimensional saliency profile SP according to which the degree of saliency varies along the horizontal axis of the input image. The degree of saliency is relatively high in a range RA of horizontal positions, which corresponds to a vertical middle section of the input image. The vertical middle section thus comprises relatively many salient details. In contrast, a left-hand vertical border section and a right-hand vertical border section of the image comprise relatively few salient details.

The aspect ratio converter ARC scales the input image IM along the horizontal axis in dependence on the one-dimensional saliency profile SP so as to obtain an output image that has the display aspect ratio. The aspect ratio converter ARC applies display image data ID, which represents the output image, to the display screen DPL. In response, the display screen DPL displays the output image, which is an aspect ratio corrected version of the input image IM comprised in the source image data IS.

Fig. 3 illustrates a scaling function SF, which the aspect ratio converter ARC may derive from the one-dimensional saliency profile SP illustrated in Fig. 2. Fig. 3 is a graph that comprises a horizontal axis, which corresponds to the horizontal axis xi of the input image. The graph comprises a vertical axis, which represents a local scaling factor Sc. A curve in relatively thick lines represents the scaling function SF according to which the aspect ratio converter ARC may scale the input image IM along the horizontal axis of the input image. The local scaling factor Sc varies along the horizontal axis instead of being constant. This implies that the input image IM is scaled in a nonlinear fashion, which will generally be the case when an aspect ratio conversion is required.

More specifically, in Fig. 3, the local scaling factor Sc is substantially equal to 1 in the range RA where the degree of saliency is relatively high. That is, the vertical middle section of the input image IM with relatively many salient details, which corresponds to this range RA, is not scaled. Relative pixel positions are maintained along the horizontal and the vertical axes. The vertical middle section with relatively many salient details does therefore not undergo any substantial spatial deformation. In a manner of speaking, the range RA where the degree of saliency is relatively high is not used for achieving aspect ratio conversion.

In contrast, outside the range RA where the degree of saliency is relatively high, the local scaling factor Sc is greater than 1. Relative pixel positions along the horizontal axis are widened with respect to those along the vertical axis. This causes a spatial deformation. For any given position on the horizontal axis outside the range RA, it holds that the smaller the degree of saliency is, the larger the local scaling factor Sc is. Consequently, outside the range RA, the input image IM is horizontally stretched to an extent that is inversely related to the degree of saliency. The smaller the degree of saliency is, the larger the spatial deformation is. Image portions with relatively few salient details are thus spatially deformed for achieving aspect ratio conversion, whereas image portions with relatively many salient details are substantially left untouched, as it were.

Fig. 4 illustrates a position mapping function PM, which is an integral of the scaling function SF illustrated in Fig. 3. Fig. 4 is a graph, which has a horizontal axis, which corresponds to the horizontal axis xi of the input image. The graph has a vertical axis, which corresponds to the horizontal axis xo of the output image. A curve in relatively thick lines represents the position mapping function PM according to which a particular position on the horizontal axis of the input image IM is mapped onto a particular position on the horizontal axis of the output image. The output image has a width Wo that is larger than the width Wi of the input image. This may apply, for example, in case the original aspect ratio is 4:3 and the display aspect ratio is 16:9.

In the range RA of horizontal positions where the degree of saliency is relatively high, the position mapping function PM exhibits a first derivative that is substantially equal to 1. In this range RA, relative horizontal positions are maintained with respect to relative vertical positions. There is substantially no spatial deformation in this range RA, in which horizontal positions are faithfully mapped from the input image IM to the output image.

In contrast, the position mapping function PM exhibits a first derivative that is larger than 1 outside the range RA of horizontal positions wherein the degree of saliency is relatively high. Moreover, the position mapping function PM is nonlinear outside this range RA. All this makes the output image a nonlinearly stretched version of the input image. Nonlinear stretching occurs in image regions wherein the degree of saliency is relatively low. Spatial deformation, which is necessary for achieving aspect ratio conversion, is thus primarily applied in image regions with relatively few salient details.
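
As an illustration of the relation between the saliency profile, the scaling function, and the position mapping function, the following sketch shows one possible way to derive a local scaling factor per column from a one-dimensional saliency profile and to integrate it into a position mapping. The function names and the particular inverse relation between saliency and stretching are assumptions made for the sake of illustration, not the formulas of the invention.

```python
# A minimal sketch, assuming saliency values in [0, 1] and a simple inverse
# relation between saliency and stretching; names and formulas are
# illustrative only.
import numpy as np

def scaling_function(sp, out_width):
    """Derive a local scaling factor Sc per input column from a 1D saliency
    profile sp, such that the scaled image spans out_width columns.
    Assumes that at least some columns have a saliency below 1."""
    in_width = len(sp)
    weight = np.clip(1.0 - sp, 0.0, None)   # low saliency -> more stretching
    extra = out_width - in_width            # columns to be gained by stretching
    return 1.0 + extra * weight / weight.sum()

def position_mapping(sc):
    """The position mapping PM is the discrete integral of the scaling
    function: the cumulative sum of the local scaling factors."""
    return np.cumsum(sc)

# Toy profile with a salient middle section (compare Fig. 2), converting a
# 640-column image to 854 columns.
sp = np.concatenate([np.full(160, 0.1), np.full(320, 0.9), np.full(160, 0.1)])
sc = scaling_function(sp, out_width=854)
pm = position_mapping(sc)
print(sc[0], sc[320], pm[-1])   # borders ~1.6, middle ~1.07, pm[-1] = 854
```

In this toy setup the border columns absorb nearly all of the stretching while the middle columns keep a scaling factor close to 1, mirroring the behavior of Figs. 3 and 4.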

Fig. 5 illustrates details of the saliency profile generator SPG, which forms part of the image display device IDD illustrated in Fig. 1. The image memory MEM is also represented in Fig. 5. The saliency profile generator SPG comprises respective saliency sub-profile generation branches SGB1, SGB2, SGB3, respective normalizers NM1, NM2, NM3, and a combiner CB.

The saliency profile generator SPG basically operates as follows. Respective saliency sub-profile generation branches SGB1, SGB2, SGB3 process respective types of values V1, V2, V3 that relate to respective image properties. For example, a first saliency sub-profile generation branch SGB1 may process luminance values. A second saliency sub-profile generation branch SGB2 may process first chrominance difference values. A third saliency sub-profile generation branch SGB3 may process second chrominance difference values. The saliency profile generator SPG may comprise one or more additional saliency sub-profile generation branches for other image properties that relate to, for example, motion and edge information.

A saliency sub-profile generation branch SGB1, SGB2, SGB3 determines a gross saliency sub-profile SG1, SG2, SG3 that relates to the image property of which the saliency sub-profile generation branch processes the values V1, V2, V3. For example, the first saliency sub-profile generation branch SGB1 may determine a gross saliency sub-profile SG1 that relates to luminance of the input image. The second and the third sub-profile generation branches SGB2, SGB3 may determine gross saliency sub-profiles SG2, SG3 that relate to first chrominance differences and second chrominance differences, respectively. The respective normalizers NM1, NM2, NM3 apply respective appropriate weighting coefficients to the respective gross saliency sub-profiles SG1, SG2, SG3 so as to obtain respective normalized saliency sub-profiles NP1, NP2, NP3. The combiner CB combines these respective normalized saliency sub-profiles NP1, NP2, NP3, which relate to respective image properties, so as to obtain the saliency profile SP that is applied to the aspect ratio converter ARC as illustrated in Fig. 1.
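
Purely as an illustration of the branch, normalizer, and combiner structure of Fig. 5, the sketch below assumes that each normalizer applies a single weighting coefficient and that the combiner CB sums the weighted sub-profiles; the actual weighting and combination rules are not prescribed by the description above.

```python
# A minimal sketch of the structure of Fig. 5; the per-branch weights and the
# element-wise sum used as the combiner are assumptions.
import numpy as np

def combine_sub_profiles(gross_profiles, weights):
    """gross_profiles: 1D arrays SG1, SG2, ... (one per image property).
    weights: one weighting coefficient per normalizer NM1, NM2, ..."""
    normalized = [w * np.asarray(sg, dtype=float)
                  for w, sg in zip(weights, gross_profiles)]   # normalizers
    return np.sum(normalized, axis=0)                          # combiner CB

# Usage with three hypothetical branches (luminance and two chrominance
# differences); the weight values are arbitrary:
# sp = combine_sub_profiles([sg1, sg2, sg3], weights=[1.0, 0.5, 0.5])
```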

Fig. 6 illustrates an implementation of a saliency sub-profile generation branch SGB. This implementation will be referred to hereinafter as the saliency sub-profile generation branch SGB for reasons of convenience. The saliency sub-profile generation branch SGB comprises a metric generator MG, a metric array former AF, and a one-dimensional saliency detector SD. These are modules that may each be implemented by means of, for example, a set of instructions that has been loaded into an instruction execution device. In such a software-based implementation, the set of instructions defines operations that the module concerned carries out, which will be described hereinafter. In this respect, Fig. 6 can be regarded as representing a method, at least partially, whereby a particular module represents a particular step. For example, the metric generator MG may represent a metric generation step.

Fig. 7 conceptually illustrates operations that the metric generator MG and the metric array former AF carry out. Fig. 7 illustrates the input image IM as a matrix of pixels that comprises a plurality of columns C1, C2, .., CN. The metric generator MG determines respective image property metrics ME1, ME2, .., MEN for respective columns of pixels in the input image. The respective image property metrics ME all relate to the image property of interest, that is, the image property of which the saliency sub-profile generation branch SGB processes the values V. The image property of interest may be, for example, any one of the following image properties: luminance, chrominance, hue, saturation, edge intensity, edge density, edge orientation, local dynamic range, local variance, magnitude of motion, and direction of motion.

An image property metric that has been generated for a column represents the image property of interest in an image area that corresponds with this column. For example, the metric generator MG may successively determine a first image property metric ME1 for a first column C1, a second image property metric ME2 for a second column C2, and so on. The metric generator MG may finally determine an N-th image property metric MEN for an N-th column CN, N representing an integer value corresponding to the number of columns.

An image property metric ME may be, for example, a sum of respective values that respective pixels in the column concerned have, whereby the values relate to the image property of interest. For example, let it be assumed that the image property of interest is luminance. In that case, the metric generator MG may determine a luminance metric for a column by adding together luminance values of pixels in that column. Alternatively, the image property metric ME may be a statistical parameter other than an average value, such as, for example, a median value.

The metric array former AF aligns the respective image property metrics ME that the metric generator MG generates in a column-wise fashion. The respective image property metrics ME1, ME2, .., MEN are aligned in accordance with the respective columns C1, C2, .., CN for which these metrics represent the image property of interest. Fig. 7 clearly illustrates this metric alignment. Accordingly, the metric array former AF provides an array of image property metrics AM, in which the image property metric ME1 for the first column C1 occupies a first position, the image property metric ME2 for the second column C2 occupies a second position, and so on, until the image property metric MEN for the N-th column CN, which occupies a last position.
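
The metric generation and alignment of Figs. 6 and 7 can be sketched as follows, assuming that the metric is a per-column sum (or another per-column statistic) of the property values, so that the array of image property metrics AM is obtained directly in column order.

```python
# A minimal sketch of the metric generator MG and metric array former AF,
# assuming a per-column reduction of a 2D array of property values.
import numpy as np

def column_metrics(values, reduce=np.sum):
    """values: 2D array (lines x columns) of the image property of interest,
    for example luminance. Returns the array of image property metrics AM,
    one metric per column, aligned in column order."""
    return reduce(np.asarray(values, dtype=float), axis=0)

# Usage:
# am = column_metrics(luminance)               # sum per column
# am = column_metrics(luminance, np.median)    # median per column
```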

The one-dimensional saliency detector SD applies a saliency detection to the array of image property metrics AM, which is one-dimensional. Accordingly, the one-dimensional saliency detector SD provides a gross saliency sub-profile SG, which relates to the image property of interest. The gross saliency sub-profile SG will typically be in the form of an array, which has a size similar to that of the array of image property metrics AM. This gross saliency sub-profile array may thus comprise N respective saliency metrics for the N respective columns of the image. A saliency metric for a column indicates the degree of saliency in that column, which constitutes a vertically elongated image area.

Fig. 8 illustrates an implementation of the one-dimensional saliency detector SD. This implementation will be referred to hereinafter as the saliency detector SD for reasons of convenience. The saliency detector SD comprises a difference of Gaussian filter DOG, a first down-scaler DS1, a second down-scaler DS2, a first up-scaler US1, and a second up-scaler US2. The difference of Gaussian filter DOG has a given kernel of a given size. Importantly, the difference of Gaussian filter DOG is one-dimensional and, therefore, carries out operations that are relatively simple. The same applies to the down-scalers DS1, DS2 and the up-scalers US1, US2.

The saliency detector SD basically operates as follows. The first down-scaler DS1 provides a first down-scaled version AM' of the array of image property metrics. For example, the first down-scaler DS1 may apply a down-scaling factor of 2. In that case, the first down-scaled version AM' has a size that is half of that of the array of image property metrics AM. The second down-scaler DS2 provides a second down-scaled version AM'' of the array of image property metrics. For example, the second down-scaler DS2 may apply a further down-scaling factor of 2, in which case the second down-scaled version AM'' has a size that is one quarter of that of the array of image property metrics AM.

The difference of Gaussian filter DOG is successively applied to the array of image property metrics AM, the first down-scaled version AM' thereof, and the second down-scaled version AM'' thereof, in any particular order. Accordingly, a first filter result FR1, a second filter result FR2, and a third filter result FR3 are obtained, respectively. The first filter result FR1 will typically be in the form of an array of filter output samples, which has a size similar to that of the array of image property metrics AM. That is, the array will typically comprise N filter output samples in case there are N image property metrics ME for N columns. The second filter result FR2 will then constitute a smaller array, and the third filter result FR3 will constitute a yet smaller array.

The first up-scaler US1 up-scales the second filter result FR2 with an up-scaling factor that is the inverse of the down-scaling factor of the first down-scaler DS1. Accordingly, the first up-scaler US1 provides an up-scaled version of the second filter result FR2, which has a size similar to that of the first filter result FR1. The second up-scaler US2 up-scales the third filter result FR3 with an up-scaling factor so that an up-scaled version of the third filter result FR3 is obtained, which has a size similar to that of the first filter result FR1. For example, in case the size of the second down-scaled version AM'' of the array of image property metrics is one quarter of that of the array of image property metrics AM, the second up-scaler applies an up-scaling factor of 4. The first filter result FR1, the up-scaled version of the second filter result FR2, and the up-scaled version of the third filter result FR3 are combined so as to obtain the gross saliency sub-profile SG.

In effect, the saliency detector SD illustrated in Fig. 8 applies three different kernel sizes. A smallest kernel size corresponds to that of the difference of Gaussian filter DOG. A medium kernel size corresponds to a once up-scaled version of the kernel size of the difference of Gaussian filter DOG. A largest kernel size corresponds to a twice up-scaled version of the kernel size of the difference of Gaussian filter DOG. It is as if the saliency detector SD were to comprise three difference of Gaussian filters, which differ from each other in terms of kernel size.
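
One possible rendering of the multi-scale detector of Fig. 8 in code is given below. The kernel parameters, the decimation used as down-scaling, the linear interpolation used as up-scaling, and the plain sum used to combine the three filter results are all assumptions made for the sake of the sketch.

```python
# A minimal sketch of a one-dimensional multi-scale difference of Gaussian
# detector in the spirit of Fig. 8; parameters and scaling methods are assumed.
import numpy as np

def gaussian_kernel_1d(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def dog_filter_1d(a, sigma1=1.0, sigma2=2.0, radius=4):
    """One-dimensional difference of Gaussian filtering (DOG)."""
    g1 = np.convolve(a, gaussian_kernel_1d(sigma1, radius), mode='same')
    g2 = np.convolve(a, gaussian_kernel_1d(sigma2, radius), mode='same')
    return np.abs(g1 - g2)

def multi_scale_saliency_1d(am):
    """am: array of image property metrics AM. Returns a gross saliency
    sub-profile SG of the same length."""
    am = np.asarray(am, dtype=float)
    n = len(am)
    am_half = am[::2]                        # down-scaler DS1 (factor 2)
    am_quarter = am[::4]                     # down-scaler DS2 (further factor 2)
    fr1 = dog_filter_1d(am)                  # filter result FR1
    fr2 = dog_filter_1d(am_half)             # filter result FR2
    fr3 = dog_filter_1d(am_quarter)          # filter result FR3
    # Up-scalers US1, US2: bring the coarser results back to full length.
    x = np.arange(n)
    fr2_up = np.interp(x, np.arange(len(fr2)) * 2, fr2)
    fr3_up = np.interp(x, np.arange(len(fr3)) * 4, fr3)
    return fr1 + fr2_up + fr3_up             # combine the three filter results
```

Filtering the decimated arrays with the same kernel is equivalent, at full resolution, to filtering with once and twice enlarged kernels, which is the multi-kernel-size interpretation given above.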

Fig. 9 illustrates an alternative implementation of a saliency sub-profile generation branch. This alternative implementation will be referred to hereinafter as the alternative saliency sub-profile generation branch SGBA for reasons of convenience. The alternative saliency sub-profile generation branch SGBA comprises a line-wise one-dimensional saliency detector SDL, a saliency buffer SBF, and a column-wise saliency accumulator SAC. These are modules that may each be implemented by means of, for example, a set of instructions that has been loaded into an instruction execution device. In such a software-based implementation, the set of instructions defines operations that the module concerned carries out, which will be described hereinafter. In this respect, Fig. 9 can be regarded as representing a method, at least partially, whereby a particular module represents a particular step. For example, the line-wise one-dimensional saliency detector SDL may represent a line-wise one-dimensional saliency detection step.

Fig. 10 conceptually illustrates operations that the alternative saliency sub-profile generation branch SGBA carries out. The line-wise one-dimensional saliency detector SDL processes respective image property values V from respective image lines in the image in a line-wise fashion. A line of image property values undergoes a one-dimensional saliency detection. Accordingly, the line-wise one-dimensional saliency detector SDL provides respective saliency detection arrays for respective lines in the image. For example, the line-wise saliency detector SDL provides a first saliency detection array LS1 for the first line L1 in the image, a second saliency detection array LS2 for the second line L2 in the image, and so on. Finally, the line-wise saliency detector SDL provides an M-th saliency detection array LSM for the M-th line LM in the image, M being an integer value representing the number of lines in the input image.

The saliency buffer SBF temporarily stores the respective saliency detection arrays, which the line-wise one-dimensional saliency detector SDL provides. The saliency buffer SBF constructs a saliency map SM by arranging the respective saliency detection arrays in accordance with the image lines from which these arrays have been generated. Fig. 10 clearly illustrates this. The saliency map SM thus constitutes a matrix of saliency values that has respective columns and respective lines corresponding with those of the input image.

The column-wise saliency accumulator SAC provides respective accumulated saliency values AS for respective columns in the saliency map. For example, the column-wise saliency accumulator SAC may determine a first accumulated saliency value AS1 for a first column C1 by adding together respective saliency values in that column. The column-wise saliency accumulator SAC may then determine a second accumulated saliency value AS2 for a second column C2 by adding together respective saliency values in that column, and so on. The column-wise saliency accumulator SAC may finally determine an N-th accumulated saliency value ASN for an N-th column CN, which is the final column.

The column-wise saliency accumulator SAC aligns the respective accumulated saliency values AS in a column-wise fashion. That is, the respective accumulated saliency values are aligned in accordance with the respective columns for which these values have been determined. Fig. 10 clearly illustrates this alignment. Accordingly, the column-wise saliency accumulator SAC provides an array of accumulated saliency values AV. This array may constitute the gross saliency sub-profile SG, which the alternative saliency sub-profile generation branch SGBA provides.
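
The alternative branch of Figs. 9 and 10 can be sketched as follows. The one-dimensional detection applied to each line is assumed here to be the multi-scale detector sketched earlier, and a plain sum is assumed as the column-wise accumulation; neither choice is prescribed by the description above.

```python
# A minimal sketch of the line-wise detector SDL, the saliency buffer SBF and
# the column-wise saliency accumulator SAC of Figs. 9 and 10; the per-line
# detector and the sum used as accumulation are assumptions.
import numpy as np

def line_wise_saliency_profile(values, detect_1d):
    """values: 2D array (lines x columns) of image property values.
    detect_1d: a one-dimensional saliency detection applied per line,
    for example the multi_scale_saliency_1d sketch above."""
    values = np.asarray(values, dtype=float)
    # Saliency buffer SBF: one saliency detection array per image line,
    # arranged into a saliency map SM with the same line/column layout.
    saliency_map = np.stack([detect_1d(line) for line in values])
    # Column-wise saliency accumulator SAC: accumulate each column of the map.
    return saliency_map.sum(axis=0)

# Usage: sg = line_wise_saliency_profile(luminance, multi_scale_saliency_1d)
```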

CONCLUDING REMARKS

The detailed description hereinbefore with reference to the drawings is merely an illustration of the invention and the additional features, which are defined in the claims. The invention can be implemented in numerous different ways. In order to illustrate this, some alternatives are briefly indicated.

The invention may be applied to advantage in numerous types of products or methods related to saliency dependent image processing. Aspect ratio conversion is an example of such processing but does not exclude other applications. For example, a one-dimensional saliency profile, or a pair of one-dimensional saliency profiles, which have been generated in accordance with the invention, may be used to identify a region of interest in an image. This region of interest may be selected so as to form a smaller image that represents the region of interest with fewer data, but with a comparable resolution.

There are numerous ways of detecting saliency in an image in accordance with the invention. For example, referring to Fig. 1, the saliency profile generator SPG may be adapted to provide a saliency profile along the vertical axis of the input image. The saliency profile generator SPG may generate such a vertical saliency profile in a manner similar to that described hereinbefore with reference to the figures, which concerns generating a saliency profile along the horizontal axis. To a certain extent, generating a vertical saliency profile involves interchanging "columns" with "lines", and "lines" with "columns" in the description hereinbefore.

Referring to Fig. 1, the aspect ratio converter ARC may be adapted to scale the input image IM solely along the vertical axis, or may be adapted to scale the input image IM along the horizontal axis and along the vertical axis. In the latter case, the saliency profile generator SPG will provide two saliency profiles, which are mutually orthogonal, one saliency profile SP along the horizontal axis and another saliency profile along the vertical axis. The aspect ratio converter ARC may be used for increasing the aspect ratio, as well as for decreasing the aspect ratio. In the latter case, the original aspect ratio is higher than the display aspect ratio. In that case, the curves illustrated in Figs. 2 and 3 no longer apply.

Referring to Fig. 3, the local scaling factor Sc would be smaller than 1 outside the range RA in that case. Fig. 4 should be adapted accordingly: the width Wo of the output image would be smaller than the width Wi of the input image.

Referring to Fig. 1, the display screen DPL may comprise a module for carrying out a mere resolution adaptation, without any aspect ratio conversion. Such a module will typically scale the image linearly along the horizontal and vertical axes of the image with an identical scaling factor. For example, let it be assumed that the input image IM has an original resolution of 640x480 pixels, and that the display screen DPL has a display resolution of 1920x1080 pixels. The aspect ratio converter ARC may provide an aspect ratio converted image with a resolution of 854x480 pixels. The aforementioned module in the display screen may then scale this aspect ratio converted image so as to achieve the display resolution of 1920x1080 pixels. It is also possible to provide the input interface INP with a module for a resolution adaptation prior to an aspect ratio conversion.

There are numerous ways of generating image property metrics. For example, a single image property metric may be generated for several consecutive lines or for several consecutive columns in an image. This may result in a saliency profile that has a lower resolution, but which may still provide satisfactory image processing results. There are numerous functions, statistical or otherwise, that can be used to generate a suitable image property metric, that is, a metric that represents an image property of interest in an image area of interest.

A one-dimensional saliency profile may be determined on the basis of a single image property, such as, for example, luminance only. Referring to Fig. 5, the saliency profile generator SPG can be simplified by removing all but one of the saliency sub-profile generation branches, so that only one saliency sub-profile generation branch SGB1 remains.

The term "image" should be understood in a broad sense. The term embraces any type of data that represents visual information.

The term "one-dimensional saliency detection" should be understood in a broad sense. The term embraces any type of saliency detection in which a kernel is translated along a single axis. The kernel itself may be two-dimensional in the sense that the kernel comprises respective arrays of values that are orthogonally oriented which respect to the single axis along which the kernel is translated.

In general, there are numerous different ways of implementing the invention, whereby different implementations may have different topologies. In any given topology, a single module may carry out several functions, or several modules may jointly carry out a single function. In this respect, the drawings are very diagrammatic. For example, referring to Fig. 1, the saliency profile generator SPG and the aspect ratio converter ARC may form part of a single processor module.

There are numerous functions that may be implemented by means of hardware or software, or a combination of both. A description of a software-based implementation does not exclude a hardware-based implementation, and vice versa. Hybrid implementations, which comprise one or more dedicated circuits as well as one or more suitably programmed processors, are also possible. For example, various functions described hereinbefore with reference to the figures may be implemented by means of one or more dedicated circuits, whereby a particular circuit topology defines a particular function.

The remarks made hereinbefore demonstrate that the detailed description with reference to the drawings is an illustration of the invention rather than a limitation. There are numerous alternatives, which fall within the scope of the appended claims. Any reference sign in a claim should not be construed as limiting the claim. The word "comprising" does not exclude the presence of other elements or steps than those listed in a claim. The word "a" or "an" preceding an element or step does not exclude the presence of a plurality of such elements or steps. The mere fact that respective dependent claims define respective additional features, does not exclude combinations of additional features other than those reflected in the claims.