


Title:
ANALYZING IMAGE DATA
Document Type and Number:
WIPO Patent Application WO/2022/031278
Kind Code:
A1
Abstract:
A method comprises receiving or determining image data comprising a plurality of image data points. Each image data point has a color value in a first color space. A first process is performed for determining, based on a first color value of a first image data point of the plurality of image data points, a first set of reflectance functions for the first image data point. The reflectance functions represent in a reflectance space reflectance values at a plurality of wavelengths for a surface corresponding to the first image data point. A second process is performed for determining, based on a second color value of a second image data point, a second set of reflectance functions for the second image data point. The results of the first and second processes are compared, and, based on the comparison, a property of the image data is determined.

Inventors:
MOROVIC PETER (ES)
MOROVIC JAN (GB)
Application Number:
PCT/US2020/044946
Publication Date:
February 10, 2022
Filing Date:
August 05, 2020
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
International Classes:
H04N1/60; G06T7/90
Foreign References:
JP2011138393A2011-07-14
US20170048421A12017-02-16
US20100142805A12010-06-10
Attorney, Agent or Firm:
PERRY, Garry A. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method comprising:
receiving or determining image data, the image data comprising a plurality of image data points, each of the plurality of image data points having a respective color value in a first color space;
performing a first process for determining a first set of reflectance functions for a first image data point of the plurality of image data points, the reflectance functions representing in a reflectance space reflectance values at a plurality of wavelengths for a surface corresponding to the first image data point, the first process being for determining the first set of reflectance functions based on a first color value of the first image data point;
performing a second process for determining a second set of reflectance functions for a second image data point of the plurality of image data points, the reflectance functions representing in the reflectance space reflectance values at a plurality of wavelengths for a surface corresponding to the second image data point, the second process being for determining the second set of reflectance functions based on a second color value of the second image data point;
determining a comparison between a result of the first process and a result of the second process; and
determining, based on the comparison between the result of the first process and the result of the second process, a property of the image data.

2. The method of claim 1: wherein the first set of reflectance functions comprises a first paramer set or a first metamer set and the second set of reflectance functions comprises a second paramer set or a second metamer set.

3. The method of claim 1: wherein the result of the first process is the first set of reflectance functions and the result of the second process is the second set of reflectance functions; and wherein the determining a comparison between the result of the first process and the result of the second process comprises: determining an intersection volume between a first volume in the reflectance space defined by the first set of reflectance functions and a second volume in the reflectance space defined by the second set of reflectance functions.

4. The method of claim 1: wherein the first process comprises: determining a first plurality of sets of reflectance functions for the first image data point, each of the sets of reflectance functions of the first plurality of sets of reflectance functions corresponding to a different illuminant function; and selecting the first set of reflectance functions from the first plurality of sets of reflectance functions; and wherein the second process comprises: determining a second plurality of sets of reflectance functions for the second image data point, each of the sets of reflectance functions of the second plurality of sets of reflectance functions corresponding to a different illuminant function; and selecting the second set of reflectance functions from the second plurality of sets of reflectance functions.

5. The method of claim 4: wherein the selecting the first set of reflectance functions comprises selecting the first set of reflectance functions based on a comparison of a volume in the reflectance space of the first set of reflectance functions to a volume of at least one other set of reflectance functions of the first plurality of sets of reflectance functions; and/or wherein the selecting the second set of reflectance functions comprises selecting the second set of reflectance functions based on a comparison of a volume in the reflectance space of the second set of reflectance functions to a volume of at least one other set of reflectance functions of the second plurality of sets of reflectance functions.

6. The method of claim 1: wherein the determining the comparison between the result of the first process and the result of the second process comprises determining whether the first set of reflectance functions and the second set of reflectance functions correspond to the same illuminant function or whether the first set of reflectance functions and the second set of reflectance functions correspond to different illuminant functions.

7. The method of claim 1: wherein the first image data point and the second image data point are two neighboring pixels of the image data.

8. The method of claim 1: comprising: performing a third process for determining, for one or more further image data points, different to the first image data point and the second image data point, of the image data, a respective further set of reflectance functions; and wherein the determining the property of the image data is based on a comparison of the results of the first process and the second process and a result of the third process.

9. The method of claim 8: wherein the first image data point, the second image data point and the one or more further image data points are three neighboring pixels of the image data.

10. The method of claim 1: wherein the property of the image data is a boundary between a first portion of a surface shown by the image data and a second portion of the surface shown by the image data, wherein the first portion and the second portion are illuminated by illuminants having different spectral power distributions.

11. The method of claim 1: wherein the property of the image data is a boundary between a first portion of a surface shown by the image data and a second portion of a surface shown by the image data, wherein the first portion and the second portion have different reflectance functions.

12. The method of claim 1: wherein the property of the image data is a property of a first illuminant illuminating a portion of a surface shown by the image data.

13. The method of claim 1: wherein the first process comprises using a first illuminant function for determining the first set of reflectance functions; and wherein the second process comprises using a second illuminant function, different to the first illuminant function, for determining the second set of reflectance functions.

14. The method of claim 1: wherein the property of the image data is a spectral smoothness of a portion of the image including the first image data point and the second image data point.

15. The method of claim 1: wherein the property of the image data is an estimate of a reflectance function for one of the plurality of image data points of the image data.

16. An imaging system to:
receive or determine image data, the image data comprising a plurality of image data points, each of the plurality of image data points having a respective color value in a first color space;
perform a first process for determining a first set of reflectance functions for a first image data point of the plurality of image data points, the reflectance functions representing in a reflectance space a reflectance of a surface at a plurality of wavelengths, the first process being for determining the first set of reflectance functions based on a first color value of the first image data point;
perform a second process for determining a second set of reflectance functions for a second image data point of the plurality of image data points, the reflectance functions representing, in the reflectance space, a reflectance of a surface at a plurality of wavelengths, the second process being for determining the second set of reflectance functions based on a second color value of the second image data point;
determine a comparison between a result of the first process and a result of the second process; and
determine, based on the comparison between the result of the first process and the result of the second process, a property of the image data.

17. A non-transitory machine-readable medium comprising instructions which, when executed by a processor, cause the processor to:
receive or determine image data, the image data comprising a plurality of image data points, each of the plurality of image data points having a respective color value in a first color space;
perform a first process for determining a first set of reflectance functions for a first image data point of the plurality of image data points, the reflectance functions representing in a reflectance space a reflectance of a surface at a plurality of wavelengths, the first process being for determining the first set of reflectance functions based on a first color value of the first image data point;
perform a second process for determining a second set of reflectance functions for a second image data point of the plurality of image data points, the reflectance functions representing, in the reflectance space, a reflectance of a surface at a plurality of wavelengths, the second process being for determining the second set of reflectance functions based on a second color value of the second image data point;
determine a comparison between a result of the first process and a result of the second process; and
determine, based on the comparison between the result of the first process and the result of the second process, a property of the image data.

Description:
ANALYZING IMAGE DATA

BACKGROUND

[0001] Color is a concept that is understood intuitively by human beings. However, it is a subjective phenomenon rooted in the retinal and neural circuits of a human brain. A “color” is a category that is used to denote similar visual perceptions; two colors are said to be the same if they produce a similar effect on a group of one or more people. Color can be represented in a large variety of ways. For example, in one case a color may be represented by a power or intensity spectrum across a range of visible wavelengths. In another case, a color model may be used to represent a color using a small number of variables.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] Various features of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate certain example features, and wherein:

[0003] FIG. 1 is a flow chart representation of a method according to examples described herein;

[0004] FIG. 2 is a flow chart representation of a method of determining a paramer set for use in a method according to examples described herein;

[0005] FIG. 3 shows a set of target volumes corresponding to a respective set of target color values in a first color space, determined according to part of a method according to examples described herein;

[0006] FIG. 4 shows a set of reflectance functions forming a paramer set determined according to a method according to examples described herein for an example target color value shown in FIG. 3;

[0007] FIG. 5 shows a schematic representation of a part of a method according to examples described herein;

[0008] FIG. 6 shows a schematic representation of another part of a method according to examples described herein;

[0009] FIG. 7 shows an example imaging system for performing a method according to examples described herein; and

[0010] FIG. 8 shows an example of a non-transitory machine-readable medium for implementing a method according to examples described herein.

DETAILED DESCRIPTION

[0011] In the following description, for purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to "an example" or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.

[0012] Certain examples described herein provide a method of analyzing image data by comparing one or more sets of reflectances corresponding to one or more image data points of the image data.

[0013] In this context, certain examples described herein relate to color mapping in an imaging system. Color mapping is a process by which a first representation of a given color is mapped to a second representation of the same color. As mentioned above, color may be represented by a power or intensity spectrum across a range of visible wavelengths. However, this is a high dimensionality representation. To represent color at a lower dimensionality, i.e. using a lower number of variables, a color model can be used.

[0014] A color model can define a color space. A color space in this sense may be defined as a multi-dimensional space, wherein a point in the multidimensional space represents a color value and dimensions of the space represent variables within the color model. For example, in a Red, Green, Blue (RGB) color space, an additive color model defines three variables representing different quantities of red, green and blue light. Other color spaces include: a Cyan, Magenta, Yellow and Black (CMYK) color space, wherein four variables are used in a subtractive color model to represent different quantities of colorant, e.g. for a printing system; the International Commission on Illumination (CIE) 1931 XYZ color space, wherein three variables (‘X’, ‘Y’ and ‘Z’, or tristimulus values) are used to model a color; and the CIE 1976 (L*, a*, b* — CIELAB) color space, wherein three variables represent lightness (‘L*’) and opposing color dimensions (‘a*’ and ‘b*’). Certain color spaces, such as RGB and CMYK, may be said to be device-dependent, e.g. an output color with a common RGB or CMYK value may have a different perceived color when using different imaging systems.

[0015] Colors are formed by the interaction of a surface, a light source, and an observer. A light source may be characterized by a power or intensity function defining the power or intensity of the light across the range of visible wavelengths. The power or intensity spectrum of the light source may be characterized by use of a number of spectral bands distributed across the range of visible wavelengths. For example, the spectrum for a given light source may be specified by N intensity values representing the intensity of the light source at N different sample wavelengths distributed evenly over the range of visible wavelengths of 400-700 nm. In some examples, N may be, for example, 16 or 31. This representation of the spectrum for the given light source may be referred to as an intensity function. The reflectance of a surface may similarly be defined by N values specifying the proportion of light reflected by the surface at the N different sample wavelengths. This representation of the reflectance of a surface may be referred to as the reflectance function, or simply the reflectance, of the surface. In such a representation as is described above, therefore, the reflectance of a surface is defined in a spectral space of N dimensions. This spectral space of N dimensions is sometimes referred to as reflectance space. In a similar manner, the sensitivity of an observer to light may vary with the wavelength of the light. Thus, an observer can be characterized by a spectral sensitivity function.

[0016] A given combination of a light source having a given intensity function, a surface having a given reflectance function, and an observer having a given spectral sensitivity function induces a given color response in the observer. The observer can characterize the color response by a set of color values in a color space. An observer may represent the color response in a color space which has a lower number of dimensions than the reflectance space. For example, the observer may characterize the color by XYZ tristimulus values, thereby representing the color response in a 3-dimensional color space.
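By way of illustration only, this sampled-spectrum model of color formation can be sketched in a few lines of Python. The flat illuminant, linear-ramp reflectance, and Gaussian "sensitivity" curves below are illustrative stand-ins, not measured data or values taken from this application:

```python
import numpy as np

# Sampled-spectrum sketch: N = 31 samples over 400-700 nm, as in the
# representation described above. All spectra here are placeholders.
N = 31
wavelengths = np.linspace(400.0, 700.0, N)

illuminant = np.ones(N)                   # flat (equal-energy) light source
reflectance = np.linspace(0.2, 0.8, N)    # surface reflecting more at long wavelengths

# Stand-in observer sensitivity functions (3 x N), one row per channel;
# a real system would substitute e.g. the CIE standard observer curves.
sensitivities = np.stack([
    np.exp(-0.5 * ((wavelengths - mu) / 40.0) ** 2)
    for mu in (600.0, 550.0, 450.0)
])

# The color response is the wavelength-wise product of illuminant,
# reflectance, and sensitivity, summed over the N samples.
xyz = sensitivities @ (illuminant * reflectance)
print(xyz.shape)  # (3,): an M = 3 dimensional value from N = 31 dimensional spectra
```

The key point the sketch makes concrete is the dimensionality reduction: N-dimensional spectra collapse to an M-dimensional color value through a linear map.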

[0017] However, various different combinations of light source and surface can induce in the observer the same color response represented in the color space. For example, under the same light source, various different reflectance functions can induce the same tristimulus color values in the observer. The set of reflectance functions which induce the same color value in the observer under the same light source is referred to as the metamer set. The reflectance functions making up the metamer set are referred to as metamers.

[0018] A paramer set defines a set of reflectance functions which match a target color value under given conditions to within a particular tolerance. That is, a paramer is a reflectance which matches the target color value, but may not match the target color value as closely as a metamer.
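The metamer/paramer distinction can be made concrete with a small sketch. Here the illuminant and observer are folded into a single M x N "color-formation" matrix; the matrix, tolerance, and reflectances are illustrative assumptions, not values from the application:

```python
import numpy as np

def is_paramer(reflectance, target_color, formation, tol):
    """True if the color induced by the reflectance matches the target
    color to within tol in every channel. A tolerance of zero recovers
    an exact, metameric match."""
    return bool(np.all(np.abs(formation @ reflectance - target_color) <= tol))

# Illustrative 2 x 4 color-formation map: each channel averages the samples.
formation = np.full((2, 4), 0.25)
target = np.array([0.4, 0.4])

flat = np.array([0.4, 0.4, 0.4, 0.4])    # matches the target exactly
spiky = np.array([0.8, 0.0, 0.4, 0.4])   # different spectrum, same color: a metamer
bright = np.array([0.9, 0.9, 0.9, 0.9])  # does not match

print(is_paramer(flat, target, formation, 0.01))    # True
print(is_paramer(spiky, target, formation, 0.01))   # True
print(is_paramer(bright, target, formation, 0.01))  # False
```

Note how `flat` and `spiky` are distinct reflectances inducing the same color value: this is metamerism in miniature.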

[0019] FIG. 1 shows a block diagram representation of an example method 100 of analyzing image data. The method comprises, at block 110, receiving or determining image data, the image data comprising a plurality of image data points. Each of the plurality of image data points has a respective color value in a first color space. The first color space may be, for example, a tristimulus color space wherein points in the first color space are defined by tristimulus values. For example, the first color space may be an RGB color space. In other examples, the first color space may be the CIE XYZ or the CIE L*a*b* color space. The color value of each of the plurality of image data points defines a point in the first color space. For example, in the case where the first color space is an RGB color space, each color value comprises three values respectively for R, G and B. Each image data point of the plurality of image data points may be a pixel of an image. The image data may be obtained by an image capturing device, such as a camera or a scanner. The image data may be received from such a device or from another device with access to the image data.

[0020] At block 120, the method comprises performing a first process for determining a first set of reflectance functions for a first image data point of the plurality of image data points. Each of the reflectance functions of the first set of reflectance functions represents, in a reflectance space, reflectance values at a plurality of wavelengths for a surface corresponding to the first image data point. The first process is for determining the first set of reflectance functions based on a first color value of the first image data point. The first set of reflectance functions may be a set of metamers or a set of paramers for the first color value in the color space.
That is, the first process is a process for determining a candidate set of reflectances which could have produced the first color value at the device which captured the image data. For example, in the example where the image data is a real-world image, for example a photograph, the set of reflectances may comprise candidates for the reflectance of a surface which could have produced a given, e.g. RGB, value captured by the camera for a given pixel under given lighting assumptions. As described above, a reflectance function is an N-dimensional function representing a proportion of light reflected by a surface at each of N sample wavelengths in the visible range. An example method of determining a set of paramers for use in an example of the method according to FIG. 1 is described below. However, the first process may comprise any suitable method of determining a set of reflectances, such as a metamer set or a paramer set, for the first image data point.
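A deliberately naive way to picture such a process is rejection sampling: draw candidate reflectances and keep those whose induced color falls within a tolerance of the observed value. A practical implementation would instead characterize the set geometrically (as in the method of FIG. 2); every matrix and number below is an illustrative stand-in:

```python
import numpy as np

# Brute-force sketch of a "first process": keep candidate reflectances
# whose induced color is close to the observed color value.
rng = np.random.default_rng(0)
N = 8                                  # coarse spectral sampling for the sketch
formation = rng.random((3, N)) / N     # stand-in 3 x N color-formation map
candidates = rng.random((5000, N))     # random reflectances in [0, 1]

observed = formation @ candidates[0]   # pretend this color value was captured
tol = 0.02

colors = candidates @ formation.T
candidate_set = candidates[np.all(np.abs(colors - observed) <= tol, axis=1)]
print(len(candidate_set) >= 1)  # True: at least candidates[0] is recovered
```

Since `observed` is constructed from `candidates[0]`, the retained set is guaranteed to be non-empty; with a larger tolerance it approximates a paramer set, and in the limit of zero tolerance only exact metamers remain.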

[0021] At block 130, the method comprises performing a second process for determining a second set of reflectance functions for a second image data point of the plurality of image data points. Each reflectance function of the second set of reflectance functions represents, in the reflectance space, reflectance values at a plurality of wavelengths for a surface corresponding to the second image data point. The second process is for determining the second set of reflectance functions based on a second color value of the second image data point. The second set of reflectance functions, as described above for the first set of reflectance functions, may, for example, comprise a metamer set or paramer set for the second image data point. The second process may comprise any of the features which are described herein as potential features of the first process.

[0022] The method comprises, at block 140, determining a comparison between a result of the first process and a result of the second process. In examples, the result of the first process is the first set of reflectances, e.g. a metamer or paramer set, for the first image data point and the result of the second process may be the second set of reflectances, e.g. a metamer or paramer set, for the second image data point. Thus, block 140 may involve comparing the first set of reflectances to the second set of reflectances. As above, the first set of reflectances may be a first paramer or metamer set while the second set of reflectances may be a second paramer or metamer set. By comparing the first and second paramer or metamer sets to one another, a property of the image data may be obtained. Examples of such properties will be discussed in more detail below. In another example, the result of one of the first process and the second process may be that no reflectance functions matching the given target color value are found. In this case, a property of the image data may still be obtained. For example, a property of the image data may be determined based on the fact that one of the first process and the second process produced no candidate reflectances for one of the image data points. Examples of this will be discussed below.

[0023] In some examples, the first process and the second process are for determining respective first and second sets of reflectance functions for the same image data point. For example, the first process may make a first set of assumptions relating to the observer and/or the illuminant which produced a given measured color value in the first color space while the second process may make a different, second set of assumptions relating to the observer and/or the illuminant. Such examples may allow a property of the image data, such as a property of the illuminant which illuminates the surface represented by the image data, to be determined. Examples of this will be described below.

[0024] Determining a comparison between the first set of reflectance functions and the second set of reflectance functions may comprise determining an intersection between a first volume in the reflectance space defined by the first set of reflectance functions and a second volume in the reflectance space defined by the second set of reflectance functions. For example, each of the first and second sets of reflectance functions may comprise a metamer set or a paramer set. The metamer/paramer sets may be compared by determining an intersection between them. In one example, each of the first and second sets of reflectance functions may define a paramer set defining a respective volume in the N-dimensional reflectance space. These paramer sets may be compared by determining an intersection between these volumes.

[0025] Since reflectances forming a metamer set are represented in a higher-dimensional space than the target color value, the target color value is said to underdetermine the reflectances. That is, reflectances are represented in N-dimensional reflectance space, where (as above) N may, for example, equal 16 or 31, while a capture device is defined in an M-dimensional color space, where, for example, M equals 3 in examples where the color space is an RGB or CIE XYZ color space. Therefore, in such examples, M « N.

[0026] In one representation, the metamer set is the intersection of the space of possible reflectances, in N-dimensional reflectance space, with a hyperplane defined by the target color value. Therefore, the metamer set itself resides in a sub-space of reflectance space, the sub-space having N - M dimensions. Accordingly, intersecting two metamer sets is in general not possible unless the hyperplanes of the two metamer sets intersect. Even in the case where two metamer sets may be intersected due to their respective hyperplanes intersecting, the ‘volume’ of the intersection is degenerate, i.e. lower-dimensional than the space in which the intersection is being computed.
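The dimension counting above can be checked numerically: for a full-rank M x N color-formation map, reflectances inducing the same color differ by vectors in the map's null space, which has N - M dimensions. The random matrix below is a stand-in for a real device characterization:

```python
import numpy as np
from scipy.linalg import null_space

# Full-rank M x N stand-in color-formation map.
M, N = 3, 31
rng = np.random.default_rng(1)
formation = rng.random((M, N))

basis = null_space(formation)  # columns span {d : formation @ d = 0}
print(basis.shape[1])          # 28, i.e. N - M metameric degrees of freedom

# Moving along any null-space direction leaves the induced color unchanged.
r = rng.random(N)
metamer = r + 0.1 * basis[:, 0]
print(np.allclose(formation @ r, formation @ metamer))  # True: same color
```

This is exactly why the metamer set lives in an (N - M)-dimensional affine sub-space, and why its intersection with another metamer set is degenerate.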

[0027] Computing the intersection between two or more paramer sets, in examples described herein, may be done in the full N-dimensional space, which may provide certain advantages. In examples, the intersection of two or more paramer sets may be determined by computing the enclosing hyperplanes, i.e. half-planes or inequalities, for each of the two or more paramer sets and then computing the half-plane intersection, i.e. the extreme vertices of the convex hull, for all of the half-planes or inequalities. The result of the computation of the intersection is a convex subspace of reflectance space that is enclosed in all of the paramer sets. This convex subspace corresponds to the set of reflectances that are in all of the two or more paramer sets.
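As a sketch of the half-plane intersection step, the example below works in 2D rather than the full N-dimensional reflectance space, purely for readability, and uses SciPy as assumed tooling (its halfspace convention is stacked rows [A | b] with A x + b <= 0, plus a point interior to all constraints). Two overlapping unit squares stand in for two paramer sets:

```python
import numpy as np
from scipy.spatial import HalfspaceIntersection, ConvexHull

# "Paramer set" A: the unit square 0 <= x <= 1, 0 <= y <= 1.
square_a = np.array([
    [-1.0,  0.0,  0.0],   # -x     <= 0
    [ 1.0,  0.0, -1.0],   #  x - 1 <= 0
    [ 0.0, -1.0,  0.0],   # -y     <= 0
    [ 0.0,  1.0, -1.0],   #  y - 1 <= 0
])
# "Paramer set" B: the same square shifted by (0.5, 0.5).
square_b = square_a.copy()
square_b[:, 2] += np.array([0.5, -0.5, 0.5, -0.5])

halfspaces = np.vstack([square_a, square_b])
interior = np.array([0.75, 0.75])            # a point inside both squares
hs = HalfspaceIntersection(halfspaces, interior)

volume = ConvexHull(hs.intersections).volume  # area, since this is 2D
print(round(volume, 6))  # 0.25: the overlap region [0.5, 1] x [0.5, 1]
```

The same construction carries over to N dimensions: stack the enclosing inequalities of all the paramer sets, intersect the half-spaces, and take the convex hull of the resulting extreme vertices to obtain the shared convex subspace of reflectance space.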

[0028] At block 150, the method comprises determining, based on the comparison between the result of the first process and the result of the second process, a property of the image data.

[0029] The property of the image data may be a boundary between a first section of the image data and a second section of the image data. For example, the property of the image data may be a boundary between a first portion of a surface shown by the image data and a second portion of the surface shown by the image data. For example, the first portion of the surface and the second portion of the surface may be portions of the same surface which are illuminated by different illuminants, i.e. illuminants having different spectral power distributions. In another example, the first portion of the surface and the second portion of the surface may represent different types of material having different reflectance functions. In such an example, the first portion of the surface and the second portion of the surface may be lit by the same illuminant. Alternatively, in some examples, a boundary between surfaces of different material which are lit by different illuminants may be identified. In another example, the property of the image data which is determined by comparing the results of the first process and the second process is a property of an illuminant illuminating a part or a whole of the surfaces represented by the image data.

[0030] In certain examples, the method also comprises performing a third process for determining, for one or more further image data points of the image data, a respective further set of reflectance functions. In such examples, the determining the property of the image data is based on a comparison of the results of the first process and the second process and a result of the third process. The respective further sets of reflectance functions may each be a metamer set or a paramer set for one of the further image data points.

[0031] Certain examples described herein allow for a property of the image to be determined using the context in the image of the first image data point and the second image data point. For example, if the first and second image data points are neighboring pixels in the image, the first set of reflectances and the second set of reflectances can be compared and information about the actual surfaces and illuminants which produced the respective color values at the first and second image data point can be determined. For example, if a first set of reflectances has a small intersection with the second set of reflectances, then it may be determined that the respective stimuli, i.e. the respective combinations of surface and illuminant, which produced the first set of reflectances and the second set of reflectances differ in some way. For example, as mentioned above, it may be that the first image data point and the second image data point represent the same type of surface illuminated by different illuminants, i.e. there is an illumination boundary between the first and second image data points. Alternatively, the small intersection may be due to the first and second image data points being illuminated by the same illuminant but having different reflectances, i.e. there is a material/surface boundary between the first and second image data points. Examples of determining such properties of the image are described below in more detail. First, a method of determining a paramer set for use in an example of the method according to FIG. 1 will be described.
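The "small intersection suggests a boundary" reasoning above can be reduced to a one-line decision rule. The relative-overlap test and the 0.1 threshold below are illustrative choices, not values taken from the application:

```python
def likely_boundary(vol_first, vol_second, vol_intersection, threshold=0.1):
    """Flag a candidate illumination or material boundary between two
    image data points when their paramer sets barely overlap.

    Volumes are measured in reflectance space; the threshold is an
    illustrative tuning parameter."""
    smaller = min(vol_first, vol_second)
    if smaller == 0.0:
        return True  # an empty paramer set already signals a mismatch
    return vol_intersection / smaller < threshold

print(likely_boundary(2.0, 1.5, 0.9))   # False: the sets overlap substantially
print(likely_boundary(2.0, 1.5, 0.05))  # True: the sets are almost disjoint
```

On its own this rule cannot distinguish an illumination boundary from a material boundary; as the description notes, that distinction requires the further comparisons described below.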

[0032] FIG. 2 shows a block diagram representation of a method 1000 of determining a paramer set. The method 1000 may be performed as part of an example of the method according to FIG. 1. In particular, the method 1000 may be performed as part of the first process at block 120 and/or the second process at block 130. The method 1000 will be described here in the context of computing a paramer set for the first image data point, at block 120. The method comprises, at block 1010, determining a target color value corresponding to a point in the first color space. The target color value is the first color value corresponding to the first image data point. The target color value is determined from the image data received at block 110.

[0033] At block 1020, the method comprises determining a color value criterion defining a target volume in the first color space. The color value criterion may also be referred to herein as a colorimetry criterion. The color value criterion defines a target volume forming a target region in the first color space which includes the target color value.

[0034] The target volume defines the colors in the first color space which are considered a match to the target color, to a given degree, under the given conditions for which the paramer set is being computed. In other words, the color value criterion defines how closely the color induced by a reflectance function under reference conditions should match the target color value for the reflectance function to be a paramer of the target color value. In the example where the first color space is the 3-dimensional XYZ space, the target color is represented by a point and the target volume defines a 3-dimensional volume including the point representing the target color. In such an example, the color value criterion may define a cube, a sphere, or another type of target volume enclosing the target color value. For example, the color value criterion may define the target volume as including the points falling within a range of the target color value plus or minus a tolerance. The target volume may, for example, be a cuboid if the tolerances define the target volume as (X, Y, Z) = (Xtarget +/- Xtol, Ytarget +/- Ytol, Ztarget +/- Ztol), where Xtarget, Ytarget, and Ztarget are respectively the coordinates of the target color value in XYZ space and Xtol, Ytol, and Ztol are respectively tolerances in the X, Y and Z dimensions which define the target volume. For example, if in the above representation the tolerances in the X, Y and Z dimensions are equal to one another, then the target volume is a cube having sides of length 2 x the tolerance with the target color at the center of the cube. In another example, the target volume in the color space may be a sphere, or an approximation of a sphere. The sphere may be centered on the target color value.

[0035] In certain examples, the target volume in the first color space may be defined according to a volume in a second color space.
For example, the target volume may be defined as the volume in the first color space which maps to a particular volume in the second color space. The second color space may be a perceptually uniform color space, such as the L*a*b* color space. In one example, a sphere may be defined in L*a*b* such that points within the sphere are within a given perceptual distance of the target color value, as defined by a point in the L*a*b* space. Each color in the L*a*b* space may be mapped to a color in the XYZ color space by a suitable process. Accordingly, the sphere in the L*a*b* space may be mapped into the XYZ space to produce the target volume. The target volume in the first, XYZ, space may therefore comprise a projection of a sphere defined in the second, L*a*b*, color space. Where a sphere is defined in L*a*b* space, a radius of the sphere may be set such that each of the colors in the sphere is within a given perceptual distance of the target color value. For example, the radius of the sphere in L*a*b* may be set as 1 delta E (DE) unit such that the sphere, and the projection of the target volume into the first color space, defines colorimetries which are within 1 DE of the target colorimetry under the reference conditions.
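By way of illustration, the projection of a 1 DE sphere from L*a*b* into XYZ may be sketched as follows in Python. The white point values, the sampling scheme, and the function names are illustrative assumptions rather than part of the method itself:

```python
import numpy as np

# Assumed D50 white point; exact values depend on the chosen reference conditions.
D50_WHITE = np.array([0.9642, 1.0, 0.8251])

def lab_to_xyz(lab, white=D50_WHITE):
    """Standard CIE L*a*b* to XYZ conversion for a given white point."""
    L, a, b = lab
    fy = (L + 16.0) / 116.0
    fx = fy + a / 500.0
    fz = fy - b / 200.0
    delta = 6.0 / 29.0
    # Inverse of the CIE f() function, with the linear branch near zero.
    f_inv = lambda t: t ** 3 if t > delta else 3.0 * delta ** 2 * (t - 4.0 / 29.0)
    return white * np.array([f_inv(fx), f_inv(fy), f_inv(fz)])

def project_de_sphere(center_lab, radius=1.0, n=256, seed=0):
    """Sample the sphere of `radius` DE units around a target L*a*b* value
    and map each sample into XYZ; the mapped points outline the (generally
    non-spherical) target volume in the first color space."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return np.array([lab_to_xyz(np.asarray(center_lab, float) + radius * d)
                     for d in dirs])
```

As a sanity check, L*a*b* = (100, 0, 0) maps back to the white point itself.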

[0036] In examples, the target volume in the first color space may be defined by a convex hull of a polytope, e.g. where the first color space is a 3D color space, a polyhedron. A given target volume may have a shape which is an approximation of a given geometric shape defined by a convex hull of a polytope. For example, where the first color space is a 3D color space, the target volume in the first color space may be an approximation of a sphere defined by a convex hull of a polyhedron. The target volume in the first color space may, in some examples, be a projection in the first color space of a convex polytope in the second color space. For example, a convex hull of a polyhedron which approximates a sphere may be defined in the second color space and the target volume may be a projection in the first color space of that approximation of a sphere. The convex polytope in the first color space and/or the convex polytope in the second color space may be defined by a set of inequalities. In other examples, the target volume may be a polytope which is not convex. Where the target volume in the first color space is a polytope which is not convex, the target volume may be defined by a surface tessellation rather than by a set of inequalities. Such a surface tessellation may describe a surface defining the target volume using the smallest degree simplex of the color space. For example, in a 3D color space the target volume may be defined by a triangulation, while in a 4D color space the target volume may be defined by a tetrahedralization. In some examples where the target volume in the first color space is a projection of a volume in the second color space, the volume in the second color space may be a convex polytope but, when projected into the first color space, may define a target volume which is not convex. The reverse may also be true in that the volume in the second color space may be a non-convex polytope while the target volume is a convex polytope.
In such examples the non-convex polytope, whether in the first color space or the second color space, may be described by a surface tessellation.

[0037] The color value criterion may be formulated to specify that, under a set of reference conditions, i.e. under a given illuminant and when observed by a given observer, a reflectance induces in the observer a color value within the target volume in the first color space. In some examples, the reference conditions comprise a standard illuminant and/or a standard observer, for example a standard CIE illuminant and/or a standard CIE observer. For example, the illuminant of the reference conditions may be one of the standard CIE illuminants or illuminant series A, B, C, D, E or F. Each of these standard illuminants or series of illuminants is defined to approximate the spectral power distribution of a given type of light. For example, Illuminant A is intended to represent an average incandescent light, while Illuminant B is intended to represent direct sunlight. Illuminant series D is a series of standard illuminants intended to represent natural daylight. An example standard illuminant of the Illuminant series D is the D50 illuminant. One example of a standard observer which may form a part of the reference conditions is the CIE 1931 standard observer; another is the CIE 1964 standard observer. The color value induced in the observer by a reflectance under the reference conditions may be stated as S1*ref, where: S1 defines the reference conditions including the reference illuminant and the reference observer; and ref is the reflectance. Thus, the color value criterion may comprise one or more inequalities. For example, the color value criterion where the target volume is a cube may be formulated as an inequality having the following form:

S1*ref = (Xtarget +/- Xtol, Ytarget +/- Ytol, Ztarget +/- Ztol)

Other ways of defining the target volume are, for example, in the case where the target volume is convex, by a set of linear half-spaces defined in the form [A, b] such that A · (Xtarget, Ytarget, Ztarget) <= b. In examples where the target volume is not convex, a surface tessellation, as mentioned above, may be defined that contains (Xtarget, Ytarget, Ztarget).
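For illustration, the cuboid target volume may be written in the [A, b] half-space form described above. This is a sketch with illustrative function names; six inequalities suffice for a cuboid:

```python
import numpy as np

def cube_halfspaces(target, tol):
    """Half-space form [A, b] of the cuboid target volume, i.e. the set of
    XYZ points x with A @ x <= b. Rows of A are the six outward face
    normals of the cuboid; tol may be a scalar or a per-axis tolerance."""
    t = np.asarray(target, float)
    s = np.asarray(tol, float) * np.ones(3)
    A = np.vstack([np.eye(3), -np.eye(3)])
    b = np.concatenate([t + s, -(t - s)])
    return A, b

def in_target_volume(A, b, xyz):
    """Color value criterion: does a color value fall inside the target
    volume defined by the half-spaces [A, b]?"""
    return bool(np.all(A @ np.asarray(xyz, float) <= b + 1e-12))
```

For example, with target (0.4, 0.5, 0.3) and tolerance 0.5, the target itself satisfies the criterion while (1.0, 0.5, 0.3) does not.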

[0038] At block 1030, the method comprises determining one or more reflectance criteria. The one or more reflectance criteria are criteria to be satisfied by each reflectance function which forms a member of a paramer set for the target color. In certain examples, the one or more reflectance criteria define one or more inequalities to be satisfied by individual reflectance values which make up a given reflectance function. That is, as described above, each reflectance function defines N different reflectance values, each of the N reflectance values corresponding to a proportion of light which is reflected at different wavelengths. One or more inequalities may be defined for each of the N reflectance values.

[0039] In one example, the one or more reflectance criteria comprise a first reflectance criterion defining that no individual reflectance value may be less than 0. This represents the physical constraint that the percentage of light which is reflected by a surface at any one wavelength cannot be less than 0, i.e. at any given wavelength, no less than zero light can be reflected. The first reflectance criterion, in examples, is formulated as N inequalities to be satisfied by a given reflectance function, each of which specifies that a reflectance value at a given wavelength is greater than or equal to 0.

[0040] An example second reflectance criterion defines a limit on the maximum value that individual reflectance values of the reflectance function may take. For most surfaces, an appropriate maximum reflectance value at any given wavelength is 100%. However, in some circumstances, such as that of a fluorescent surface, the maximum value of reflectance for a surface at a given wavelength may be greater than 100%. The second reflectance criterion may also be formulated as a set of inequalities to be satisfied by the individual reflectance values of a reflectance function. The maximum reflectance value may be the same for each of the wavelengths for which the reflectance function defines a reflectance value, or the maximum reflectance value may be different for different wavelengths. The maximum reflectance value may be set based on empirical knowledge of the reflectances of the type of surfaces of interest. If the paramer set is to be used in the context of fluorescent surfaces, then a maximum reflectance may be set based on known maximum reflectance values of fluorescent surfaces of interest. In one such example, where fluorescent surfaces are of interest, a maximum reflectance value is set at around 1.8 (i.e. 180% reflectance). In other such examples, the maximum reflectance value may be set at a value from 1.5 to 2. In certain examples, maximum reflectance values may be relaxed (e.g. set at greater than 1) at wavelengths at which emission peaks of known fluorescent surfaces occur. In such examples, maximum reflectance values at other wavelengths at which emission peaks are not known to occur may remain at a standard value of, e.g., 1.

[0041] In some examples, minimum and/or maximum reflectance values, as defined by the first and second reflectance criteria, may be set to account for noise. For example, instead of the first reflectance criterion defining a minimum reflectance of 0, the minimum reflectance value may be set at 0 - εl. Similarly, the maximum reflectance value may be set at 1 + εu, or at a set reflectance value + εu, e.g. where the set reflectance value is set to account for fluorescence, as described above. Here, εl and εu are tolerances to account for noise at, respectively, lower and upper ends of the range of possible reflectance values.
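The first and second reflectance criteria, including the noise tolerances εl and εu, may be sketched as a system of 2N linear inequalities. The helper below is illustrative; the per-wavelength relaxation of the maximum mirrors the fluorescent-peak example above:

```python
import numpy as np

def reflectance_bounds(n, r_max=1.0, eps_l=0.0, eps_u=0.0):
    """First and second reflectance criteria as 2N inequalities A @ r <= b:
    each reflectance value satisfies r_k >= 0 - eps_l and r_k <= r_max + eps_u.
    r_max may be a scalar or a per-wavelength vector (e.g. relaxed above 1
    at known fluorescent emission peaks)."""
    A = np.vstack([-np.eye(n), np.eye(n)])
    upper = np.broadcast_to(np.asarray(r_max, float), (n,)) + eps_u
    b = np.concatenate([np.full(n, eps_l), upper])   # -r_k <= eps_l
    return A, b
```

A reflectance function then satisfies the criteria exactly when all 2N inequalities hold; raising r_max to 1.8 admits the fluorescent reflectances discussed in [0040].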

[0042] In some examples, one or more further reflectance criteria are used in addition to the first and the second reflectance criteria. One example of a further reflectance criterion is a naturalness constraint. A naturalness constraint imposes the condition that any reflectance function which is to form a member of the paramer set has a particular property related to the physical realizability of the reflectance function. That is, a naturalness constraint is intended to limit the reflectance functions for inclusion in the paramer set to reflectance functions which correspond to real-world surfaces. For example, the naturalness constraint may specify that any reflectance function which is to form a member of the paramer set varies smoothly over the range of visible wavelengths. Accordingly, in one example, a set of representative physically-realizable reflectances may be defined. A given reflectance may be considered to satisfy the naturalness constraint if the reflectance is an additive combination of one or more of the reflectances included in the set of representative physically-realizable reflectances. The naturalness constraint may be formulated such that it is satisfied by reflectances which fall within the convex hull of the representative set of physically-realizable reflectance functions and not by reflectances which fall outside this convex hull. In an example, the naturalness constraint, like the first and second reflectance criteria, may take the form of a set of linear inequalities placed on the reflectance function with one or more linear inequalities placing constraints on each of the N values of the reflectance function. The naturalness constraint may be formulated as defined in the following paper, the entirety of which is incorporated herein by reference: Peter Morovic and Graham D. Finlayson, "Metamer-set-based approach to estimating surface reflectance from camera RGB," J. Opt. Soc. Am. A 23, 1814-1822 (2006).

[0043] At block 1040, the method comprises computing, based on the one or more reflectance criteria and the color value criterion, a boundary, in the reflectance space, of points which satisfy the color value criterion and the one or more reflectance criteria. The boundary may define a bounding box in the N-dimensional reflectance space of points satisfying the color value criterion and the one or more reflectance criteria. In some examples, the boundary is a convex hull defining the points which satisfy the color value criterion and the one or more reflectance criteria. Computing the convex hull in reflectance space provides a set of reflectance functions forming the extreme vertices of the convex hull. Each of these reflectance functions satisfies the color value criterion and the one or more reflectance criteria. The reflectance functions so obtained form the paramer set for the target color value. The paramer set comprises one or more reflectance functions. The convex hull may be computed using a program such as Qhull, such as is described in the following paper, the entirety of which is incorporated herein by reference: Barber, C.B., Dobkin, D.P., and Huhdanpaa, H.T., "The Quickhull algorithm for convex hulls," ACM Trans. on Mathematical Software, 22(4):469-483, Dec 1996, http://www.qhull.org.
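As an illustrative sketch, the vertex enumeration at block 1040 may be performed with SciPy, whose HalfspaceIntersection routine is backed by Qhull. Finding a strictly interior point via a Chebyshev-center linear program is an implementation detail assumed here, not prescribed by the method:

```python
import numpy as np
from scipy.optimize import linprog
from scipy.spatial import HalfspaceIntersection

def paramer_set_vertices(A, b):
    """Extreme vertices of the feasible region {r : A @ r <= b} defined by
    the color value criterion and the reflectance criteria. A strictly
    interior (Chebyshev-center) point is found first by linear programming,
    since the Qhull-backed HalfspaceIntersection requires one."""
    d = A.shape[1]
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    # Maximize the inscribed-ball radius t subject to A x + ||a_i|| t <= b.
    res = linprog(c=np.r_[np.zeros(d), -1.0],
                  A_ub=np.hstack([A, norms]), b_ub=b,
                  bounds=[(None, None)] * d + [(0.0, None)])
    interior = res.x[:d]
    # SciPy's half-space convention is A @ x + b' <= 0, hence the sign flip.
    hs = np.hstack([A, -b[:, None]])
    return HalfspaceIntersection(hs, interior).intersections
```

A 2-dimensional toy stand-in for reflectance space (the unit box cut by a slab 0.8 <= r1 + r2 <= 1.2) illustrates the output: each returned vertex satisfies every inequality, and the vertices reach both faces of the slab.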

[0044] In some examples, the convex hull may be computed to determine a paramer set which comprises reflectances having a certain property. If the certain property can be expressed as a convex function in reflectance space, then convex programming, e.g. linear or quadratic programming, may be employed to compute the convex hull. The paramer set may therefore be computed such that each of the paramers making up the paramer set has the certain property.

[0045] An example of computing paramer sets according to the method shown in FIG. 2 is shown in FIG. 3 and FIG. 4.

[0046] In this example, the first color space is the CIE XYZ color space. The reference conditions are a CIE D50 illuminant and the CIE 1931 standard observer. FIG. 3 shows 24 target color values corresponding to the 24 colorimetries of the MacBeth ColorChecker Chart, which is a standard calibration chart. 24 respective target volumes having the above properties are shown in FIG. 3, respectively surrounding the 24 target colorimetries. Each of the 24 target volumes is a cube in the XYZ color space, wherein each cube has sides of length 1. A first target cube 210 corresponding to one of the 24 target color values is labelled in FIG. 3. Each target cube, including the first target cube 210, may be represented as (X, Y, Z) = (Xtarget +/- 0.5, Ytarget +/- 0.5, Ztarget +/- 0.5).

[0047] FIG. 4 shows an example of a set of reflectances forming a paramer set determined according to a method described herein. The set of reflectances shown in FIG. 4 form a paramer set for the first target cube 210 shown in FIG. 3. The paramer set shown in FIG. 4 comprises 1000 reflectance functions. The reflectance functions in this example are represented in a 16-dimensional reflectance space. Accordingly, 16 values corresponding to the proportion of light reflected at 16 different wavelengths across the wavelength range 400nm to 700nm define each reflectance function. It can be seen from FIG. 4 that the reflectance functions comprise sharp peaks, i.e. for a particular reflectance function the difference between the reflectance value at a given sample wavelength and the reflectance value at a neighboring sample wavelength may be large. This is a consequence of the reflectances forming the paramer sets being determined without constraints being placed on their smoothness.

[0048] While examples above are described in terms of obtaining direct reflectances, in other examples a paramer set may be computed on a linear model basis. For example, the above formulation can be applied to solving for linear model weights instead of direct reflectances. This may be done by multiplying all left-hand-side inequalities defining the color value criterion and the one or more reflectance criteria with a matrix B that represents the linear model basis in N dimensions. For example, the matrix B is a 16xN matrix if the original sampling comprises 16 wavelengths and N < 16.
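The substitution of linear model weights for direct reflectances amounts to a single matrix product on the constraint system: with r = B @ w, the inequalities A @ r <= b become (A @ B) @ w <= b. A minimal sketch, with illustrative names and a random stand-in basis:

```python
import numpy as np

def constraints_in_basis(A, b, B):
    """Re-express reflectance constraints A @ r <= b in terms of linear
    model weights w with r = B @ w. The right-hand side is unchanged;
    only the left-hand side is multiplied by the 16 x N basis matrix B."""
    return A @ B, b

# Illustration: 32 box constraints (0 <= r <= 1) on a 16-sample reflectance,
# re-expressed over a hypothetical 3-dimensional basis.
A = np.vstack([-np.eye(16), np.eye(16)])
b = np.concatenate([np.zeros(16), np.ones(16)])
B = np.random.default_rng(0).normal(size=(16, 3))   # stand-in basis
Aw, bw = constraints_in_basis(A, b, B)
# The same 32 inequalities now constrain only 3 weights.
```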

[0049] Methods described with reference to FIG. 2 to FIG. 4 allow for paramer sets to be determined for arbitrary spectral stimuli, including surfaces which are fluorescent and/or which are non-Lambertian in some other sense. Methods described herein allow for paramer sets to be determined in these and other circumstances without introducing constraints on the reflectances forming the paramer set.

[0050] FIG. 5 shows an example method of analyzing image data 510 to determine a property of the image data 510, according to example methods described herein.

[0051] The image data 510 comprises a plurality of pixels, including a first pixel 512. Each pixel has an RGB color value. The image data 510 comprises R rows and C columns of pixels. FIG. 5 shows a subset of image data having 4 rows and 7 columns of pixels. The RGB color value of each pixel can be denoted as RGBij, where the subscript i denotes the row number and the subscript j denotes the column number. For example, the color value of the first pixel 512 can be denoted RGB23, since it is in the second row and third column of the image data 510.

[0052] In one example, an image processor (not shown in FIG. 5) computes a paramer set in reflectance space for each of the pixels, including the first pixel 512, making up the image data 510. Each paramer set may be determined in a manner as described above with reference to FIG. 2 to FIG. 4 using the color values RGBij of the pixels as respective target color values. An array of paramer sets is thus obtained corresponding to the pixels of the image data 510.

[0053] In some examples, the image data 510 is image data which has been captured by a particular capture device. In some examples, the spectral sensitivity of the sensor of the device which captured the image data 510 may be known. The spectral power distribution of the illuminant which illuminated the surface or surfaces represented by the image data 510 may also be known in some examples. For example, the image data 510 may have been captured by a scanner such that the illuminant is a light of the scanner having a known spectral power distribution. The image data 510 may have been captured by a camera of the scanner, the camera having a known spectral sensitivity. In other examples, one or both of the sensitivity of the capture device or the illuminant which illuminated the surface or surfaces represented by the data may not be known. Certain examples of the method described herein allow a property of the image data 510 to be determined in any of these cases.

[0054] In order to compute the paramer sets for the pixels of the image data 510 an illuminant function is used. In one example, paramer sets are computed for each of the pixels using a first illuminant function. The process of computing paramer sets for the pixels of the image data 510 is then repeated for one or more further illuminants. In this way, a plurality of arrays of paramer sets may be obtained for the image data 510, with each array of paramer sets corresponding with a different illuminant function. For example, a first paramer set may be computed for the first pixel 512 and each of the other pixels in the image data 510 using a first illuminant in the paramer set computation. The first illuminant may, for example, be a standard CIE illuminant representing a particular light spectrum.

[0055] Once paramer sets have been determined for the pixels of the image data 510 using a given illuminant function, volumes of the paramer sets may be computed. It may be found that for some pixels no paramer can be found when using the given illuminant function. The paramer sets of other pixels of the image data 510 may have various different volumes. The volume of a paramer set computed for a given pixel using the given illuminant function may be used to determine an indication of the likelihood that the illuminant function provides an appropriate representation of the light illuminating the object represented by the pixels. For example, if no paramers are found for a given pixel for the given illuminant function, then the illuminant function may be assigned a low likelihood of being an appropriate representation of the illumination at the given pixel. For another pixel a paramer set having a given volume may be obtained by using the given illuminant function. This may indicate that the given illuminant function is a possible candidate for accurately representing the light illuminating the pixel. In this manner, an intensity map for the given illuminant function over the image data 510 may be built up based on the paramer set volumes for the image data 510 computed using the given illuminant function. The intensity map can be used to give a local indication of the likelihood that the given illuminant is an appropriate representation of the light illuminating a given portion of the image data 510. The process of determining an array of paramer sets for the image data 510 and obtaining an intensity map from these paramer sets may be repeated for one or more further illuminant functions. For example, a plurality of arrays of paramer sets and corresponding intensity maps may be obtained for a plurality of different, e.g. standard CIE, illuminants.
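One possible (illustrative) way of turning per-pixel paramer-set volumes into an intensity map for a given illuminant function is a simple normalization, with pixels whose paramer set is empty receiving a zero likelihood:

```python
import numpy as np

def intensity_map(paramer_volumes):
    """Per-pixel intensity map for one illuminant function: normalize the
    paramer-set volumes to [0, 1]. Pixels for which no paramer was found
    (zero volume) are assigned zero likelihood that the illuminant
    appropriately represents their illumination."""
    v = np.asarray(paramer_volumes, float)
    vmax = v.max()
    return v / vmax if vmax > 0 else v
```

The normalization scheme here is an assumption; the method only requires that larger paramer-set volumes map to larger local likelihoods and empty paramer sets to low ones.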

[0056] A composite array of paramer sets may be obtained for the image data 510 by comparing the plurality of arrays of paramer sets determined for the image data 510 using different illuminants. The composite array of paramer sets for the image data 510 includes one paramer set for each of the pixels of the image data 510. The paramer set to be included in the composite array for a given pixel may be selected from the plurality of paramer sets for the given pixel based on probabilities assigned to the illuminant functions at the given pixel. For example, if a paramer set for a given illuminant has a larger volume, then that illuminant may be assigned a larger probability. For example, a comparison of the paramer sets computed for the first pixel 512 may indicate that the paramer set computed using the CIE standard illuminant A has assigned to it the largest probability of being an appropriate representation of the illuminant of the object represented by the first pixel 512. The paramer set computed using the illuminant A may therefore be selected to represent the first pixel 512 in the composite array of paramer sets. In this example, the paramer set selected for a different pixel of the image data 510 may also correspond to the illuminant A or may correspond to a different illuminant, e.g. CIE standard illuminant D50. Thus, the composite array of paramer sets may comprise paramer sets computed using different illuminants for different pixels of the image data.

[0057] The image processor parses the composite array of paramer sets for each of the pixels of the image data 510 in a windowing operation, represented in FIG. 5 by a window 550. The window 550 in this example is 3x3 pixels in size, though any other size of window may be used in other examples. During a particular instance of the windowing operation, two or more of the paramer sets corresponding to the pixels covered by the window 550 are compared to one another. In one example, the intersection of the paramer sets corresponding to all of the pixels covered by the window 550 is determined. The intersection of a given pixel with the other pixels in the window 550 provides information which may be used to determine a property of the image data 510. The windowing operation is repeated as pixels of the image data 510 are parsed with the window 550. Each windowing operation results in an intersection between a set of paramer sets corresponding to a set of neighboring pixels, each set comprising 9 pixels in this example. A plurality of paramer intersections for a plurality of sets of neighboring pixels of the image data 510 is thereby obtained.

[0058] In this example, each of the paramer sets, as described above, is a full N-dimensional convex volume in reflectance space. The intersection of the paramer sets contained in the window is determined by computing the enclosing hyperplanes, i.e. half-planes or inequalities, for each of the 3x3 pixels in the window 550 and then computing the half-plane intersection, i.e. the extreme vertices of the convex hull, for all of the half-planes or inequalities. The result of the computation of the intersection is a convex subspace of reflectance space that is enclosed in all of the paramer sets of the pixels in the window 550. This convex subspace corresponds to the set of reflectances that are candidate reflectances for any one of the 3x3 pixels in the window 550.
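The half-plane intersection over a window may be sketched by stacking each pixel's inequalities into one system: the stacked system describes exactly the convex subspace enclosed in all of the paramer sets. The box-shaped paramer sets below are a simplification for illustration; real paramer sets are general convex polytopes:

```python
import numpy as np

def box_halfspaces(lo, hi):
    """Axis-aligned box [lo, hi] in half-space form A @ r <= b (a toy
    stand-in for a pixel's paramer set)."""
    d = len(lo)
    A = np.vstack([-np.eye(d), np.eye(d)])
    b = np.concatenate([-np.asarray(lo, float), np.asarray(hi, float)])
    return A, b

def intersect_paramer_sets(halfspace_sets):
    """Half-plane intersection of the paramer sets in a window: stacking
    every pixel's inequalities yields one system whose solution set is the
    convex subspace of reflectance space common to all paramer sets."""
    A = np.vstack([A_k for A_k, _ in halfspace_sets])
    b = np.concatenate([b_k for _, b_k in halfspace_sets])
    return A, b

def contains(A, b, r):
    return bool(np.all(A @ np.asarray(r, float) <= b + 1e-12))
```

For example, a point inside both of two overlapping boxes lies in the stacked intersection, while a point inside only one of them does not.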

[0059] In one example, the intersection between the paramer sets of pixels in the window 550 can be used to narrow down the potential reflectance estimates for the individual RGBij color values of the pixels in the window 550. For example, when estimating the actual reflectance which gave rise to a particular RGB value of a pixel in the window 550, such as that of the first pixel 512, a higher probability weighting may be given to candidate reflectances which lie in the computed intersection between the paramer sets of the pixels in the window 550. Thus, contextual information regarding the paramer sets for neighboring pixels may be used to inform an estimate of the actual reflectance corresponding to a given pixel. This can be useful in compensating for noise in an image. For example, the variation in RGB values between neighboring pixels may be due to noise, e.g. in the response of the capture device, rather than to actual differences in reflectance or illumination of the surface corresponding to the pixel. By taking into account the sets of reflectances for more than one pixel, e.g. by windowing over the pixels in the above-described manner, the noise may be compensated for and more accurate and robust reflectance estimates may be obtained.

[0060] In some examples, the volume of the intersection of the paramer sets of two or more pixels, e.g. all of the pixels in the window 550, can be used as an indication of spectral smoothness. That is, a large volume of intersection between the paramer sets of neighboring pixels indicates that the local potential spectral space for the pixels is similar. Conversely, a small intersection between the paramer sets for neighboring pixels may indicate that the local spectral space is less similar.
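Where an exact volume is inconvenient to compute in N dimensions, the intersection volume may be estimated, for illustration, by Monte-Carlo sampling inside a bounding box; the function name and sampling parameters are illustrative:

```python
import numpy as np

def intersection_volume(A, b, lo, hi, n=200_000, seed=0):
    """Monte-Carlo estimate of the volume of {r : A @ r <= b} inside the
    bounding box [lo, hi]^d. A large volume suggests that neighboring
    pixels share a similar candidate-reflectance space (a spectrally
    smooth region); a small volume suggests a less similar local
    spectral environment."""
    rng = np.random.default_rng(seed)
    d = A.shape[1]
    pts = rng.uniform(lo, hi, size=(n, d))
    inside = np.all(pts @ A.T <= b, axis=1)
    return inside.mean() * (hi - lo) ** d
```

As a check, the unit square sampled inside the box [-0.5, 1.5] in each dimension yields a volume close to 1.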

[0061] FIG. 6 shows the image data 510 and illustrates examples of methods of identifying properties of the image data 510 using methods described above. In this example, as described with reference to FIG. 5, a set of paramers is computed for each pixel of the image data 510. A composite array of paramer sets is then determined by computing a plurality of sets of paramers for the image using different illuminants and then selecting an appropriate paramer set corresponding to an appropriate illuminant at each pixel of the image, as has been described above. The composite array of paramer sets is then parsed with the window 550 to compute a plurality of intersections between paramer sets of pixels covered by the window 550 when the window 550 is in a given position. The result of a plurality of such windowing operations performed in a first portion 514 of the image 510 is shown as a set of boxes 650a-i. Each box of the boxes 650a-i represents a single intersection volume of a 3x3 set of pixels covered by the window 550 in a single windowing operation. In FIG. 6, a filled box represents a small intersection volume between the paramer sets of a given set of pixels while an unfilled box indicates a large intersection volume between the paramer sets.

[0062] In the first portion 514 of the image 510, the boxes 650a, 650e, 650i lying along the diagonal of the set of boxes show a small intersection volume. The off-diagonal boxes 650b, 650c, 650d, 650f, 650g, 650h show a large intersection volume. In the example shown, the first portion 514 is a portion of the image in which the paramer sets in the composite array of paramer sets have all been computed using the same illuminant function. That is, the same illuminant was assigned the largest probability for each of the pixels in the first portion 514 of the image. Thus, in selecting paramer sets for inclusion in the composite array of paramer sets for each of the pixels in the first portion 514, the respective paramer sets selected for the pixels correspond to the same illuminant function. Since the same illuminant function has been used to compute the paramers for each pixel in the first portion 514, a large intersection volume between the paramer sets of neighboring pixels indicates that the pixels share a similar set of candidate reflectances which could have produced their respective RGB values under the same illuminant. From this, it may be determined, for example, that each of the upper off-diagonal boxes 650b, 650c, 650f represents a spectrally similar environment, i.e. that there is no illuminant or material boundary represented by any of these pixels. Similarly, it may be determined that the lower off-diagonal boxes 650d, 650g, 650h also each represent a spectrally similar environment. However, the boxes along the diagonal 650a, 650e, 650i each show a small intersection volume between the paramer sets of the respective sets of pixels to which they correspond. For example, box 650a shows a small paramer intersection volume. This indicates that the pixels represented in box 650a lie on an intersection between spectrally different environments.
Since, as described above, the paramer sets of the pixels in the composite array of paramer sets at the first portion 514 of the image data 510 all correspond to the same illuminant, it can be determined that the boundary indicated by the small intersection volumes is not a boundary between different illuminants. Therefore, the small intersection between paramer sets in the first portion 514 of the image 510 can be taken as an indication that the pixels included in the box 650a represent a material boundary. In this example, the material boundary is between pixels representing a mountain and pixels representing sky. The small intersection volume of box 650a, in combination with the observation that the box 650a does not lie on an illuminant boundary, indicates that some of the set of pixels to which the box 650a corresponds have significantly different reflectance functions than other pixels of the set of pixels. Accordingly, this can be used to determine a material boundary in the image data 510. Similarly, it can be deduced that the other boxes 650e, 650i along the diagonal also lie along a material boundary due to their small intersection volumes.

[0063] Example methods of determining intersections between sets of reflectances determined for image data may also be used to determine boundaries between illuminants. To illustrate an example of this, a second portion 516 of the image data 510 contains a boundary between pixels which represent a material illuminated by a first illuminant and the same material illuminated by a second illuminant. In this example, the second portion 516 represents a snow surface and the boundary is a shadow line separating a directly illuminated portion of the snow from a portion of the snow which is in shadow. Computing intersection volumes for this second portion 516 of the image may indicate that the second portion 516 includes spectral environments which are different to one another. For example, computing intersection volumes assuming the same illuminant may indicate that there is some type of boundary roughly between the upper and lower halves of the second portion 516.

[0064] A second set of boxes 660 represents respective intersection volumes between paramer sets of respective sets of 3x3 pixels in the second portion 516 of the image data 510. The intersection volumes represented by the boxes 660, in this example, as with the first portion 514, represent intersection volumes between paramer sets of the composite array of paramer sets for the image data 510. Unlike the first portion 514, the paramer sets representing the pixels of the second portion 516 in the composite array of paramer sets are paramer sets which have been computed using different illuminant functions. In this example, in the composite array of paramer sets, the paramer sets for the pixels along the top row of the boxes 660 are paramer sets which have been computed using a first illuminant function. The paramer sets for the pixels along the bottom row of the boxes 660 are paramer sets which have been computed using a second illuminant function which is different to the first illuminant function. In this example, this is because, out of the illuminants for which paramer sets were computed for the pixels in the top row, the first illuminant function was assigned the largest probability of being an accurate representation of the actual illumination. Thus, the paramer sets corresponding to the first illuminant have been selected for the pixels in the top row for inclusion in the composite array of paramer sets. For the pixels in the bottom row, the second illuminant function has been assigned the largest probability and the paramer sets corresponding to the second illuminant function have been selected to represent the bottom row of pixels in the composite array of paramer sets.

[0065] It can be seen that for the top row of the boxes 660 the intersection volumes between the paramer sets are relatively large. This indicates that the top row of boxes relates to a spectrally similar environment, i.e. there is no material or illuminant boundary represented by these pixels. Similarly, since the second illuminant function has been used to compute the paramer sets for the pixels represented by the bottom row of boxes, the large intersection volumes shown in FIG. 6 indicate that there is no material or illuminant boundary shown in the pixels represented by the bottom row of boxes. Specifically, in this example, where a window contains pixels which all represent the snow surface in shadow, such as for each box in the top row of the boxes 660, a large intersection volume between the paramer sets is obtained. Similarly, where a window contains pixels which all represent the snow surface in the direct light, such as the bottom row of the boxes 660, a large intersection volume is also obtained. However, the boxes in the middle row of the boxes 660 contain pixels which lie at the illuminant boundary. The composite array of paramer sets at this portion of the image therefore contains some paramer sets corresponding to the first illuminant and some paramer sets corresponding to the second illuminant. For the boxes in the middle row of the boxes 660, which comprise pixels on either side of the illuminant boundary, small intersection volumes are obtained. Since the composite array of paramer sets in this area contains paramer sets computed using different illuminant functions, it can be determined that the pixels represent an illuminant boundary.
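
The boundary test described above can be sketched in code. The sketch below is illustrative only: it approximates each paramer set as an axis-aligned box in reflectance space (one interval per sampled wavelength), so that an intersection volume reduces to a product of interval overlaps; the box representation and function names are assumptions for illustration, not part of the examples above.

```python
def box_intersection_volume(box_a, box_b):
    # Each box is a list of (lo, hi) intervals, one interval per
    # sampled wavelength; the volume is the product of the overlaps.
    volume = 1.0
    for (a_lo, a_hi), (b_lo, b_hi) in zip(box_a, box_b):
        overlap = min(a_hi, b_hi) - max(a_lo, b_lo)
        if overlap <= 0.0:
            return 0.0  # disjoint in at least one wavelength
        volume *= overlap
    return volume

def detect_boundary_between_rows(rows_of_boxes, threshold):
    # Compare each row of window boxes with the row below it; a
    # boundary is flagged where every pairwise intersection volume
    # falls below the threshold, as for the middle row of boxes 660.
    flags = []
    for upper, lower in zip(rows_of_boxes, rows_of_boxes[1:]):
        volumes = [box_intersection_volume(a, b)
                   for a, b in zip(upper, lower)]
        flags.append(all(v < threshold for v in volumes))
    return flags
```

Rows of boxes representing the same spectral environment yield large volumes and no flag, while a shadow-to-light transition yields near-zero volumes and a flagged boundary.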

[0066] As described above, in example methods, the paramer sets in the composite array of paramer sets for the image data may be paramer sets computed using different illuminants for different pixels. The different illuminants used may be examples of daylight spectra and man-made light sources. By comparing the intersection volumes obtained for a given window or set of windows when different illuminant functions are used, a local likelihood of that illuminant accurately representing the actual illuminant illuminating the pixels of the given window is provided. For example, the intersection volume for a given set of pixels may be largest when their paramer sets are computed using a first illuminant function. The first illuminant function may accordingly be selected as an estimate of the actual illuminant illuminating the surface represented by those pixels.
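
The per-window illuminant selection described above can be sketched as follows; the function name and the data layout (a mapping from illuminant name to the intersection volumes obtained for a window under that illuminant) are hypothetical:

```python
def estimate_local_illuminant(volumes_by_illuminant):
    # volumes_by_illuminant maps an illuminant name to the intersection
    # volumes obtained for a window when that illuminant is assumed.
    # The illuminant producing the largest total volume is taken as the
    # local estimate of the actual illumination.
    return max(volumes_by_illuminant,
               key=lambda name: sum(volumes_by_illuminant[name]))
```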

[0067] As also mentioned above, by comparing paramer sets computed for pixels of the image, other properties of the image data may be determined. For example, for a real-world image such as the example shown in FIG. 6, the distribution of paramer volumes for a given illuminant over the image data 510 can be used to indicate the local likelihood of the given illuminant being an accurate representation of the actual lighting at a given portion of the image. Multiple illuminant functions representative of the illumination in a given portion of the image may be obtained in this way, or a single illuminant representative of the illumination of the scene as a whole may be obtained. If it is desired to determine a single representative illuminant function over the whole image, then the intersection volume distribution over the whole image may be compared for different illuminants and an appropriate illuminant selected based on a comparison of the intersection volume distributions.
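
Selecting a single representative illuminant for the whole image, as described above, amounts to comparing intersection-volume distributions across illuminants. A minimal sketch, assuming the per-window volumes have already been computed for each candidate illuminant (the names and data layout are hypothetical), compares the distributions by their medians:

```python
import statistics

def select_global_illuminant(per_window_volumes):
    # per_window_volumes: one dict per window, mapping an illuminant
    # name to the intersection volume obtained for that window.
    # The illuminant with the largest median volume over the whole
    # image is selected as the single representative illuminant.
    names = per_window_volumes[0].keys()
    return max(names, key=lambda n: statistics.median(
        w[n] for w in per_window_volumes))
```

Other summary statistics of the distribution (e.g. mean or a percentile) could equally be compared; the median is only one plausible choice.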

[0068] Certain examples described above involve the computation of intersection volumes between paramer sets in order to determine information about the image data. However, other examples may compute and compare metamer sets, rather than paramer sets. Where metamer sets are used, the metamer set for a given pixel, i.e. a given input RGBij, may be determined based on a given sensor spectral sensitivity, a given illuminant spectral power distribution, as well as a sample set of representative reflectances. In an example, a linear model basis of the representative set of reflectances is computed in order to determine the metamer sets in an O-dimensional subspace of the N-dimensional reflectance space, where O ≪ N. For example, N may be 31, while O may be, for example, 6 to 12, such that the linear model basis is computed in an O-dimensional subspace which is a 6- to 12-dimensional subspace of the 31-dimensional reflectance space. Computing the metamer set comprises computing the convex hull of the metameric blacks, added to a particular result that resides in an M-dimensional subspace of the N-dimensional space, where M is determined by the number of sensors. The process of computing metamers may be performed as described in Graham D. Finlayson and Peter Morovic, "Metamer sets," J. Opt. Soc. Am. A 22, 810-819 (2005), the entirety of which is incorporated herein by reference.
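
Under the linear-model formulation above, a metamer set can be described by one particular solution plus the span of the metameric blacks, i.e. the null space of the sensor-times-illuminant-times-basis matrix. The sketch below illustrates only that decomposition; it omits the convex-hull and physical-realizability steps of the cited method, and the matrix shapes and function name are assumptions:

```python
import numpy as np

def metamer_set_basis(sensors, illuminant, reflectance_basis, response):
    # sensors: M x N spectral sensitivities; illuminant: length-N
    # spectral power distribution; reflectance_basis: N x O linear
    # model basis; response: length-M camera response (e.g. RGB).
    # Responses produced by each basis vector under the illuminant.
    A = sensors @ np.diag(illuminant) @ reflectance_basis  # M x O
    # One particular weight vector reproducing the response.
    w0 = np.linalg.pinv(A) @ response
    # Metameric blacks: basis-weight directions that leave the camera
    # response unchanged, i.e. the null space of A.
    _, singular_values, vt = np.linalg.svd(A)
    rank = int(np.sum(singular_values > 1e-10))
    metameric_blacks = vt[rank:].T  # O x (O - rank)
    return w0, metameric_blacks
```

Any weight vector of the form w0 plus a combination of the metameric blacks maps to the same camera response, which is what makes the members of the set metamers.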

[0069] When metamers rather than paramers are used, the image data may be parsed in a windowing operation similar to that described above for the paramer case. However, since metamer sets reside in a degenerate subspace of the N-dimensional reflectance space, the intersection between metamer sets, where one exists as described above, can be computed by finding a subspace common to all of the metamer sets. For example, a common subspace between the metamer sets may be found by performing principal component analysis (PCA) over the reflectances computed from all of the metamer sets in the window and projecting the result onto a new domain.
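
The PCA step above can be sketched as follows, assuming each metamer set is available as a sampled array of reflectances (the function name and array shapes are assumptions; SVD of the centered data is one standard way to compute PCA):

```python
import numpy as np

def project_to_common_subspace(metamer_reflectances, dims):
    # metamer_reflectances: a list of arrays, each K_i x N, holding the
    # sampled reflectances of one pixel's metamer set in the window.
    stacked = np.vstack(metamer_reflectances)
    mean = stacked.mean(axis=0)
    # Principal directions over all reflectances in the window give a
    # low-dimensional domain shared by every metamer set.
    _, _, vt = np.linalg.svd(stacked - mean, full_matrices=False)
    components = vt[:dims]  # dims x N
    # Project each metamer set onto the shared PCA domain.
    return [(r - mean) @ components.T for r in metamer_reflectances]
```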

[0070] FIG. 7 shows an example imaging system 700 for performing a method according to examples described above. In the example of FIG. 7, image data 710 is passed to an image processor 720. The image processor 720 processes the image data 710. The image data 710 may comprise color data as represented in an input color space, such as pixel representations in an RGB color space or in an XYZ color space. The image processor 720 determines one or more sets of reflectance functions in reflectance space for the color data, e.g. one paramer set or metamer set per pixel or one paramer set per different RGB or XYZ color value, according to methods described above. The image processor 720 proceeds to analyze the image data by comparing two or more of the computed sets of reflectance functions to one another, for example according to any of the examples described above. By this comparison, the image processor 720 can determine a property of the image data 710.
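
One implementation detail implied above, computing one reflectance set per different RGB or XYZ color value rather than per pixel, can be sketched as a simple cache (the function names are hypothetical):

```python
def compute_reflectance_sets(image, reflectance_set_for):
    # image: rows of color values (e.g. RGB tuples). One reflectance
    # set is computed per *distinct* color value and shared by every
    # pixel with that value, rather than recomputed per pixel.
    cache = {}
    result = []
    for row in image:
        out_row = []
        for color in row:
            if color not in cache:
                cache[color] = reflectance_set_for(color)
            out_row.append(cache[color])
        result.append(out_row)
    return result
```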

[0071] In some examples, the image processor 720 may be arranged to compute the paramer sets for the input color values as described in certain examples above. In other examples, the image processor 720 may determine paramers for the input color values from paramer sets which have been pre-computed for given color values under given reference conditions. In such examples, the paramer sets may be pre-computed by the image processor 720 or pre-computed elsewhere and supplied to the image processor 720.

[0072] FIG. 8 shows an example machine 800. Certain methods and systems as described herein may be implemented by a processor 810 that processes computer program code that is retrieved from a non-transitory storage medium 820. In some examples, the processor 810 and computer-readable storage medium 820 are comprised within the machine 800. The machine-readable medium 820 can be any medium that can contain, store, or maintain programs and data for use by or in connection with an instruction execution system. Machine-readable media can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable machine-readable media include, but are not limited to, a hard drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, or a portable disc. In FIG. 8, the machine-readable storage medium comprises instructions 830 which, when executed by at least one processor, may cause the processor to perform the methods described above.

[0073] As mentioned above, although in certain examples the results of the first process and the second process are respective sets of reflectances for the first image data point and the second image data point, in other examples one or both of the first process and the second process may not find any results for the reflectance of a given image data point. For example, it may be found as a result of one or both of the first process and the second process that the color value of one of the first and second image data points has no results in the form of reflectance functions which could have produced that color value, e.g. for a particular assumed illuminant. In this case, the results may indicate that the particular illuminant function is unlikely to be representative of the actual spectral power distribution of the illuminant which was lighting the points in question. In some examples, the process of trying to determine a set of reflectances for a particular image data point or set of image data points may be repeated using different illuminants. If results, i.e. candidate reflectances, are found for one illuminant but not for another, then this may indicate that the properties of the actual illuminant are closer to those of the illuminant function which produced a set of results.
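
The repetition over different illuminants described above can be sketched as filtering candidate illuminants by whether they yield any candidate reflectances at all; the names and the shape of `paramer_set_for` are assumptions:

```python
def plausible_illuminants(color, candidates, paramer_set_for):
    # candidates: mapping from illuminant name to its spectral power
    # distribution. paramer_set_for(color, spd) returns the candidate
    # reflectances for the color under that illuminant, or an empty
    # collection when no reflectance could have produced the color.
    return [name for name, spd in candidates.items()
            if paramer_set_for(color, spd)]
```

An illuminant that survives the filter is closer, in the sense described above, to the actual illumination than one for which no candidate reflectances exist.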

[0074] The preceding description has been presented to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.