


Title:
MULTISPECTRAL DATA ACQUISITION
Document Type and Number:
WIPO Patent Application WO/2013/098708
Kind Code:
A2
Abstract:
The present invention relates to multispectral data acquisition from an object illuminated by a light source including multiple light emitting elements, each having a different spectrum. Lines of an image sensor are exposed progressively, and a multispectral measurement is taken by collecting multiple light signal samples for the same one of the lines of the image sensor under different ones of the spectral lighting conditions.

Inventors:
GRITTI TOMMASO (NL)
DE BRUIJN FREDERIK JAN (NL)
NIJSSEN STEPHANUS JOSEPH JOHANNES (NL)
RAJAGOPALAN RUBEN (NL)
BAGGEN CONSTANT PAUL MARIE JOZEF (NL)
FERI LORENZO (NL)
DAVIES ROBERT JAMES (NL)
Application Number:
PCT/IB2012/057406
Publication Date:
July 04, 2013
Filing Date:
December 18, 2012
Assignee:
KONINKL PHILIPS ELECTRONICS NV (NL)
International Classes:
G01J3/10; G01J3/28; H04N9/04
Domestic Patent References:
WO2012020381A1 (2012-02-16)
Foreign References:
US20100073504A1 (2010-03-25)
US20070035740A1 (2007-02-15)
US20090096895A1 (2009-04-16)
Other References:
PARK, J. ET AL.: "Multispectral Imaging Using Multiplexed Illumination", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2007
DU, H. ET AL.: "A Prism- based System for Multispectral Video Acquisition", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION, 2009
Attorney, Agent or Firm:
VAN EEUWIJK, Alexander, Henricus, Walterus et al. (AE Eindhoven, NL)
Claims:
WHAT IS CLAIMED IS:

1. Apparatus for use in acquiring multispectral data from an object illuminated by a light source including a plurality of different light emitting elements, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying spectral lighting conditions over time; the apparatus comprising:

an image sensor configured such that lines of the image sensor are exposed progressively to acquire light signal samples under the varying lighting conditions for a line of the image sensor; and

a module configured to take a multispectral measurement of said object by collecting multiple light signal samples for a same one of said lines of the image sensor under different ones of said spectral lighting conditions, and storing the samples for the same line in a same data set.

2. The apparatus according to Claim 1, wherein said module comprises a computing device coupled to the image sensor, the computing device being configured to:

derive multispectral data by combining a first light signal sample acquired under a first of said lighting conditions for a line of the image sensor with a second light signal sample acquired under a second of said lighting conditions for the same line of the image sensor.

3. The apparatus according to Claim 2, wherein the computing device is further configured to:

extract information on a characteristic of a captured object corresponding to the line of the image sensor based on the derived multispectral data.

4. The apparatus according to any preceding Claim, wherein said lighting conditions are created by a plurality of individual color channels, and the acquired light signal samples include information establishing under the influence of which of said color channels said line was exposed.

5. The apparatus of any preceding Claim, wherein the apparatus comprises a light sensor arranged to face said light source when the image sensor faces said object; and the multispectral measurement module is configured, in taking said multispectral measurement, to use the light sensor to determine information establishing under the influence of which of said spectral lighting conditions said line was exposed for each of said samples, and to store said information with said samples for processing the multispectral measurement.

6. The apparatus of claim 5, wherein the apparatus comprises a mobile user terminal housing said image sensor and light sensor, the image sensor being mounted on one face of the mobile user terminal and the light sensor being mounted on an opposing face of the mobile user terminal.

7. The apparatus of claim 5 or 6, wherein said image sensor comprises a first image sensor and said light sensor comprises a second image sensor configured such that lines of the second image sensor are also exposed progressively to acquire light signal samples under the varying lighting conditions, the progressive exposure of the lines of the first and second image sensors being synchronized so that a co-exposed line of the second image sensor is used to establish under the influence of which of the spectral lighting conditions said line of the first image sensor was exposed for each of said samples.

8. The apparatus according to any preceding Claim, wherein the lines of the image sensor are exposed sequentially at a sensor line rate, the sensor line rate being greater than twice a light modulation frequency for each of the plurality of light emitting elements.

9. The apparatus according to any preceding Claim, wherein each of the plurality of light emitting elements is configured with a light modulation frequency that is determined at least partially based on exposure time of the image sensor.

10. The apparatus of any of Claims 1-9, wherein the plurality of light emitting elements includes a light emitting diode.

11. A system comprising the apparatus of any preceding Claim and said light source.

12. A method of acquiring multispectral data from an object illuminated by a light source including a plurality of different light emitting elements, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying spectral lighting conditions over time; the method comprising:

progressively exposing lines of an image sensor to acquire light signal samples under the varying lighting conditions for a line of the image sensor; and

collecting multiple light signal samples for a same one of said lines of the image sensor under different ones of said spectral lighting conditions, and storing the samples for the same line as a multispectral measurement of said object in a same data set.

13. The method according to Claim 12, further comprising:

deriving multispectral data by combining a first light signal sample acquired under a first lighting condition for a line of the image sensor with a second light signal sample acquired under a second lighting condition for the same line of the image sensor.

14. The method according to Claim 13, further comprising:

extracting information on a characteristic of an object corresponding to the line of the image sensor based on the derived multispectral data.

15. The method according to Claim 12, 13 or 14, wherein said lighting conditions are created by a plurality of individual color channels, and the acquired light signal samples include information establishing under the influence of which of said color channels said line was exposed.

16. The method of any of Claims 12 to 15, comprising:

directing a light sensor to face said light source when the image sensor faces said object;

as part of said multispectral measurement, using information from the light sensor to establish under the influence of which of said spectral lighting conditions said line was exposed for each of said samples; and

storing said information with said samples for processing the multispectral measurement.

17. The method of claim 16, wherein the image sensor is mounted on one face of a mobile user terminal and the light sensor is mounted on an opposing face of the mobile user terminal, and wherein said directing comprises directing the image sensor towards said object so that the light sensor mounted on said opposing face faces the light source.

18. The method of claim 16 or 17, wherein said image sensor comprises a first image sensor and said light sensor comprises a second image sensor; and the method comprises progressively exposing lines of the second image sensor to acquire light signal samples under the varying lighting conditions, the progressive exposure of the lines of the first and second image sensors being synchronized so that a line of the second image sensor is used to establish under the influence of which of the spectral lighting conditions said line of the first image sensor was exposed for each of said samples.

19. The method according to any of Claims 12 to 18, wherein the lines of the image sensor are exposed sequentially at a sensor line rate, the sensor line rate being greater than twice a light modulation frequency for each of the plurality of light emitting elements.

20. The method according to any of Claims 12 to 19, wherein each of the plurality of light emitting elements is configured with a light modulation frequency that is determined at least partially based on exposure time of the image sensor.

21. The method of any of Claims 12 to 20, wherein the plurality of light emitting elements includes a light emitting diode.

22. The method according to any of Claims 12 to 21, comprising a step of controlling said light source to create the varying spectral lighting conditions.

23. A computer program product comprising a computer program tangibly embodied on a computer-readable medium, the computer program being configured to carry out the method according to any of Claims 12 to 22.

24. An apparatus for use in acquiring multispectral data from an object illuminated by different spectral lighting conditions over time, the apparatus comprising:

an image sensor and a separate light sensor arranged such that when the image sensor faces said object the light sensor faces said light source; and

a multispectral measurement module configured to take a multispectral measurement of said object by capturing multiple light signal samples for a same spatial element of said image sensor under different ones of said spectral lighting conditions, using the light sensor to determine under the influence of which of said lighting conditions the spatial element was exposed for each of said samples.

25. The apparatus of Claim 24, wherein the apparatus comprises a mobile user terminal housing said image sensor and light sensor, the image sensor being mounted on one face of the mobile user terminal and the light sensor being mounted on an opposing face of the mobile user terminal.

26. The apparatus of claim 24 or 25, wherein said image sensor comprises a first image sensor and said light sensor comprises a second image sensor.

27. The apparatus of claim 26, wherein one of the image sensors comprises a front-facing camera of the mobile user terminal and the other of the image sensors comprises a rear-facing camera of the mobile user terminal.

28. The apparatus of claim 26 or 27, wherein the second image sensor is also configured such that lines of the second image sensor are exposed progressively to acquire light signal samples under the varying lighting conditions, the progressive exposure of the lines of the first and second image sensors being synchronized so that a line of the second image sensor is used to establish under the influence of which of the spectral lighting conditions said line of the first image sensor was exposed for each of said samples.

Description:
MULTISPECTRAL DATA ACQUISITION

FIELD OF THE INVENTION

Embodiments of the present invention generally relate to the field of imaging, and more particularly, to multispectral data acquisition.

BACKGROUND OF THE INVENTION

Image sensors are becoming more and more ubiquitous: from high quality professional video cameras, to harsh weather robust surveillance cameras, industrial cameras for industrial vision, cameras for use in computer game interfaces, and, more recently, cameras in smart phones. Hence there is an extremely large set of applications in which cameras are the preferred sensor. Example technologies for use in image sensors include Charge-Coupled Devices (CCD) and Complementary Metal Oxide Semiconductor (CMOS).

The scope for improving image sensing is by no means limited to ever increasing spatial resolution. Another possible extension is to increase the number of wavelengths separately captured by the sensor. One useful property of the typical silicon substrates used in CCD and CMOS cameras is their sensitivity to a relatively large range of electromagnetic radiation around the visible wavelengths: the silicon sensitivity may not be constant, but may for example be sufficient to capture radiation from 360 nm, close to visible blue, to 900 nm, well into the near-infra-red (NIR) domain. This property is typically employed to capture color images. A set of red (R), green (G) and blue (B) filters is formed over groups of neighbouring pixels, so as to form a respective group of three constituent pixels R, G and B for each image point to be captured (the individual constituent RGB pixels in a group are sometimes referred to as sub pixels and the group or image point is sometimes referred to as a color pixel). The loss in spatial resolution, due to the fact that a single constituent pixel cannot capture separately all three colors, is the price to be paid to acquire a trichromatic image (an image based on three separate wavelengths) with a single sensor.

Multispectral (MS) imaging is an extension over trichromatic image sensing which captures more than three wavelengths for a given spatial image point. Its implementation may greatly depend on the particular application to which it is targeted, which determines the number of separate wavelengths to be measured, the accuracy in the separation of each band (i.e. whether the separate sub-bands should completely separate or whether limited overlap is allowed), the range of distances of the objects to be measured, and, naturally, the price of the solution.

With the increasing development of underlying imaging and sensing technology, multispectral imaging has been proposed and applied in several fields. As used herein, the term "multispectral imaging" refers to imaging technology capable of capturing image data at more than three frequencies across the electromagnetic spectrum. Multispectral imaging allows extraction of additional information that the human eye fails to capture with its receptors for red, green and blue (RGB). The multispectral data or information acquired with MS imaging can be used in a variety of fields to identify people or other objects or their constituent material(s) on the basis of their respective absorption or emission spectra, e.g. based on visible and/or infrared imaging. Such fields include, for example, object recognition and classification, industrial vision, food verification, material recognition, people detection and tracking, and satellite remote sensing, which can distinguish the material of objects extremely far from the sensor.

To allow the acquisition of multispectral image data from a scene, it may be desirable to obtain useful multispectral data at pixel level, i.e. for each of a two-dimensional array of image points at a suitably detailed resolution. One way to acquire a multispectral image is to use a special instrument that is intrinsically designed to capture more than three frequencies for each image point, for example comprising separate dedicated sensors and/or dedicated optics for each wavelength. Such solutions work by trading spatial resolution for spectral resolution. For example a spectrometer or prism may be used to split an incoming light beam from each image point into its constituent spectrum, to be detected over a group of pixels each arranged to detect the light from a different part of the spectrum for a given image point. In another example an instrument comprises a camera with a special Bayer filter pattern designed with filters for more frequencies than the conventional red, green and blue.

Such techniques may be used to acquire multispectral data with relatively high accuracy. However, they are generally too expensive. In addition, use and maintenance of these special instruments requires skilled specialists. Other solutions enable the acquisition of multispectral data by means of a more common image sensor, for example a sensor embedded in a standard RGB camera, which does not in itself intrinsically capture light at more than three frequencies at once. Instead the object in question is illuminated using lighting units providing different colors, and different samples are captured for each image point under each of the different illuminations. Such solutions trade temporal resolution for spectral resolution. They require multiple light sources, with different spectral distributions, to sequentially illuminate the scene. For example one such solution is presented in "Multispectral Imaging Using Multiplexed Illumination" (Park, J. et al., 2007, IEEE International Conference on Computer Vision). This exploits a standard RGB camera, and LEDs of different colors. The authors propose multiplexing of LEDs (i.e. illumination of the scene with LEDs of different colors at the same time) to increase acquisition speed. An advantage is the use of a standard camera, and the possibility of adopting standard LEDs, allowing for fast switching and high intensity.

Such solutions conventionally work based on an assumption that the image sensor is exactly synchronized with the multiplexed illumination and capable of acquiring one image for each different illumination. As a result, one of the disadvantages of these cheap solutions for multispectral data acquisition is the requirement of synchronization between the image sensor and the lighting units. They may also require extensive and accurate calibration, ranging from acquiring multiple images of objects with known spectra, to measuring the spectrum of the light sources with a spectrometer, to measuring the spectral transmittance of the color filters used. When the lighting units are updated, for example in terms of light hue, saturation, or brightness, the time-consuming and tedious calibration procedure has to be repeated.

In view of the foregoing, there is a need in the art for a system and method for acquiring multispectral data in a more efficient or effective way.

SUMMARY OF THE INVENTION

In one aspect, embodiments of the present invention provide apparatus for use in acquiring multispectral data from an object illuminated by a light source including a plurality of different light emitting elements, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying spectral lighting conditions over time; the apparatus comprising: an image sensor configured such that lines of the image sensor are exposed progressively to acquire light signal samples under the varying lighting conditions for a line of the image sensor; and a module configured to take a multispectral measurement of said object by collecting multiple light signal samples for a same one of said lines of the image sensor under different ones of said spectral lighting conditions, and storing the samples for the same line in a same data set.

In another aspect, embodiments of the present invention provide a method for acquiring multispectral data from an object illuminated by a light source including a plurality of different light emitting elements, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying spectral lighting conditions over time; the method comprising: progressively exposing lines of an image sensor to acquire light signal samples under the varying lighting conditions for a line of the image sensor; and collecting multiple light signal samples for a same one of said lines of the image sensor under different ones of said spectral lighting conditions, and storing the samples for the same line as a multispectral measurement of said object in a same data set.

In another aspect, embodiments of the present invention provide a computer program product for use in acquiring multispectral data from an object illuminated by a light source including a plurality of different light emitting elements, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying spectral lighting conditions over time. The computer program product comprises code embodied on a computer-readable storage medium and configured so as when executed on a processing apparatus to perform operations of: progressively exposing lines of an image sensor to acquire light signal samples under the varying lighting conditions for a line of the image sensor; and collecting multiple light signal samples for a same one of said lines of the image sensor under different ones of said spectral lighting conditions, and storing the samples for the same line as a multispectral measurement of said object in a same data set.

In another aspect, embodiments of the present invention provide apparatus for use in acquiring multispectral data from an object illuminated by different spectral lighting conditions over time, the apparatus comprising: an image sensor and a separate light sensor arranged such that when the image sensor faces said object the light sensor faces said light source; and a multispectral measurement module configured to take a multispectral measurement of said object by capturing multiple light signal samples for a same spatial element of said image sensor under different ones of said spectral lighting conditions, using the light sensor to determine under the influence of which of said lighting conditions the spatial element was exposed for each of said samples.

In another aspect, embodiments of the present invention provide a system for use in multispectral data acquisition. The system comprises: a light source including a plurality of light emitting elements having different spectra, wherein at least one of the plurality of light emitting elements is activated at a time instant to create varying lighting conditions over time; and an image sensor configured such that lines of the image sensor are exposed progressively to acquire light signal samples under the varying lighting conditions for a line of the image sensor.

In another aspect, embodiments of the present invention provide a method for multispectral data acquisition. The method comprises: creating varying lighting conditions over time by controlling a light source including a plurality of light emitting elements having different spectra, such that at least one of the plurality of light emitting elements is activated at a time instant; and progressively exposing lines of an image sensor to acquire light signal samples under the varying lighting conditions for a line of the image sensor.

In yet another aspect, embodiments of the present invention provide a computer program product comprising a computer program tangibly embodied on a computer-readable medium. The computer program is configured to: create varying lighting conditions over time by controlling a light source including a plurality of light emitting elements having different spectra, such that at least one of the plurality of light emitting elements is activated at a time instant; and progressively expose lines of an image sensor to acquire light signal samples under the varying lighting conditions for a line of the image sensor.

In accordance with embodiments of the present invention, the light source includes multiple light emitting elements each having a different spectrum, and is configured such that each light emitting element is, at least partially, emitting light while some of the other light emitting elements are not. Meanwhile, the image sensor is configured to work with a "rolling shutter effect." That is, exposure of the image sensor is done progressively line-by-line rather than globally. As such, it is possible to collect samples for any line of the image sensor illuminated by each individual light emitting element. In this way, meaningful pixel level multispectral data encoded in the light source may be captured with sufficient accuracy, while removing any requirement of synchronization and/or calibration between the image sensor and the lighting source. Accordingly, the performance and ease of multispectral data acquisition may be significantly improved.

Other features and advantages of embodiments of the present invention will also be understood from the following description of specific exemplary embodiments when read in conjunction with the accompanying drawings, which illustrate, by way of example, the principles of the invention.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will be presented by way of example, and their advantages explained in greater detail below, with reference to the accompanying drawings, wherein:

FIG. 1 is a high-level block diagram illustrating a system for use in multispectral data acquisition in accordance with an exemplary embodiment of the present invention;

FIG. 2 is a schematic diagram illustrating an example of images captured with a standard image sensor under illumination with a light source including two light emitting elements with different spectra in accordance with an exemplary embodiment of the present invention;

FIGS. 3A-3D are schematic diagrams illustrating the low-pass filtering characteristics of the acquisition process of an image sensor in accordance with an exemplary embodiment of the present invention;

FIG. 4 is a schematic diagram illustrating examples of the relationship between the light modulation frequency of the light emitting elements and the sensor line rate of the image sensor in accordance with an exemplary embodiment of the present invention;

FIG. 5 is a flowchart illustrating a method for use in multispectral data acquisition in accordance with an exemplary embodiment of the present invention;

FIG. 6 is a schematic diagram illustrating an example of a mobile user terminal capturing multispectral data from a scene;

FIG. 7 is a schematic diagram illustrating an example of image data captured using front facing and rear facing image sensors;

FIG. 8 is a schematic diagram illustrating example color filter transmission spectra for an RGB camera and example emission spectra of red, green and blue LEDs; and

FIG. 9 is a schematic diagram illustrating examples of individual contributions of red, green and blue LEDs when captured with example red, green and blue color filters of a camera.

Throughout the figures, same or similar reference numbers indicate same or similar elements.

DETAILED DESCRIPTION OF EMBODIMENTS

In general, embodiments of the present invention provide a system, method, and computer program product for use in multispectral data acquisition. As will be apparent from the further discussion below, exposure of the image sensor is performed progressively, while different light emitting elements are activated at different time instants so as to create time-varying lighting conditions. The interplay of such "rolling shutter" exposure and the varying lighting conditions is exploited to enable acquisition of multispectral data at pixel level, without any synchronization or calibration. Some exemplary embodiments of the present invention will now be described with reference to the figures.

Reference is first made to FIG. 1, where a block diagram illustrating high-level architecture of a system 100 for use in multispectral data acquisition in accordance with an exemplary embodiment of the present invention is shown.

As shown, the system 100 in accordance with embodiments of the present invention comprises a light source 102. The light source 102 includes a plurality of light emitting elements 102-1, 102-2...102-N. In some embodiments, light emitting diodes (LEDs) may serve as the light emitting elements in the light source 102. Alternatively or additionally, organic light emitting diodes (OLEDs) or fluorescents may be used in connection with embodiments of the present invention, just to name a few. The light emitted by the light emitting elements 102-1, 102-2...102-N will illuminate the scene, and especially a target object 104. It is noted that though only one target object is shown in FIG. 1, there could be more than one object in the scene.

Specifically, in accordance with embodiments of the present invention, the light source is controlled such that at least one of the plurality of light emitting elements 102-1, 102-2...102-N is activated at each time instant to create lighting conditions that vary over time. For example, the light source may comprise a driver (not shown) responsible for activating different light emitting element(s) at different time instants according to a predefined control strategy.

To this end, in some embodiments, the light source 102 is controlled so that the light emitting elements 102-1, 102-2...102-N are activated alternately or sequentially, for example. Specifically, each of the light emitting elements 102-1, 102-2...102-N may be activated, at least for a short time span, while all the others are not active. Durations of the active times for different light emitting elements may or may not be equal. In this way, in different time intervals, the target object 104 will be illuminated by a different lighting element included in the light source. As a result, lighting conditions varying over time are created in the scene.

Alternatively, combinations in which the illuminations of different light emitting elements 102-1, 102-2...102-N overlap for some time are possible as well. For example, one or more of the light emitting elements 102-1, 102-2...102-N may be activated at a time instant. Then at a subsequent time instant, a different one or more of the light emitting elements 102-1, 102-2...102-N may be activated. In other words, in different time intervals, different combinations or subsets of the multiple light emitting elements are activated, thereby creating varying lighting conditions over time.
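By way of a minimal illustrative sketch, such a sequentially multiplexed driver might be implemented as follows; the set_element_state() callback, the parameter names and the values are hypothetical, not taken from the disclosure:

```python
import time

def run_multiplexed_driver(set_element_state, num_elements=3,
                           modulation_frequency_hz=298.0, cycles=1000):
    """Sequentially activate one light emitting element per modulation
    period, so each element emits while all the others are off."""
    dwell = 1.0 / modulation_frequency_hz  # active time per element
    for cycle in range(cycles):
        active = cycle % num_elements  # next element in the sequence
        for element in range(num_elements):
            set_element_state(element, on=(element == active))
        time.sleep(dwell)
```

Overlapping subsets, as described above, would simply replace the single active index with a set of active indices per interval.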

Still referring to FIG. 1, the system 100 further comprises an image sensor 106 for capturing image data. In accordance with embodiments of the present invention, the image sensor 106 may be configured such that the lines thereof are exposed progressively to acquire light signal samples (i.e., pixel values). That is, the exposure of the image sensor 106 is not based on a global timing. Instead, each line of the image sensor 106 is reset after the previous one, giving rise to an effect known as the "rolling shutter effect." CMOS sensors, such as most image sensors embedded in mobile phones or portable computers, typically work with rolling shutter acquisition. As a result, standard and cheap image sensors that are available on the market may be used as the image sensor 106 in embodiments of the present invention.

As an example, in the embodiments where a CMOS sensor functions as the image sensor 106, lines of the image sensor may be exposed sequentially at a given rate referred to as the sensor line rate. Alternatively or additionally, more than one but less than all lines of the image sensor may be exposed at a time. Other progressive patterns for exposure of the image sensor are also possible, and the scope of the invention is not limited in this regard.

When a line of the image sensor is exposed, light signal samples for each pixel in the line may be captured. The acquired samples may be stored and processed. As further discussed below, by configuring and controlling the light source 102 and the image sensor 106 in the above outlined manner, for any given line of the image sensor 106, it is possible to acquire samples illuminated under different lighting conditions, namely, multispectral data. Now an example mechanism for multispectral data acquisition will be explained in detail with reference to a specific embodiment.

Only for the purpose of illustration, it is assumed that multiple LEDs function as the light emitting elements of the light source, and one light emitting element (LED in this case) is activated at each time instant while the others are not active to create different lighting conditions. It is further assumed that a CMOS sensor functions as the image sensor, and the image sensor is equipped with three color filters for red (R), green (G), and blue (B), respectively. Lines of the image sensor (CMOS sensor in this case) are exposed progressively one by one.

In such embodiments, at a first time instant, a first light emitting element is activated to illuminate the scene, thereby creating a first lighting condition. At the same time, a given line of the image sensor is exposed to capture light signal samples for this given line under the first lighting condition. Then at a subsequent second time instant, it is possible that this given line is exposed again under a different second lighting condition, for example, created by the illumination of a different second light emitting element. The mechanism works based on the fact that even if the frequency of switching among the different light emitting elements is much higher than the line rate of the image sensor, it is still possible to collect light signal samples for a same line under different lighting conditions created by different light emitting elements. That is, even if the light source (and its light emitting elements) and the image sensor are configured and controlled separately, the interaction of the light modulation of the light emitting elements and the "rolling shutter" acquisition of the image sensor is detectable. From the perspective of a series of consecutive image frames, the light modulation will "roll" over these image frames. Otherwise, subsequent image frames captured by the image sensor may not contain samples under different lighting conditions.
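A minimal sketch of this interaction, assuming sequential activation (one element per modulation period), back-to-back frames with no blanking, and illustrative parameter values:

```python
def active_element_for_line(line_index, frame_index, lines_per_frame,
                            sensor_line_rate_hz, modulation_freq_hz,
                            num_elements):
    """Which light emitting element is active when a given line starts
    its exposure (ignores mixing within the exposure time itself)."""
    # Lines are reset one after another at the sensor line rate.
    t = (frame_index * lines_per_frame + line_index) / sensor_line_rate_hz
    # Sequential activation: one element per modulation period.
    return int(t * modulation_freq_hz) % num_elements

# Two LEDs, 298 Hz modulation, a 480-line sensor read at 15 kHz lines/s:
frame0 = [active_element_for_line(k, 0, 480, 15000.0, 298.0, 2)
          for k in range(480)]
frame1 = [active_element_for_line(k, 1, 480, 15000.0, 298.0, 2)
          for k in range(480)]
# frame0 and frame1 show bands of lines per element that shift between
# frames, i.e. the light modulation "rolls" over consecutive frames.
```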

For example, referring to FIG. 2, three consecutive image frames captured by a standard RGB-based image sensor are shown. There are two LEDs serving as the light emitting elements in the light source. As seen from the three consecutive image frames, there is a rolling shutter effect in which a series of strips representing illuminations by different LEDs are observed "rolling" over the target object, which is marked by a rectangle in the top row. That is, the contributions of the different light emitting elements are available, even though in this case the light modulation frequency of the LEDs is set to 298 Hz, which is much higher than the frame rate of the image sensor and is also above the frequencies causing perceivable flicker in the captured images. Additionally, it is also found that even if the switching frequency of the light emitting elements is high, it is unnecessary to set the exposure time to be very short as in prior solutions. As a result, enough light signal samples may be collected in each exposure, for example, for reliable multispectral measurement.

For any line of the image sensor, each light signal sample may be stored according to the lighting condition under which the current sample is acquired. In other words, samples for a line that are acquired under the same lighting condition may be stored in a same data set, and samples under different lighting conditions are stored in different data sets. Upon gathering enough samples for a line, multispectral data for the line may be derived, for example, by combining the light signal samples acquired under different lighting conditions. The calculation of multispectral data may be performed by one or more processors 108 associated with the image sensor. For example, the processor(s) may be co-located with the image sensor in a camera. Alternatively, the processor(s) may be located in a separate device from the image sensor.
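As an illustrative sketch of this per-line, per-condition storage scheme (the class and method names are hypothetical):

```python
from collections import defaultdict

class LineSampleStore:
    """Group samples for the same sensor line by lighting condition, so
    each (line, condition) pair accumulates its own data set."""
    def __init__(self):
        # samples[line_index][condition_id] -> list of pixel-value arrays
        self.samples = defaultdict(lambda: defaultdict(list))

    def add(self, line_index, condition_id, pixel_values):
        self.samples[line_index][condition_id].append(pixel_values)

    def ready(self, line_index, required_conditions):
        """True once the line has been sampled under every condition."""
        return all(c in self.samples[line_index] for c in required_conditions)
```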

In this regard, given a set of samples for a same line that are acquired under different lighting conditions, several methods may be used to derive multispectral data or information for the line, whether currently known or developed in the future. For example, one such method is described in "Multispectral Imaging Using Multiplexed Illumination" (Park, J. et al., 2007, IEEE International Conference on Computer Vision). Another example is described in "A Prism-based System for Multispectral Video Acquisition" (Du, H. et al., 2009, IEEE International Conference on Computer Vision). Any other suitable methods or algorithms for deriving multispectral data are also possible, and the scope of the invention is not limited in this regard. In fact, by using the mechanism proposed by the present invention, samples under different lighting conditions may be collected, which is the key issue in multispectral data acquisition. The specific manner in which the samples are processed or used does not limit the invention.
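For concreteness, one generic way to combine such samples is a least-squares reconstruction in the spirit of the multiplexed-illumination literature cited above. This is a sketch of one possible approach, not the prescribed algorithm, and it assumes the filter and LED spectra are known:

```python
import numpy as np

def reconstruct_reflectance(measurements, filter_spectra, led_spectra):
    """measurements:   (num_filters, num_leds) samples for one image point
    filter_spectra: (num_filters, num_wavelengths) filter transmissions
    led_spectra:    (num_leds, num_wavelengths) LED emission spectra
    Returns a per-wavelength reflectance estimate for that image point."""
    num_f, num_l = measurements.shape
    # Each (filter i, LED j) sample is a weighted sum of the reflectance:
    #   m[i, j] = sum_k filter_spectra[i, k] * led_spectra[j, k] * r[k]
    A = np.array([filter_spectra[i] * led_spectra[j]
                  for i in range(num_f) for j in range(num_l)])
    r, *_ = np.linalg.lstsq(A, measurements.reshape(-1), rcond=None)
    return np.clip(r, 0.0, None)  # reflectance cannot be negative
```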

In some embodiments, upon obtaining the multispectral data for one or more lines of the image sensor, information on characteristics of the target object, or of its portion corresponding to the line(s) of the image sensor, may be obtained. For example, material information of the target object may be extracted from the multispectral data for the corresponding sensor line(s). As will be appreciated by those skilled in the art, a requirement for material information extraction may be that the image sensor and the target object should not move for the number of image frames required to gather enough samples for the corresponding lines under different lighting conditions. Possible extensions would include motion estimation or other tracking algorithms based on computer vision to compensate for camera/object motion.

Another example of the characteristic information of the target object is the cooking time when the target object is a food item. Specifically, it is possible to determine the ingredients of the target object (food in this case) based on the material information extracted from the multispectral data. Then the cooking time of the food may be evaluated automatically. The above examples are merely illustrative. Multispectral measurements of other characteristics of the target object may be implemented as well.

In some embodiments of the present invention, the light signal samples acquired for each line of the image sensor may include information on individual color channels of the current light primary. For example, in exemplary embodiments where the image sensor operates on RGB primaries, the image sensor may be configured to establish under the influence of which light primary the line was exposed. Then the sampling values for each individual color channel, namely R, G and B, may be obtained and stored separately. Those skilled in the art will readily appreciate that measuring contributions of individual light emitting elements with different color channels increases the information captured by the image sensor compared to what would be available under white light. For example, for an image sensor with three color filters for red, green and blue, respectively, and a light source including three light emitting elements, nine kinds of spectral samples may be acquired (three filters times three light emitting elements). Further, it is found in practice that the spectral samples for each color channel are sufficient to increase the performance of multispectral data acquisition and potential subsequent measurements. It is noted that embodiments of the present invention are of course also applicable to white light.

Additionally, in some embodiments, the system 100 allows collaborative setting of the light emitting elements 102-1, 102-2...102-N in the light source 102 and the image sensor 106, in order to further improve the performance of multispectral data acquisition. In particular, light emitting elements usually implement certain forms of time modulation. For example, most LEDs are typically configured with a light modulation frequency conforming to pulse-width-modulation (PWM) or other forms of modulation. On the other hand, the image sensor has its own exposure time and sensor line rate. For example, the image sensor may be configured such that the lines thereof are exposed sequentially at a certain sensor line rate, as described above. In each exposure, the line is exposed for a time interval referred to as the exposure time. These factors collectively affect the detection and acquisition of multispectral samples, as will be further discussed below.

For the purpose of illustration, the exposure time of the image sensor 106 is denoted as C_ET, the sensor line rate of the image sensor 106 is denoted as S_LR, and the light modulation frequency of a light emitting element is denoted as L_MF. As an example, the light from each light emitting element 102-1, 102-2...102-N may be time-modulated according to PWM. FIG. 3A shows a schematic diagram of the light signal sent by a lamp with a light modulation frequency of L_MF. For simplicity only a single primary is illustrated and the resulting signal is a single train of pulses. The resulting light is a series of pulses, as shown in FIG. 3A, where the horizontal axis represents time (T) and the vertical axis represents the signal level, which pulses with frequency (f). In multi-primary lamps (e.g. RGB) the resulting signal may be a linear combination of various trains of pulses (one per primary), all with the same base frequency. FIG. 3B shows the corresponding spectrum.

In embodiments, C_ET and L_MF are set to make sure the code is visible. In short, the system will be arranged for the following condition to be true: L_MF ≠ m · 1/C_ET, where m = 1, 2, 3, 4, ..., i.e. an integer. If L_MF = m · 1/C_ET, then the light modulation frequency of the lamp may fall in a zero of the low-pass-filter characteristic of the shutter, as described below. Furthermore, in embodiments S_LR is set to be larger than two times the highest light modulation frequency that is used in the system. More generally it is desirable to set it as high as possible.

When the light from a light emitting element, in this case the light signal as described in relation to FIGS. 3A and 3B, reaches the image sensor 106, the image sensor 106 detects and acquires the light signal samples. During the acquisition process, the rolling shutter exposes each line to the light for a time C_ET. This acquisition process produces a low-pass filtering effect on the acquired light signal. FIG. 3C shows the low-pass filter characteristic of the acquisition process of a rolling shutter camera with an exposure time C_ET. Referring to FIG. 3C, where the horizontal axis represents frequency (f) and the vertical axis represents signal level, it can be seen that the blind spots 302-1, 302-2...302-M correspond to multiples of 1/C_ET, where the low-pass filter has a value of zero. This effect can be seen by overlapping the spectrum of a signal with light modulation frequency L_MF = 1/C_ET, as in FIG. 3D. If the light modulation frequency L_MF = m · 1/C_ET, then when the light reaches the image sensor 106 the light signal samples cannot be acquired, since the light signal goes undetected. Therefore, in preferred embodiments of the present invention, each of the multiple light emitting elements 102-1, 102-2...102-N is configured with a light modulation frequency (L_MF) that is determined at least partially based on the exposure time (C_ET) of the image sensor 106 to avoid the blind spots in signal detection. More specifically, configuration of the system 100 preferably ensures that L_MF ≠ m · 1/C_ET. Other factors may also be taken into consideration when determining the light modulation frequency of the light emitting elements, for example the status and requirements of the light emitting elements themselves.
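A minimal sketch of these constraints, assuming the exposure acts as a box filter whose frequency response |sinc(f · C_ET)| has zeros at multiples of 1/C_ET; the tolerance and example values are illustrative assumptions:

```python
import numpy as np

def check_modulation_settings(exposure_time_s, line_rate_hz,
                              mod_freqs_hz, tolerance=0.05):
    """Reject settings where a modulation frequency falls near a blind
    spot of the shutter, or where the line rate is too low."""
    for f in mod_freqs_hz:
        # Box-filter attenuation at f; np.sinc(x) = sin(pi x) / (pi x),
        # so this is zero exactly when f = m / C_ET, m = 1, 2, 3, ...
        if abs(np.sinc(f * exposure_time_s)) < tolerance:
            raise ValueError(f"L_MF = {f} Hz falls near a blind spot "
                             f"(a multiple of 1/C_ET)")
    if line_rate_hz <= 2 * max(mod_freqs_hz):
        raise ValueError("S_LR should exceed twice the highest L_MF")

# Example: 298 Hz modulation, 1/120 s exposure, 15 kHz line rate passes.
check_modulation_settings(1 / 120, 15000.0, [298.0])
```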

It shall be noted that the setting of the light modulation frequency based on the sensor line rate does not imply any synchronization or calibration between the light emitting elements and the image sensor as in the prior art; rather, only a much less frequent update is required, for example when the light hue, saturation, or brightness of the light emitting elements is changed. Additionally, the scope of the invention is not limited to the above constraint on the exposure time and light modulation frequency. For example, in embodiments where a light emitting element has an irregular time modulation pattern other than PWM, it is possible to set the light modulation frequency independently of the image sensor. This is because in this event, it is possible that the light signal reaches the image sensor 106 when there is no blind spot.

With regard to the sensor line rate S_LR of the image sensor 106, in accordance with embodiments of the present invention, it is set to be greater than the light modulation frequency L_MF of the light emitting elements 102-1...102-N. Referring to FIG. 4, examples are shown where the light modulation frequency L_MF of the light emitting elements is decreased while keeping the sensor line rate S_LR unchanged (L_MF decreases from the top-left to the bottom-right in FIG. 4). It is seen that the lower L_MF, the larger the number of consecutive lines containing samples under the same lighting condition. The benefit of having larger bands of uniform contributions is that the identification and acquisition of multispectral information is more reliable, since such information may be derived from more samples. Accordingly, in some embodiments, the sensor line rate S_LR of the image sensor is preferably set to be much higher than the light modulation frequency L_MF. As an example, S_LR may be set to be greater than twice L_MF. Any other suitable settings are also possible. In fact, the sensor line rate S_LR may be set as high as possible.

From the above descriptions of the exemplary system 100 in accordance with embodiments of the present invention, those skilled in the art will appreciate that the light source and image sensor may be configured and controlled independently. Even if collaborative settings may be performed between the light source and the image sensor for further improvement of the system performance, such settings are quite simple and infrequent.

Reference is now made to FIG. 5, where a flowchart illustrating a method for use in multispectral data acquisition in accordance with an exemplary embodiment of the present invention is shown.

After the method 500 starts, at step S502, lighting conditions varying over time are created by controlling a light source (e.g., the light source 102 as shown in FIG. 1) including a plurality of light emitting elements (e.g., the light emitting elements 102-1, 102-2...102-N). Each of the light emitting elements in the light source has a spectrum different from the others. For example, LEDs, OLEDs, or fluorescents may be used as the light emitting elements.

At step S502 the light source is controlled such that at least one of the plurality of light emitting elements is activated at a time instant. For example, in some embodiments, the light emitting elements may be configured to sequentially illuminate the scene. That is, each of the light emitting elements may be activated, at least for a short time span, while all the others are not active. Alternatively, combinations in which the illuminations of different light emitting elements overlap for some time may be exploited as well. For example, one or more of the light emitting elements 102-1, 102-2...102-N may be activated at a time instant, and the varying lighting conditions are created by different combinations of the light emitting elements.

Specifically, in embodiments where the light emitting elements are configured with a light modulation frequency (such as PWM light modulation), the light modulation frequency of any light emitting element may be determined at least partially based on the exposure time of the image sensor. In this way, the blind spots where the light signal samples would go undetected may be avoided.

The method 500 then proceeds to step S504, where lines of an image sensor are progressively exposed. In this way, for any line of the image sensor, light signal samples under the varying lighting conditions may be detected and acquired over a time span. In accordance with some embodiments, the acquired light signal samples may include information on individual color channels, e.g., the R, G and B channels. Further, in those embodiments where lines of the image sensor are exposed sequentially at a certain sensor line rate, the sensor line rate may be set to be, for example, greater than twice the light modulation frequency of each of the light emitting elements.

By configuring the light source and the image sensor as described above in steps S502 and S504, respectively, meaningful pixel level samples acquired under different illuminations may be gathered for any line of the image sensor.

Optionally, the method 500 may proceed to step S506 to derive multispectral data for a line of the image sensor by combining samples for the line that are previously acquired under different light conditions. Multispectral data may be derived accordingly by any suitable methods or algorithms, and the scope of the invention is not limited in this regard.

Next, at optional step S508, information on characteristics is determined, for example, material information of the captured target object corresponding to one or more lines of the image sensor.

The above has introduced a methodology capable of operating without any synchronization between image sensor and lighting unit, by exploiting the rolling shutter characteristics available in most image sensors. Such a system allows for further reducing the cost of a multispectral imaging system, thanks to the possibility of adopting low cost image sensors which can be manufactured in large volumes.

To summarize, multispectral illumination may be exploited without any synchronization between image sensor and lighting unit, by adopting an image sensor with rolling shutter readout as follows. In embodiments the system comprises:

• a light source 102 containing more than one light emitting element 102-1...102-N, such that the spectrum of each light emitting element is not identical to any of the others, e.g. each emitting a respective light primary (Lp_j);

• a time modulation scheme adopted by the light source 102, operating so that each light emitting element 102-1 ... 102-N is, at least partially, emitting light while some of the other light emitting elements are not;

• an image sensor 106 with rolling shutter acquisition; and

• a processing unit 108 used to analyze the images captured by the image sensor, applying the following processing for each incoming frame.

In embodiments the processing for each frame comprises:

(i) performing an analysis of each line acquired by the sensor 106;

(ii) establishing under the influence of which light primary (Lp_j) the line was captured (with j = 1, 2, ..., N, where N is the number of light primaries if each of the light emitting elements 102-1...102-N emits a respective light primary);

(iii) storing values for each individual color channel C_i(Lp_j) of the camera for the object currently observed (with i = 1, 2, 3 for a typical RGB camera);

(iv) combining information captured for the same line in a previous time instance if the detected light primary at the current frame is different from what was previously captured; and

(v) processing the combined information (a set of C_i(Lp_j)) to extract material information for each individual line.

In some situations there may be a limitation related to optional step (ii), where each acquired line is analysed to establish under which light primary it had been acquired. That is, without any assumption on the material being observed, it may not be possible to univocally identify which light primary is affecting each line. As an example, a pixel which is captured resulting in a green value could either be the result of a white light primary on a green material, or of a green light primary on a white material.
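Before turning to how that ambiguity can be handled, steps (i)-(v) can be sketched per frame as follows; classify_primary() is a hypothetical placeholder, since the classification and material extraction methods are left open here:

```python
import numpy as np

def process_frame(frame, store, classify_primary, num_primaries):
    """frame: (num_lines, width, 3) RGB array; store: dict keyed by line.
    Yields (line_index, combined) once a line has been seen under every
    light primary, ready for material information extraction."""
    for line_index, line in enumerate(frame):
        # (i)-(ii) analyse the line and estimate which primary Lp_j lit it
        j = classify_primary(line)  # assumed to return 0..num_primaries-1
        # (iii) store the per-channel values C_i(Lp_j), i = R, G, B
        store.setdefault(line_index, {})[j] = line.mean(axis=0)
        # (iv)-(v) combine once all primaries have been observed
        if len(store[line_index]) == num_primaries:
            combined = np.stack([store[line_index][p]
                                 for p in range(num_primaries)])
            yield line_index, combined
```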

In some applications the described technique is adequate, as the ambiguity can be resolved by incorporating some assumptions on the typical range of materials present in the environment, or by exploiting the presence of several materials appearing on the same line. The sensor 106 collects multispectral data for a given line of the image, which increases the amount of information available to the skilled operator, for him or her to use as he or she finds useful. For example the skilled operator may have predetermined knowledge of the spectra emitted by the light source 102, e.g. knowing it to comprise a sequence of certain red, green and blue LEDs. In that case only six different combinations for the three differently illuminated versions of the line being studied are possible: either the first version of the line was illuminated by red, the second by green and the third by blue; or the first version was illuminated by red, the second by blue and the third by green; or the first version was illuminated by blue while the second was illuminated by red and the third by green; etc. Further, the skilled operator may know that the material in question is one of a certain category or subset, e.g. when trying to distinguish between a few different types of plastic or foodstuffs having known properties. Given this context, it may be that only one of the six possible interpretations maps to the absorption or emission spectrum of one of the expected materials.

Alternatively the operator need not necessarily have any predetermined information about the light source. Instead the results from an object or material being studied may be compared with results previously captured from one or more instances of materials or objects with known spectral properties, which were exposed under the same lighting conditions, and an identification may be made in that way without the need for any explicit absolute knowledge of the spectral properties. For example step (v) may be performed as described in WO 2012/020381, which provides a training approach which uses machine learning to mitigate the need for tedious calibration. More generally the skilled user can use the multispectral data collected by the present invention in any way he or she sees fit.

Nonetheless, in some applications it may be desirable to provide further assistance in resolving potential ambiguity in a step of determining under the influence of which of multiple light spectra or elements the different versions of a given line was illuminated.

To address this, in further embodiments of the invention there is provided an apparatus comprising the image sensor and a separate light sensor, in which the light sensor is arranged to face the light source when the image sensor faces the object in question. For example the apparatus may comprise a mobile user terminal housing the image sensor and light sensor, the image sensor being mounted on one face of the mobile user terminal and the light sensor being mounted on an opposing face of the mobile user terminal. In embodiments the light sensor may comprise a second image sensor configured such that lines of the second image sensor are also exposed progressively to acquire light signal samples under the varying lighting conditions. For example, the two sensors may be comprised by the front and back cameras of a mobile user terminal, e.g. a "smart phone" or other "smart device".

FIG. 6 gives an example usage scenario, illustrating the positions of a second sensor 604, which may be referred to as the direct-line sensor (e.g. front facing camera), and a first sensor 106, which may be referred to as the reflected-light sensor (e.g. back facing camera), mounted on opposing faces of a smart device 602. The object 104 being studied, e.g. a fresh produce display, is illuminated by multiplexed illumination 606 from the light source 102, and this light 606 is reflected back from the object 104 into the reflected-light sensor 106 (e.g. rear facing camera of the device 602). For example the light source 102 may be mounted on a ceiling 608 above the object 104. The second, direct-line sensor 604, being mounted on the opposing face of the device 602, points in the direction of the multiplexed illumination 606 when the other sensor 106 is pointed in the direction of the object 104 from which that same light 606 is reflected.

In embodiments each of the two image sensors 106, 604 is a separate "rolling shutter" type image sensor in which lines of the sensor are progressively exposed (rather than all lines being exposed globally at once). In this case the progressive exposure of the lines of the two image sensors 106, 604 may be synchronized so that a co-exposed line of the second image sensor 604 can be used to establish under the influence of which of the spectral lighting conditions a given line of the first sensor 106 was exposed for each of said samples.
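A minimal sketch of how such synchronized, co-exposed lines might be paired, reusing the hypothetical classify_primary() helper from the earlier sketch:

```python
def label_reflected_lines(reflected_frame, direct_frame, classify_primary,
                          line_offset=0):
    """Pair each line of the reflected-light sensor with the light primary
    seen by the co-exposed line of the direct-line sensor (line_offset
    accommodates any fixed timing offset between the two sensors)."""
    labelled = []
    for k, line in enumerate(reflected_frame):
        co_line = direct_frame[(k + line_offset) % len(direct_frame)]
        labelled.append((classify_primary(co_line), line))
    return labelled
```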

The processing module 108 is configured, in taking the multispectral measurement, to use the second, direct-light image sensor 604 (or other separate light sensor) to determine information establishing under the influence of which of the spectral lighting conditions a given line of the first, reflected-light sensor 106 was exposed for each of the samples. The processing module 108 stores this information with the samples for processing the multispectral measurement. Thus in exemplary embodiments, the accuracy of the measurement is further enhanced by exploiting the possibility to simultaneously acquire, by means of the front- and back-facing cameras of a smart device, data from the camera directly facing the light source, and data from the camera facing the object under analysis, which contains the very same light modulation after reflection by the object. In this way, the data acquired from the camera directly facing the light source allows the light primary affecting every line to be established, while the other camera is exploited in the same fashion as in the methodology described above, with the difference of not having to solve for the unknown in step (ii). As discussed, in embodiments the invention exploits the characteristics of the time modulation of LED based light sources. Most LED drivers implement some form of pulse-width modulation (PWM) in order to achieve different light intensity settings. For RGB and, more generally, for multi-primary lights, each primary LED is modulated in order to obtain the desired color temperature.

Further, most image sensors embedded in smartphones, tablets and compact cameras are CMOS sensors for which exposure is not based on a global timing. On the contrary, each line of the sensor is reset after the previous one, giving rise to an effect known as the "rolling shutter effect". This effect is particularly noticeable when recording videos of objects moving horizontally across the camera's field of view.
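The interplay between PWM and the rolling shutter can be sketched as follows in Python; all timing values (line offset, PWM period, schedule) are hypothetical, chosen only to show how a temporal modulation maps onto the lines of a frame.

    # Illustrative sketch only: how a rolling shutter turns a temporal PWM
    # pattern into a per-line spatial pattern. All timing values are hypothetical.

    line_time_us = 250.0         # assumed time offset between consecutive lines
    pwm_period_us = 3000.0       # assumed PWM period of the luminaire
    # Assumed schedule within one period: red, then green, then blue, each 1/3.
    schedule = [("R", 0.0), ("G", 1000.0), ("B", 2000.0)]

    def primary_at(t_us):
        """Return the primary active at time t_us within the PWM period."""
        phase = t_us % pwm_period_us
        active = schedule[0][0]
        for name, start in schedule:
            if phase >= start:
                active = name
        return active

    # Which primary illuminates each line of a hypothetical 16-line frame:
    print([primary_at(line * line_time_us) for line in range(16)])
    # -> R,R,R,R, G,G,G,G, B,B,B,B, R,R,R,R (bands across the frame)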

In embodiments making use of the two opposing sensors 106, 604, the system comprises the following:

• a light source 102 containing more than one light emitting element 102-1...102-N, such that the spectrum of each light emitting element is not identical to any of the others, e.g. each emitting a respective light primary (Lp_j);

• a time modulation scheme adopted by the light source 102 operating so that each light emitting element 102-1...102-N is, at least partially, emitting light while some of the other light emitting elements are not;

• a device 602 embedding two image sensors 106, 604, both with rolling shutter acquisition, both with the same sensor characteristics, facing in two opposite directions, and for which acquisition is synchronized (i.e. the time at which a line is exposed on one sensor can be related to the time at which another line on the other sensor is exposed); and

• a processing unit 108 used to analyze the images captured by both image sensors 106, 604, applying the following processing for each incoming frame.

In embodiments the processing for each frame comprises the following (a code sketch of this loop is given after the list):

• identifying which of the two image sensors 106, 604 is facing the light source directly, i.e. which is the direct-light sensor, and as a consequence which is capturing reflected light, i.e. which is the reflected-light sensor;

• analyzing each line acquired by the direct-light sensor 604 by

(i) establishing under the influence of which light primary (Lp_j) the line was captured (with j = 1, 2, ..., Num light primaries);

• analyzing each line acquired by the reflected-light sensor 106 by

(ii) fetching, for each line of the sensor, the light primary (Lp_j) under which the line was captured (with j = 1, 2, ..., Num light primaries), from the analysis of the content of the direct-light sensor 604;

(iii) storing values for each individual color channel of the camera for the object currently observed, C_i(Lp_j) (with i = 1, 2, 3 for a typical RGB camera);

(iv) combining information captured for the same line at a previous time instant if the light primary detected at the current frame is different from that previously captured; and

(v) processing the combined information (a set of C_i(Lp_j)) to extract material information for each individual line (e.g. as described in WO 2012/020381).
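A compact Python sketch of this per-frame loop is given below; the classify_primary and extract_material callbacks are hypothetical placeholders for steps (i) and (v), and the code illustrates the data flow only, not the claimed implementation.

    # Illustrative sketch only of the per-frame loop above; every function and
    # data structure is a hypothetical placeholder.
    from collections import defaultdict

    # per_line_store[line][primary] -> color-channel values C_i(Lp_j) for that line
    per_line_store = defaultdict(dict)

    def process_frame(direct_lines, reflected_lines, classify_primary, extract_material):
        results = {}
        for line_idx, (direct, reflected) in enumerate(zip(direct_lines, reflected_lines)):
            primary = classify_primary(direct)             # steps (i) and (ii)
            per_line_store[line_idx][primary] = reflected  # step (iii)
            # Step (iv): once samples exist under all three primaries, combine them.
            if len(per_line_store[line_idx]) == 3:
                results[line_idx] = extract_material(per_line_store[line_idx])  # step (v)
        return results

    # Example usage with trivial stand-in callbacks:
    out = process_frame(
        [(0.9, 0.1, 0.1)], [(0.4, 0.1, 0.1)],
        classify_primary=lambda d: "R",
        extract_material=lambda samples: "unknown",
    )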

The left-hand side of FIG. 7 illustrates an image of the ceiling 608 directly above the device 602, captured by the smart device's front-facing camera 604, i.e. the direct-light sensor. The horizontal bars represent the light modulation effect captured by the rolling shutter of the camera 604, in which one or more light primaries (Lp_j) are visible in each captured line. The right-hand side of FIG. 7 illustrates an image of the produce 104 (or other object or objects) directly below the device 602, captured by the device's back-facing camera 106, i.e. the reflected-light sensor. As the same kind of image sensor is used in both the front- and back-facing cameras, and both sensors are synchronized, the information related to the light primaries (Lp_j) extracted from the direct-light sensor 604 is applicable to the reflected-light sensor 106. This can advantageously be used in overcoming the ambiguities related to the derivation of the light primary (Lp_j).

FIG. 8 and FIG. 9 illustrate a reason why measuring several LED contributions with different color channels increases the information captured by the camera, compared to what would be available under white light. FIG. 8 illustrates typical color-filter transmission spectra for RGB cameras, and the emission spectra of the most common red (R), green (G) and blue (B) LEDs. FIG. 9 illustrates the individual contributions of the R, G and B LEDs when captured with the blue color filter of a camera having the color filters shown in FIG. 8, corresponding to C_3(Lp_1), C_3(Lp_2) and C_3(Lp_3). The values corresponding to the red and green color filters of the camera can be computed in a similar way. For the case shown in FIG. 8, having a camera with three color filters and three LEDs allows a total of nine spectral samples to be gathered. Even if, as seen from FIG. 9, the spectral samples are not very narrowly spaced, they are often sufficient to greatly increase the performance of a material recognition system. For example, this could be used in conjunction with a training technique such as described in WO 2012/020381. As previously described, a requirement on the luminaire driver may be that each individual LED is activated, at least for a short time, while the other LEDs are not active. Combinations in which different LEDs overlap for some time can also be exploited to gather further samples. Once deployed, such a system should be able to perform the multispectral measurement as independently as possible of the particular hue, saturation and brightness at which the luminaire is set. While there are three cases for which it may not be possible to gather more than one LED sample (i.e. fully saturated R, G or B), any other light color would require the use of more than one LED. Furthermore, any number of LED primaries could be used, with luminaires embedding a larger number of LED primaries providing an even more accurate multispectral measurement.
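The origin of the nine spectral samples can be illustrated with the following Python sketch, which models each sample C_i(Lp_j) as a discrete inner product over wavelength of filter transmission, LED emission and object reflectance; all spectra below are coarse hypothetical curves, not the measured data of FIG. 8.

    # Illustrative sketch only: three color filters x three LED primaries
    # yields nine spectral samples. Spectra are hypothetical, sampled at
    # 100 nm steps (400, 500, 600, 700 nm).

    wavelengths = [400, 500, 600, 700]

    # Camera color-filter transmissions (i = 1: red, 2: green, 3: blue).
    filters = {
        1: [0.0, 0.1, 0.8, 0.6],
        2: [0.1, 0.8, 0.2, 0.0],
        3: [0.8, 0.3, 0.05, 0.0],
    }

    # LED emission spectra (j = 1: red, 2: green, 3: blue primaries).
    primaries = {
        1: [0.0, 0.05, 0.9, 0.3],
        2: [0.05, 0.9, 0.1, 0.0],
        3: [0.9, 0.1, 0.0, 0.0],
    }

    reflectance = [0.5, 0.5, 0.5, 0.5]  # hypothetical flat grey object

    def sample(i, j):
        """Compute C_i(Lp_j) as a discrete inner product over wavelength."""
        return sum(f * p * r for f, p, r in zip(filters[i], primaries[j], reflectance))

    C = {(i, j): sample(i, j) for i in filters for j in primaries}
    for (i, j), value in sorted(C.items()):
        print(f"C_{i}(Lp_{j}) = {value:.3f}")   # nine spectral samples in total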

It will be appreciated that the above embodiments have been described only by way of example.

While conventionally applied in industrial vision, multispectral imaging is well suited to all kinds of material recognition and can strongly increase the accuracy of object classification in the wider field of computer vision, which typically captures images with a standard trichromatic sensor.

One example application is food recognition for optimal cleaning, e.g. applying the proper amount of cleaning depending on the recognized fruit and/or vegetables. Overly aggressive cleaning of a vegetable in which pesticides typically do not penetrate deeply may also remove vitamins; on the other hand, overly gentle cleaning of a vegetable which has absorbed too much pesticide would be insufficient. Recognizing fruits and/or vegetables solely from shape, texture or color can be extremely challenging, given the variation in appearance. Classifying the types using embodiments of the present invention would allow for an easier and at the same time more robust approach.

Another example application is food recognition for optimal cooking time, e.g. automatic cooking-time detection becomes possible with a simple camera. Recognition of ingredients based on embodiments of the present invention would greatly improve the robustness of such a system.

Another example application is material recognition for floor care. Different floors require different treatment, and recognizing them on the basis of color/texture alone can be challenging (especially given that plastic-based floors are often printed to mimic different materials). Embodiments of the present invention would allow a much more robust detection, with the addition of only commoditized hardware.

Yet another example application is material recognition for automatic lighting calibration. Light sensors are commonly used to sense the amount of light present in the environment. A typical installation procedure requires a series of steps aimed at measuring the reflectivity coefficient of the objects at which the light sensor is pointing. With embodiments of the present invention the different types of materials may be recognized, thus allowing for an automatic setup.

The set of applications which can benefit from this invention is very diverse. The above applications are just examples, and the list may be expanded.

Further, as discussed, some embodiments of the present invention make use of an image sensor and a separate light sensor (e.g. another image sensor such as an opposing camera); but in variants of such embodiments, the idea may be expanded to determine the lighting conditions under which any spatial element of an image sensor is exposed, from a global exposure even down to individual pixels. Hence in embodiments there may be provided an apparatus comprising an image sensor and a separate light sensor arranged such that when the image sensor faces the object in question, the light sensor faces the light source illuminating that object. The apparatus also comprises a multispectral measurement module configured to take a multispectral measurement of said object by capturing multiple light signal samples for a same spatial element of said image sensor under different ones of said spectral lighting conditions, using the light sensor to determine under the influence of which of said lighting conditions the spatial element was exposed for each of said samples. Again in embodiments the apparatus may comprise a mobile user terminal housing said image sensor and light sensor, the image sensor being mounted on one face of the mobile user terminal and the light sensor being mounted on an opposing face of the mobile user terminal; and again the light sensor may comprise a second image sensor, e.g. one of the image sensors comprising a front-facing camera of the mobile user terminal and the other comprising a rear-facing camera of that same terminal.

Further, the invention need not be dealt with as an entire system comprising both the light source and the image sensor; rather, as the light source and sensor parts of the system are capable of being operated separately, they may be provided separately and/or implemented by different parties. For example, in embodiments the invention is embodied in a separate sensing device (e.g. smart device 602) which can operate in the presence of whatever lighting happens to already be present in the environment, as long as that lighting creates different lighting conditions, e.g. using the inherent LED modulation in the existing, everyday lighting of a room. Alternatively, the option of dedicated lighting being provided for the multispectral imaging is not excluded either.

For the purpose of illustrating the spirit and principle of the present invention, some specific embodiments thereof have been described above. However, it is noted that the described embodiments in no sense limit the scope of the present invention. For example, though some exemplary embodiments are described above in connection with RGB-based LEDs, any other suitable light emitting elements, even white light emitting elements, can be used to put the invention into practice. Further, any suitable image sensor, such as CMOS-based or Charge-Coupled Device (CCD) based ones, may be used in embodiments of the invention, whether currently known or developed in the future. Moreover, the image sensor may work under any color model other than the RGB model. Through the above descriptions, those skilled in the art will readily appreciate that embodiments of the present invention provide a more efficient and/or effective mechanism for use in multispectral data acquisition or multispectral measurements.
Multispectral data may be easily captured by configuring a system (for example, the system 100 as shown in FIG. 1) comprising a light source and an image sensor. The light source includes multiple light emitting elements, each having a different spectrum. Each light emitting element is controlled to be activated while at least some of the other light emitting elements are not. The image sensor is configured to work with the "rolling shutter effect", namely performing exposure progressively, line by line. By use of this simple configuration, it is possible to collect samples for every line of the image sensor illuminated by each individual light emitting element. In this way, meaningful pixel-level multispectral information encoded in the light source may be captured with sufficient accuracy, while removing any requirement of synchronization or calibration between the image sensor and the lighting source. Accordingly, the performance of multispectral data acquisition may be significantly improved.

In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. For example, some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device, although the invention is not limited thereto. While various aspects of the exemplary embodiments of this invention may be illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.

Specifically, various blocks shown in FIG. 5 may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s). At least some aspects of the exemplary embodiments of the inventions may be practiced in various components such as integrated circuit chips and modules, and the exemplary embodiments of this invention may be realized in an apparatus that is embodied as an integrated circuit, FPGA or ASIC that is configurable to operate in accordance with the exemplary embodiments of the present invention.

While several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a sub-combination or variation of a sub-combination. Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.

Various modifications and adaptations to the foregoing exemplary embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. Any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this invention. Furthermore, other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these embodiments pertain, having the benefit of the teachings presented in the foregoing descriptions and the associated drawings.

Therefore, it is to be understood that the embodiments of the invention are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are used herein, they are used in a generic and descriptive sense only and not for purposes of limitation.