Title:
APPARATUS FOR ENABLING A PHOTOGRAPHIC DIGITAL CAMERA TO BE USED FOR MULTI- AND/OR HYPERSPECTRAL IMAGING
Document Type and Number:
WIPO Patent Application WO/2020/115359
Kind Code:
A1
Abstract:
Disclosed is the use of a photographic digital camera (110) together with a diffraction grating (130) for dispersing incident light towards an objective (112) of the digital camera (110) to provide a diffraction image for a diffraction photograph (300) for use in multi-/hyperspectral imaging.

Inventors:
TOIVONEN MIKKO (FI)
RAJANI CHANG (FI)
KLAMI ARTO (FI)
Application Number:
PCT/FI2019/050859
Publication Date:
June 11, 2020
Filing Date:
December 02, 2019
Assignee:
HELSINGIN YLIOPISTO (FI)
International Classes:
G01J3/02; G01J3/18; G01J3/28
Foreign References:
US20140193839A1, 2014-07-10
US20150338307A1, 2015-11-26
DE202015006402U1, 2015-10-05
CN106840398A, 2017-06-13
Other References:
RALF HABEL ET AL.: "Practical Spectral Photography", COMPUTER GRAPHICS FORUM, vol. 31, no. 2pt2, 2 May 2012 (2012-05-02), GB, pages 449-458, XP055651217, ISSN: 0167-7055, DOI: 10.1111/j.1467-8659.2012.03024.x
LEDIG, CHRISTIAN ET AL.: "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network", CVPR, vol. 2, no. 3, 2017
DEBORAH, HILDA; NOEL RICHARD; JON YNGVE HARDEBERG: "A comprehensive evaluation of spectral distance functions and metrics for hyperspectral image processing", IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, vol. 8.6, 2015, pages 3224-3234, XP011665034, DOI: 10.1109/JSTARS.2015.2403257
WANG, ZHOU ET AL.: "Image quality assessment: from error visibility to structural similarity", IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 13.4, 2004, pages 600-612, XP011110418, DOI: 10.1109/TIP.2003.819861
OPPENHEIM, ALAN V.; RONALD W. SCHAFER: "From frequency to quefrency: A history of the cepstrum", IEEE SIGNAL PROCESSING MAGAZINE, vol. 21.5, 2004, pages 95-106, XP011118156, DOI: 10.1109/MSP.2004.1328092
OKAMOTO, TAKAYUKI; ICHIROU YAMAGUCHI: "Simultaneous acquisition of spectral image information", OPTICS LETTERS, vol. 16.16, 1991, pages 1277-1279, XP000220056
DESCOUR, MICHAEL; EUSTACE DERENIAK: "Computed-tomography imaging spectrometer: experimental calibration and reconstruction results", APPLIED OPTICS, vol. 34.22, 1995, pages 4817-4826
FAIRCHILD, MARK D.: "Color appearance models", 2013, JOHN WILEY & SONS
Attorney, Agent or Firm:
PAPULA OY (FI)
Claims:
CLAIMS

1. An apparatus (100) for enabling a photographic digital camera (110) to be used for multi- and/or hyperspectral imaging, the apparatus (100) comprising:

a frame (120) configured for coupling with the digital camera (110); and

a diffraction grating (130) coupled to the frame (120) and configured for dispersing incident light, when the frame (120) is coupled to the digital camera (110), towards an objective (112) of the digital camera (110) for providing a diffraction image for a diffraction photograph (300) for use in multi- and/or hyperspectral imaging.

2. The apparatus (100) according to claim 1, wherein the diffraction grating (130) has a grating constant of 1000-10000 nanometers or 1600-5000 nanometers.

3. The apparatus (100) according to any preceding claim, wherein the diffraction grating (130) is adapted to produce a diffraction pattern, where a diffraction angle between first order diffraction maxima, for at least part of a wavelength range 400-1000 nm corresponding to a wavelength of the incident light, is equal or smaller in comparison to a field-of-view angle of the digital camera (110).

4. The apparatus (100) according to any preceding claim, wherein the diffraction grating (130) is configured to disperse incident light in two dimensions.

5. The apparatus (100) according to any preceding claim, wherein the frame (120) is configured for the diffraction grating (130) to be positioned within 1 centimeter of the surface (114) of the objective (112) of the digital camera (110), when the apparatus (100) is in use.

6. The apparatus (100) according to any preceding claim, comprising a border (140) configured to limit the amount of incident light on the diffraction grating (130) to reduce overlap in the diffraction image.

7. The apparatus (100) according to claim 6, wherein the border defines a hole (142) smaller than the diffraction grating (130).

8. The apparatus (100) according to claim 7, wherein the border (140) is arranged to spatially separate diffracted incident light from non-diffracted incident light for the diffraction image.

9. The apparatus (100) according to claim 7 or 8, wherein the border (140) is arranged to spatially separate the first order diffraction pattern from the second order diffraction pattern for the diffraction image.

10. The apparatus (100) according to any preceding claim, wherein the frame (120) is made of cardboard.

11. The apparatus (100) according to any preceding claim, comprising a frequency filter positioned to filter the incident light before it arrives at the objective (112) of the digital camera (110).

12. Using a photographic digital camera (110) together with a diffraction grating (130) for dispersing incident light towards an objective (112) of the digital camera (110) to provide a diffraction image for a diffraction photograph (300) for use in multi- and/or hyperspectral imaging.

13. Using a photographic digital camera (110) together with a diffraction grating (130), according to claim 12, wherein a border (140) defining a hole (142) smaller than the diffraction grating (130) is configured to limit the amount of incident light on the diffraction grating (130) to reduce overlap in the diffraction image.

14. A method comprising:

receiving (230) image information corresponding to at least a part of a diffraction photograph (300) obtained using a photographic digital camera (110) with means (100) for dispersing incident light for providing a diffraction image for the diffraction photograph (300); and

determining (240) at least one multi- and/or hyperspectral characteristic (310, 320) from the image information using a computational algorithm.

15. The method according to claim 14, wherein the means comprise the apparatus according to any of claims 1-11.

16. The method according to claim 14 or 15, wherein the means (100) comprise a diffraction grating (130) for dispersing incident light towards an objective (112) of the digital camera (110) to provide the diffraction image and a border (140), defining a hole (142) smaller than the diffraction grating (130), configured to limit the amount of incident light on the diffraction grating (130) to reduce overlap in the diffraction image.

17. The method according to any of claims 14-16, wherein the computational algorithm comprises a machine learning algorithm.

18. The method according to any of claims 14-17, wherein the computational algorithm comprises a convolutional neural network.

19. The method according to any of claims 14-18, wherein the computational algorithm is adapted to utilize one or more dilations for determining multi- and/or hyperspectral characteristics (310, 320) from the image information.

20. The method according to claim 19, wherein the computational algorithm comprises a machine learning algorithm associated with a first resolution used for training the machine learning algorithm, and the computational algorithm further comprises:

determining a resolution corresponding to the image information; and

scaling the one or more dilations if the resolution corresponding to the image information is different from the first resolution.

21. A method according to any of claims 14-20, the method comprising using (250) the at least one multi- and/or hyperspectral characteristic (310, 320) for any combination of the following: to produce a multi- and/or hyperspectral image from the image information, to segment one or more regions from the image information or to characterize one or more material properties from the image information.

22. A method (400) for generating a computational algorithm for multi- and/or hyperspectral imaging using a photographic digital camera (110), the method comprising:

using a photographic digital camera (110) coupled to an apparatus (100) according to any of claims 1-11 to obtain a diffraction photograph corresponding to a scene (410) from a first location;

using a multi- and/or hyperspectral camera (420) to obtain a multi- and/or hyperspectral photograph corresponding to the scene (410) from a second location, wherein the second location is substantially the same as the first location;

using a computational algorithm to produce a multi- and/or hyperspectral image by inverting diffraction from the diffraction photograph;

determining a difference value for a measure of difference between the multi- and/or hyperspectral image and the multi- and/or hyperspectral photograph; and

modifying one or more parameters of the computational algorithm to reduce the difference value.

23. A computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any of claims 14-21.

Description:
APPARATUS FOR ENABLING A PHOTOGRAPHIC DIGITAL CAMERA TO BE USED FOR MULTI- AND/OR HYPERSPECTRAL IMAGING

FIELD

The present disclosure relates to providing image information that can be used for multi- and/or hyperspectral imaging.

BACKGROUND

Multispectral, as well as hyperspectral, imaging allows capturing the spectrum of electromagnetic radiation at a large number of consecutive wavelength bands for each pixel in an image, resulting in a 3D tensor which contains narrow-band slices of the spectrum. This allows capturing image information corresponding to multiple wavelength slices.

In traditional approaches, multi-/hyperspectral imaging has been performed using special devices called multi- and hyperspectral cameras. These devices generally operate by scanning the scene either spectrally (spectral scanning) or spatially (spatial scanning), where the image is scanned pixel by pixel or line by line. A significant drawback of these cameras is that capturing a single image in good lighting conditions might take tens of seconds or even minutes using a scanning method, since the camera needs to capture each spectral or spatial dimension separately. Further, the spatial resolution at which these cameras operate is typically very low, ranging from roughly 0.25 megapixels in portable models to typically 1-2 megapixels in more refined stationary models. Multi- and hyperspectral cameras are also highly expensive. While non-scanning multi-/hyperspectral cameras do exist, such as an implementation where the optical sensing area of the multi-/hyperspectral camera is divided into multiple frequency-sensitive regions, these alternatives remain costly. In addition, they typically have low resolution and are designed for a single specific application.

OBJECTIVE

An objective is to eliminate or alleviate at least some of the disadvantages mentioned above.

In particular, it is an objective to provide a low-cost alternative for multi- and/or hyperspectral imaging, which can be used to capture multi- and/or hyperspectral images substantially in real time.

Moreover, it is an additional objective to provide a versatile alternative for multi- and/or hyperspectral imaging, which can be used for a plurality of applications, in contrast to the previously used application-specific multi- and hyperspectral cameras.

Finally, it is an objective to provide an alternative that can be used not only for general multispectral imaging but for hyperspectral imaging in particular.

SUMMARY

In accordance with the present disclosure, it has been discovered that an ordinary digital camera (also "digital camera" or "photographic digital camera") can be used for multispectral or even for hyperspectral imaging with the method and apparatus as disclosed herein. As the solutions disclosed herein may be used both for general multispectral imaging and for more demanding hyperspectral imaging, the solutions below are disclosed in terms of multi-/hyperspectral imaging, corresponding to both multispectral and/or hyperspectral imaging.

The ordinary digital camera is a photographic digital camera, which may be used to capture still images as individual photographs and/or video by taking a rapid sequence of photographs. This allows capturing image information for multi-/hyperspectral imaging simultaneously in two spatial dimensions, i.e. producing a snapshot, which may be taken substantially in real time. Moreover, this allows capturing image information for multi-/hyperspectral imaging, where the image information can be video information. The image information may thereby have a frame rate corresponding to the frame rate of the photographic digital camera.

The ordinary digital camera may be, for example, a compact camera or a system camera such as a digital single-lens reflex camera (DSLR) or digital multi-lens reflex camera. The ordinary camera may also be an industrial camera or a surveillance camera such as a closed-circuit television camera. The ordinary camera may be an integrated camera. It may be a camera phone or a camera integrated into a computer or a tablet. It may also be a separate web camera. What is important is that the camera is not a multispectral or hyperspectral camera as such, so that it is not intrinsically capable of producing multispectral or hyperspectral images.

Typically, an ordinary digital camera is adapted to produce RGB images but it may, additionally or alternatively, be adapted to produce CMY and/or monochrome images. Correspondingly, the ordinary digital camera is adapted to use the RGB, CMY and/or monochromatic color model. The ordinary digital camera is adapted to capture image information simultaneously in two spatial dimensions. The ordinary digital camera has a sensor for capturing light for photographing, which sensor may be a CCD (charge coupled device) sensor or a CMOS (complementary metal oxide semiconductor) image sensor. The sensor has a sensor area which is adapted for capturing an image for a photograph.

In the following, apparatuses and methods are further disclosed allowing an ordinary digital camera to be used for multispectral or even hyperspectral imaging. The common concept allowing a photographic digital camera to be used for multi-/hyperspectral imaging is the production of a diffraction photograph by dispersing incident light with a diffraction grating before it arrives at the photographic digital camera, or the objective of the photographic digital camera in particular. A diffraction photograph is therefore a photograph taken by the photographic digital camera, where incident light from a scene is dispersed before it arrives at the photographic digital camera for multi-/hyperspectral imaging so that a plurality of different frequencies of the incident light corresponding to a single spatial point of the incident light are dispersed to different spatial points on the objective of the photographic digital camera and, correspondingly, to different pixels of the photographic digital camera. This effectively corresponds to mapping the spectral information of the incident light into spatial dislocations. The plurality of different frequencies may comprise more than a hundred or even more than several hundred contiguous spectral bands, each dispersed to a different spatial point or pixel in the diffraction photograph. The incident light may comprise visible light but it may also, additionally or alternatively, include parts outside the visible range. For many types of digital cameras, the inventive concept of the present disclosure may be utilized for incident light having wavelengths of, for example, 400-700 nm or 400-1000 nm. Consequently, the apparatus as disclosed herein may be adapted for multi-/hyperspectral imaging in these wavelength regimes.

In a first aspect, an apparatus for enabling a photographic digital camera to be used for multi-/hyperspectral imaging is disclosed. The apparatus may be configured as an accessory such as an attachment to the photographic digital camera. The apparatus comprises a frame configured for coupling with the photographic digital camera. The apparatus further comprises a diffraction grating coupled to the frame and configured for dispersing incident light, when the frame is coupled to the photographic digital camera, towards an objective of the photographic digital camera for providing a diffraction image for a diffraction photograph for use in multi-/hyperspectral imaging, e.g. by a computational algorithm, examples of which are described below. A key concept is the creation of a diffraction photograph, which allows spreading frequency information across multiple spatial points in one and/or two dimensions. Due to the diffraction grating, the apparatus is adapted to spatially spread the spectrum of the incident light in the diffraction photograph. This, in turn, allows the photographic digital camera to capture a plurality of different frequencies of the incident light corresponding to a single spatial point of the incident light at different spatial points on the objective of the photographic digital camera and, correspondingly, at different pixels of the photographic digital camera. The diffraction grating is adapted to be positioned in front of the objective of the photographic digital camera and is therefore separate from the optical system of the camera. This allows the apparatus to be removably attached to the camera. For example, it can be adapted to be retrofitted to the camera to enable the photographic digital camera to be used to capture a diffraction image for multi-/hyperspectral imaging. It is noted that the apparatus can be used in passive operation for multi-/hyperspectral imaging. Consequently, the apparatus can be a passive apparatus or it may have both passive and active operating modes. Moreover, the apparatus can be used for snapshot multi-/hyperspectral imaging and/or multi-/hyperspectral video imaging, since it can be adapted to provide the diffraction image for a diffraction photograph to the photographic digital camera essentially instantaneously, so that the intrinsic capability of the digital camera for snapshot imaging and/or video imaging is maintained also when the digital camera is used together with the apparatus to produce one or more diffraction photographs for multi-/hyperspectral imaging.

In general, it is noted that the specific way a diffraction pattern is formed by a diffraction grating depends on various factors such as the features of the diffraction grating, the optical geometry of the apparatus and the features of the objective of the digital camera. However, specific optical designs can be obtained following general optical design principles known to a person skilled in the art. In accordance with the present disclosure, a number of specific implementations have been identified which may markedly improve the applicability of the apparatus for enabling a photographic digital camera to be used for multi-/hyperspectral imaging.

In an embodiment, the diffraction grating has a grating constant of 1000-10000 nanometers or 1600-5000 nanometers. As an example, the grating constant may be smaller than 2000 nanometers, corresponding to a grating having more than 500 lines per millimeter. While the optimal grating constant may depend on the photographic digital camera and its objective, it has been found that having a grating constant smaller than 2000 nanometers, e.g. 1000-1500 nanometers, may in several embodiments provide a significantly improved balance so that the incident light is dispersed enough but not too much. A good balance also allows the sensor area of the digital camera to be optimally used. This has been found to be achieved to a particularly good degree for multi-/hyperspectral imaging for a grating constant of 1000-10000 nanometers, or 1600-5000 nanometers in particular.

In an embodiment, the diffraction grating is adapted to produce a diffraction pattern, where a diffraction angle between first order diffraction maxima (hereafter also "the first-order angle"), for at least part of a wavelength range 400-1000 nm corresponding to a wavelength of the incident light, is equal or smaller in comparison to a field-of-view angle of the digital camera. As the field-of-view angle of the digital camera corresponds to the field of view of the objective, or the lens, of the digital camera in the dimension of the diffraction pattern, and as the diffraction pattern has two spatially opposite first order maxima in the dimension of the diffraction pattern, this allows both maxima for the part of the wavelength range to be fully captured in the diffraction photograph. When the diffraction grating is two-dimensional, the diffraction grating may be adapted to produce a diffraction pattern, where a diffraction angle between first order diffraction maxima in both dimensions, for at least part of a wavelength range 400-1000 nm corresponding to a wavelength of the incident light, is equal or smaller in comparison to the corresponding field-of-view angles of the digital camera. The first-order angle may also be substantially the same as the field-of-view angle, in which case the first order maxima extend substantially to the opposite edges of the diffraction photograph so that the sensor area of the digital camera may be fully covered in the dimension or dimensions of the diffraction pattern. In some embodiments, the first-order angle for a wavelength of the incident light of substantially 1000 nanometers is equal or smaller in comparison to a field-of-view angle of the digital camera. In other embodiments, the first-order angle for a wavelength of the incident light of substantially 700 nanometers is equal or smaller in comparison to a field-of-view angle of the digital camera, for example when the digital camera, such as a camera phone, is adapted to filter out wavelengths above 700 nanometers. In yet other embodiments, the wavelength may be even smaller, for example when the apparatus is adapted for a specific application.
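
As an illustrative aside, and not as part of the claimed subject-matter, the first-order angle follows from the standard grating equation sin(theta) = lambda/d, where d is the grating constant. The short Python sketch below compares the angle between the two opposite first-order maxima with a field-of-view angle; the 1600 nm grating constant and the 65-degree field of view are hypothetical example values, not values prescribed by the disclosure.

import math

def first_order_angle_deg(wavelength_nm: float, grating_constant_nm: float) -> float:
    """Angle between the two first-order maxima (2*theta), from sin(theta) = lambda/d."""
    theta = math.asin(wavelength_nm / grating_constant_nm)
    return math.degrees(2 * theta)

# Illustrative values: a 1600 nm grating constant and a 65-degree field of view.
fov_deg = 65.0
for wavelength in (400, 700, 1000):
    angle = first_order_angle_deg(wavelength, 1600)
    print(f"{wavelength} nm: first-order angle {angle:.1f} deg, fits in FOV: {angle <= fov_deg}")

With these assumed values the maxima for 400 nm and 700 nm fall within the field of view while those for 1000 nm do not, which illustrates why the grating constant and the field-of-view angle have to be balanced against each other.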

In an embodiment, the apparatus comprises an optical element, such as a lens system comprising one or more lenses, adapted to scale the diffraction image. The optical element may be adapted to be positioned between the diffraction grating and the objective of the digital camera. The optical element may be adapted to scale the diffraction image up or down. This allows the size of the diffraction image to be adjusted so that the use of the sensor area of the digital camera may be improved. For example, the optical element may be adapted to scale the diffraction image so that the first order maxima of the diffraction pattern extend substantially to the opposite edges of the diffraction photograph in one or more dimensions.

In an embodiment, the diffraction grating is configured to disperse incident light in two dimensions. This allows the frequency information of the incident light to be spread spatially even further than with a one-dimensional diffraction grating.

In an embodiment, the frame is configured for the diffraction grating to be positioned within 1 centimeter of the surface of the objective of the digital camera, when the apparatus is in use. In further embodiments, the frame may even be configured for the diffraction grating to be positioned within 3-5 millimeters of the surface of the objective.

In an embodiment, the apparatus comprises a border configured to limit the amount of incident light on the diffraction grating to reduce overlap in the diffraction image. This allows separating a part of the diffraction pattern from the non-diffracted incident light in the diffraction image so that the diffraction image has one or more spatial regions comprising only the part of the diffraction pattern and not any non-diffracted incident light. This may greatly simplify the task of determining multi-/hyperspectral characteristics from the diffraction image. In some embodiments, the border may define a hole, which can be smaller than the diffraction grating. The border may be part of the frame, for example a hole in the frame. The border may define a hole smaller than the diffraction grating for limiting the amount of incident light on the diffraction grating to reduce overlap in the diffraction image.

In an embodiment, the border is arranged to spatially separate diffracted incident light from non-diffracted incident light for the diffraction image. This corresponds to separating the zeroth order diffraction pattern from the diffraction patterns of order one or higher for multi-/hyperspectral imaging. The diffraction image may thus have one or more spatial regions comprising only diffracted incident light and not any non-diffracted incident light. On the other hand, the diffraction image may have a spatial region comprising only the non-diffracted incident light, i.e. at least substantially without any diffracted incident light, for example corresponding to a regular photographic image such as an RGB photographic image. This not only makes the diffraction image more effective for multi-/hyperspectral imaging but actually allows an unsupervised machine learning algorithm to be used for determining one or more multi-/hyperspectral characteristics from the diffraction image. This is because the unsupervised machine learning algorithm may utilize the spatial region comprising only the non-diffracted incident light as a reference.

In an embodiment, the border is arranged to spatially separate the first order diffraction pattern from the second order diffraction pattern for the diffraction image. This markedly improves the diffraction image for multi-/hyperspectral imaging since any multi-/hyperspectral characteristics can more easily be determined from the diffraction image by a computational algorithm.

In an embodiment, the frame is made of cardboard. This allows a simple implementation, which may even be formed as a flat pack, for example for transportation, and assembled for use when necessary.

In an embodiment, the apparatus comprises a frequency filter positioned to filter the incident light before it arrives at the objective of the digital camera. One or more frequency filters can be positioned before and/or after the diffraction grating. The filter allows optimizing the apparatus for a specific application. The filter may be interchangeable and it may be removably attached to the apparatus. The filter may be, for example, a band-pass filter or a band-stop filter.

In a second aspect, a photographic digital camera is used together with a diffraction grating for dispersing incident light towards an objective of the digital camera to provide a diffraction image for a diffraction photograph for use in multi-/hyperspectral imaging. This may be performed using an apparatus in accordance with the first aspect or any combination of its embodiments. A photograph may be taken with the digital camera to produce a diffraction photograph. Similarly, a sequence of photographs, such as a video, may be taken to produce a sequence of diffraction photographs, such as a diffraction video. From the diffraction photograph, one or more multi-/hyperspectral characteristics may be determined.

In an embodiment, a border defining a hole smaller than the diffraction grating is configured to limit the amount of incident light on the diffraction grating to reduce overlap in the diffraction image. Features of the border are described also in conjunction with the apparatus.

In a third aspect, a method comprises receiving image information corresponding to at least a part of a diffraction photograph obtained using a photographic digital camera with means for dispersing incident light for providing a diffraction image for the diffraction photograph. The means may be, for example, a diffraction grating or an apparatus in accordance with the first aspect or any combination of its embodiments. The method further comprises determining at least one multi-/hyperspectral characteristic from the image information using a computational algorithm. In certain embodiments, this can be done by calculation using the laws of physics, and optics in particular, as the diffraction grating has spatially spread different wavelength components in a deterministic manner. The image information can be the diffraction photograph as such, but it can also be a modified or reduced part of the diffraction photograph, for example.
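
To illustrate the physics-based calculation mentioned above, the following Python sketch reads a point-spectrum directly from a diffraction photograph, assuming that the per-wavelength pixel offsets have already been calibrated, for example via the grating equation. The image, the zeroth-order location and the offset table are hypothetical stand-ins introduced for illustration only.

import numpy as np

def point_spectrum(diffraction_image, zeroth_px, offsets):
    """Naive readout (a sketch): for a point-like source, each wavelength's first-order
    maximum sits at a known pixel offset from the zeroth-order point along the dispersion
    axis, so the spectrum can be sampled directly at those offsets."""
    row, col = zeroth_px
    return {wl: float(diffraction_image[row, col + off]) for wl, off in offsets.items()}

# Hypothetical 8-bit grayscale diffraction photograph and calibrated offsets (pixels).
img = np.random.randint(0, 256, size=(1080, 1920)).astype(np.float64)
spectrum = point_spectrum(img, zeroth_px=(540, 500), offsets={400: 70, 700: 122, 1000: 180})
print(spectrum)  # wavelength (nm) -> sampled intensity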

In an embodiment, the means comprise a diffraction grating for dispersing incident light towards an objective of the digital camera to provide the diffraction image and a border, defining a hole smaller than the diffraction grating, configured to limit the amount of incident light on the diffraction grating to reduce overlap in the diffraction image. Features of the diffraction grating and the border are described also in conjunction with the apparatus.

In an embodiment, the computational algorithm comprises a machine learning algorithm. In accordance with the disclosure, it has been discovered and verified that machine learning can be adapted to determine multi-/hyperspectral characteristics from the image information, which can notably improve the speed and capabilities of the computational algorithm. The machine learning algorithm can be adapted to use dispersion of incident light in the diffraction photograph to create a multi-/hyperspectral image from the image information. The machine learning algorithm may be a deep neural network. It may comprise an input layer and an output layer, but also multiple layers in between. The machine learning algorithm may be trained on image pairs where each pair comprises a first image corresponding to a multi-/hyperspectral photograph captured by a multi-/hyperspectral camera and a second image corresponding to a diffraction photograph captured by a photographic digital camera. The photographs are captured at substantially the same location so that the first image can be used as ground truth.
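
Purely as a sketch of what such a deep neural network could look like, the following PyTorch model maps an RGB diffraction photograph to a spectral cube with one output channel per band. The layer sizes, channel counts and the choice of PyTorch are assumptions made for illustration; the disclosure does not prescribe a specific architecture.

import torch
import torch.nn as nn

class DiffractionToSpectralNet(nn.Module):
    """Minimal sketch: maps an RGB diffraction photograph (B, 3, H, W) to a
    multi-/hyperspectral cube (B, bands, H, W). All sizes are illustrative."""
    def __init__(self, in_channels: int = 3, spectral_bands: int = 100):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(64, spectral_bands, kernel_size=1),  # one output channel per band
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = DiffractionToSpectralNet()
cube = model(torch.randn(1, 3, 128, 128))
print(cube.shape)  # torch.Size([1, 100, 128, 128])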

In a further embodiment, the computational algorithm comprises a convolutional neural network (CNN). This has been found to provide an efficient algorithm for various different situations.

In an embodiment, the computational algorithm is adapted to utilize one or more dilations for determining multi-/hyperspectral characteristics from the image information. Dilation models, as such, are known to a person skilled in the art. In accordance with the present disclosure, it has been found, however, that a dilation model may be used in conjunction with the diffraction photograph to determine multi-/hyperspectral characteristics from the diffraction photograph, where frequencies are spatially spread in a constant manner. For example, the one or more dilations may correspond to spatial displacements for selecting pixels from the image information. The spatial displacements may correspond to pixel differences in a diffraction pattern in the image information. The spatial displacements may be determined with one or more limiting points corresponding to a maximum of the diffraction pattern. For example, a first dilation may correspond to a pixel difference between a zeroth order point of the diffraction pattern in the image information and a first diffraction component, corresponding to a first-order maximum of the diffraction pattern for a first frequency. A second dilation may then correspond to a pixel difference between a zeroth order point of the diffraction pattern in the image information and a second diffraction component, corresponding to a first-order maximum of the diffraction pattern for a second frequency, wherein the first frequency is larger than the second frequency. The largest frequency may correspond to the smallest dilation and/or the smallest frequency may correspond to the largest dilation. This allows a dilation model to be used for mapping the diffraction pattern to the image information for extracting one or more multi-/hyperspectral characteristics from the diffraction image. The one or more dilations may be used to define indices for pixels in the image information. When the computational algorithm comprises a CNN, the one or more dilations may be used for dilated convolutions in the CNN. The dilated convolutions may be one or more dimensional, for example two-dimensional.
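
A minimal sketch of this mapping, under the assumption that the first-order maxima have been located beforehand: each dilation is taken as the pixel offset between the zeroth-order point and the first-order maximum of a given wavelength, so that the shortest wavelength yields the smallest dilation. The pixel positions below are hypothetical calibration values.

def dilations_from_maxima(zeroth_order_px, first_order_px):
    """Map each wavelength (nm) to a dilation equal to the pixel offset between the
    zeroth-order point and that wavelength's first-order maximum (a sketch)."""
    return {wl: abs(px - zeroth_order_px) for wl, px in first_order_px.items()}

# Hypothetical calibration: first-order maxima measured along one image axis.
maxima = {400: 570, 700: 622, 1000: 680}  # pixel columns (assumed values)
print(dilations_from_maxima(500, maxima))
# {400: 70, 700: 122, 1000: 180} -- larger wavelength, larger dilation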

In a further embodiment, the computational algorithm comprises a machine learning algorithm associated with a first resolution used for training the machine learning algorithm. The computational algorithm further comprises determining a resolution corresponding to the image information and scaling the one or more dilations if the resolution corresponding to the image information is different from the first resolution used for training the machine learning algorithm. In this way, dilations can be used to adapt the computational algorithm to process also image information having a different resolution than that corresponding to the underlying machine learning algorithm. This allows the computational algorithm to be used flexibly with different types of digital cameras. For example, if the first resolution is smaller than the resolution corresponding to the image information, e.g. the resolution of the diffraction photograph, the one or more dilations can be scaled up. The one or more dilations may, for example, be scaled by a constant factor corresponding to the resolution corresponding to the image information divided by the first resolution.
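
A sketch of the scaling step, assuming the resolution ratio can be summarized by the image width; the widths and dilation values are illustrative only.

def scale_dilations(dilations, image_width, training_width):
    """Scale dilations by the ratio of the input resolution to the training resolution,
    so a network trained at one resolution can process images at another (a sketch)."""
    factor = image_width / training_width
    return [max(1, round(d * factor)) for d in dilations]

# Trained at 1024 px width, applied to a 2048 px wide diffraction photograph:
print(scale_dilations([70, 122, 180], image_width=2048, training_width=1024))
# [140, 244, 360]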

In an embodiment, the method comprises using the at least one multi-/hyperspectral characteristic for any combination of the following: to produce a multi-/hyperspectral image from the image information, to segment one or more regions from the image information or to characterize one or more material properties from the image information.

In a fourth aspect, a method for generating a computational algorithm for multi-/hyperspectral imaging using a photographic digital camera is disclosed. The method comprises using a photographic digital camera coupled to an apparatus to obtain a diffraction photograph corresponding to a scene from a first location. The apparatus may be an apparatus in accordance with the first aspect or any combination of its embodiments. The method also comprises using a multi-/hyperspectral camera to obtain a multi-/hyperspectral photograph corresponding to the scene from a second location, wherein the second location is substantially the same as the first location. The diffraction photograph and the multi-/hyperspectral photograph can be obtained in any order, and photographs can be obtained from a single scene or from different scenes, optionally with varying conditions such as lighting conditions. The method further comprises using a computational algorithm to produce a multi-/hyperspectral image by inverting diffraction from the diffraction photograph, and determining a difference value for a measure of difference between the multi-/hyperspectral image, as obtained using the photographic digital camera, and the multi-/hyperspectral photograph. Finally, the method comprises modifying one or more parameters of the computational algorithm to reduce the difference value. The computational algorithm may comprise any features described in connection with the third aspect or any of its embodiments.
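
The following PyTorch sketch shows one plausible realization of a single iteration of this method, using mean squared error as an assumed measure of difference and gradient descent as an assumed way of modifying the parameters; the disclosure itself fixes neither choice, and the model can be any differentiable inversion algorithm, such as the CNN sketched earlier.

import torch
import torch.nn as nn

def training_step(model, optimizer, diffraction_photo, hyperspectral_photo):
    """One iteration (a sketch): invert diffraction, measure the difference to the
    reference multi-/hyperspectral photograph, and update the parameters to reduce it.
    diffraction_photo: (B, 3, H, W); hyperspectral_photo: (B, bands, H, W)."""
    optimizer.zero_grad()
    predicted_cube = model(diffraction_photo)              # invert diffraction
    difference = nn.functional.mse_loss(predicted_cube, hyperspectral_photo)
    difference.backward()                                  # gradients of the difference value
    optimizer.step()                                       # modify parameters to reduce it
    return difference.item()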

In a fifth aspect, a computer program product comprises instructions which, when the program is executed by a computer, cause the computer to carry out the method of the fourth aspect and/or any combination of its embodiments.

As disclosed in accordance with any of the aspects or embodiments above, the multi-/hyperspectral information may, at least, be provided at any wavelength range within 400-1000 nanometers. The upper limit may be reduced, for example, to 700 nanometers when the photographic digital camera comprises a frequency filter blocking wavelengths above 700 nanometers, but even in these cases the operation range may be extended if the filter is removed from the camera.

The diffraction photograph may be used to produce a multi-/hyperspectral image by inverting diffraction from the diffraction photograph. However, multi-/hyperspectral information contained in the diffraction photograph may also be used without first producing the complete multi-/hyperspectral image. One or more multi-/hyperspectral characteristics may be determined directly from the diffraction photograph and then used, for example, to segment one or more spatial regions in the scene captured in the diffraction photograph. Alternatively or additionally, they may be used in material characterization, i.e. to characterize one or more material properties from the scene captured in the diffraction photograph. The diffraction photograph may thus be used for long-range imaging, even satellite surveying, and/or quality control. It may also be used for medical imaging to diagnose biological subjects and/or for recognizing counterfeit material. Other possible uses include food quality assurance, gas and/or oil exploration and applications in agriculture. It is noted that as the provision of the diffraction photograph allows multi-/hyperspectral imaging without forming the full multi-/hyperspectral image, the apparatus and methods as described herein may allow marked improvement in speed and efficiency for many specific applications, such as those involving segmentation and/or characterization.

The machine learning algorithm can be a supervised machine learning algorithm or an unsupervised machine learning algorithm. For a supervised machine learning algorithm, the algorithm can be trained using one or more pairs of photographs, wherein each pair comprises a diffraction photograph and a multi- and/or hyperspectral photograph, for example according to the fourth aspect or any of its embodiments. The photographs in a pair then correspond to the same scene. The training can be performed without restricting the field-of-view of the digital camera. For an unsupervised machine learning algorithm, no corresponding multi- and/or hyperspectral photographs are required for training the algorithm. The unsupervised machine learning algorithm can therefore be arranged to generate a multi- and/or hyperspectral photograph from a diffraction photograph without any reference multi- and/or hyperspectral photographs. The unsupervised machine learning algorithm can therefore be used for multi-/hyperspectral imaging without separate training, such as training in laboratory conditions. The unsupervised machine learning can be facilitated, for example, by capturing a diffraction photograph using the border as described above.

As described above, a border configured to limit the amount of incident light on the diffraction grating to reduce overlap in the diffraction image allows separating a part of the diffraction pattern from the non-diffracted incident light in the diffraction image so that the diffraction image has one or more spatial regions comprising only the part of the diffraction pattern and not any non-diffracted incident light. This allows the diffraction image and/or a diffraction photograph to have a region corresponding to a regular photograph, such as an RGB photograph, and a region corresponding to a multi- and/or hyperspectral photograph. The diffraction image can thus comprise a region of non-diffracted incident light, which may correspond to an RGB image, for example at a location corresponding to the center of the sensor area of the digital camera. The diffraction image also comprises one or more regions of diffracted incident light, for example on the opposite sides of the region of non-diffracted incident light. As an example, two regions of diffracted incident light can be on the opposite sides of the region of non-diffracted incident light in one dimension, such as a horizontal dimension. Alternatively or additionally, two regions of diffracted incident light can be on the opposite sides of the region of non-diffracted incident light in another dimension, such as a vertical dimension. The border and the diffraction grating may be optimized so that the use of the sensor area of the digital camera is optimized. For example, this means that the diffraction photograph comprises a region of non-diffracted incident light, which may correspond to an RGB image, adjacent to a region of diffracted incident light in one dimension or in two perpendicular dimensions such as the horizontal and the vertical dimension. The distance between these two regions can therefore be negligible or absent so that the sensor area of the digital camera is optimally used.

The border limits the field-of-view for the diffraction grating. When a diffraction photograph is captured, the border can limit the incident light allowed at the diffraction grating so that a scene to be captured does not overlap with the diffraction pattern for multi- and/or hyperspectral imaging. For this purpose, the border may advantageously define a rectangular or substantially rectangular hole such as a square hole. The border can also limit the incident light so that a point-spectrum can be measured from the diffraction image. For this purpose, the border may advantageously define a dot-like hole, which may be roundly shaped, for example as a circle, but is not restricted to such a shape. What is important there is that the hole is small enough for point-spectrum imaging, for example less than 1 millimeter in diameter. In any case, the degree of overlap for the diffraction image may be determined by the position of the border, such as its distance from the diffraction grating, a size for the field-of-view of the diffraction grating, for example the size of the hole defined by the border, and the grating constant of the diffraction grating. These may be arranged so that the overlap is reduced or absent, or at least substantially absent.
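
As a rough illustration of how these quantities interact, the following Python sketch uses a strongly simplified geometry, a flat border at a fixed distance from the grating with no lens effects, to check whether the zeroth and first order patterns would separate. All numeric values, and the geometry itself, are hypothetical simplifications.

import math

def patterns_overlap(hole_width_mm, border_distance_mm, wavelength_nm, grating_constant_nm):
    """Simplified check: the zeroth-order image of the hole subtends roughly twice the
    hole's angular half-width; the first-order maximum is deflected by asin(lambda/d).
    The images are taken to overlap if the deflection is smaller than that width."""
    hole_half_angle = math.atan2(hole_width_mm / 2, border_distance_mm)
    diffraction_angle = math.asin(wavelength_nm / grating_constant_nm)
    return diffraction_angle < 2 * hole_half_angle

# Illustrative: 10 mm hole, 60 mm from the grating, 400 nm light, 1600 nm grating constant.
print(patterns_overlap(10, 60, 400, 1600))  # False -> zeroth and first order separated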

The border may be arranged to eliminate, at least substantially, the overlap of the zeroth and first order diffraction patterns for multi-/hyperspectral imaging. However, it may be arranged to eliminate, at least substantially, the overlap of any other diffraction patterns as well, for example that of the first and second order patterns. The overlap of all relevant diffraction patterns for multi-/hyperspectral imaging may thus be eliminated, including for example the zeroth, first and second order patterns but possibly also the third order pattern or more.

It is to be understood that the aspects and embodiments described above may be used in any combination with each other. Several of the aspects and embodiments may be combined together to form a further embodiment.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are included to provide a further understanding and constitute a part of this specification, illustrate embodiments and together with the description help to explain the principles of the invention. In the drawings:

Figs. 1a and 1b illustrate an apparatus according to an embodiment in a perspective view and an exploded perspective view, respectively,

Fig. 2 illustrates a method according to an embodiment,

Figs. 3a and 3b illustrate an example procedure for multi-/hyperspectral imaging,

Fig. 4 illustrates another method according to an embodiment,

Fig. 5 illustrates a border according to an embodiment,

Fig. 6 illustrates a diffraction image.

Like references are used to designate equivalent or at least functionally equivalent parts in the accompanying drawings.

DETAILED DESCRIPTION

The detailed description provided below in connection with the appended drawings is intended as a description of the embodiments and is not intended to represent the only forms in which the embodiment may be constructed or utilized. However, the same or equivalent functions and structures may be accomplished by different embodiments.

Figures 1a and 1b show an example of an apparatus 100 for enabling a photographic digital camera 110 to be used for multi-/hyperspectral imaging. The apparatus 100 can be an accessory, which may be configured for one or more different types of digital cameras 110. The apparatus 100 comprises a frame 120, which can be made of, for example, metal, plastic or cardboard. The frame 120 is configured so that it can be coupled with a photographic digital camera 110, for example so that it may be removably attached to the digital camera 110. Specifically, the frame 120 may be adapted to be coupled with an objective 112 of the digital camera. The frame 120 may comprise coupling means such as a snap-on clip, a screw assembly, an adhesive surface or an adapter for coupling the apparatus 100 with the photographic digital camera 110 or the objective 112 in particular. The coupling means may comprise a thread for coupling with the digital camera 110, for example with the lens of the digital camera 110. This can be particularly advantageous when the digital camera 110 is a system camera, such as a DSLR. When the digital camera is the camera of a mobile phone or a portable computing device, the coupling means may advantageously comprise a self-adhesive surface, which may be re-usable. In some embodiments, the frame 120 may be adapted to enclose the digital camera 110 so that the digital camera 110 is to be positioned inside the frame 120 when the apparatus 100 is in use. In other embodiments, the frame 120 may be smaller than the digital camera 110, allowing the apparatus 100 to function as a compact accessory.

The apparatus 100 further comprises a diffraction grating 130, which is configured to disperse incident light towards the objective 112 of the digital camera 110, when the apparatus is in use, to provide a diffraction image on the objective 112, so that the diffraction image can be captured by the digital camera 110 to produce a diffraction photograph adapted for use in multi-/hyperspectral imaging. The diffraction grating 130 is coupled to the frame 120, for example by removable or fixed attachment. The diffraction grating 130 may be a one-dimensional or two-dimensional diffraction grating. In particular, using a two-dimensional grating allows producing a two-dimensional diffraction pattern, which may, at least in some situations, make it easier to determine multi-/hyperspectral characteristics from the diffraction image. The diffraction grating 130 may be a transmissive or reflective diffraction grating. A transmissive diffraction grating 130 may be configured to be positioned substantially perpendicularly with respect to the optical axis of the objective 112 of the digital camera 110, whereas a reflective diffraction grating 130 may, for example, be configured to be positioned substantially parallel with respect to the optical axis of the objective 112. This allows controlling the angle between the scene to be photographed and the digital camera 110. For this purpose, but also for other purposes, the apparatus 100 may comprise one or more additional optical elements for directing the incident light before it arrives at the objective 112 of the digital camera 110. However, in many embodiments these are not necessary and the apparatus 100 may be adapted so that the diffraction grating alone provides sufficient optical manipulation of the incident light to enable multi-/hyperspectral imaging with the digital camera 110.

The diffraction grating 130 is adapted to spread a plurality of wavelength bands of the incident light to produce a diffraction image, which can be captured in a diffraction photograph for multi-/hyperspectral imaging. The diffraction grating 130 may have a grating constant larger than e.g. 350 or 500 nanometers. It has been noted that in some embodiments the grating constant needs to be smaller than 2000 nanometers to appropriately spread the spectrum of incident light for multi-/hyperspectral imaging. The diffraction grating 130 may be rectangular but it may also be of another shape. The diffraction grating 130 may be adapted to provide a diffraction image which, when captured in a diffraction photograph, covers at least 50-80 percent of the sensor area of the digital camera 110. In particular, it may be adapted to provide a diffraction image which comprises a diffraction pattern which, when captured in a diffraction photograph, covers at least 50-80 percent of the width and/or height of the sensor area of the digital camera 110, when the extent of the diffraction in one dimension corresponds to the distance between the two opposing first order diffraction maxima for a threshold wavelength. The threshold wavelength may be 700-1000 nanometers so that the sensor area of the digital camera 110 can be optimally covered, but for some applications the threshold wavelength may be smaller, allowing the apparatus 100 to be adapted for multi-/hyperspectral imaging focused on wavelengths below 700 nanometers. To allow improved use of the sensor area of the digital camera 110, the apparatus 100 may be adapted for the diffraction pattern to cover, as defined above, at least 90 percent of the width and/or height of the sensor area of the digital camera 110 or substantially the whole width and/or height of the sensor area. The diffraction grating may be substantially the same size as the surface 114 of the objective 112 of the digital camera 110 or smaller, the surface 114 of the objective 112 being the outer surface of the objective 112 adapted to receive incident light into the digital camera 110 for photographing, for example the surface of a lens.

The frame 120 is configured for positioning the diffraction grating 130 in front of an objective 112 of the digital camera 110, when the apparatus 100 is in use. There, the diffraction grating 130 may be substantially on the surface 114 of the objective 112. In some embodiments, the diffraction grating 130 may be positioned further away from the surface 114 of the objective 112, for example when additional optical elements such as lenses and/or filters are used between the diffraction grating 130 and the digital camera 110. Coupling means, which may be part of the frame 120, may be adapted to align the diffraction grating 130 with the objective 112 of the digital camera 110 so that the photographic digital camera 110 can be used for multi-/hyperspectral imaging.

Optionally, the apparatus 100 comprises one or more borders 140, which are configured to limit the amount of incident light on the diffraction grating 130. A border 140 can therefore also be considered as a field stop. The one or more borders may be substantially opaque. The one or more borders 140 may be adapted to produce one or more spatial regions in the diffraction image, and consequently in the diffraction photograph, where the intensity of non-dispersed light is attenuated or where non-dispersed light is absent altogether. The one or more borders 140 may define a hole 142, for example in the frame 120, for admitting incident light at the diffraction grating 130. In an embodiment, the hole 142 is a square hole, but in other embodiments it may be, for example, rectangular, round or oval. The hole 142, or the one or more borders 140, may be adapted to face the diffraction grating 130, for example coaxially. The hole 142, or the one or more borders 140, may be adapted to be positioned coaxially with the optical axis of the objective 112 of the digital camera 110, when the apparatus 100 is in use. The hole 142 may have a width and/or height smaller than that of the diffraction grating 130. For example, the hole 142 may have a width and/or height smaller than 1-2 centimeters. The hole 142 may have a width and/or height of at least 3 millimeters to prevent the diffraction from the hole 142 itself from adversely affecting the diffraction image, e.g. by blurring the diffraction image. Consequently, the hole 142 may, for example, be at least 3 millimeters x 3 millimeters in size in both dimensions. It has been noted that the hole 142 may be relatively large, for example when the digital camera is a system camera such as a DSLR, so that a hole 142 of size 20 millimeters times 20 millimeters may be used at least for some embodiments. However, the hole 142 may also be small, for example 1 millimeter or less in diameter, e.g. in width and/or height, to allow point-spectrum imaging.

The space between the hole 142 and the diffraction grating 130 can be covered, for example by an opaque border 140, so that the amount of incident light arriving at the diffraction grating 130 without passing the hole 142 is negligible. The inner surface covering the space between the hole 142 and the diffraction grating 130 can be non-reflective so that any stray light into the space, for example through the hole 142, does not substantially reflect from the inner surface. For this purpose, the inner surface may comprise a coating of non-reflective material, such as cloth, for example felt cloth, and/or paint, for example matt paint. The coating may be, for example, black and/or coarse. It has been found that for material having an absorption coefficient of 0.99 or more, the diffraction image can be particularly improved for multi-/hyperspectral imaging. As an alternative or an addition to coating, a structure for preventing reflections for the diffraction grating may be included in the space between the hole 142 and the diffraction grating 130, for example as part of the border 140, the inner surface or the frame 120, or as a separate structure that may be coupled to any of these. The structure may be an absorbing structure. It may comprise, for example, one or more sub-structures, such as rings, inside each other.

The apparatus 100 may comprise a chamber 122 for receiving incident light before it arrives at the diffraction grating 130 and/or for directing incident light to the diffraction grating 130. The chamber 122 may define the inner surface covering the space between the hole 142 and the diffraction grating 130. The chamber 122 may also comprise the inner surface, in particular if the chamber 122 forms a part of the frame 120. The chamber 122 may be part of the frame 120 or it may be separately coupled to the frame 120, for example by releasable attachment. The chamber 122 may be cylindrical, but other shapes are possible as well. The chamber 122 may comprise one or more holes 142 as described above and the one or more holes 142 may act as the sole source of incident light on the diffraction grating 130. In one embodiment, the chamber 122 comprises exactly one hole 142, which can be a rectangular hole, for admitting incident light at the diffraction grating 130, the hole being adapted to be substantially coaxially aligned with both the diffraction grating 130 and the optical axis of the objective 112 of the digital camera 110. The chamber 122 may have a length of, for example, 1-50 centimeters. The chamber 122 may be adapted to the digital camera 110, in particular to the type of the digital camera 110. For example, when the digital camera 110 is a camera phone, the chamber 122 may have a length of 2-10 centimeters, whereas when the digital camera 110 is a system camera, such as a DSLR, the chamber 122 may have a length of 5-30 centimeters. The length corresponds to the axial distance between the diffraction grating 130 and the point of entry for the incident light into the chamber 122, e.g. the border 140 or the hole 142.

The digital camera 110 may be adapted to capture photographs when a shutter release 116 is activated. The shutter release 116 may be, for example, a physical or a virtual button to be pressed to capture a photograph. The shutter release 116 may be integrated to the digital camera 110 and/or it may be arranged as a remote trigger mechanism. Any measures available for the digital camera 110 to capture photographs may be made available also to capture diffraction photographs, including, for example, a timer.

Figure 2 shows an example of a method for providing and using multi-/hyperspectral image information. The method comprises several parts which may be performed independently from each other.

To produce a diffraction photograph for multi-/hyperspectral imaging, an ordinary digital camera (a photographic digital camera 110) can be used. The photographic digital camera is coupled 210 with a diffraction grating, which may be the diffraction grating 130 as described above. In turn, the diffraction grating may be part of an apparatus 100 as described above. The diffraction grating may be attached to the photographic digital camera, for example by removable attachment. The attachment, such as a removable attachment, may be done, for example, by coupling means, such as a snap-on clip, a screw assembly, an adhesive surface or an adapter. The coupling means may be adapted to align the diffraction grating with an objective of the digital camera so that the photographic digital camera can be used for multi-/hyperspectral imaging. After the photographic digital camera is coupled with the diffraction grating, the digital camera can be used to capture 220 a scene to be photographed through the diffraction grating to produce a diffraction photograph of the scene. This allows the scene to be captured as a snapshot image. It is specifically noted that the diffraction photograph for multi-/hyperspectral imaging may be captured with the intrinsic image capturing mechanism of the digital camera, e.g. with a press of a button. This allows substantially instantaneous capture of two-dimensional diffraction photographs for multi-/hyperspectral imaging. For this purpose, the digital camera may be adapted to be operated with a shutter release, for example a physical or a virtual button, to capture a photograph. The scene may comprise one or more targets for which multi-/hyperspectral information is to be determined. For example, the scene may comprise one or more objects for which one or more material parameters are to be determined from multi-/hyperspectral information. Alternatively or in addition, the scene may comprise one or more surfaces for which one or more positions and/or one or more segments are to be determined from multi-/hyperspectral information. It is further noted that to determine the at least one multi-/hyperspectral characteristic, no other information than information derivable directly from the diffraction photograph is required, including any information regarding the intrinsic configuration of the diffraction grating and/or its configuration with respect to the digital camera. Instead, the at least one multi-/hyperspectral characteristic can be determined solely based on the diffraction photograph. For example, the resolution of the image information corresponding to the diffraction photograph may be determined directly from the image information itself. The image information may thus include only the pixels corresponding to the at least part of the diffraction photograph. However, it is naturally possible also to provide supplemental information to the computational algorithm for determining the at least one multi-/hyperspectral characteristic. Such supplemental information may include, for example, information about lighting conditions and/or any information obtained by the digital camera pertaining to the diffraction photograph.

Multi-/hyperspectral image information such as one or more multi-/hyperspectral characteristics can be produced from a diffraction photograph. This may be performed using a computer-implemented method. For this purpose, a computing device comprising a processor may be used, where the computing device has at least one memory comprising computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the system to determine at least one multi-/hyperspectral characteristic from image information received by the computing device. The computing device may be, for example, a computer server, an accessory to the digital camera or a mobile computing device such as a mobile phone. The process of producing multi-/hyperspectral information from the diffraction photograph may also be performed by distributed computing with multiple computing devices. First, image information corresponding to at least a part of a diffraction photograph is received 230, for example by any of the computing devices as described above. The image information is obtained using a photographic digital camera with means for dispersing incident light for providing a diffraction image for the diffraction photograph. The means may be a diffraction grating or an apparatus 100 as described above. The image information corresponds to a two-dimensional image having a diffraction pattern, for example comprising one or more diffraction maxima and/or minima. Then, at least one multi-/hyperspectral characteristic is determined 240 from the image information using a computational algorithm.

There exist various ways to determine multi-/hyperspectral characteristics from the image information, when the image information corresponds to at least a part of a diffraction photograph. In the context of this disclosure, it has been found that one particularly advantageous way to perform this, in conjunction with many embodiments, is to use a machine learning algorithm such as a CNN as the computational algorithm. These algorithms, as such, are known to a person skilled in the art so that they are readily available for use. As a specific example, it has been found that one or more dilations such as dilated convolutions may be used to determine 240 the at least one multi-/hyperspectral characteristic. Dilated convolutions, as such, are known by a person skilled in the art and a dilated convolution may be defined as a convolution where the elements of the kernel of the convolution are dilated by a dilation factor d, where only every (d+1):th element is taken as input into the convolution operation, starting from the middle of the kernel. For example, a dilation factor of d=0 would correspond to an unmodified convolution, and a dilation factor of d=1 would take every second element as input, whereas for d=2, the convolution would take every third element as input. To determine multi-/hyperspectral characteristics, with or without convolutions, even large dilations may be used, so that the corresponding dilation factors may be at least 10, 50 or even larger. As an example, dilation factors of 70-130 may be used, where smaller dilations correspond to smaller wavelengths in the diffraction photograph and larger dilations correspond to larger wavelengths in the diffraction photograph. Additionally, the dilations may depend on the size or resolution of an image. If the image is scaled down or up, also the dilations may be scaled down or up. The scaling factor for the dilations may be equal to the scaling factor for the image. Using a dilation model, optionally with large dilations, has been found to significantly improve the image recognition performance for the diffraction photograph.
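
For illustration only, the following minimal sketch shows a dilated 2D convolution, assuming the PyTorch library. Note a convention difference: the dilation factor d above takes every (d+1):th element, whereas torch.nn.Conv2d uses dilation=1 for an unmodified convolution, so the torch parameter corresponds to d+1. All sizes and channel counts here are hypothetical.

    # A minimal sketch of a dilated 2D convolution, assuming PyTorch.
    import torch
    import torch.nn as nn

    d = 100                      # dilation factor in the notation above
    conv = nn.Conv2d(
        in_channels=3,           # e.g. an RGB diffraction photograph
        out_channels=5,          # hypothetical number of filters
        kernel_size=3,           # k = 3 captures the first order maxima
        dilation=d + 1,          # every (d+1):th pixel enters the kernel
        padding=d + 1,           # zero-padding keeps the spatial size constant
    )

    x = torch.randn(1, 3, 512, 512)  # dummy diffraction photograph
    y = conv(x)
    print(y.shape)                   # torch.Size([1, 5, 512, 512])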

When at least one multi-/hyperspectral characteristic has been determined from the image information, the characteristic may be used 250 in various ways, depending on the specific application. This may also be performed using a computer-implemented method. For this purpose, the same computing device or a similar computing device as described above may be used. For example, the at least one multi-/hyperspectral characteristic may be used to produce a multi-/hyperspectral image. A multi-/hyperspectral image may be produced by inverting diffraction from a diffraction photograph or from image information corresponding to at least a part of a diffraction photograph. The results may even be arranged to be displayed on a display of the digital camera 110, e.g. when the digital camera 110 is a mobile phone. This can also be done when the multi-/hyperspectral characteristic is used to segment one or more regions from the image information and/or to characterize one or more material properties from the image information. Alternatively or in addition, the results may be used in a segmentation and/or characterization device.

Figures 3a and 3b illustrate examples for determining one or more multi-/hyperspectral characteristics 310, 320 from a diffraction photograph 300 (alternatively, this may be image information corresponding to at least a part of a diffraction photograph). As one example, an at least three-dimensional multi-/hyperspectral tensor 310 is produced, where a first and a second dimension correspond to the two spatial dimensions in the plane of the diffraction photograph 300 and a third dimension is a spectral dimension corresponding to the wavelength of incident light. This illustrates also the multi-/hyperspectral characteristics 310, 320, which may be considered as the image values, i.e. the amount of light captured in the diffraction photograph, in the spectral dimension. For multi- and/or hyperspectral imaging, there are naturally more than three such image values in the spectral dimension for each spatial point, e.g. more than ten or even more than one hundred. For example, having six or more such image values in the spectral dimension may be used in some applications for multispectral imaging. However, for hyperspectral imaging the number of such image values may be much larger, e.g. 50-300. A plurality of consecutive wavelength ranges (λ1, λ2, ..., λn), each of which may be very small, then corresponds to consecutive values in the spectral dimension of the multi-/hyperspectral tensor. One wavelength range (e.g. λ1) corresponds to one slice 320 in the multi-/hyperspectral tensor. The wavelength ranges (λ1, λ2, ..., λn) may be equally large. The production of the multi-/hyperspectral tensor 310, or one or more slices 320 thereof, then corresponds to determining the at least one multi-/hyperspectral characteristic.
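
As an illustration, a minimal sketch of such a multi-/hyperspectral tensor follows, assuming numpy; the spatial and spectral sizes are hypothetical.

    # A minimal sketch of the multi-/hyperspectral tensor. Axes 0 and 1
    # are the spatial dimensions of the diffraction photograph; axis 2 is
    # the spectral dimension.
    import numpy as np

    H, W, n_bands = 480, 640, 120       # hyperspectral: e.g. 50-300 bands
    cube = np.zeros((H, W, n_bands))    # the multi-/hyperspectral tensor

    # equally large consecutive wavelength ranges, in nanometers
    wavelengths = np.linspace(400, 700, n_bands)

    band_slice = cube[:, :, 0]    # one slice 320, i.e. the image for range λ1
    spectrum = cube[100, 200, :]  # the spectrum at one spatial point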

In figure 3b, one example for producing information for a multi-/hyperspectral tensor 310 is illustrated. In this example, dilated convolutions (D1, D2, ..., Dn) are used. The dilated convolutions are adapted to select correct values from the diffraction photograph 300 and deliver them forward in the computational algorithm, such as a machine learning algorithm comprising a convolutional neural network (CNN). In the leftmost dashed region, it is illustrated how a dilated convolution may be used to select image values of the pixels in the diffraction photograph 300 (or at least a part of it). The selected image values are multiplied by weight factors corresponding to the dilated convolutions and they are used to produce a new sequence of convoluted image values, which may differ in length from the number of selected image values. The weight factors of the dilated convolutions are parameters, which may be determined, for example, by a machine learning algorithm. In this case, they can be learned by calculating differences between a multi-/hyperspectral photograph taken by a multi-/hyperspectral camera and a diffraction photograph taken during a learning process for the machine learning algorithm. In the rightmost dashed region, it is illustrated how the dilated convolutions may be used in the computational algorithm. Several dilated convolutions are used and their number may be determined based on the resolution of the diffraction photograph. In an embodiment, nine or more dilated convolutions are used, allowing at least one multi-/hyperspectral characteristic to be determined. In principle, there is no upper limit for the number of dilated convolutions, but the largest pixel number corresponding to a single spatial dimension may be used as a practical upper limit. The convoluted image values are concatenated, after which they may be fed into one or more residual block modules or elements performing corresponding operations. The operations to be performed include batch normalization and summation, for example as illustrated in the lower right part of Fig. 3a.
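
A minimal sketch of the concurrent dilated convolutions and channel-wise concatenation described above, assuming PyTorch; the dilation values and filter counts are hypothetical, and the learned kernel weights play the role of the weight factors.

    # Several dilated convolutions run concurrently on the diffraction
    # photograph; their outputs are concatenated in the channel dimension.
    import torch
    import torch.nn as nn

    class ConcurrentDilatedConvs(nn.Module):
        def __init__(self, in_ch=3, filters_per_dilation=5,
                     dilations=(71, 91, 111, 131)):
            super().__init__()
            self.convs = nn.ModuleList([
                nn.Conv2d(in_ch, filters_per_dilation, kernel_size=3,
                          dilation=d, padding=d)  # zero-padding keeps H, W
                for d in dilations
            ])

        def forward(self, x):
            # The kernel weights are the learned parameters; concatenation
            # merges the per-dilation feature maps into one tensor.
            return torch.cat([conv(x) for conv in self.convs], dim=1)

    x = torch.randn(1, 3, 256, 256)
    print(ConcurrentDilatedConvs()(x).shape)  # torch.Size([1, 20, 256, 256])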

Figure 4 illustrates how a computational algorithm such as a machine learning algorithm for multi-/hyperspectral imaging using a photographic digital camera may be generated. This involves using a photographic digital camera 110 coupled to an apparatus 100 as described above to obtain a diffraction photograph corresponding to a scene 410, e.g. as described above, from a first location. The scene 410 may be set on a dark and/or single-colored background to improve contrast. Alternatively or additionally, one or more frames 450 may be used between the first location and the scene 410 to block stray light. Before and/or after this, a multi-/hyperspectral camera 420 is used to obtain a multi-/hyperspectral photograph corresponding to the scene 410 from a second location, the second location being substantially the same as the first location. For this purpose the digital camera 110 and/or the multi-/hyperspectral camera 420 may be coupled to a positioning device 430 such as a slide for aligning them at substantially the same location. The digital camera 110 may be used to capture one or more diffraction photographs of the scene 410. Also, the multi-/hyperspectral camera 420 may be used to capture one or more multi-/hyperspectral photographs of the scene 410. A computational algorithm is used to produce a multi-/hyperspectral image by inverting diffraction from the diffraction image. This may be done, for example, as described above. Then a difference value is determined for a measure of difference between the multi-/hyperspectral image and the multi-/hyperspectral photograph. The computational algorithm can then be optimized by minimizing this difference value, which involves modifying one or more parameters of the computational algorithm to reduce the difference value. This may be performed repeatedly, for example until a threshold value for the difference is reached.
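
A minimal sketch of this optimization loop, assuming PyTorch; model, difference and the photograph pairs are placeholders, and the choice of Adamax follows an example given later in this description.

    # The network inverts diffraction from the diffraction photograph and
    # its parameters are updated to reduce a difference value against the
    # multi-/hyperspectral ("ground truth") photograph of the same scene.
    import torch

    def train(model, pairs, difference, n_epochs=100, lr=1e-3):
        opt = torch.optim.Adamax(model.parameters(), lr=lr)
        for _ in range(n_epochs):
            for diffraction_photo, hyperspectral_photo in pairs:
                predicted_cube = model(diffraction_photo)  # invert diffraction
                loss = difference(predicted_cube, hyperspectral_photo)
                opt.zero_grad()
                loss.backward()  # modify parameters to reduce the
                opt.step()       # difference value
        return model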

In the following, further detailed examples are provided. The computational algorithm may be a machine learning algorithm. The computational algorithm may be based on a CNN used for complex image-to-image visual tasks, such as single-image super-resolution (SISR). Multiple concurrent convolutions may be used, optionally with very large dilations to allow maintaining a large spatial range for convolution filters while using few parameters. It has been found that such filters accurately model the underlying phenomena of diffraction, and a way of automatically detecting the filter dilations based on empirical image data is presented. The computational algorithm can be adapted to use diffraction images of much higher spatial resolution than ones seen during training, to output images of markedly improved quality while keeping training feasible.

The functioning of the diffraction grating can be visualized by a narrow-band laser being projected at an object so that the resulting image is taken through a two-dimensional diffraction grating as incident light. The laser is projected at a single point in the image, but due to the diffraction grating the first order diffraction pattern is diffracted to eight other positions as well, one in each major direction. The specific location of these additional positions depends on the wavelength of the light, but since in this example case only a narrow band of the spectrum is emitted from the laser, the diffraction pattern components are located according to the wavelength of the laser. In a first layer, multiple dilated 2D convolutions may be used, for example ranging from a dilation of 70 to 130. For each dilation, one or more filters, e.g. 5 filters of size k, may be used, where k = 3, for example. The resulting feature maps can then be concatenated in the channel dimension. This is equivalent to having a single layer with multiple sizes of dilation that each produces a subset of the output channels. The example with k = 3 captures the first order diffraction components (the zeroth ones being in the middle), but not the subsequent ones, as they simply repeat the first components with lower intensity. Since the diffraction pattern is around the actual point of interest in the image, a slightly larger image may be used than the one generated in order to capture the diffraction pattern for all parts of the image. In such a case, the input to the computational algorithm can be slightly larger than the output of the computational algorithm. After the first dilated convolutions, the feature maps can be cropped to the real output size, while the width and height of the image may be kept constant using zero-padding. This has the advantage of not having to lose information in the process of making the feature maps smaller in the computational algorithm. The resulting channels of feature maps, e.g. 300 channels, are then forwarded to a residual network that consists of four blocks of 2D convolutional layers and batch normalization, along with an additive residual connection. The residual blocks may be modelled on those used in single-image super-resolution, e.g. as disclosed in Ledig, Christian, et al. "Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network." CVPR. Vol. 2. No. 3. 2017. There, the input of the computational algorithm is a low resolution image and the output is a spatially higher resolution image, but the input and output may here have substantially the same spatial resolution. However, the output may have a larger spectral resolution. The residual blocks can learn to correct diffraction artefacts that otherwise leak into the output image, improving visual quality.
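
A minimal sketch of the cropping and of one such residual block, assuming PyTorch; channel counts, the activation and the exact layer order are assumptions, not a definitive implementation of the network described above.

    # Cropped feature maps pass through four blocks of 2D convolution and
    # batch normalization with an additive residual connection.
    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        def __init__(self, ch=300):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(ch, ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(ch),
                nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(ch),
            )

        def forward(self, x):
            return x + self.body(x)  # additive residual connection

    def crop_to_output(features, out_h, out_w):
        # The input is slightly larger than the output, so the feature maps
        # are center-cropped to the real output size after the first layer.
        _, _, h, w = features.shape
        top, left = (h - out_h) // 2, (w - out_w) // 2
        return features[:, :, top:top + out_h, left:left + out_w]

    blocks = nn.Sequential(*[ResidualBlock() for _ in range(4)])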

To optimize for the quality of the reconstructed multi-/hyperspectral images, similarity metrics for RGB images, well-known in the art, may be mixed with a similarity metric for multi-/hyperspectral data. The quality of the multi-/hyperspectral image may be evaluated with respect to an image produced by an actual multi-/hyperspectral camera ("ground truth"). One or more of the following targets may be used. For the first target, each depth slice of the output should match the ground truth visually, as a monochrome image. For the second target, the resulting spectrum in each pixel of the output should match the ground-truth one as closely as possible. For the third target, the resulting spectrum should be as smooth and non-noisy as the ones taken with the multi-/hyperspectral camera. For applications of multi-/hyperspectral images relying on the distinct spectral signatures of different materials, the second target may be emphasized. For example, the Canberra distance measure may be used between the spectra of each pixel to make sure they match as closely as possible, e.g. as disclosed in Deborah, Hilda, Noel Richard, and Jon Yngve Hardeberg. "A comprehensive evaluation of spectral distance functions and metrics for hyperspectral image processing." IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 8.6 (2015): 3224-3234. To optimize with respect to the first target, the structural similarity measure (SSIM) may be used, for example as disclosed in Wang, Zhou, et al. "Image quality assessment: from error visibility to structural similarity." IEEE Transactions on Image Processing 13.4 (2004): 600-612. In some embodiments, SSIM may work extremely well visually in terms of quality of detail, although it may fail to reconstruct the appropriate colors if used alone. SSIM may be applied separately for each depth slice in the input. Finally, to smooth and reduce the noisiness of the spectra, pixels may be regularized by computing the absolute error of subsequent spectral components and taking the mean. An example of a loss function can be obtained by subtracting the SSIM from the Canberra distance and adding the regularized mean, wherein the regularized mean may be further multiplied by a scaling factor, e.g. 0.02. The computational algorithm may be trained in a supervised, straightforward manner, for example with Adamax as the optimization algorithm.
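
A minimal sketch of such a loss, assuming PyTorch; the SSIM implementation is assumed to come from an external library and is passed in as a callable, and the tensor layout (batch, bands, height, width) is an assumption.

    # Loss = Canberra distance - SSIM + 0.02 * spectral smoothness term.
    import torch

    EPS = 1e-8  # guards against division by zero (an added assumption)

    def canberra(pred, target):
        # per-pixel Canberra distance between spectra, averaged over pixels
        num = (pred - target).abs()
        den = pred.abs() + target.abs() + EPS
        return (num / den).sum(dim=1).mean()

    def smoothness(pred):
        # absolute error of subsequent spectral components, then the mean
        return (pred[:, 1:] - pred[:, :-1]).abs().mean()

    def loss_fn(pred, target, ssim):
        # `ssim` is assumed to be applied per depth slice and averaged
        return canberra(pred, target) - ssim(pred, target) + 0.02 * smoothness(pred)
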
A dilation range can be specified based on the range of the diffraction. The dilation range can be used in convolutions, for example those of a CNN. The range of the diffraction may depend on the imaging setup, such as on the camera, lens, chosen resolution, and on the specific diffraction grating used. The dilation range may be adapted to be wide enough to cover the extent of the diffraction pattern, e.g. at least up to a first order maximum. The dilation range may also be limited to prevent introducing excess weights into the CNN. One or more dilations may be determined visually from a diffraction photograph of a broadband but spatially narrow light source. A suitable lamp, such as an incandescent filament lamp, may be placed behind a small opening to reveal the extent of diffraction. The first dilation may then be determined as the pixel difference between the light source, i.e. the zeroth order point of the diffraction pattern, and the first diffraction component of the first order diffraction pattern, corresponding to the smallest interesting wavelength, e.g. of the range 400-1000 nm. In an RGB photograph this diffraction component would correspond to blue color. The last dilation may be determined as the pixel difference between the light source, i.e. the zeroth order point of the diffraction pattern, and the last diffraction component of the first order diffraction pattern, corresponding to the largest interesting wavelength, e.g. of the range 400-1000 nm or substantially 700 nm. The dilation range may also be determined from the power cepstrum C(I) of the diffraction photograph. A brief description and history of the use of the cepstrum can be found in Oppenheim, Alan V., and Ronald W. Schafer. "From frequency to quefrency: A history of the cepstrum." IEEE Signal Processing Magazine 21.5 (2004): 95-106. The power cepstrum C(I), for example of a 2D image, can be obtained by a Fast Fourier Transform (FFT): first, an FFT is taken of an input image to transform each color channel of the input image to the Fourier frequency domain; then a logarithm of the magnitude of the previous is taken to transform products into log sums, where periodic frequency domain components are represented as sums; and finally, an inverse FFT of the previous is taken to map the periodic frequency domain components into peaks in the quefrency domain. The result of this is the power cepstrum C(I). The diffraction photograph comprises convolutive components of the scene, where each component can be thought of as being a convolution of a narrow spectral range of the scene. These convolutive components can further be thought of as echoes of the original scene, shifted in one or more spatial dimensions, e.g. in two dimensions and in a total of 8 major directions when a two-dimensional diffraction grating is used. The echoes appear as periodic signals in the frequency domain, allowing the periodicity to be extracted using the cepstrum. This information can be used to determine one or more dilations, for example for a CNN. A logarithm of the magnitude of the power cepstrum C(I) may be taken to extract periodic components from the frequency domain, the result referred to hereafter as the logarithmic magnitude quefrency C_LM(I). An average of the logarithmic magnitude quefrency C_LM(I) over a sufficient number of photographs can be taken to reduce the effect of noise for easy visual identification of the dilation range. The number of photographs to average over depends on the noise characteristics of the photographs. The computational cost of estimating the dilation range from the power cepstrum C(I) can be low, as clear candidates for dilation ranges may be visible from as low a number of images as five.
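
A minimal sketch of the power cepstrum computation described above, assuming numpy; the small epsilon guarding the logarithm is an added assumption.

    # FFT of each color channel, log of the magnitude, then inverse FFT to
    # map periodic frequency-domain components into quefrency-domain peaks.
    import numpy as np

    def power_cepstrum(channel, eps=1e-12):
        spectrum = np.fft.fft2(channel)           # to the frequency domain
        log_mag = np.log(np.abs(spectrum) + eps)  # products -> log sums
        return np.fft.ifft2(log_mag)              # to the quefrency domain

    def log_magnitude_quefrency(photos, eps=1e-12):
        # photos: iterable of 2D arrays (one color channel per photograph);
        # averaging C_LM(I) reduces noise for visual identification.
        clm = [np.log(np.abs(power_cepstrum(p)) + eps) for p in photos]
        return np.mean(clm, axis=0)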

As an additional or an alternative method for determining the dilation range, point spread functions (PSF) for the individual spectral components of the multi-/hyperspectral image may be estimated. This may be performed, for example, by training a CNN comprising a PSF filter, which can be equal to the size of the multi-/hyperspectral image slice for each spectrum component. The CNN may further comprise a final convolutional layer that performs a weighted sum to produce the final diffraction photograph. The CNN can be trained by minimizing the sum of mean SSIM and L1 loss for the estimated diffraction image against a known diffraction image. A Fast Fourier Transform may be employed when calculating the convolution between the multi-/hyperspectral image slices and PSFs. The end result is a CNN that estimates the diffraction photograph given a multi-/hyperspectral photograph. In addition, learned PSFs are obtained, one for each spectral component. The PSFs reveal the diffraction pattern for each spectral component, where the dilation from the center is consecutively larger for consecutively larger wavelengths, i.e. from the blue spectral range to the red spectral range. To determine the range of the dilations, a sum of the absolute values of all the PSFs can be taken. Compared to employing the power cepstrum for the dilation range estimation, the PSF estimation method may be more costly, despite efficient implementation of the FFT. Both methods may be used to produce approximately the same results by visual inspection for the dilation ranges. Estimation via the power cepstrum can be performed independently of the multi-/hyperspectral photograph, thus it is not limited by the resolution of the multi-/hyperspectral photograph. The dilation range may therefore, in some cases, be estimated for higher resolution diffraction photographs from the power cepstrum, which can be used to adjust the dilation range for higher resolution diffraction photographs. The PSFs for each wavelength can be modeled as a convolution on the multi-/hyperspectral scene. For this, the disclosure of Okamoto, Takayuki, and Ichirou Yamaguchi. "Simultaneous acquisition of spectral image information." Optics Letters 16.16 (1991): 1277-1279 and/or Descour, Michael, and Eustace Dereniak. "Computed-tomography imaging spectrometer: experimental calibration and reconstruction results." Applied Optics 34.22 (1995): 4817-4826 may be used. The resulting diffraction photograph is an integral over the spectrum weighted by the spectral sensitivity of the digital camera. This convolution and integration process may result in loss of information, albeit some of the information has been transformed from the spectral domain to the spatial domain. Given data that has undergone a convolution, the original data can be recoverable by means of deconvolution. The components of a weighted sum of convolutions may not be recoverable by means of deconvolution.
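
A minimal sketch of the FFT-based forward model described above, assuming numpy and scipy; the cube, PSFs and weights are placeholders for quantities that the text obtains by training a CNN.

    # The diffraction photograph is estimated as a weighted sum over
    # spectral slices, each convolved with its point spread function.
    import numpy as np
    from scipy.signal import fftconvolve

    def estimate_diffraction_photo(cube, psfs, weights):
        # cube: (H, W, bands); psfs: list of 2D arrays; weights: per band
        acc = np.zeros(cube.shape[:2])
        for i in range(cube.shape[2]):
            # FFT-based convolution of one spectral slice with its PSF
            acc += weights[i] * fftconvolve(cube[:, :, i], psfs[i], mode="same")
        return acc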

The computational method may be adapted to run inference on higher resolution images than the ones the model was trained on. For a scale factor (s), this may be achieved by increasing the dilations, e.g. those of the first layer, by the scale factor. The scale factor may be larger than 1, for example 2-3 or even larger. It may be constant. For example, the set of dilations may be denoted as d1, d2, ..., dn. To determine multi-/hyperspectral characteristics on diffraction photographs that are s times bigger than the ones trained on, the trained parameters may be used and the dilations may be multiplied by the scale factor, i.e. scaled to s·d1, s·d2, ..., s·dn. This allows increasing the visual quality of the images, for example in terms of sharpness. Additionally, close aligning of the original image pairs may remove artefacts such as color bleeding.
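
A minimal sketch of this dilation scaling; the example values are hypothetical.

    # d1, ..., dn  ->  s*d1, ..., s*dn for inference on images s times larger
    def scale_dilations(dilations, s):
        return [int(s * d) for d in dilations]

    print(scale_dilations([70, 100, 130], 2))  # [140, 200, 260]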

A multi-/hyperspectral image may be produced from the image information corresponding to at least a part of a diffraction photograph by inverting diffraction from the diffraction photograph. For visual inspection, e.g. for quality assurance of the method, RGB reconstruction of an image from a multi-/hyperspectral image may be used, for example by determining a standard weighted sum over the different wavelengths. For example, color matching function (CMF) values may be used for this purpose. These can be used for red, green and blue for the visible spectrum and they are based on the intensity perception of a particular wavelength for a particular cone cell type (long, medium and short) for a typical human observer. For this, the techniques outlined in Fairchild, Mark D. "Color Appearance Models." John Wiley & Sons, 2013, may be used. This allows viewing the multi-/hyperspectral photograph as a single color image, which can thus be an RGB image, as opposed to viewing multiple monochrome slices of the multi-/hyperspectral tensor.
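
A minimal sketch of the CMF-weighted RGB reconstruction, assuming numpy; the cmf array is a placeholder for color matching function values sampled at the cube's wavelengths.

    # Weighted sum over the spectral dimension using CMF values.
    import numpy as np

    def cube_to_rgb(cube, cmf):
        # cube: (H, W, bands); cmf: (bands, 3) with columns for R, G, B
        rgb = np.tensordot(cube, cmf, axes=([2], [0]))  # -> (H, W, 3)
        return rgb / rgb.max()  # normalize for display (assumes nonzero cube)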

As indicated above, overlap in the diffraction image can be caused by the diffraction modes corresponding to the diffraction grating 130 and it can be limited or removed by the border 140. Some constructions have been found particularly effective.

With reference to figure 5, the formation of a diffraction image is illustrated, when the border 140 is used. A diffraction image can be formed by using a diffraction grating 130 and a border 140. The diffraction image can be captured by a photographic digital camera to obtain a diffraction photograph. The digital camera comprises an objective 112, which may comprise one or more lenses, and a sensor 510 comprising a sensing area. The digital camera is configured to provide a photograph using the objective 112 and the sensor 510. For this purpose, the objective has a focal length (f) and the sensing area has a diameter (s), which may correspond to the height and/or the width of the sensing area. The focal length may be adjustable.

The diffraction grating 130 can be used for dispersing incident light towards the objective 112 to provide a diffraction image for a diffraction photograph. The border 140, when positioned in front of the objective 112, can be used to limit the incident light at the diffraction grating to reduce overlap in the diffraction image, as described above, including substantially complete elimination of the overlap. This allows the first order maxima of the diffraction pattern to be separated from the zeroth order maxima so that they do not overlap. The border 140 may define a hole 142 having a diameter (h), which may correspond to the height and/or the width of the hole 142. The shape of the hole may be, for example, rectangular, a square, an ellipse or a circle. One or both major axes of the hole 142 may be parallel to that or those of the diffraction grating 130. The diameter of the hole 142 may be adjustable. The hole 142 can be arranged at a first distance (b) for the incident light from the diffraction grating 130. For this purpose, the diffraction grating 130 may be housed by a frame 120 and/or a chamber 122, which may comprise or be coupled to the border 140. The first distance may be adjustable, for example so that a length of the frame 120 and/or the chamber 122 is adjustable. The diffraction grating 130 may be positioned at a second distance (g) for the incident light from the objective 112. For this purpose, the frame 120 may be adapted for positioning the diffraction grating 130 with respect to the objective 112. The second distance may be adjustable, for example so that a length of the frame 120 and/or the chamber 122 is adjustable. The first distance and/or the second distance can be defined to be measurable along the optical axis of the objective 112. It is noted that the first distance and the second distance correspond to optical distances, along which the path of incident light may be redirected by one or more optical elements such as prisms, mirrors or other reflective and/or refractive elements. The first distance may naturally correspond, at least substantially, to the shortest physical distance between the hole 142 and the diffraction grating 130. Similarly, the second distance may correspond, at least substantially, to the shortest physical distance between the diffraction grating 130 and the objective 112. The sum (d) of the first distance and the second distance is the optical distance (d = g + b) between the hole 142 and the objective 112.

The objective 112 may be described by an f-number (F) defining the ratio (F = f/A) of the focal length and the imaging aperture diameter (A). A blur angle (α) may be used to define the diameter of the circle of confusion at the imaging plane for the diffraction image. The blur angle may be approximated as

α = 2 arctan(f / (2F(g + b))).

A border angle (β) may be defined as

β = 2 arctan(h / (2(g + b))).

The diffraction grating 130 has a grating constant (n), which can be limited by

n ≤ λ / (sin(α + β) + sin(β/2)),

where a minimum wavelength (λ) for multi-/hyperspectral imaging is also indicated. The grating constant can differ for any two dimensions such as the horizontal and the vertical dimension, but it can also be equal or substantially equal.

As an example, a set of values is given, which can be used individually or in any combination. The focal length may be 4.2 millimeters +/- 0-1 millimeters. The minimum wavelength for multi-/hyperspectral imaging may be 300-400 nanometers or more. The maximum wavelength for multi-/hyperspectral imaging may be 700-800 nanometers or less, or even 800-1000 nanometers, for example with cameras sensitive within this wavelength region or with special filters. The diameter of the hole 142 may be 4-6 millimeters but also larger, for example 1-10 millimeters. The sum of the first distance and the second distance may be 6 centimeters +/- 0-5 centimeters, for example 6 centimeters +/- 0-2 centimeters. The f-number may be 1.7 +/- 0-1. The grating constant may be 350-500 nanometers or larger. In some embodiments, the grating constant is 1000-10000 nanometers, which has been found to allow an efficient balance between a sufficient size for the region of non-diffracted incident light of the diffraction image and a maximal dispersion angle for a photographic digital camera for multi-/hyperspectral imaging. In these, but also in other embodiments, the grating constant may be 10000-20000 nanometers or smaller. The resolution of the digital camera may typically be larger than 1000 pixels in one dimension, for example 4000x3000 pixels.

The condition for no overlap may be satisfied between the zeroth and first order diffraction modes but also for other modes. As an example, for a diffraction grating 130 the condition for no overlap between the first and second diffraction modes of the diffraction pattern can be satisfied when the minimum wavelength for multi-/hyperspectral imaging is larger than half of the maximum wavelength for multi-/hyperspectral imaging. This may be used for example when the diffraction grating 130 is an ordinary ruled, non-blazed, transmission grating. For a given grating constant in one dimension, an upper limit for the corresponding diameter of the hole 142 may be given by

h = λ(g + b)/n - f/F = λd/n - A.

This means that a grating constant for one dimension, such as the horizontal or vertical dimension of the diffraction grating 130, corresponds to an upper limit for the corresponding dimension of the hole 142, such as the horizontal or vertical dimension of the hole 142. It is noted that the correspondence is determined optically so that the corresponding dimensions may differ from physical dimensions, if the incident light is redirected between the hole 142 and the diffraction grating 130. The farthest extent (l) of the diffraction mode for the maximum wavelength (Λ) for multi-/hyperspectral imaging can be approximated as

l ≈ f tan(arcsin(Λ/n + sin(arctan(h / (2(g + b))))))

with the diameter (h) of the hole 142 corresponding to the upper limit defined above. For this, the focal flange of the digital camera may be equal or close to the focal distance.
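
A worked numeric sketch of the relations above, assuming Python and the example values given earlier; the formulas follow the reconstructions given here and the printed limits are illustrative only.

    # Blur angle, border angle, grating constant limit, hole diameter
    # upper limit, and farthest extent. All lengths in millimeters.
    import math

    f = 4.2       # focal length
    F = 1.7       # f-number
    h = 5.0       # hole diameter
    d = 60.0      # g + b, optical distance between hole and objective
    lam = 400e-6  # minimum wavelength, 400 nm in mm
    Lam = 700e-6  # maximum wavelength, 700 nm in mm

    A = f / F                               # aperture diameter
    alpha = 2 * math.atan(f / (2 * F * d))  # blur angle
    beta = 2 * math.atan(h / (2 * d))       # border angle

    # grating constant limit; for these values n falls in the
    # 1000-10000 nm range mentioned above
    n = lam / (math.sin(alpha + beta) + math.sin(beta / 2))
    h_max = lam * d / n - A                 # hole diameter upper limit
    extent = f * math.tan(math.asin(Lam / n + math.sin(math.atan(h / (2 * d)))))

    print(f"alpha={math.degrees(alpha):.2f} deg, beta={math.degrees(beta):.2f} deg")
    print(f"n <= {n*1e6:.0f} nm, h <= {h_max:.2f} mm, extent ~ {extent:.2f} mm")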

Figure 6 illustrates a diffraction image, which may be formed at the sensor of the digital camera 110. The area 600 of the image can be determined by the size of the sensing area of the sensor of the digital camera 110. The area 600 may be rectangular. It has a first dimension 610, such as the horizontal dimension, which may correspond to a number of pixels, e.g. 1000-4000 pixels or more. It may also have a second dimension 612, such as the vertical dimension, which may correspond to a number of pixels, e.g. 750-3000 pixels or more. Naturally, also smaller sensing areas may be used for the diffraction image.

The central optical axis for multi-/hyperspectral imaging may be aligned or at least substantially aligned with the center of the sensor, but this is not necessarily required. As an example, when focusing to a distance corresponding to the sum of the first distance and the second distance, the first area 620 is illuminated with the zeroth order diffraction pattern in the diffraction image, corresponding to the hole 142. Correspondingly, when focusing to infinity, the second area 622 will be illuminated with the zeroth order diffraction pattern in the diffraction image. The spatial region of the diffraction image corresponding to the zeroth order diffraction pattern (here also "non-diffracted image") is limited within the first rectangle 630 and it may even extend beyond the second area 622. In general, the non-diffracted image expands when focusing towards infinity. Focusing to infinity may also result in rounding of the corners of the non-diffracted image due to the so-called circle-of-confusion effect. If the non-diffracted image is small, the non-diffracted image may have a substantially circular shape when focusing to infinity. The area between the first rectangle 630 and the second rectangle 632 corresponds to the area where the first order diffraction pattern is directed.

The computing device as described above may be implemented in software, hardware, application logic or a combination of software, hardware and application logic. The application logic, software or instruction set may be maintained on any one of various conventional computer-readable media. A "computer-readable medium" may be any media or means that can contain, store, communicate, propagate or transport the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. A computer-readable medium may comprise a computer-readable storage medium that may be any media or means that can contain or store the instructions for use by or in connection with an instruction execution system, apparatus, or device, such as a computer. The exemplary embodiments can store information relating to various processes described herein. This information can be stored in one or more memories, such as a hard disk, optical disk, magneto-optical disk, RAM, and the like. One or more databases can store the information used to implement the exemplary embodiments of the present inventions. The databases can be organized using data structures (e.g., records, tables, arrays, fields, graphs, trees, lists, and the like) included in one or more memories or storage devices listed herein. The databases may be located on one or more devices comprising local and/or remote devices such as servers. The processes described with respect to the exemplary embodiments can include appropriate data structures for storing data collected and/or generated by the processes of the devices and subsystems of the exemplary embodiments in one or more databases.

All or a portion of the exemplary embodiments can be implemented using one or more general purpose processors, microprocessors, digital signal processors, micro-controllers, and the like, programmed according to the teachings of the exemplary embodiments, as will be appreciated by those skilled in the computer and/or software art(s). Appropriate software can be readily prepared by programmers of ordinary skill based on the teachings of the exemplary embodiments, as will be appreciated by those skilled in the software art. In addition, the exemplary embodiments can be implemented by the preparation of application-specific integrated circuits or by interconnecting an appropriate network of conventional component circuits, as will be appreciated by those skilled in the electrical art(s). Thus, the exemplary embodiments are not limited to any specific combination of hardware and/or software.

The different functions discussed herein may be performed in a different order and/or concurrently with each other.

Any range or device value given herein may be extended or altered without losing the effect sought, unless indicated otherwise. Also, any embodiment may be combined with another embodiment unless explicitly disallowed.

Although the subject matter has been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.

It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item may refer to one or more of those items.

The term 'comprising' is used herein to mean including the method, blocks or elements identified, but such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.

It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification.