Beh, Lai Lien (Plot 49, Bayan Lepas Industrial Zone Phase I, Bayan Lepas Penang, 11900, MY)
PINTAS PTE LTD (151 Chin Swee Road, #09-11/13 Manhattan House, Singapore, 169876, SG)
Lai, Siaw Ling (Plot 49, Bayan Lepas Industrial Zone Phase I, Bayan Lepas Penang, 11900, MY)
Beh, Lai Lien (Plot 49, Bayan Lepas Industrial Zone Phase I, Bayan Lepas Penang, 11900, MY)
|1.||A method of color recognition of an object in a machine vision system comprising the steps of: capturing at least two images of said object, with intensity of each said image having different values; deriving two or more average hue values from selected regions on said image of object, with one average hue value for each region selected; deriving a representative hue value of said object using said average hue values; comparing said representative hue value with preset hue values in a hue table; and recognizing color of object when said representative hue value matches one of said preset hue values at or above an imposed confidence level.|
|2.||The method of color recognition of an object as claimed in claim 1, wherein said step of deriving two or more average hue values includes the step of deriving hue values of individual pixels that make up said selected region on said image of object.|
|3.||The method of color recognition of an object as claimed in claim 2, wherein each said average hue value of each selected region is derived from said hue values of individual pixels that make up that selected region.|
|4.||The method of color recognition of an object as claimed in claim 3, wherein said step of deriving two or more average hue values includes the step of checking intensity of said selected regions.|
|5.||The method of color recognition of an object as claimed in claim 4, wherein said step of checking intensity of said selected region is followed by recapturing the image of the object at a different intensity level if the intensity of said selected regions is 0 or 255.|
|6.||The method of color recognition of an object as claimed in claim 5, wherein said step of deriving two or more average hue values includes the step of verifying a chosen threshold hue as the value of said average hue by comparing a first manipulated image that is based on said thresholded hue against a second manipulated image that is based on thresholded intensity that is derived for a particular said selected region.|
|7.||The method of color recognition of an object as claimed in claim 6, wherein said step of deriving two or more average hue values is followed by recapturing of the image of the object at a different intensity level if said average hue for any particular selected region could not be derived.|
|8.||The method of color recognition of an object as claimed in claim 7, wherein said step of deriving two or more average hue values includes recapturing the image of the object at different intensity levels until every said average hue for every said selected region is derived.|
|9.||The method of color recognition of an object as claimed in claims 4, 7 and 8, wherein said steps of recapturing the image or images of said object include the step of altering the light intensity of the image or images to be recaptured by altering camera shutter speed, or by altering camera aperture, or by altering angle of view, or by altering intensity of the illuminating light source, or by altering intensity of light emitted from said object should said object be a light emitting object.|
|10.||The method of color recognition of an object as claimed in claim 9, wherein said step of recapturing the image or images of said object includes the step of determining the subsequent machine vision setup variable, such as camera shutter speed, to be used.|
|11.||The method of color recognition of an object as claimed in claim 6, wherein said step of deriving a representative hue value is by deriving a weighted average of said average hue values, wherein a coefficient, or specifically a weightage, is assigned to each said average hue according to its corresponding said thresholded intensity.|
|12.||The method of color recognition of an object as claimed in claim 6, wherein said step of deriving a representative hue value is by deriving a measure of central tendency such as a mean, where said mean can be the exact value or an approximate mean, or quantiles such as the median, quartiles, deciles or percentiles, or the mode of said average hues.|
|13.||A method of color recognition of an object in a machine vision system comprising the steps of: capturing at least two images of said object, with intensity of each said image having different values; deriving two or more average hue values from selected regions on said image of object; deriving a representative hue value of said object using said average hue values; transforming said representative hue value to its corresponding representative wavelength; comparing said representative wavelength with preset wavelength values in a wavelength table; and recognizing color of object when said representative wavelength value matches one of said preset wavelength values at or above an imposed confidence level.|
|14.||A method of object recognition in a machine vision system comprising the steps of: capturing at least two images of the object, with intensity of each said image having different values; deriving two or more average intensity values from selected regions on each said image of the object, with one average intensity value for each region selected; comparing each said average intensity with preset average intensity on corresponding regions of a prelearned image template; and recognizing the object when each said average intensity of selected regions of the captured image matches the corresponding average intensity of said prelearned image template at or above an imposed confidence level.|
|15.||The method of object recognition as claimed in claim 14, wherein said step of deriving two or more average intensity values includes the step of deriving intensity values of individual pixels that make up said selected region on said image of object.|
|16.||The method of object recognition as claimed in claim 15, wherein each said average intensity value of each selected region is derived from said intensity values of individual pixels that make up that selected region.|
|17.||The method of object recognition as claimed in claim 16, wherein said step of deriving two or more average intensity values includes the step of thresholding a chosen intensity.|
|18.||The method of object recognition as claimed in claim 17, wherein said step of deriving two or more average intensity values is followed by recapturing of image of object at different intensity level if said average intensity for any particular selected region could not be derived.|
|19.||The method of object recognition as claimed in claim 18, wherein said step of deriving two or more average intensity values includes recapturing image of object at different intensity level until every said average intensity for every said selected region is derived.|
|20.||The method of object recognition as claimed in claims 18 and 19, wherein said steps of recapturing the image or images of said object include the step of altering the light intensity of the image or images to be recaptured by altering camera shutter speed, or by altering camera aperture, or by altering intensity of light emitted from said object should said object be a light emitting object.|
|21.||The method of object recognition as claimed in claim 20, wherein said step of recapturing the image or images of said object includes the step of determining the subsequent machine vision setup variable, such as camera shutter speed, to be used.|
Field of Invention
The invention described herein is a method of color recognition applied using conventional video inspection systems and related devices. This method can also be modified to enhance recognition of monochrome images.
Background of the Invention
There are various applications, such as in manufacturing environments or in field work, where colors need to be recognized rather than measured. Ideally these color inspection systems are developed to recognize colors as a human eye does. Human eyes can easily distinguish the different hues (colors) where there are different color samples, each having uniform intensity but very similar hues, such as different shades of red (red, scarlet, crimson etc.) or different shades of orange. However, human eyes can also identify the color of objects such as an apple or an orange, although there is variation of hue on them. Furthermore, human eyes can still identify these colors under different lighting intensity.
In order to do this, such systems should ideally be able to recognize, not measure, "same" colors with different peak wavelengths (e.g. 680 nm, 650 nm, 630 nm) as a designated color (e.g. red). In other words, for a single object, color recognition systems should recognize light with different peak wavelengths that fall within a specified range of values, just as a human operator would pass off a broad range of "red" on an apple with different peak wavelengths as a designated color (that is, "red").
In the manufacturing environment, an example of where this feature of the color inspection system may be applied would be inspection of different colors of light emitted from an LED. Different color LEDs are used as indicators in electrical or electronic appliances, and proper color LEDs need to be installed as specified. However, due to worker's fatigue or sheer carelessness, a different color LED may be installed where it is not supposed to be. Therefore, a color inspection system based on color recognition would be useful to detect such errors.
Various color inspection systems that have been applied in a manufacturing environment operate by either recognizing colors or measuring the wavelengths of colors. Color inspection systems operating by the first principle use electronic video cameras to capture one or more images of the object under inspection. Then, the captured image is compared with a color template, which specifies the acceptable color for that particular inspection run. The major shortcoming of this method is that the tolerance for nonconformance is too narrow, thus resulting in high over-rejects.
Color inspection systems operating by the second principle use a photosensor coupled to a spectrometer. The disadvantages of this method are that the photosensor would be easily saturated when the intensity of light from the inspected object is high, and the narrow tolerance (e.g. ±1 nm or less) inherent in this method would result in much higher over-rejects than the method using color template matching.
In applications where color recognition is needed, a color recognition system is superior to a color measuring system because it can accept a much broader range of light as the designated color, whereas a color measuring system can pass off only colors within a narrow range of wavelengths as the designated color, since it is a measuring system. Furthermore, the range of light accepted would not be so broad as to produce erroneous results by accepting apparently different colors. This feature of a broader color acceptance range is necessary because color or light from similar objects under inspection may have different wavelength values due to various reasons described below.
A machine vision system may detect different hue, saturation and intensity values from LEDs in the same production batch for various reasons, such as:
i) Bias voltage used with the different LEDs in that batch. Slight variation in the bias voltage may result in different brightness of the LED, since brightness depends on the current flowing through it. When the bias voltage is larger than the forward breakdown voltage of the LED, the current passing through the LED varies non-linearly with changes in voltage;
ii) Different orientation of the LED. The radiation intensity profile of an LED is directed to the front, unlike a light bulb, which possesses a spherical radiation profile. Therefore, misalignment of the LED by a few degrees or more than ten degrees may cause the intensity received by the video camera to drop sharply;
iii) LED placed out of focus. The LED intensity captured by the video camera is brightest when the image of the LED is in focus;
iv) Incorrect video camera exposure settings. The shutter speed and mechanical aperture setting both affect the amount of light collected over time, that is, the LED intensity received by the video camera;
v) Last but not least, the inherent performance variations within a batch of LEDs itself, which is what the machine vision system should detect, provided the previously mentioned factors i), ii), iii) and iv) are not present.
When different hue, saturation and intensity values are converted to wavelength, the light from each LED of the same production batch would apparently have a different wavelength when this is actually not so. Besides this, LEDs of a single color, e.g. red, produced by different manufacturers have different peak wavelengths. The aggregate of all these reasons results in high over-rejects when color inspection systems in the prior art are applied in color recognition situations.
However, since the LEDs are used as indicators, actual or apparent variation in the peak wavelength of the light emitted is not an issue as long as these different batches of LEDs give out the designated color. Therefore, a visual inspection system using color recognition is more suitable for the task at hand than an inspection system based on color measurement.
In other image recognition situations or tasks, such as object recognition, shape recognition etc., a monochrome machine vision system rather than its color counterpart can be employed to save cost, since color information (hue) is not needed. With monochrome images, object recognition can be problematic when images taken by the machine vision system are saturated. This occasionally happens to images of reflective objects taken outdoors. This method for color recognition can be modified to identify objects in monochrome images while maintaining its working principle of taking multiple images at different intensities.
Therefore it is an objective of the invention to provide a method for recognizing colors of a light emitting object, which method may at the same time also be used to recognize the color of a non-luminescent object that reflects light, by using a machine vision system. Specifically, the invention should be capable of overcoming the difficulty of erroneous recognition. Such a capability includes the ability to recognize light or radiation having different wavelengths but belonging to the designated color, as determined by its user. The different wavelengths recognized by the system may be apparent wavelengths, which may be due to factors i), ii), iii), iv) or other unmentioned factors, such as different lighting conditions for a non-luminescent object, or actual wavelengths (factor v).
Furthermore, the invention is meant to be applied in various manufacturing environments that possess different color recognition situations.
Another advantage of the invention over existing systems that serve the same purpose in the prior art is the ability to recognize colors using existing video inspection systems with minimal hardware requirements, whether optical or electronic.
It is also intended that the use of the invention may be extended to recognize colors of various objects, such as objects emitting light or objects reflecting light. Furthermore, both kinds of objects may even have irregular surface features or uneven surfaces. Therefore, the method may also be applied in field work.
It is also intended that when the invention is modified and applied to monochrome images, it will enhance object recognition, especially with images of reflective objects taken outdoors. Images containing such objects can easily be overexposed. However, some parts of the object may be underexposed if a higher shutter speed is used, thus posing problems in subsequent processing of the image.
Summary of the Invention
A method for recognizing the color of an object which emits light by itself or under illumination, and a machine vision system to carry it out, is disclosed. The method allows one to recognize color instead of measuring the wavelength of the color. This is carried out by deriving a representative hue value of the captured object and comparing it with preset hue values in a hue table. Said representative hue of the object under inspection is obtained from at least two average hue values derived from one or more images of the object. The method includes steps to capture images of the same object at different intensity levels. The extracted hue value can also be converted to wavelength for comparison with different wavelength values to identify any particular color. The method can be expanded to identify an object by its color. The method can also be modified to identify objects in monochrome images from captured images that have different intensities.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a machine vision system in which the methods and systems of the present invention can be used.
Fig. 2 is a flow chart showing the basic implementation of the present invention to recognize color in a captured image.
Fig. 3 is a flow chart showing the basic implementation of the present invention to enhance object recognition in a captured monochrome image.
Detailed Description of the Preferred Embodiment
The invention is a new method for inspecting or recognizing color using a machine vision system (100). With reference to figure 1, the machine vision system (100) would include a color CCD camera (102) or image source (102) connected to a frame grabber (103). The frame grabber (103) is preferably controlled by a computer algorithm (200) known as color chart system (CCS) which implements said invention. The frame grabber (103) is installed in a computer or signal processor (101) of the machine vision system (100), linked to the microprocessor (104) via the system bus (105), while the computer algorithm CCS (200) is stored in and executed from the mass storage unit (106) of the computer. The computer algorithm CCS (200) may include custom controls such as minimum and maximum wavelength values for any specific color. The inspection results may be displayed on a display unit (107) to an operator or be used in a feedback loop to control a machine, a process or quality control via the input/output port (108) of the system, a controller (109) and any related machine (110).
The camera may be any image source operating in analog or digital mode, including line scan cameras, using standards such as NTSC and PAL. Analog outputs of images from any image source used, such as the color CCD camera (102), would be sampled and digitized by the frame grabber (103). Digitized images are stored in a frame buffer having many pixels. Meanwhile, a digital camera (102a) can be directly connected to the system bus (105), eliminating the use of a frame grabber (103). The system bus (105) used may be a PCI, EISA, ISA or VL system bus, or any other standard bus. In a typical system such as these, the hue H, saturation S and intensity I values for each pixel can be easily derived from the RGB values of each pixel as provided by the camera or image source.
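For illustration only (the patent does not prescribe a particular conversion formula), the per-pixel derivation of H, S and I from RGB values mentioned above can be sketched with Python's standard colorsys module, with the HSV value channel standing in for intensity:

```python
import colorsys

def rgb_to_hsi(r, g, b):
    """Map 8-bit RGB to (hue in degrees, saturation, intensity).
    Uses the standard HSV conversion, with V standing in for
    intensity I; the exact HSI formula is implementation-specific."""
    h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    return h * 360.0, s, v
```

For example, a pure red pixel (255, 0, 0) maps to hue 0 with full saturation and intensity.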
The method of color recognition (200) as shown in figure 2 can be applied to any object (111), but it will be exemplified as follows using color inspection of a single LED as an example. The object (111) is preferably a unit which can produce light, but for an object which does not produce light, an illuminator (not shown) will be added to the system. One shot (one static image) of the LED will be captured (201) by the color camera (102) at a default aperture and shutter speed setting to obtain an image of the LED at a first intensity level (202). After the image of the object is digitized, at least one region of pixels of the object (LED) is selected (203). In a manufacturing environment, the region or regions of pixels to be selected can be predetermined, since a similar image of the object can be easily recaptured. The selected region would correspond to part of the object (LED) image. For example, should the object (LED) image cover a continuous region of 30 pixels, then the entire selected region would be within the object (LED) image (i.e. the solid region of 30 pixels) and cover a substantial portion of it, such as 20 pixels of the LED image.
Then, hue H, saturation S and intensity I for each pixel in the selected region of the object (LED) are derived (204). Next, the average hue Hsub(avg) is derived from the selected pixels (206). The average hue Hsub(avg) would be a better representation of the LED's color than the hue H of any single pixel alone, because light emitted from different points on the LED does not have uniform hue values.
One of the many ways to derive the average hue Hsub(avg) (206) is by applying a threshold technique followed by blob analysis on the selected pixels. After these two steps a first manipulated image of the selected pixels, based on the chosen threshold hue Hsub(thres(n)) (n=1,2,...i), is obtained. The first manipulated image is compared with a second manipulated image of the selected pixels that is based on the thresholded intensity I of each pixel, to verify the validity of the derived hue value (207). Basically, as long as the intensity of each selected pixel is not equal to 0 (i.e. shutter speed too high, no colored image on the selected region) or 255 (i.e. shutter speed too slow, overexposure on the selected region) (205), there is one chosen threshold hue Hsub(thres(n)) that can be accepted (209) as the average hue Hsub(avg) for the selected region when the first manipulated image matches the second manipulated image.
Suppose that after the first round of thresholding and blob analysis on the selected pixels, the first arbitrary threshold hue value Hsub(thres(1)) used cannot be accepted as the average hue Hsub(avg) for that region; then thresholding and blob analysis on the hue value is reiterated (207) at a second chosen threshold hue value Hsub(thres(2)), and so on, until every possible Hsub(thres(n)) has been tried or one of the possible Hsub(thres(n)) is accepted as Hsub(avg) for the selected region of pixels.
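The iteration over candidate threshold hues in steps 206 through 209 can be sketched as follows. This is a simplified model, not the patent's implementation: blob analysis is reduced to a per-pixel mask comparison, and the hue tolerance parameter is an assumption:

```python
def derive_average_hue(hues, intensities, candidates, tol=10):
    """Try each candidate threshold hue H_thres(n) in turn.
    The 'first manipulated image' is the mask of pixels whose hue
    lies within tol of the candidate; the 'second manipulated image'
    is the mask of pixels whose intensity is neither 0 nor 255.
    The first candidate whose mask matches is accepted as H_avg."""
    intensity_mask = [0 < i < 255 for i in intensities]
    for h_thres in candidates:
        hue_mask = [abs(h - h_thres) <= tol for h in hues]
        if hue_mask == intensity_mask:
            return h_thres
    return None  # no candidate accepted: recapture at another intensity
```

Returning None corresponds to the failure case described next, where the image must be recaptured at a different intensity level.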
Suppose another situation where, after the reiteration, the needed average hue Hsub(avg) could not be obtained (208). In this case, each selected pixel may have some intensity value. Then a second or further image of the object will be recaptured at a different shutter speed (210, 201) so that the intensity level of the second image is different from that of the first image. Each subsequent image obtained (202) will be subjected to the preceding steps described above (203, 204, 205, 206, 207) until the needed average hue Hsub(avg) is obtained (209) for that region. After that, the algorithm will store each average hue Hsub(avg) (211) that is derived.
In the preferred embodiment, there could also be more than one region selected for the purpose of color recognition. Therefore, the steps mentioned before (203, 204, 205, 206, 207) may be repeated for subsequent images that are captured. All these images will have different intensity values and there will be an average hue Hsub(avg(n)) (n=1,2,...i) for each region. In this situation, shots with different intensity values are captured (213, 201) until the average hue Hsub(avg(n)) for each region is derived (212). Besides varying the shutter speed, other variables such as the camera aperture, illuminating source intensity, angle of view, or the intensity of the light emission from the object can be varied in order to obtain subsequent images of the object at different intensity levels.
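The outer capture loop described here can be sketched as below, where `capture_avg_hue` is a hypothetical stand-in for steps 201 through 209 applied to one region at one shutter speed:

```python
def collect_region_hues(capture_avg_hue, regions, shutter_speeds):
    """Keep capturing at new shutter speeds until an average hue
    H_avg(n) has been derived for every selected region (212, 213).
    capture_avg_hue(speed, region) returns a hue or None on failure."""
    avg_hues = {}
    for speed in shutter_speeds:
        for region in regions:
            if region not in avg_hues:
                h = capture_avg_hue(speed, region)
                if h is not None:
                    avg_hues[region] = h
        if len(avg_hues) == len(regions):
            break  # every region has its average hue
    return avg_hues

# Illustrative stand-in: region "A" resolves at speed 1, "B" at speed 2.
def fake_capture(speed, region):
    return 120.0 if (region, speed) in {("A", 1), ("B", 2)} else None

result = collect_region_hues(fake_capture, ["A", "B"], [1, 2, 3])
```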
In any actual situation, almost every color image of any object that is captured will have varying hue, intensity and saturation values from one pixel to another. Therefore, capturing a single image of an object to identify its color is possible in principle but not reliable in practice, because the confidence level of its result, i.e. the identified color, such as from 80 to 97%, is not high enough for applications in a manufacturing environment (e.g. LED color inspection) or in the real world (to be exemplified later). Therefore, it is preferable that multiple images be captured in a single inspection of an object to identify its color, so that the resulting confidence level is high, such as 99.99% or more. It is therefore also preferable that the CCS be programmed to take more shots at different intensities to derive a few more average hue values Hsub(avg(m)) (m=1,2,...i), even though a first average hue Hsub(avg(1)) may be successfully derived from the first captured image.
It is the essence of this invention that effective color recognition is carried out by means of capturing images of the object at different intensities. Therefore, a few average hues Hsub(avg) should be derived from the images in order that high accuracy of color recognition can be achieved. While too few shots captured at different intensities would compromise the accuracy of color recognition, too many shots captured at different intensities for the sake of increasing accuracy would not be cost effective, especially in a manufacturing setting.
Suppose a number of images of the same object have been captured. Out of these shots, there may be a large number that could not be used to derive the needed average hue value Hsub(avg) or Hsub(avg(m)) or Hsub(avg(n)). Furthermore, when two setting variables of the color recognition system, such as shutter speed and aperture, may be changed (210, 213) to obtain images with various intensity levels, certain setting combinations would be redundant, as they would result in the same intensity level.
Therefore, it is preferable that the invention include a means of deriving the shutter speed, such as using curve fitting combined with interpolation or extrapolation, or using fuzzy logic or neural network techniques, to derive the new shutter speed. Preferably, whatever method is used to derive the shutter speed, the average hue Hsub(avg) or Hsub(avg(n)) should be derived using the least number of shots while maintaining the imposed color recognition accuracy.
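As one concrete example of the curve-fitting approach suggested above (a sketch under the simplifying assumption that measured intensity varies roughly linearly with the shutter-speed setting over a small range):

```python
def next_shutter_speed(history, target_intensity):
    """Linear interpolation/extrapolation over the last two
    (setting, measured intensity) pairs to propose the next
    setting, so the target intensity is reached in few shots."""
    (s1, i1), (s2, i2) = history[-2:]
    if i2 == i1:
        return s2  # no slope information; keep the current setting
    slope = (i2 - i1) / (s2 - s1)
    return s2 + (target_intensity - i2) / slope
```

A fuzzy logic or neural network predictor could replace this stand-in without changing the surrounding loop.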
A representative hue value Hsub(rep) can be derived (214) from all the derived average hue values Hsub(avg(m)) or Hsub(avg(n)) after these values are successfully obtained. Hsub(rep) may be derived by a weighted averaging method, with a coefficient, or more specifically a weightage, assigned to each average hue value Hsub(avg(m)) or Hsub(avg(n)) obtained, dependent on its corresponding intensity value. Other means of deriving Hsub(rep), such as the mean, quantiles, mode or other measures of central tendency, may be used. The mean used may be an exact arithmetic mean or an approximate mean for a grouped distribution of the average hue values. Quantiles used may be the median, quartiles, deciles or percentiles, each of which can be obtained after all the obtained average hue values are ranked in increasing or decreasing order.
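The weighted-averaging option can be sketched as follows (note that a plain weighted mean ignores the circular wrap-around of hue near 0°/360°, which a production implementation would need to handle):

```python
def representative_hue(avg_hues, weights):
    """Weighted average of the per-image average hues H_avg(m),
    with each weight taken from the corresponding intensity."""
    return sum(h * w for h, w in zip(avg_hues, weights)) / sum(weights)
```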
Then the representative hue Hsub(rep) is compared (215) with preset hue values Hsub(preset(n)) (where n=1,2,...i) in a hue table. This is different from conventional inspection systems, which compare all three values Hsub(avg), Ssub(avg) and Isub(avg) with a color template which has hue, saturation and intensity values. When the representative hue Hsub(rep) matches a particular preset hue Hsub(preset(n)) (216) at or above the prescribed level of confidence, the color of the object is identified (217).
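Steps 215 through 217 can be sketched as below; the specific confidence measure (one minus the relative circular hue distance) is an assumption, since the patent leaves the confidence computation open:

```python
def match_hue(h_rep, hue_table, confidence=0.95):
    """Compare H_rep with each preset hue; report the best match
    when its confidence meets the imposed level, else no match."""
    best_name, best_conf = None, 0.0
    for name, h_preset in hue_table.items():
        d = abs(h_rep - h_preset)
        d = min(d, 360.0 - d)              # hue wraps around at 360
        conf = 1.0 - d / 180.0             # assumed confidence measure
        if conf > best_conf:
            best_name, best_conf = name, conf
    return (best_name, best_conf) if best_conf >= confidence else (None, best_conf)
```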
As it is not intuitive for a human operator to describe color in terms of hue, the derived representative hue Hsub(rep) from the accepted images can also be converted to a wavelength LAMBDAsub(rep) using a known transformation. The visual display unit (107) of the inspection system may preferably display the representative wavelength LAMBDAsub(rep), the name of the corresponding color of LAMBDAsub(rep), the inspection status such as whether the color is accepted or rejected, and the captured image to a human operator.
In still another embodiment, the CCS may also be programmed to compare LAMBDAsub(rep) with wavelength values set in a wavelength table in order to determine in which color region it belongs.
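A hue-to-wavelength transformation in this spirit might be sketched as below. The linear red-to-violet mapping is purely illustrative; the patent only refers to a known transformation, and real hue-to-dominant-wavelength mappings are nonlinear:

```python
def hue_to_wavelength(hue):
    """Map hue (0 = red ... 270 = violet, in degrees) linearly onto
    an approximate dominant wavelength from 700 nm down to 400 nm."""
    if not 0.0 <= hue <= 270.0:
        raise ValueError("hue outside the mapped red-to-violet range")
    return 700.0 - (hue / 270.0) * 300.0
```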
The advantages of using such a machine vision system with such a color recognition algorithm are that there is a low rate of over-rejects. Furthermore, colors with close hues that can be differentiated by human eyes can all be differentiated by the invention. Therefore, the invention can recognize LEDs of the "same" color produced in different batches or by different manufacturers. Furthermore, colors can be correctly identified for light-emitting objects or light-reflective objects, whether the reflections are diffuse, specular or a mixture of both as found in a typical recognition task. Besides recognizing color from surfaces of varying features, the invention can also recognize color under various lighting conditions. These have been proven in field tests conducted in manufacturing environments. In order to enhance color identification in different lighting environments, suitable filters may be used for filtering out stray light. It is also intended that the application of the invention be extended to recognizing different colors present in a particular frame of image. The basic steps of the invention as described above allow this to be done. An example of an application would be recognizing many LEDs that have different colors in a single shot. The steps are: 1) Identifying different regions based on different colors. A region having similar color is identified on the basis of having similar hue, but the average hue Hsub(avg(m)) for each region is not yet derived by the software at this juncture; 2) Zooming in to one of the regions, choosing a few LEDs in that region, and extracting the hue H, saturation S and intensity I values for pixels in each region as described beforehand; 3) Carrying out all the subsequent steps described beforehand (after H, S and I are extracted for each selected pixel) to identify the different color LEDs.
Objects under inspection may also include colonies of microorganisms, or a single microorganism with its organelles visible to the observer, or grains of rice, and are not just limited to large objects. On the other hand, future applications may include making maps from aerial photos by applying this color recognition method. Based on color recognition, colonies can be counted, the physiology of the microorganisms can be studied, and rice grains can be selected for packing according to their shades of white.
This invention may also be applied to enhance the quality of captured monochrome images, especially those captured outdoors, so that predetermined objects can be identified. At least two or more monochrome images can be captured using a modified algorithm (Fig. 3) that retains the basic concept of taking multiple images at different intensities, as outlined in the color recognition algorithm (Fig. 2). Modifications to be made to the algorithm are only on those steps that derive and make use of hue and saturation values, since monochrome images have intensity values only. This specifically means that the new modified algorithm (Fig. 3) will retain the general order of execution and execute certain similar conditional terms, except that the various hue H and saturation S values that were there before are now removed or replaced by intensity I values.
In the modified algorithm (Fig. 3), steps which are similar to those in the CCS algorithm are labeled with the same number. The aim of using this algorithm is to correctly identify objects in monochrome images when they are taken under different lighting conditions, especially in outdoor environments. All the monochrome images are taken from the same viewing angle. Furthermore, the system learns the shape of the object or objects beforehand. The pre-learned shapes are stored in the form of image templates.
In the execution of this modified algorithm, a first image is captured and regions of pixels are selected just as before (201, 202, 203). After that, the intensity I for each pixel in the selected regions is derived (304) and, for each region, an average intensity Isub(avg) is derived (306) by thresholding a chosen intensity value. The object is identified (312) by verifying the derived average intensity Isub(avg) (307) for the captured image. The average intensity Isub(avg) of each selected region is compared with the average intensity of the corresponding selected region on the image template (307). If the derived average intensity Isub(avg) of the captured image is not the same as that in the image template (208), the image of the object will be retaken at a different shutter speed (210, 201). The algorithm used in step 210 will choose a higher shutter speed when the captured image is much brighter than the image template and a slower speed when the captured image is dimmer. Should the derived average intensity, whether Isub(avg) or Isub(avg(m)) or Isub(avg(n)), match that on the image template to a high confidence level, these values are stored (311) for further image processing (314). Basically, steps 201, 202, 203, 304, 306, 307, 208, 210, 209, 312, 313 of the algorithm enable objects in monochrome images to be correctly identified.
In a typical situation, different parts of an object having the same color may have very different intensity values due to uneven illumination or lighting conditions, e.g. when there is a shadow falling on part of the object. For a single monochrome image, when more than one region is selected, the average intensity Isub(avg(n)) (n=1,2,...i) for each region is derived (307). When more than one image at different intensities is taken, the average intensity value Isub(avg(m)) (m=1,2,...i) of the selected region is derived (307) for each m-th image taken. Steps 201 through 312 may be carried out more than once so that more than one image is captured and all average intensities of every selected region are identified. This ensures that the derived average intensities may match those in the image template to a high confidence level, thus identifying the object at a high confidence level (313).
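The per-region template comparison of step 307 might be sketched as follows; the fixed intensity tolerance standing in for the "high confidence level" is an assumption:

```python
def identify_object(region_avgs, template_avgs, tol=10.0):
    """Object is identified when every derived region average
    intensity matches the corresponding template average within
    a tolerance (otherwise the image is retaken, steps 208/210)."""
    return all(abs(region_avgs[r] - template_avgs[r]) <= tol
               for r in template_avgs)
```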
Taking images at different intensities (210, 201) can be effected in manners similar to those described before for the color images. In the case of images captured outdoors, the only practical ways are to change the shutter speed, change the mechanical aperture, or apply neutral density filters.
While that which has been described is considered to comprise the preferred embodiments of the present invention, it will be apparent to those skilled in the art that various modifications and variations can be made, and equivalents may be substituted for elements thereof, without departing from the spirit or scope of the present invention. Thus, it is intended that the present invention not be limited to the particular embodiments disclosed as the best mode contemplated for carrying out the present invention, but that the present invention include all embodiments falling within the scope of the appended claims and their equivalents.