


Title:
METHOD AND SYSTEM FOR USING CALIBRATION PATCHES IN ELECTRONIC FILM PROCESSING
Document Type and Number:
WIPO Patent Application WO/2001/013174
Kind Code:
A1
Abstract:
A method and system (100) for using reference patches (108) to enhance electronic film processing of a scene image (104) contained on a first area of a film (112) include creating a reference patch (108) on a second area of the film (112); coating the film (112) with a developing solution to form a scene image (104) and a patch image (108); scanning the film (112) coated with the developing solution to generate signals corresponding to digital representations of the scene image (104) and the patch image (108); calculating image processing parameters from the signals associated with the patch image (108); and processing the digital representations of the scene image (104) using the image processing parameters calculated from the patch image (108) to produce color values which more accurately reflect the original scene and which are pleasing to the eye.

Inventors:
KEYES MICHAEL P (US)
CANNATA PHILIP E (US)
Application Number:
PCT/US2000/022681
Publication Date:
February 22, 2001
Filing Date:
August 17, 2000
Assignee:
APPLIED SCIENCE FICTION INC (US)
KEYES MICHAEL P (US)
CANNATA PHILIP E (US)
International Classes:
G03B27/32; G03D5/06; (IPC1-7): G03D5/06; G03B27/32
Foreign References:
US 5627016 A, 1997-05-06
US 5739897 A, 1998-04-14
US 5664253 A, 1997-09-02
Attorney, Agent or Firm:
Barker, Scott N. (OH, US)
Claims:
WE CLAIM:
1. A method for electronically processing exposed undeveloped film using a reference patch, the method comprising: exposing the film to a calibration light source to create a reference patch; coating a developer solution onto the film; electronically scanning the coated film to produce an electronic image and an electronic reference image; calculating processing parameters using the electronic reference image; and processing the electronic image using the processing parameters.
2. The method of Claim 1, wherein the step of scanning the coated film comprises electronically scanning a dye image in the coated film to produce an electronic image and an electronic reference image.
3. The method of Claim 1, wherein the step of scanning the coated film comprises electronically scanning a latent silver image in the coated film to produce an electronic image and an electronic reference image.
4. The method of Claim 1, wherein the step of scanning the coated film comprises electronically scanning a latent silver image and a dye image in the coated film to produce an electronic image and an electronic reference image.
5. The method of Claim 1, wherein the step of calculating processing parameters using the electronic reference image comprises defining a color correction map using the electronic reference image and a standard.
6. The method of Claim 1, wherein the reference patch comprises a gray scale reference patch.
7. The method of Claim 1, wherein the step of electronically scanning the coated film comprises electronically scanning the coated film using infrared light to produce an electronic image and an electronic reference image.
8. The method of Claim 1, wherein the step of electronically scanning the coated film comprises electronically scanning the coated film using white light to produce an electronic image and an electronic reference image.
9. The method of Claim 1, wherein the step of electronically scanning the coated film comprises electronically scanning the film using front, back, front-through and back-through imaging.
10. The method of Claim 9, wherein the step of calculating processing parameters using the electronic reference image comprises: plotting through channel mean values for the electronic reference image versus front channel mean values to form a first plot; plotting through channel mean values for the electronic reference image versus back channel mean values to form a second plot; determining a first equation which best fits the first plot; determining a second equation which best fits the second plot; calculating a new data value for each pixel in the front imaging signal based on the first equation; and calculating a new data value for each pixel in the back imaging signal based on the second equation.
11. The method of Claim 10 wherein the first and second equations are cubic equations.
12. An electronic film processing system for processing exposed undeveloped film, comprising: a calibration light source operable to create a reference patch on the film; an applicator operable to apply a film processing solution to the film; at least one imaging station operable to scan the coated film and produce an electronic image of a scene and the reference patch; and a computer processor coupled to the at least one imaging station, wherein the electronic image of the reference patch is used in calculating image processing parameters for correcting the electronic image of the scene.
13. The electronic film processing system of Claim 12, wherein the applicator comprises at least one slot coater device.
14. The electronic film processing system of Claim 12, wherein the at least one imaging station comprises: a light source operable to illuminate the film; and a sensor system operable to detect metallic silver particles in the film.
15. The electronic film processing system of Claim 12, wherein the at least one imaging station comprises: a light source operable to illuminate the film; and a sensor system operable to detect color dyes in the film.
16. The electronic film processing system of Claim 12, wherein the at least one imaging station comprises: a first imaging station comprising: a first light source operable to illuminate the film; and a first sensor system operable to detect metallic silver particles in the film; and a second imaging station comprising: a second light source operable to illuminate the film; and a second sensor system operable to detect color dyes in the film.
17. A scene image prepared in accordance with the following steps: providing a film with a latent scene image on a first area of the film; creating at least one reference patch having known spectral properties on a second area of the film; applying a developer solution to the film; developing the coated film to form a scene image and developed reference patch images; scanning the coated film to generate signals corresponding to digital representations of the developed scene image and the developed patch images; calculating image processing parameters from the signals associated with the developed patch images; and processing the digital representations of the developed scene image using the calculated image processing parameters to produce color values for the scene image.
18. The scene image of Claim 17, wherein the developed scene image signals and the developed patch image signals are generated through front, back, front-through and back-through imaging.
19. The scene image of Claim 17 wherein generating signals corresponding to digital representations of the images further comprises: directing a first illumination source toward a first surface of said film; capturing radiation reflected from said first surface on a pixel by pixel basis; assigning a data value to the amount of reflected radiation from said first surface for each pixel to form the front imaging signal; directing a second illumination source toward a second surface of said film, said second surface and said first surface forming opposite sides of said film; capturing radiation reflected from said second surface of said film on a pixel by pixel basis; assigning a data value to the amount of reflected radiation from said second surface for each pixel to form the back imaging signal; capturing radiation transmitted through said film from said second illumination source on a pixel by pixel basis; assigning a data value to the amount of transmitted radiation from said first surface for each pixel to form the front-through imaging signal; capturing radiation transmitted through said film from said first illumination source on a pixel by pixel basis; and assigning a data value to the amount of transmitted radiation from said second surface for each pixel to form the back-through imaging signal.
20. The scene image of Claim 19, wherein processing the digital representations further comprises: reading the front, back, front-through and back-through signals for reference gray patches; plotting through channel mean values for each gray reference patch versus front channel mean values to form a first plot; plotting through channel mean values for each gray reference patch versus back channel mean values to form a second plot; determining a first equation which best fits the first plot; determining a second equation which best fits the second plot; calculating a new data value for each pixel in the front imaging signal based on the first equation; and calculating a new data value for each pixel in the back imaging signal based on the second equation.
Description:
METHOD AND SYSTEM FOR USING CALIBRATION PATCHES IN ELECTRONIC FILM PROCESSING TECHNICAL FIELD This invention relates generally to electronic film processing, and more particularly to a method and system for using calibration patches in electronic film processing.

BACKGROUND OF THE INVENTION Conventional color photographic film generally contains three superimposed color sensing layers. Each layer contains a light sensitive material, typically silver halide, that is spectrally sensitive to a specific portion of the visible light spectrum. In general, the top layer responds primarily to light of short wavelength (blue light), the middle layer responds primarily to light of medium wavelength (green light) and the bottom layer responds to light of long wavelength (red light). When film with these types of spectral sensitivities is exposed to visible light, each spot on the film records the intensity of blue, green and red light incident on that location of the film. This film record of incident light is referred to as the latent image.

In conventional color photographic development systems, the exposed film is chemically processed to produce color dyes in the three layers with color densities directly proportional to the blue, green and red light intensities recorded as the latent image.

Conventional chemical film processing involves the application of a number of processing solutions to the film to produce the color dyes. A developer solution is first applied to the exposed film. In a chemical reaction, the developer solution interacts with the silver halide and couplers in the film to produce elemental silver particles and a respective color dye. The development process is halted at the proper exposure by rinsing the developer solution off the film or by applying a stop solution to the film. A fixing solution is then applied to the film to convert the unexposed silver halide to elemental silver. A bleaching solution is then applied to the film to rinse away the elemental silver and leave the color dyes in each layer. Additional processing solutions can be applied to the film as required to preserve the film. The film is then dried and forms a conventional film negative. Through a conventional printing process, the film negative can be used to produce positive prints of the film negative. The film negative can also be electronically scanned to produce a digital image.

Image enhancement has been the subject of a large body of film processing technology. In particular, color correction is a specific area of image enhancement that is particularly problematic. One difficulty with color correction is trying to correct colors that are incorrect without changing the true colors of the pictured scene.

SUMMARY OF THE INVENTION Briefly summarized, the present invention provides a method for using a calibration patch to electronically process exposed film. In one application, the film is exposed to a calibration light source that creates a reference patch. Developer solution is then applied to the film. The film is then electronically scanned without removing the developer solution to produce an electronic image and an electronic reference image. The electronic reference image is then used to calculate processing parameters for the electronic image. In a particular application, the electronic scanning process comprises electronically scanning the silver metal particles in the film. In another application, the electronic scanning process comprises electronically scanning the dyes in the film. In yet another application, the electronic scanning process comprises a combination of the above.

In another application of the present invention, a method for using a calibration patch in an electronic film processing system is provided. In this application, an unexposed area of the film is exposed to a calibration signal to create a reference patch. The film is then developed to form scene images composed of silver metal particles. Signals corresponding to digital representations of scans of the scene image and the reference patch are generated.

Image processing parameters from the signals associated with the reference patch are calculated and used in processing the signals of the digital representations corresponding to the scene image. The image processing parameters then produce initial color values which more accurately reflect the original scene and which are pleasing to the eye.

The invention also provides an improved scanning device used to process a developed scene image on a film according to the present invention. The scanning device comprises a number of optic sensors, an electromagnetic energy source and a computer processor. The optic sensors generate digital representations of the scene image and a reference patch image. The computer processor uses the digital representations to calculate image processing parameters based on the reference patch to produce improved color values for the digital representations of the scene image.

The various features and characteristics of the present invention will become more apparent when taken in conjunction with the drawings and the following description wherein like reference numerals represent like parts.

BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a perspective view of a scanning device in accordance with the present invention; FIG. 2 is an illustration of a duplex film processing system in accordance with the present invention; and FIG. 3 is a plan view of film containing scene images and reference patches according to the present invention.

DETAILED DESCRIPTION The method of the present invention involves exposing undeveloped film to a calibration light source to produce a reference patch. A developer solution is applied to the film to develop the film. The film is electronically scanned without removing the developer solution. In one embodiment, as described herein, the film is electronically scanned using a duplex scanning method whereby the elemental silver in the film is scanned. Duplex scanning generally utilizes two electromagnetic radiation sources, with one positioned in front and one in back of the film. The radiation from these sources is attenuated by the amount of silver metal particles at each spot on the film. The attenuated radiation is detected and converted to digital signals using appropriate optical and electronic systems. Based on the amount of detected radiation, one embodiment of the present invention produces four values, referred to as front, back, front-through and back-through data, for each pixel on the film. These signals are directly related to the silver metal particles that form the scene image.

An improved digital film scanning apparatus 100, which includes a calibration exposure device 107, is shown in Fig. 1. The digital film scanning apparatus 100 may in turn be part of a larger optical and electronic system. The scanning apparatus 100 operates by converting electromagnetic radiation from an image to an electronic (digital) representation of the image. The image being scanned is typically embodied in a physical form, such as on a photographic media, i.e., film, although other media may be used. The electromagnetic radiation used to convert the image into a digitized representation is infrared light; however, visible light, microwave and other suitable types of electromagnetic radiation may also be used to produce the digitized image.

One embodiment of a duplex scanning apparatus 100, as illustrated in Fig. 1, is used to illustrate the present invention and its operation. It will be understood that other suitable embodiments of scanning apparatus 100 may be employed without departing from the scope of the present invention. For example, in another embodiment, the scanning apparatus 100 operates to scan the film through the developer using visible light to detect color dyes in the film. In another embodiment, the scanning apparatus 100 utilizes a combination of visible light detection of the color dyes and duplex imaging, as described in detail below, to measure the metallic silver particles in the film.

The scanning apparatus 100 generally includes a number of optic sensors 102. In the embodiment illustrated, the optic sensors 102 measure the intensity of electromagnetic energy passing through and/or reflected by the film 112. The source of electromagnetic energy is typically a light source 110 which illuminates the film 112 containing the scene image 104 and the reference patch image 108 to be scanned. Radiation from the source 110 may be diffused or directed by additional optics such as filters (not shown) and one or more lenses 106 positioned near the sensors 102 and the film 112 in order to illuminate the images 104 and 108 more uniformly. Furthermore, more than one source may be used. Source 110 is positioned on the side of the film 112 opposite the optic sensors 102. This placement results in sensors 102 detecting radiation emitted from source 110 as it passes through the images 104 and 108 on the film 112. Another radiation source 111 is shown placed on the same side of the film 112 as the sensors 102. When source 111 is activated, sensors 102 detect radiation reflected by the images 104 and 108. This process of using two sources positioned on opposite sides of the film being scanned is described in more detail below in conjunction with Fig. 2.

The optic sensors 102 are generally geometrically positioned in arrays such that the electromagnetic energy striking each optical sensor 102 corresponds to a distinct location 114 in the images 104 and 108. Accordingly, each distinct location 114 in the scene image 104 corresponds to a distinct location, referred to as a picture element, or "pixel" for short, in the scanned, or digitized image 105. The images 104 and 108 on film 112 are usually sequentially moved, or scanned, across the optical sensor array 102. The optical sensors 102 are typically housed in a circuit package 116 which is electrically connected, such as by cable 118, to supporting electronics for computer data storage and processing, shown together as computer 120. Computer 120 can then process the digitized image 105. Alternatively, computer 120 can be replaced with a microprocessor and cable 118 replaced with an electrical circuit connection.

Optical sensors 102 may be manufactured from different materials and by different processes to detect electromagnetic radiation in varying parts and bandwidths of the electromagnetic spectrum. The optical sensor 102 comprises a photodetector (not expressly shown) that produces an electrical signal proportional to the intensity of electromagnetic energy striking the photodetector. Accordingly, the photodetector measures the intensity of electromagnetic radiation attenuated by the images 104 and 108 on film 112.

Turning now to Fig. 2, a conventional color film 220 is depicted. As previously described, one embodiment of the present invention uses duplex film scanning which refers to using a front source 216 and a back source 218 to scan a film 220 with reflected radiation 222 from the front 226 and reflected radiation 224 from the back 228 of the film 220 and by transmitted radiation 230 and 240 that passes through all layers of the film 220. While the sources 216, 218 may emit polychromatic light, i.e., multi-frequency light, the sources 216, 218 are generally monochromatic and preferably infrared. The respective scans, referred to herein as front, back, front-through and back-through, are further described below.

In Fig. 2, separate color levels are viewable within the film 220 during development of the red layer 242, green layer 244 and blue layer 246. Over a clear film base 232 are three layers 242, 244, 246 sensitive separately to red, green and blue light, respectively. These layers are not physically the colors; rather, they are sensitive to these colors. In conventional color film development, the blue sensitive layer 246 would eventually develop a yellow dye, the green sensitive layer 244 a magenta dye, and the red sensitive layer 242 a cyan dye.

During development, layers 242, 244, and 246 are opalescent. Dark silver grains 234 developing in the top layer 246, the blue sensitive layer, are visible from the front 226 of the film, and slightly visible from the back 228 because of the bulk of the opalescent emulsion.

Similarly, grains 236 in the bottom layer 242, the red sensitive layer, are visible from the back 228 by reflected radiation 224, but are much less visible from the front 226. Grains 238 in the middle layer 244, the green sensitive layer, are only slightly visible to reflected radiation 222, 224 from the front 226 or the back 228. However, they are visible along with those in the other layers by transmitted radiation 230 and 240. By sensing radiation reflected from the front 226 and the back 228 as well as radiation transmitted through the film 220 from both the front 226 and back 228 of the film, each pixel for the film 220 yields four measured values, one from each scan, that may be mathematically processed in a variety of ways to produce the initial three colors, red, green and blue, closest to the original scene.

The front signal records the radiation 222 reflected from the illumination source 216 in front of the film 220. The set of front signals for an image is called the front channel. The front channel principally, but not entirely, records the attenuation in the radiation from the source 216 due to the silver metal particles 234 in the top-most layer 246, which is the blue recording layer. There is also some attenuation of the front channel due to silver metal particles 236, 238 in the red and green layers 242, 244.

The back signal records the radiation 224 reflected from the illumination source 218 in back of the film 220. The set of back signals for an image is called the back channel. The back channel principally, but not entirely, records the attenuation in the radiation from the source 218 due to the silver metal particles 236 in the bottom-most layer 242, which is the red recording layer. Additionally, there is some attenuation of the back channel due to silver metal particles 234, 238 in the blue and green layers 246, 244.

The front-through signal records the radiation 230 that is transmitted through the film 220 from the illumination source 218 in back of the film 220. The set of front-through signals for an image is called the front-through channel. Likewise, the back-through signal records the radiation 240 that is transmitted through the film 220 from the source 216 in front of the film 220. The set of back-through signals for an image is called the back-through channel. Both through channels record essentially the same image information since they both record the attenuation of the radiation 230, 240 due to the silver metal particles 234, 236, 238 in all three red, green, and blue recording layers 242, 244, 246 of the film 220.

Several image processing steps are required to convert the illumination source radiation information for each channel to the red, green, and blue values similar to those produced by conventional scanners for each spot on the film 220. These steps are required because the silver metal particles 234, 236, 238 that form during the development process are not spectrally unique in each of the film layers 242, 244, 246. These image processing steps are not performed when conventional scanners are used because the dyes which are formed with conventional chemical color processing of the film make each film layer spectrally unique. However, just as with conventional scanners, once initial red, green and blue values are derived for each image, further processing of the red, green, and blue values is usually done to produce images that more accurately reproduce the original scene and that are pleasing to the human eye.
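To make the idea of converting the four channels into initial color values concrete, the sketch below shows a hypothetical per-pixel linear unmixing in Python with NumPy. The 3 x 4 matrix is a placeholder assumption only; in the system described here the actual image processing parameters are calculated from the reference patches, as discussed below.

    import numpy as np

    # Hypothetical unmixing of the four duplex channels (front, back,
    # front-through, back-through) into initial red, green and blue values.
    # The coefficients are illustrative placeholders, not values taught by
    # this disclosure; real parameters come from the reference patches.
    UNMIX = np.array([
        [0.1, 0.7, 0.6, 0.0],   # red: dominated by the back and through channels
        [0.2, 0.2, 0.6, 0.6],   # green: dominated by the two through channels
        [0.7, 0.1, 0.0, 0.6],   # blue: dominated by the front and through channels
    ])

    def channels_to_rgb(front, back, front_through, back_through):
        """Combine four H x W channel images into an H x W x 3 color image."""
        stacked = np.stack([front, back, front_through, back_through], axis=-1)
        return stacked @ UNMIX.T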

As described in connection with Fig. 1, the scanning apparatus 100 may include a calibration exposure device 107 that produces a calibration, or reference patch 350 in an unexposed area 352 of the photographic film 320, as illustrated in Fig. 3. This step is typically performed by the calibration exposure device 107; however, the reference patch 350 may be formed before or during scene exposure. After scene exposure and reference patch creation, the film 320 is chemically processed, and the attenuation of radiation by the silver metal particles in the area 352 where the reference patches 350 were created is measured.

The measured reference patch attenuation is used to calculate image processing parameter values that are used to digitally process the front, back and through channels collected for the scene image 354 to produce appropriate initial red, green and blue values.

Thus, one implementation of the present invention involves the following process.

After a roll of film has been exposed to light to form scene images 354, reference patches 350 are created on the photographic film 320 in an unexposed area of the film 320. As mentioned previously, this step is preferably done at the time of chemical development and film scanning. The reference patches 350 may be either a gray scale or colored, such as red, green, or blue. Likewise, the reference patches 350 may consist of both gray scale and colored patches. The reference patch 350 exposures and spectral properties are accurately known by either design or by direct measurement. Then, the front, back, front-through, and back-through signals from each of the reference patches 350 and from each of the scene images 354 are measured. As discussed in greater detail below, the image processing parameters which are used to produce the initial red, green and blue values for the scene images 354 from the signals corresponding to those scene images 354 are derived from the front, back, front-through, and back-through signals corresponding to the reference patches 350. Finally, the front, back, front-through, and back-through channel data for the scene images 354 are processed to produce appropriate initial red, green and blue values using the image processing parameters that were calculated from the reference patches 350.

Next, some possible image processing functions which may be used according to the present invention to produce initial color values from the image processing parameters are discussed. One function that uses the reference patch data taught by the present invention is known as the Regress-to-Through-Channel ("RTC") function. Basically, for each pixel corresponding to each of the four channels, the pixel's color values are adjusted such that gray pixels have exactly the same value in each channel. One implementation for performing Regress-to-Through-Channel image processing is as follows. First, the front, back, and through (either the front-through or the back-through channel may be used) signals from the reference gray patches are read. Next, two plots are made, one being the through channel mean values for each gray reference patch versus the front channel mean values. The second plot is the through channel mean values for each gray reference patch versus the back channel mean values. Then, an equation is fit to each plot. Any equation type or algorithm could be used, although a cubic equation is preferred. For each pixel in the front channel for the entire scene image, a new front value is calculated based on the equation that was fit to the through-versus-front plot. Likewise, for each pixel in the back channel for the entire scene image, a new back value is calculated based on the equation fit to the through-versus-back plot. Manipulating the pixel data in this way has the effect of producing better grays than prior art systems and separating non-gray pixels from gray pixels.
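A minimal Python sketch of the Regress-to-Through-Channel procedure follows, assuming NumPy, per-patch channel means already computed, and the preferred cubic fit; the function name and array shapes are illustrative assumptions rather than an exact implementation.

    import numpy as np

    def rtc_correct(front, back, patch_front_means, patch_back_means,
                    patch_through_means):
        """Regress-to-Through-Channel sketch (illustrative only).

        front, back    : H x W scene image data for the front and back channels.
        patch_*_means  : 1-D arrays of mean channel values, one per gray
                         reference patch.
        Returns new front and back channel data regressed onto the through channel.
        """
        # Fit a cubic equation to through-versus-front and through-versus-back.
        front_fit = np.polynomial.Polynomial.fit(
            patch_front_means, patch_through_means, deg=3)
        back_fit = np.polynomial.Polynomial.fit(
            patch_back_means, patch_through_means, deg=3)

        # Apply the fitted equations to every pixel so that gray pixels take on
        # (nearly) the same value in each channel.
        return front_fit(front), back_fit(back)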

A second image processing function that may make use of the reference patch data is a stretch function. In general, values in the front, back, and through (either the front-through or the back-through channel may be used) channels are adjusted such that they have the same range as an aim signal. The aim signal is defined as that set of signals that the calibration patches are designed to give. That is, there is an aim signal for each patch that directly corresponds to the exposure used to generate that patch. Specifically, the process begins by reading the through channel signals from reference gray patches placed on the film. Then, a plot is made of the aim values of the gray patches versus the through channel means for each gray reference patch. Next, a best-fit equation is calculated for the plot. While a linear equation is preferred, any best-fit equation may be used. The best-fit equation is then applied to every pixel in the front, back, and through channel data for the entire scene image to give new pixel values in each of the channels. Use of the stretch function produces good blacks and whites in the scene image reproductions.
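The stretch function can likewise be sketched in Python under the same assumptions (NumPy, patch means already computed); the linear fit follows the stated preference, and the names are hypothetical.

    import numpy as np

    def stretch_correct(front, back, through, patch_through_means, patch_aims):
        """Stretch-function sketch (illustrative only).

        patch_through_means : through channel mean for each gray reference patch.
        patch_aims          : the aim value each patch was designed to produce.
        The same best-fit line is applied to the front, back and through
        channel data so that their values span the range of the aim signal.
        """
        # Linear best fit of aim value versus through channel patch mean.
        slope, intercept = np.polyfit(patch_through_means, patch_aims, deg=1)

        def stretch(channel):
            return slope * channel + intercept

        return stretch(front), stretch(back), stretch(through)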

Yet another image processing function involves calculating a color correction mapping based on gray and color calibration patches, and applying the color correction mapping to the entire scene image. First, all pixel values from the calibration patches and the entire scene image are converted to a suitable color space. Second, a color correction mapping is calculated between the mean values obtained for each calibration patch and the corresponding aim values for each patch. The calculation method may assume that the color correction mapping is linear or non-linear, and it may be implemented as a regression technique or by other methods. One example is to calculate a color correction mapping by applying a linear regression resulting in a 3 x 3 matrix. Third, the color correction mapping is applied to every pixel in the entire scene image. The result is that the scene image is a more accurate representation of the color in the original scene than without the color correction mapping applied.
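As one hedged illustration of such a color correction mapping, the Python sketch below computes a 3 x 3 matrix by least-squares regression between patch means and aim values already expressed in a common color space, then applies it to every pixel; other fitting methods or non-linear mappings could equally be used, as noted above.

    import numpy as np

    def fit_color_map(patch_means, patch_aims):
        """Least-squares 3 x 3 color correction matrix (illustrative only).

        patch_means : N x 3 array of mean color values, one row per calibration patch.
        patch_aims  : N x 3 array of the corresponding aim color values.
        """
        # Solve patch_means @ M ~= patch_aims in the least-squares sense.
        matrix, *_ = np.linalg.lstsq(patch_means, patch_aims, rcond=None)
        return matrix  # 3 x 3

    def apply_color_map(image, matrix):
        """Apply the mapping to every pixel of an H x W x 3 scene image."""
        return image @ matrix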

It is intended that the description of the present invention provided above is but one embodiment for implementing the invention. Variations in the description likely to be conceived of by those skilled in the art still fall within the breadth and scope of the disclosure of the present invention. For example, although the technique of the present invention has been described in the context of its application to standard color film, another option is to use a film built specifically for this purpose. The film would require no color couplers as a color image never develops. While specific alternatives to steps of the invention have been described herein, additional alternatives not specifically disclosed but known in the art are intended to fall within the scope of the invention. Thus, it is understood that other applications of the present invention will be apparent to those skilled in the art upon the reading of the described embodiment and a consideration of the appended claims and drawings.