Title:
METHOD FOR PERFORMING A COLOR SPACE TRANSFORMATION
Document Type and Number:
WIPO Patent Application WO/1996/000411
Kind Code:
A1
Abstract:
A method for transforming a first image defined by a first multi-dimensional color space (RGB) into a second image defined by a second multi-dimensional color space (CMYK). The method computes the transformation using information derived from a previous transformation of the second image into the first image, and then minimizes the error produced while transforming the second image back into the first image. As such, an image editing system can display on a video monitor (108) an image that is defined in one multi-dimensional color space (RGB), print the image using a printer (110) that prints images defined in a second multi-dimensional color space (CMYK), and edit the image using whichever multi-dimensional color space (either RGB or CMYK) facilitates rapid and accurate image editing.

Inventors:
DELEAN BRUNO
Application Number:
PCT/US1995/007806
Publication Date:
January 04, 1996
Filing Date:
June 20, 1995
Assignee:
LIVE PICTURE INC (US)
International Classes:
G03F3/08; H04N1/60; (IPC1-7): G03F3/08
Foreign References:
US 4328515 A (1982-05-04)
US 4275413 A (1981-06-23)
Other References:
See also references of EP 0766844A4
Claims:
What is Claimed Is:
1. A color space mapping method for mapping color data from a first multidimensional color space to a second multidimensional color space comprising the steps of:
   creating a table of values based upon transforming, using a transformation function, said second multidimensional color space into said first multidimensional color space;
   inputting a plurality of pixel values;
   applying said transformation function to at least one of said pixel values in said plurality of pixel values to produce at least one transformed pixel value;
   updating said table values based upon said transformed pixel value; and
   performing, based upon said updated table values, said transformation of said color data from said first multidimensional color space to said second multidimensional color space.
2. The method of claim 1 wherein said applying step applies said transformation function to at least twenty-five pixel values.
3. The method of claim 1 wherein said table value updating step further comprises the steps of:
   selecting a transformed pixel value;
   comparing said selected transformed pixel value to a selected value in said table to determine a difference value between said selected transformed value and said selected table value;
   repeating said selecting and comparing steps until each of the plurality of pixel values has been compared to the selected table values and has been used to generate a difference value; and
   updating said table values to minimize said difference values.
4. The method of claim 3 wherein the selected table value is a table value that is nearest said transformed pixel value.
5. The method of claim 1 further comprising the step of smoothing said updated table values.
6. The method of claim 5 wherein said smoothing step further comprises the step of low pass filtering the updated table values.
7. The method of claim 1 wherein said transformation performance step further comprises interpolating a plurality of table values to produce an interpolation function.
8. The method of claim 7 wherein said interpolating step further comprises the step of using trilinear interpolation upon a plurality of the updated table values to produce said interpolation function.
Description:
METHOD FOR PERFORMING A COLOR SPACE TRANSFORMATION

Field of the Invention

The invention relates to image processing systems. More particularly, the invention relates to a method for accurately transforming color information between two color spaces having differing dimensions, e.g., between a red-green-blue (RGB) color space and a cyan-magenta-yellow-black (CMYK) color space and vice versa.

Description of the Background Art

In printing, image retouching and image processing, it is often necessary to convert colors from one representation (color space) into another. Many computer video monitors and scanners, for example, use red-green-blue (RGB) representations for colors, while printers typically represent colors in terms of the amounts of a variety of differently colored inks (for example, cyan-magenta-yellow-black (CMYK)). As such, in a typical computer system, the RGB color space used to produce an image upon a computer screen must be converted into a CMYK color space to facilitate printing of the image depicted on the screen. However, for any particular two color spaces, it is in many instances much easier to convert in one direction than the other, e.g., from CMYK to RGB. For example, converting from CMYK to RGB is relatively easy because the CMYK space has more dimensions than the RGB space.

Specifically, an important task in photocompositing is to take a set of images in CMYK format, modify them, and output the result in CMYK. Many of the intermediate operations (image modifications) are more easily or effectively accomplished in RGB space, so it is often necessary to convert from CMYK to RGB and then back to CMYK. One problem with such a transformation is that a CMYK color space is a four-dimensional space and an RGB color space is a three-dimensional space, so the transformation from CMYK to RGB, though relatively simple, discards information; in particular, information describing the black generation of the CMYK image is lost. Black generation describes an amount of black ink substituted for equal parts of cyan, magenta and yellow for printing purposes. Consequently, it is very important that an image processing system be able to convert from CMYK to RGB and back to CMYK and produce a black component of the CMYK image that closely approximates the black component in the original CMYK image.
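
To make the notion of black generation concrete, the following Python sketch shows a simple gray-component-replacement rule; the function name and its 100% replacement rate are illustrative assumptions, not the transformations disclosed by the patent.

```python
def gcr_black_generation(c, m, y, rate=1.0):
    """Illustrative gray-component replacement: substitute black ink (K) for the
    portion of C, M and Y that they share, at the given rate.
    All values are ink fractions in [0, 1]."""
    k = min(c, m, y) * rate          # amount of black substituted
    return c - k, m - k, y - k, k    # reduced C, M, Y plus the new K


# Example: the common part of the three inks is replaced by black.
print(gcr_black_generation(0.75, 0.5, 0.625))   # -> (0.25, 0.0, 0.125, 0.5)
```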

Thus, a difficult and widely needed color transformation is the transformation from an RGB color space to a CMYK color space that retains, as closely as possible, the black generation of the original CMYK image. U.S. Patent 4,500,919 discloses a particular method for converting from RGB to CMYK called the Ink Correction Model (ICM). The patent notes that the ICM ". . . could be implemented in one huge lookup table, but this would be uneconomic and would also give no clue as to how to find the data to be stored in the LUT short of an impossibly large printing test." [11:21] Since the filing of the '919 patent, the cost of memory has fallen enough that using "one huge lookup table" is no longer uneconomic. Furthermore, the '919 patent states that, in a table-based transformation, a large printing test must be conducted to calibrate the color space transformation. However, such printing tests are time consuming and complicate the transformation process.

Therefore, a need exists in the art for a method that rapidly and accurately transforms images from a first multi-dimensional color space, e.g., RGB, into a second multi-dimensional color space, e.g., CMYK, without using a printing test and which preserves as closely as possible the black generation of an original CMYK image.

Summary of the Invention

The present invention overcomes the disadvantages heretofore associated with the prior art. Specifically, the present invention converts pixel values from one color space to another, e.g., RGB to CMYK, using a table of interpolated values. The values in the table are filled using data derived from sample images which have been previously converted in the other direction, e.g., CMYK to RGB. The invention infers from those sample images enough about the forward transformation to build an inverse transformation in the table.

In order to convert from RGB to CMYK while retaining as closely as possible the black generation of the original CMYK files, the present invention examines the CMYK files and implicitly infers a black generation model. It does this by creating a table in RGB space of the CMYK values found in the files. At the beginning, each sample of the RGB table is initialized with a value determined from a default transformation of RGB into CMYK using any default black generation strategy. The choice of this transformation is not very important because it is highly modified in the following steps performed by the invention. Next, each pixel of each CMYK image used for creation of the table is converted into RGB, and then the appropriate entries in the RGB-space table are modified so that the interpolation of the table entries at the RGB values yields a value as close as possible to the CMYK pixel color. Once the table has been constructed, it may be low-pass filtered (smoothed), so that the values are highly continuous and no visible artifacts can be identified in the conversion. If the CMYK values of the input images are converted to RGB and the resulting RGB values are converted back to CMYK using the table described above, the original CMYK values with their original black generation are reconstructed with high accuracy as long as all the input images used the same black-generation strategy. If several input images are used that were created with different black-generation strategies (different UCR, GCR, and the like), the table is constructed using an average of the different strategies.
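
Why a fixed default separation is not enough can be seen in a small round-trip experiment. The sketch below is illustrative only: it assumes a simple analytic CMYK-to-RGB formula and a 100% GCR default, neither of which is prescribed by the patent.

```python
def naive_cmyk_to_rgb(c, m, y, k):
    """A simple analytic CMYK-to-RGB formula, used here only for illustration."""
    return (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)

def fixed_default_rgb_to_cmyk(r, g, b):
    """A fixed default separation (100% GCR), standing in for any default strategy."""
    c, m, y = 1 - r, 1 - g, 1 - b
    k = min(c, m, y)
    return c - k, m - k, y - k, k

# A CMYK color produced with *no* black generation...
original = (0.25, 0.25, 0.25, 0.0)
# ...round-tripped through RGB with a fixed default strategy comes back with a
# completely different black component, which is what the table-based method avoids:
print(fixed_default_rgb_to_cmyk(*naive_cmyk_to_rgb(*original)))  # -> (0.0, 0.0, 0.0, 0.25)
```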

A key advantage of the current invention is that the user need not know anything about the black-generation strategy used in the CMYK file; it is inferred automatically by the inventive method. In situations where people are collaborating over long distances and it is impractical to run a large series of printing tests to optimize the color space transformation process, the invention has great advantages over the prior art.

Brief Description of the Drawings

The teachings of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings, in which:

FIG. 1 depicts a computer system capable of executing an image processing program as well as a color space transformation program in accordance with the present invention;

FIG. 2 depicts a flow chart of a color space transformation table generation routine as executed upon the computer system shown in FIG. 1; and

FIG. 3 depicts a flow chart of a color space transformation routine that uses the table generated using the routine depicted in FIG. 2.

To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures.

Detailed Description

FIG. 1 is a block diagram of a conventional computer system 100 capable of executing a color space transformation computer program 118. This program contains a routine that implements the method of the present invention to transform an image based in a first color space into an image based in a second color space. As such, the images can then be respectively printed and displayed on a printer 110 and a display monitor 108 even though the printer and display monitor utilize differing types of color spaces to produce an image.

Specifically, the computer system 100 contains an image input device 102, a computer 104, a user input device 106, a display monitor 108 and a printer 110. In operation, an original image is created by an image input device 102 such as a conventional image scanner. The scanned image (also referred to herein as a source image or original image) is formatted by the scanner into an image file 120 using a particular color space (e.g., RGB or CMYK). The image is then stored in the memory 116 within computer 104. Alternatively, the original image could be generated by a drawing or drafting program (shown generally as image processing program 122) executed on the computer 104 or another computer and stored, as image file 120, in memory 116. The computer 104 contains at least one central processing unit (CPU) 112, memory 116, and various well-known CPU support circuits 114. An illustrative computer is a Macintosh Quadra model 900 manufactured by Apple Computer, Inc. of Cupertino, California. The transformation program 118 and the image editing program 122 as well as one or more images are stored in the memory 116.

In operation, a user typically manipulates the user command input device 106 such as a mouse, trackball, light pen, and/or keyboard, to control, via the computer 104, the image input device 102, e.g., an image scanner. The image scanner, in a conventional manner, scans a hardcopy of an image and stores a digitized representation of the hardcopy in the memory 116 as an image file 120. Subsequently, the user can instruct the CPU 112 to execute the image processing program 122 and also to recall an image file (original image) from the memory. The image processing program 122 interacts, as necessary, with the transformation program 118 to facilitate color space transformation and display of the image on the monitor 108 and the printer 110.

Broadly speaking, the transformation program 118 contains an executable routine that transforms a color space having n dimensions to a color space having m dimensions. In particular, consider a transformation T: R^n -> R^m from an n-dimensional color space to an m-dimensional color space, and suppose that n > m. For example, R^n may be a CMYK color space and R^m may be an RGB color space. Since R^n has more dimensions than R^m, the mapping transformation T will generally be many-to-one. In other words, there are typically many different colors x in R^n such that T(x) = y, where y is a given color. In the general case, there will be an (n-m)-dimensional set of colors x such that T(x) = y for any given y.
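
The many-to-one character of T is easy to see with the same kind of simple analytic CMYK-to-RGB formula used in the earlier sketch (again an illustrative assumption, not the patent's transformation):

```python
def naive_cmyk_to_rgb(c, m, y, k):
    """A simple analytic CMYK-to-RGB formula, repeated here for self-containment."""
    return (1 - c) * (1 - k), (1 - m) * (1 - k), (1 - y) * (1 - k)

# Two very different inkings -- one all black generation, one none -- collapse onto
# the same RGB color, so T cannot be inverted without additional information:
print(naive_cmyk_to_rgb(0.0, 0.0, 0.0, 0.5))   # -> (0.5, 0.5, 0.5)
print(naive_cmyk_to_rgb(0.5, 0.5, 0.5, 0.0))   # -> (0.5, 0.5, 0.5)
```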

Inverting T is problematic. Since T is many-to-one, there is no full inverse T^-1 such that T^-1(T(x)) = x for all x.

Nonetheless, to perform accurate inverse color transformations in practical situations, it is important to be able to recover x as closely as possible from T(x).

The present invention makes use of the observation that, even though R^n is a higher-dimensional space than R^m, for many purposes not all of R^n is used to produce the colors in an image. In fact, usually no more than an m-dimensional subspace of R^n is used to produce a pixel color. Hence, for many purposes, it suffices to invert T(x) on an m-dimensional subspace.

The invention, which is embodied in a software routine 200 shown in FIG. 2, operates as follows:

The routine begins by creating an m-dimensional grid which samples R^m, the m-dimensional color space. For example, if R^m is an RGB color space then, at step 204, the routine creates, in memory, a table G. The table contains a grid that illustratively consists of 32 x 32 x 32 elements P. This grid is referred to as an interpolation table G. In each element P of table G, the routine places an n-dimensional value H(i_1, i_2, ..., i_m) = (h_1, h_2, ..., h_n). These values of H are initialized using a default mapping from color space R^m to color space R^n. For example, if R^m is RGB and R^n is CMYK, the default transformation can be given by a standard transformation with a particular UCR or GCR black generation strategy.
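
A minimal sketch of step 204 in Python, assuming (purely for illustration) a full-GCR separation as the default RGB-to-CMYK transformation; as the text notes, the choice of default matters little because the entries are largely overwritten in the later steps.

```python
import numpy as np

SIZE = 32   # the 32 x 32 x 32 interpolation table G described in the text

def default_rgb_to_cmyk(r, g, b):
    """Assumed default separation (full gray-component replacement)."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    k = min(c, m, y)
    return c - k, m - k, y - k, k

def create_table():
    """Step 204: place an n-dimensional value H (here CMYK) in each element P of G,
    initialized from the default mapping of that element's RGB sample position."""
    axis = np.linspace(0.0, 1.0, SIZE)          # RGB sample positions along each axis
    G = np.empty((SIZE, SIZE, SIZE, 4))
    for i, r in enumerate(axis):
        for j, g in enumerate(axis):
            for k, b in enumerate(axis):
                G[i, j, k] = default_rgb_to_cmyk(r, g, b)
    return G
```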

At step 206, the routine defines an interpolation function S(x_1, x_2, ..., x_m) = (s_1, s_2, ..., s_n) based on the present elements H in the grid. For example, S can be the tri-linear interpolation of the entries H in G. With the interpolation defined, S is then a function from R^m to R^n. The goal is to set the elements H in G such that the function S accurately inverts the given transformation function T(x).
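
A sketch of one possible interpolation function S for step 206, tri-linearly interpolating the table entries H; the array layout follows the initialization sketch above and is otherwise an assumption.

```python
import numpy as np

def interpolate_S(G, r, g, b):
    """Tri-linearly interpolate the n-dimensional entries of G at an RGB point in [0, 1]^3."""
    size = G.shape[0]
    pos = np.clip(np.array([r, g, b]) * (size - 1), 0, size - 1)
    i0 = np.floor(pos).astype(int)            # lower corner of the enclosing cell
    i1 = np.minimum(i0 + 1, size - 1)         # upper corner, clamped at the grid edge
    f = pos - i0                              # fractional position inside the cell
    out = np.zeros(G.shape[-1])
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                w = ((f[0] if di else 1 - f[0]) *
                     (f[1] if dj else 1 - f[1]) *
                     (f[2] if dk else 1 - f[2]))
                corner = (i1[0] if di else i0[0],
                          i1[1] if dj else i0[1],
                          i1[2] if dk else i0[2])
                out += w * G[corner]          # weighted sum of the eight corner values H
    return out
```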

In order to establish the values H, the routine looks for colors in R^n that transform, according to the forward transformation T, to locations in R^m near the samples in the interpolation table G. In order to achieve this goal, the routine stores, with each element P, a distance D(P) to the nearest sample found thus far. If the method subsequently finds a closer sample, the routine updates G(P) and D(P). D(P) is initialized to the largest representable value at the beginning of the method.

Specifically, the source for the colors C is a source image containing a collection of pixels x in R^n that ideally spans the range of colors of interest. For each x in C, the routine, at step 208, computes y = T(x). The routine, at step 210, then finds the point P in G which is closest to y. If, at step 212, the distance Q between y and P is less than D(P), the routine, at step 214, sets G(P) to x and D(P) to Q. Through step 216, the routine repeats this operation for all the pixels in the source image.
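
Steps 208 through 216 might look as follows; T stands for the known forward (CMYK-to-RGB) conversion, for which the naive formula from the earlier sketches can be substituted when experimenting, and the table layout again follows the initialization sketch.

```python
import numpy as np

def update_table_from_samples(G, samples, T):
    """For each source pixel x (an n-dimensional color), compute y = T(x), find the
    grid element P nearest y, and keep x there if it is the closest sample so far."""
    size = G.shape[0]
    D = np.full((size, size, size), np.inf)      # D(P): distance to nearest sample so far
    axis = np.linspace(0.0, 1.0, size)           # RGB coordinates of the grid elements
    for x in samples:
        y = np.asarray(T(*x))                                          # step 208
        P = tuple(np.clip(np.rint(y * (size - 1)), 0, size - 1).astype(int))   # step 210
        Q = np.linalg.norm(y - axis[list(P)])    # distance between y and grid point P
        if Q < D[P]:                             # steps 212/214: closer sample found
            G[P] = x
            D[P] = Q
    return G, D
```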

If, at step 212, the distance D(P) of a point P in the grid G is not zero, then T(G(P)) is not exactly equal to P. In order to reduce or eliminate this error, the routine, at step 218, performs a second phase in which, for each point P in the grid G, the routine minimizes the squared error (T(G(P)) - P)^2 by modifying the value H. This is a continuous minimization of the H values over m variables and can be done using simple gradient descent or a coarse-to-fine search technique. These techniques and other more sophisticated continuous minimization methods are well described in the literature (cf. Practical Optimization, Gill et al., Academic Press, 1984).
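
The second phase could be sketched as a per-entry numeric gradient descent on the squared error; the step count, learning rate and finite-difference scheme below are illustrative choices, and any of the continuous minimization methods cited above would serve equally well.

```python
import numpy as np

def refine_entry(H, P_rgb, T, steps=100, lr=0.1, eps=1e-4):
    """Step 218 for one grid element: adjust the n-dimensional value H so that T(H)
    moves toward the RGB position P_rgb of its grid element, i.e. minimize
    ||T(H) - P_rgb||^2. A caller would apply this to each point P of G as needed."""
    H = np.array(H, dtype=float)
    P_rgb = np.asarray(P_rgb, dtype=float)
    for _ in range(steps):
        err = np.asarray(T(*H)) - P_rgb
        grad = np.zeros_like(H)
        for i in range(H.size):
            # Forward-difference estimate of d/dH_i of the squared error.
            Hp = H.copy()
            Hp[i] += eps
            errp = np.asarray(T(*Hp)) - P_rgb
            grad[i] = (np.dot(errp, errp) - np.dot(err, err)) / eps
        H = np.clip(H - lr * grad, 0.0, 1.0)     # keep ink fractions in [0, 1]
    return H
```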

It is possible that the table resulting from the second phase, i.e., an updated table G containing modified H values, can be insufficiently smooth if the colors in the source image do not cover the entire color space. Consequently, there will be a transition region between the portion of G(P) which still contains the original default mapping and the portion that contains the updated mapping based on the source image. To ensure that there are no objectionable artifacts produced by this boundary, the routine, at step 220, smoothes the values H in G in a third phase. Any low-pass filter may be used for this smoothing operation. For mappings from RGB to CMYK using a 32 x 32 x 32 table, averaging each CMYK sample H with its 9 neighbors produces acceptable results. The routine ends at step 222.
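
Since any low-pass filter will do, one convenient implementation of step 220 is a small box average over the RGB axes of the table; the sketch below uses scipy's uniform filter with a 3 x 3 x 3 neighborhood as one such choice, rather than the exact neighborhood described above.

```python
from scipy.ndimage import uniform_filter

def smooth_table(G):
    """Step 220: low-pass filter the table so the transition between default and
    sample-derived entries produces no visible artifacts. A 3 x 3 x 3 box average
    over the RGB axes (leaving the CMYK axis untouched) is used here."""
    return uniform_filter(G, size=(3, 3, 3, 1), mode='nearest')
```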

Using the grid G and the interpolation function S, any input pixel value from an m-dimensional color space is accurately transformed into a pixel value in an n-dimensional color space. FIG. 3 depicts a flow chart of a transformation routine 300 that performs such a dimensional transformation upon input pixel values.

The routine is entered at step 302 and proceeds to step 304. At step 304, a pixel value from an m-dimensional color space, e.g., RGB color space, is input. At step 306, the routine determines the H values in the table G that are nearest the input pixel value. The nearest H values are, at step 308, interpolated using the interpolation function S. At step 310, the routine outputs a pixel value in n-dimensional color space, e.g., CMYK color space. Lastly, the routine ends at step 312.
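
Assuming the table G and the interpolation function sketched earlier, routine 300 reduces to an interpolated table lookup; the pixel value below is purely illustrative.

```python
# Hypothetical usage, relying on create_table()/update/smooth and interpolate_S
# from the sketches above:
rgb_pixel = (0.25, 0.60, 0.40)               # step 304: input m-dimensional pixel value
cmyk_pixel = interpolate_S(G, *rgb_pixel)    # steps 306-308: nearest H values, tri-linearly blended
print(cmyk_pixel)                            # step 310: output n-dimensional (CMYK) value
```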

In operation, the two routines (FIGS. 2 and 3) have produced quite satisfactory results when transforming colors from RGB to CMYK using the conventional tri-linear interpolant. The resulting RGB-to-CMYK transformation reproduces the black generation in the original CMYK file very accurately. In experiments, the observed errors are typically on the order of one or two percent of the original CMYK values.

Using the present invention within an image editing system, an operator can convert a series of images from CMYK to RGB, retouch or edit the images in RGB, and then transform the result back into CMYK, knowing that the black generation of the resulting CMYK image will very closely match the original black generation.

Although one embodiment incorporating the teachings of the present invention has been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings.