

Title:
IMAGING APPARATUS AND PROGRAM
Document Type and Number:
WIPO Patent Application WO/2012/140939
Kind Code:
A1
Abstract:
An imaging apparatus (100) includes an imaging unit (80) that images a test object, an analysis unit (202) that outputs a feature amount of an image which is captured by the imaging unit, a storage unit (206) that stores an evaluation function which has the image feature amount as a variable, for evaluation of the image, a selection unit (203) that selects one image from two or more images including an image specified based on a value of the evaluation function, and a changing unit (204) that changes the evaluation function based on the one image in a case where the one image selected by the selection unit is different from the specified image.

Inventors:
FUKUTAKE NAOKI (JP)
NAKAJIMA SHINICHI (JP)
Application Number:
PCT/JP2012/052567
Publication Date:
October 18, 2012
Filing Date:
January 30, 2012
Assignee:
NIKON CORP (JP)
FUKUTAKE NAOKI (JP)
NAKAJIMA SHINICHI (JP)
International Classes:
G02B21/36; G02B21/14
Foreign References:
US20050046930A1 (2005-03-03)
JPH063599A (1994-01-14)
US20010021914A1 (2001-09-13)
US20040229133A1 (2004-11-18)
JP2011087821A (2011-05-06)
JP2009237109A (2009-10-15)
Attorney, Agent or Firm:
SHIGA Masatake et al. (Marunouchi Chiyoda-ku, Tokyo, JP)
Claims:
CLAIMS

1. An imaging apparatus comprising:

an imaging unit that images a test object;

an analysis unit that outputs a feature amount of an image which is captured by the imaging unit;

a storage unit that stores an evaluation function which has the image feature amount as a variable, for evaluation of the image;

a selection unit that selects one image from two or more images comprising an image specified based on a value of the evaluation function; and

a changing unit that changes the evaluation function based on the one image in a case where the one image selected by the selection unit is different from the specified image.

2. The imaging apparatus according to claim 1, wherein the evaluation function has a weighting coefficient multiplied by the variable, and

wherein the changing unit changes the weighting coefficient.

3. The imaging apparatus according to claim 1 or 2, wherein the imaging unit has an illumination optical system which illuminates the test object, and

wherein the two or more images are images which are captured by changing intensity distributions of illumination light which illuminates the test object.

4. The imaging apparatus according to claim 3, further comprising:

a spatial modulation unit that varies intensity distributions of the illumination light; and

an optimization calculating unit that performs an optimized calculation of the intensity distribution of the illumination light which is appropriate for observation of the test object using the changed evaluation function,

wherein the imaging unit performs imaging each time the spatial modulation unit varies the intensity distribution of the illumination light, the analysis unit outputs an image feature amount for each captured image, and the optimization calculating unit performs calculation using the changed evaluation function.

5. The imaging apparatus according to claim 1 or 2, wherein the imaging unit includes an illumination optical system that illuminates the test object, and

wherein the two or more images are images which are captured by changing wavelengths of illumination light which illuminates the test object.

6. The imaging apparatus according to claim 5, further comprising:

a spatial modulation unit that varies the intensity distribution of the illumination light; and

an optimization calculating unit that performs an optimized calculation of a wavelength of the illumination light which is appropriate for observation of the test object using the changed evaluation function,

wherein the imaging unit performs imaging each time the spatial modulation unit varies the wavelength, the analysis unit outputs the image feature amount for each captured image, and the optimization calculating unit performs calculation using the changed evaluation function.

7. The imaging apparatus according to any one of claims 1 to 6, wherein the selection unit displays two or more images which are calculated by the optimization calculating unit such that an observer can select one image, and

wherein the changing unit changes the coefficient such that the value of the evaluation function of the selected one image is heightened.

8. The imaging apparatus according to claim 4 or 6, further comprising a determination unit that determines whether the test object is an absorption object or a phase object,

wherein the determination unit determines whether the test object is the absorption object or the phase object before the optimization calculating unit performs optimized calculation.

9. The imaging apparatus according to any one of claims 1 to 8, wherein an image feature amount of the test object comprises a spatial frequency component, a histogram, a contrast, or a maximal gradient, which are amounts of at least a part of the image of the test object.

10. A program for imaging the test object using an imaging apparatus comprising an imaging unit that images the test object and a computer that is connected to the imaging unit, the program enabling the computer to execute:

outputting a feature amount of an image captured by the imaging unit;

storing an evaluation function having the image feature amount as a variable for evaluation of the image;

selecting one image from two or more images including an image specified based on a value of the evaluation function; and

changing the evaluation function based on the one image in a case where the one image selected by the selection unit is different from the specified image.

Description:
DESCRIPTION

IMAGING APPARATUS AND PROGRAM

BACKGROUND

[0001]

The present invention relates to an imaging apparatus and a program which derive the intensity distribution of illumination light appropriate for observation.

Particularly, the present invention relates to an imaging apparatus and a program which can obtain an image according to an observer's preference.

Priority is claimed on Japanese Patent Application No. 2011-087821, filed April 12, 2011, the contents of which are incorporated herein by reference.

[0002]

Generally, in bright-field microscopes, an intensity distribution of illumination light is adjusted by varying a circular aperture. In addition, there are cases where the shape of the aperture is selected and used through determination of an observer. In phase-contrast microscopes, generally, a ring aperture and a phase ring form the intensity distribution of illumination light.

[0003]

Since the intensity distribution of illumination light has a great effect on an observed image of a test object, there has been study into making the observed image of the test object better by improving the circular aperture, the ring aperture, the phase ring, and the like. For example, Japanese Unexamined Patent Application Publication No. 2009-237109 shows a phase-contrast microscope in which a modulation unit is provided so as to surround a ring region where a phase ring is provided in a ring shape, and the modulation unit and regions other than the modulation unit are formed so as to have different transmission axes, and thereby contrast can be continuously variable.

SUMMARY

[0004]

However, in the above-described microscope, the shapes of the aperture can be set only to a limited degree, and thus there is a limitation on adjustment of the intensity distribution of illumination light. In addition, since the shape of the aperture, even in a case where it is selectable, is selected based on the determination or experience of an observer, the shape of the aperture is not necessarily a shape which enables an image of the test object during observation to be observed in the best state. Further, the best state of a test object image differs from person to person depending on the observer's preference. Therefore, it is difficult for the shape of the aperture to be set automatically in the best state according to an observer's preference and for the test object to be observed accordingly. The same difficulty also arises in a camera or the like which images landscapes or the like.

Aspects of the present invention provide an imaging apparatus and a program which derive the intensity distribution of illumination light appropriate to observe a test object according to an observer's preference.

[0005]

An imaging apparatus according to a first aspect includes an imaging unit that images a test object, an analysis unit that outputs a feature amount of an image which is captured by the imaging unit, a storage unit that stores an evaluation function which has the image feature amount as a variable for evaluation of the image, a selection unit that selects one image from two or more images including an image specified based on a value of the evaluation function, and a changing unit that changes the evaluation function based on the one image in a case where the one image selected by the selection unit is different from the specified image.

[0006]

A program according to a second aspect is a program for imaging a test object using an imaging apparatus having an imaging unit that images the test object and a computer that is connected to the imaging unit. The program enables the computer to execute outputting a feature amount of an image captured by the imaging unit, storing an evaluation function having the image feature amount as a variable for evaluation of the image, selecting one image from two or more images including an image specified based on a value of the evaluation function, and changing the evaluation function based on the one image in a case where the one image selected by the selection unit is different from the specified image.

[0007]

According to aspects of the present invention, there are provided an imaging apparatus and a program which make a test object image suitable for an observer's preference.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008]

FIG. 1 is a schematic configuration diagram of a microscope system in the first embodiment.

FIG. 2 is a schematic configuration diagram of a computer in the first embodiment.

FIG. 3 is a diagram illustrating an evaluation function Q(f) stored in a storage unit according to a first example and a concept thereof in the first embodiment.

FIG. 4 shows an example of the flowchart illustrating an operation of the microscope system in the first embodiment.

FIG. 5 shows an example where the selection unit displays a plurality of test object images on a display unit in the first embodiment.

FIG. 6A is a schematic configuration diagram of a microscope system in the second embodiment.

FIG. 6B is a plan view of a first spatial light modulation device in the second embodiment.

FIG. 6C is a plan view of a second spatial light modulation device in the second embodiment.

FIG. 7 is a schematic configuration diagram of a computer in the second embodiment.

FIG. 8 is a diagram illustrating an evaluation function Q(f) stored in the storage unit according to a second example and a concept thereof in the second embodiment.

FIG. 9 shows an example of the flowchart illustrating an operation of the microscope system in the second embodiment.

FIG. 10A shows an example of the transmission region of the first spatial light modulation device which is obtained by a genetic algorithm, in the second embodiment.

FIG. 10B shows an example of the transmission region of the first spatial light modulation device which is obtained by a genetic algorithm, in the second embodiment.

FIG. 11 is a flowchart for specifying an observer's preference before optimization calculation.

FIG. 12A is a diagram illustrating an example of the intensity distribution of illumination light and a spatial frequency component when Fourier transform is performed.

FIG. 12B is a diagram illustrating an example of the intensity distribution of illumination light and a spatial frequency component when Fourier transform is performed.

FIG. 12C is a diagram illustrating an example of the intensity distribution of illumination light and a spatial frequency component when Fourier transform is performed.

FIG. 12D is a diagram illustrating an example of the intensity distribution of illumination light and a spatial frequency component when Fourier transform is performed.

DESCRIPTION OF THE REFERENCE SYMBOLS

21 DISPLAY UNIT

30 ILLUMINATION LIGHT SOURCE

40 ILLUMINATION OPTICAL SYSTEM

41 FIRST CONDENSER LENS

42 SECOND CONDENSER LENS

44 WAVELENGTH FILTER

50 STAGE

60 TEST OBJECT

70 IMAGING OPTICAL SYSTEM

71 OBJECTIVE LENS

80 IMAGE SENSOR

90, 390 FIRST SPATIAL LIGHT MODULATION DEVICE

91, 391 ILLUMINATION REGION

92, 392 LIGHT-SHIELDING PART

100, 300 MICROSCOPE SYSTEM

200, 220 COMPUTER

201 IMAGE PROCESSING UNIT

202 FOURIER ANALYSIS UNIT

203 SELECTION UNIT

204 CHANGING UNIT

205 OPTIMIZATION CALCULATING UNIT

206 STORAGE UNIT

207 DETERMINATION UNIT

208 ELEMENT MODULATION UNIT

209 FILTER DRIVING UNIT

396 SECOND SPATIAL LIGHT MODULATION DEVICE

397 PHASE MODULATION REGION

398 DIFFRACTED LIGHT TRANSMISSION REGION

DESCRIPTION OF EMBODIMENTS

[0010]

(First Embodiment)

A microscope system 100 having a bright-field microscope which can freely change the shape of an aperture will be described as a first embodiment. The microscope system 100 derives the intensity distribution of illumination light appropriate to observe an image of a test object during observation in a good state and automatically adjusts the intensity distribution of illumination light.

[0011] <Microscope System 100>

FIG. 1 is a schematic configuration diagram of the microscope system 100. The microscope system 100 mainly includes an illumination light source 30, an illumination optical system 40, a stage 50, an imaging optical system 70, and an image sensor 80. The microscope system 100 is connected to a computer 200. Hereinafter, a description will be made assuming that the central axis of light beams emitted from the illumination light source 30 is the Z axis direction, and directions which are perpendicular to the Z axis and to each other are the X axis direction and the Y axis direction.

[0012]

The illumination light source 30 irradiates, for example, a test object 60 with white illumination light. The illumination optical system 40 includes a first condenser lens 41, a wavelength filter 44, a first spatial light modulation device 90, and a second condenser lens 42. In addition, the imaging optical system 70 includes an objective lens 71. The stage 50 places the test object 60 having an unknown structure such as, for example, a cell tissue thereon, and can move in the X and Y axis directions. In addition, the imaging optical system 70 images transmitted light or reflected light of the test object 60 on the image sensor 80.

[0013]

The first spatial light modulation device 90 of the illumination optical system 40 is disposed at, for example, a position forming a conjugate with respect to a position of the pupil of the imaging optical system 70 inside the illumination optical system 40. The first spatial light modulation device 90 may use, specifically, a liquid crystal panel or a digital micromirror device (DMD). Further, the first spatial light modulation device 90 has an illumination region 91 of which the size and the shape can be freely changed, and can arbitrarily vary the intensity distribution of illumination light by changing the size or the shape of the illumination region 91. That is to say, the first spatial light modulation device 90 can vary the intensity distribution of illumination light at the conjugate position of the pupil of the imaging optical system 70. Generally, if the diameter of the illumination region 91 of the first spatial light modulation device 90 increases, the resolution can increase through an increase in the numerical aperture of the transmitted light. Further, the wavelength filter 44 limits the wavelength of a transmitted light beam within a specific range. The wavelength filter 44 uses, for example, a band-pass filter which transmits only light of a wavelength in a specific range therethrough. Attachable and detachable band-pass filters which transmit light beams of different wavelengths are prepared, and the wavelength of light which is transmitted through the wavelength filter 44 can be controlled by exchanging the band-pass filter.

[0014]

The computer 200 receives an image signal detected by the image sensor 80, performs an image process for the image signal, and displays a two-dimensional image of the test object 60 on a display unit 21 such as a monitor. In addition, the computer 200 performs a Fourier transform for the image signal, and calculates a spatial frequency component. The calculation or the like performed by the computer 200 will be described later with reference to FIG. 2.

[0015]

In FIG. 1, light emitted from the illumination light source 30 is denoted by dotted lines. The light LW11 emitted from the illumination light source 30 is converted into parallel light LW12 by the first condenser lens 41. The light LW12 has a specified wavelength range through transmission of the wavelength filter 44 and is incident to the first spatial light modulation device 90. Light LW13 having passed through the illumination region 91 of the first spatial light modulation device 90 becomes light LW14 through transmission of the second condenser lens 42, and then travels to the stage 50. Light LW15 having passed through the stage 50 becomes light LW16 through transmission of the imaging optical system 70, and forms an image of the test object 60 on the image sensor 80.

[0016]

<Configuration of Computer 200>

FIG. 2 is a conceptual diagram illustrating a configuration of the computer 200. The computer 200 includes an image processing unit 201, a Fourier analysis unit 202, a selection unit 203, a changing unit 204, an optimization calculating unit 205, a storage unit 206, a device modulation unit 208, and a filter driving unit 209. The computer 200 is connected to the display unit 21 and an input unit 26 such as a keyboard. These units can communicate with each other via a bus line BUS or the like.

[0017]

The image processing unit 201 and the Fourier analysis unit 202 receive an image signal from the image sensor 80. The image processing unit 201 performs an image process on the image signal from the image sensor 80 so that it can be displayed on the display unit 21. The image signal having undergone the image process is sent to the display unit 21, and thus the display unit 21 displays an image of the test object. The image signal having undergone the image process is also sent to the storage unit 206. The Fourier analysis unit 202 performs the Fourier transform on the image signal from the image sensor 80. In addition, the Fourier analysis unit 202 analyzes a Fourier transform value (spatial frequency component) of the test object 60. The Fourier transform value is sent to the optimization calculating unit 205 and is also sent to the storage unit 206 to be stored.

[0018]

The selection unit 203 specifies an observer's preference. For example, two or more images where the intensity distribution of illumination light is changed or wavelength filters are changed for the same test object 60 are displayed on the display unit 21. In addition, the observer specifies a preferred image using the input unit 26. The changing unit 204 changes a coefficient α of an evaluation function Q(f) described later based on a result from the selection unit 203.

[0019]

The optimization calculating unit 205 acquires an evaluation regarding the image of the test object 60 as a numerical value using the Fourier transform value of the test object 60 analyzed by the Fourier analysis unit 202 and the evaluation function Q(f) stored in the storage unit 206 in advance. The storage unit 206 stores the image signal sent from the image processing unit 201 and also stores the evaluation function Q(f). In addition, the storage unit 206 stores the Fourier transform value of the test object 60.

[0020]

The optimization calculating unit 205 sends data regarding the intensity distribution of illumination light or an appropriate wavelength of illumination light to the device modulation unit 208 or the filter driving unit 209 based on a result of the evaluation function Q(f). The device modulation unit 208 changes the size or the shape of the illumination region 91 of the first spatial light modulation device 90 based on the intensity distribution of illumination light. In addition, the filter driving unit 209 changes a wavelength range which is transmitted through the wavelength filter 44 based on the data regarding the wavelength of illumination light.

[0021]

<Evaluation Function Q(f): First Example>

FIG. 3 shows a first example of the evaluation function Q(f) stored in the storage unit 206. In FIG. 3, for example, the evaluation function Q(f) includes an equation including a Fourier transform value and is exemplified as the following Expression 1.

Q(f) = α₁ × f₁ + α₂ × f₂ ... (1)

where α₁ and α₂ are coefficients, and f₁ and f₂ are variables (functions).

[0022]

The variables f₁ and f₂ are, for example, variables which are obtained by normalizing a value which is obtained by multiplying the Fourier transform value FT of the image of the test object by filters ff (ff1 and ff2). The Fourier transform value FT as an image feature amount may be the absolute value of a spatial frequency component which is obtained through the Fourier transform or may be a value which is the square of a spatial frequency component obtained through the Fourier transform. FIG. 3 shows the variables f₁ and f₂ using the absolute value of the Fourier transform.

[0023]

The filters ff (ff1 and ff2) are filters which remove a DC component or a low frequency component of an image. FIG. 3 shows schematic filters ff. Here, the black region indicates removal of an image component. The filter ff1 removes a DC component of the image signal of the test object 60, and the filter ff2 removes a DC component and a low frequency component of the image signal of the test object 60. The filter ff1 and the filter ff2 are different from each other in terms of how much they emphasize the high frequency component of the image signal. Other filters ff as well as the filters ff shown in FIG. 3 may be used.

[0024]

The variables f₁ and f₂ are obtained by dividing surface integrals of values obtained by multiplying the Fourier transform value FT as an image feature amount by the filters ff, by a surface integral N of the Fourier transform value FT. With this, the variables f₁ and f₂ are normalized. The evaluation function Q(f) is obtained by multiplying the variables f₁ and f₂ by the coefficients α₁ and α₂. The coefficients α₁ and α₂ are changed by the changing unit 204 shown in FIG. 2. For example, the coefficients α₁ = 1 and α₂ = 2.

[0025]

The evaluation function Q(f) according to the present embodiment is an example using the two variables f₁ and f₂. However, the present invention is not limited thereto; another filter ff may be further prepared, and a calculation may be performed based on an evaluation function using three variables.
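For illustration only, the following Python sketch computes the normalized variables f₁ and f₂ from the absolute value of the two-dimensional Fourier transform and combines them according to Expression 1. The radial cutoff values and all function names are assumptions made for this sketch, not values taken from FIG. 3.

import numpy as np

def radial_highpass(shape, cutoff):
    # Illustrative filter ff: 1 outside the normalized radius 'cutoff' and 0 inside,
    # i.e. it removes the DC component (and, for a larger cutoff, the low frequencies).
    h, w = shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt((x / (w / 2)) ** 2 + (y / (h / 2)) ** 2)
    return (r > cutoff).astype(float)

def normalized_variable(image, ff_mask):
    # Variable f_i: surface integral of |FT| x ff, divided by the surface integral N of |FT|.
    ft = np.abs(np.fft.fftshift(np.fft.fft2(image)))   # |FT| as the image feature amount
    return (ft * ff_mask).sum() / ft.sum()

def evaluation_q(image, alpha1=1.0, alpha2=2.0):
    # Q(f) = alpha1 * f1 + alpha2 * f2 (Expression 1).
    ff1 = radial_highpass(image.shape, cutoff=0.05)    # removes only the DC component (like ff1)
    ff2 = radial_highpass(image.shape, cutoff=0.30)    # removes DC and low frequencies (like ff2)
    return alpha1 * normalized_variable(image, ff1) + alpha2 * normalized_variable(image, ff2)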

[0026]

<Operation of Microscope System 100>

FIG. 4 shows an example of the flowchart illustrating an operation of the microscope system 100.

In step S101, the coefficients of the evaluation function Q(f) are set to initial values. The initial values are set such that an image of the test object 60 which is universally preferred has a high value of the evaluation function. For example, the coefficients α₁ and α₂ shown in FIG. 3 are set to α₁ = 1 and α₂ = 1.

[0027]

In step S102, the illumination region 91 of the first spatial light modulation device 90 is set to a predetermined intensity distribution of illumination light. For example, the illumination region 91 of the first spatial light modulation device 90 has an entirely open shape. In addition, the wavelength filter 44 is set to a predetermined wavelength filter. The wavelength filter 44 is, for example, a wavelength filter which transmits visible light of 400 nm to 800 nm therethrough.

[0028]

In step S103, an image signal of the test object 60 is detected by the image sensor 80 with the intensity distribution of illumination light and the wavelength filter set in step S102. In addition, the detected two-dimensional image is processed by the image processing unit 201 and is displayed on the display unit 21. The observer may select a partial region of the test object 60 from the two-dimensional image displayed on the display unit 21. The partial region may be one place or two or more places of the test object 60. Alternatively, the observer moves a region of the test object 60 which is desired to be observed to the center, and the central region of the image sensor 80 may be automatically selected as the partial region. That is, the partial region may be set by the observer or may be set automatically.

[0029]

In step S104, the Fourier analysis unit 202 performs the Fourier transform on the detected image signal, and the optimization calculating unit 205 calculates the evaluation function Q(f) having the variables f₁ and f₂ including the Fourier transform value FT as an image feature amount. When the partial region is selected, the Fourier analysis unit 202 may analyze an image signal of only the selected partial region of the test object 60.

[0030]

In step S105, the optimization calculating unit 205 determines whether or not the value of the calculated evaluation function Q(f) is better than a threshold value or the value of a previous evaluation function Q(f). In the first embodiment, the optimization calculating unit 205 optimizes the intensity distribution of illumination light or the wavelength filter using a hill-climbing method. In the initial calculation, the value of the evaluation function Q(f) is compared with the threshold value, and, in the second calculation and thereafter, the value of the evaluation function Q(f) is compared with the value of the previously calculated evaluation function Q(f). If the value of this evaluation function Q(f) is good (YES), the flow proceeds to step S106, and if the value of the evaluation function Q(f) is bad (NO), the flow proceeds to step S107.

[0031]

In step S106, the device modulation unit 208 slightly changes the intensity distribution of illumination light of the first spatial light modulation device 90. Alternatively, the filter driving unit 209 changes the wavelength filter 44 to a wavelength filter which transmits a different wavelength range. Then, the flow proceeds to step S103. Steps S103 to S106 are repeated until the value of the evaluation function Q(f) becomes worse than the value of the previous iteration. The value of the evaluation function Q(f) becoming worse than the value of the previous iteration means that the value of the previous evaluation function Q(f) is assumed to be the best.
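The repetition of steps S103 to S106 can be summarized as a simple hill-climbing loop. In the sketch below, capture_image and perturb_illumination are hypothetical callbacks standing in for the image sensor 80 and the device modulation unit 208 (or the filter driving unit 209); they are not names used in the embodiment.

def hill_climb(capture_image, perturb_illumination, evaluation_q,
               initial_illumination, threshold=0.0):
    # Steps S103-S106: image the test object, evaluate Q(f), and keep perturbing the
    # intensity distribution of illumination light while the value of Q(f) keeps improving.
    illumination = initial_illumination
    best_q = threshold                      # the first value of Q(f) is compared with a threshold
    best_image = None
    best_illumination = illumination
    while True:
        image = capture_image(illumination)                    # step S103
        q = evaluation_q(image)                                 # step S104
        if q > best_q:                                          # step S105
            best_q, best_image, best_illumination = q, image, illumination
            illumination = perturb_illumination(illumination)   # step S106: slight change
        else:
            return best_image, best_illumination, best_q        # the previous value was the best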

[0032]

In step S107, it is determined whether or not the observer selects a preference. An image of the test object 60 which corresponds to the best value of the evaluation function Q(f) obtained in step S105 is displayed on the display unit 21. In addition, the selection unit 203 displays a button for selecting the preference adjacent to the image of the test object 60 on the display unit 21. If the image of the test object 60 displayed on the display unit 21 is suitable for the observer's preference (NO), the best image obtained in step S105 is displayed and then the flow finishes. If the observer clicks the button for selecting a preference (YES), the flow proceeds to step S108.

[0033]

In step S108, the selection unit 203 displays two or more images which are appropriate for observation of the test object 60 on the display unit 21. For example, the selection unit 203 displays the image of the test object 60 for which the value of the evaluation function Q(f) is the highest according to the hill-climbing method, and also displays a plurality of images for which the value is not the highest.

[0034]

FIG. 5 shows an example where four images (IM1 to IM4) of the test object 60 are displayed on the display unit 21. It is assumed that the image for which the value of the evaluation function Q(f) obtained by the optimization calculating unit 205 is the highest is the image IM3. In addition, the images IM1 and IM2 which are obtained during the repetition of steps S103 to S106 are displayed on the display unit 21, and the image IM4 which is determined as having a worse value of the evaluation function Q(f) than the image IM3 is also displayed. The observer selects one or two of the four images (IM1 to IM4) displayed on the display unit 21 using the input unit 26. FIG. 5 shows an example where the image IM4 is selected.

[0035]

In step S109, the changing unit 204 changes the coefficients α₁ and α₂ such that the value of the evaluation function Q(f) of the selected image IM4 becomes the best value. For example, although α₁ = 1 and α₂ = 1 in the evaluation function Q(f) hitherto, they are changed to α₁ = 3 and α₂ = 2. In addition, the steps from step S102 to S107 are repeated again. If the calculation is performed again using the hill-climbing method with the evaluation function Q(f) where the coefficients α have been changed, the same image as the selected image IM4 may have the best value of the evaluation function Q(f), or an image different from the image IM4 may have it. In either case, the result becomes close to an image of the test object 60 where the observer's preference is emphasized.

[0036]
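The patent does not prescribe how the changing unit 204 recomputes α₁ and α₂ in step S109. The following is one heuristic sketch, under the assumption that each coefficient is increased when the selected image scores higher than the previously best-ranked image on the corresponding variable (and decreased otherwise), so that the selected image tends to receive the larger value of Q(f).

import numpy as np

def update_coefficients(alphas, features_selected, features_best, step=1.0):
    # Heuristic sketch of step S109 (not the patent's prescribed rule): move each
    # coefficient toward the variables favoured by the observer-selected image.
    alphas = np.asarray(alphas, dtype=float)
    gain = np.asarray(features_selected, dtype=float) - np.asarray(features_best, dtype=float)
    return np.clip(alphas + step * np.sign(gain), 0.0, None)

# Example: if the selected image IM4 has (f1, f2) = (0.5, 0.2) and the image IM3 ranked
# best by Q(f) has (0.3, 0.4), the coefficients (1, 1) move to (2, 0); Q(f) of IM4 (= 1.0)
# then exceeds Q(f) of IM3 (= 0.6), so the selected image obtains the best value.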

As described above, when the test object 60 is observed, the microscope system 100 according to the first embodiment changes the evaluation function Q(f) including the Fourier transform value (spatial frequency component) which is an image feature amount, according to an observer's preference. In addition, the intensity distribution of illumination light or a wavelength filter is selected such that the evaluation function Q(f) has the best value. Therefore, the microscope system 100 can obtain an image of the test object 60 according to an observer's preference.

[0037]

The processes of the flowchart shown in FIG. 4 may be stored in a storage medium as a program. Further, the program stored in the storage medium is installed on a computer, thereby enabling the computer to perform calculation or the like.

[0038]

(Second Embodiment)

Although the microscope system 100 having the bright-field microscope has been described in the first embodiment, a microscope system 300 having a phase contrast microscope will be described in the second embodiment.

[0039]

<Microscope System 300>

FIG. 6A is a schematic configuration diagram of the microscope system 300.

The microscope system 300 mainly includes an illumination light source 30, an illumination optical system 40, an imaging optical system 70, and an image sensor 80. The microscope system 300 is connected to a computer 220. The illumination optical system 40 includes a first condenser lens 41, a first spatial light modulation device 390, and a second condenser lens 42, and the imaging optical system 70 includes an objective lens 71 and a second spatial light modulation device 396. In addition, a stage 50 is disposed between the illumination optical system 40 and the imaging optical system 70, and a test object 60 is placed on the stage 50.

[0040]

The second spatial light modulation device 396 is disposed at a position of the pupil of the imaging optical system 70 or in the vicinity thereof. In addition, the first spatial light modulation device 390 is disposed at a position forming a conjugate with respect to a position of the pupil of the imaging optical system 70 inside the illumination optical system 40. The first spatial light modulation device 390 is constituted by a liquid crystal panel, a DMD, or the like which can arbitrarily vary the intensity distribution of transmitted light. The second spatial light modulation device 396 is constituted by a liquid crystal panel or the like which can change phases. In addition, the second spatial light modulation device preferably has a configuration in which the intensity distribution of light as well as the phase can be freely changed.

[0041]

In FIG. 6A, light emitted from the illumination light source 30 is denoted by dotted lines. The light LW31 emitted from the illumination light source 30 is converted into light LW32 by the first condenser lens 41. The light LW32 is incident to the first spatial light modulation device 390. Light LW33 having passed through the first spatial light modulation device 390 becomes light LW34 through transmission of the second condenser lens 42, and then travels to the test object 60. Light LW35 having passed through the test object 60 becomes light LW36 through transmission of the objective lens 71, and is incident to the second spatial light modulation device 396. The light LW36 becomes light LW37 through transmission of the second spatial light modulation device 396 and then forms an image on the image sensor 80. An image signal of the image formed on the image sensor 80 is sent to the computer 220. Further, the computer 220 analyzes a Fourier transform value (spatial frequency component) of the test object 60 based on the image obtained from the image sensor 80. In addition, an illumination shape appropriate for observation of the test object 60 is sent to the first spatial light modulation device 390 and the second spatial light modulation device 396.

[0042]

FIG. 6B is a plan view of the first spatial light modulation device 390. The first spatial light modulation device 390 is provided with a ring-shaped transmission region (illumination region) 391 of light, and a region other than the transmission region 391 forms a light shielding region 392.

[0043]

FIG. 6C is a plan view of the second spatial light modulation device 396. The second spatial light modulation device 396 is provided with a ring-shaped phase modulation region 397, and the phase of light transmitted through the phase modulation region 397 is shifted by a quarter wavelength. The phase of light transmitted through a diffracted light transmission region 398, which is the region other than the phase modulation region 397, is not shifted. The phase modulation region 397 is formed so as to be conjugate with the transmission region 391 of the first spatial light modulation device 390.

[0044]

0th order light (transmitted light) of the microscope system 300 is transmitted through the transmission region 391 of the first spatial light modulation device 390, is then transmitted through the phase modulation region 397 of the second spatial light modulation device 396, and then reaches the image sensor 80. In addition, diffracted light emitted from the test object 60 is transmitted through the diffracted light transmission region 398 of the second spatial light modulation device 396, and then reaches the image sensor 80. In addition, the 0th order light and the diffracted light form an image on the image sensor 80. Generally, the 0th order light has larger intensity than the diffracted light, and thus light intensity of the phase modulation region 397 is preferably adjusted.

[0045]

In the first spatial light modulation device 390 and the second spatial light modulation device 396, sizes and shapes of the transmission region 391 and the phase modulation region 397 can be freely changed. In addition, as described in the first embodiment, an illumination shape stored in the storage unit may be derived, and the shape of the transmission region 391 of the first spatial light modulation device 390 may be optimized. The ring-shaped region 397 of the second spatial light modulation device 396 is formed so as to become a conjugate with the transmission region 391 of the first spatial light modulation device 390 at all times. For this reason, the shapes of the transmission region 391 and the ring-shaped region 397 are preferably varied in synchronization with each other.

[0046]

<Configuration of Computer 220>

FIG. 7 is a conceptual diagram illustrating a configuration of the computer 220. The computer 220 is basically the same as the computer 200 shown in FIG. 2, and is different from the computer 200 in that it newly includes a determination unit 207 but, on the other hand, does not include the filter driving unit 209. In addition, the computer 220 is different from the computer 200 in that a single device modulation unit 208 is connected to both the first spatial light modulation device 390 and the second spatial light modulation device 396. Hereinafter, the differences from the computer 200 will mainly be described.

[0047]

The determination unit 207 receives an image signal from the image sensor 80. In addition, the determination unit 207 determines whether the test object 60 is a phase object or an absorption object based on a signal related to contrast of the image. The phase object refers to a colorless transparent object which does not change light intensity and changes only a phase in transmitted light. The absorption object is also called a light absorbing object or an intensity object, and is a color object which changes the intensity of transmitted light.

[0048]

The determination unit 207 distinguishes the phase object from the absorption object based on images of the test object 60 which are captured under the two following conditions. In the first condition, the image sensor 80 detects the test object 60 in a state where the transmission region 391 of the first spatial light modulation device 390 is circular without the light shielding region 392, and the second spatial light modulation device 396 has no phase modulation region 397 so that the entire surface gives no phase contrast. In the second condition, the image sensor 80 detects the test object 60 in a state where the transmission region 391 of the first spatial light modulation device 390 has a ring shape, and the phase modulation region 397 of the second spatial light modulation device 396 has a ring shape and gives phase contrast of a quarter wavelength.

[0049]

The determination unit 207 determines whether or not contrast of the image of the test object 60 captured in the first condition is higher than a threshold value, and determines whether or not contrast of the image of the test object 60 captured in the second condition is higher than a threshold value. For example, if the test object 60 is a phase object, contrast of an image of the test object 60 captured in the first condition is low, and contrast of an image of the test object 60 captured in the second condition is high. In contrast, if the test object 60 is an absorption object, contrast of an image of the test object 60 captured in the first condition is high, and contrast of an image of the test object 60 captured in the second condition is low. As such, the determination unit 207 determines whether the test object 60 is a phase object or an absorption object. The result determined by the determination unit 207 is sent to the optimization calculating unit 205.
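A minimal sketch of this determination is shown below, assuming a simple (max − min)/(max + min) contrast measure and an illustrative threshold value; the patent only states that each contrast is compared with a threshold value, and the function names are assumptions.

import numpy as np

def classify_test_object(image_first_condition, image_second_condition, threshold=0.2):
    # First condition: circular illumination, no phase ring.
    # Second condition: ring illumination with a quarter-wave phase ring.
    # The contrast definition and the threshold value are assumptions for illustration.
    def contrast(img):
        mx, mn = float(np.max(img)), float(np.min(img))
        return (mx - mn) / (mx + mn) if (mx + mn) > 0 else 0.0

    c1, c2 = contrast(image_first_condition), contrast(image_second_condition)
    if c1 < threshold and c2 >= threshold:
        return "phase object"        # low bright-field contrast, high phase contrast
    if c1 >= threshold and c2 < threshold:
        return "absorption object"   # high bright-field contrast, low phase contrast
    return "undetermined"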

[0050]

The optimization calculating unit 205 uses the result sent from the determination unit 207 when comparing the Fourier transform value of the test object 60 with a Fourier transform value stored in the storage unit 206.

[0051]

The device modulation unit 208 changes the size or shape of the illumination region 391 of the first spatial light modulation device 390 based on the appropriate intensity distribution of illumination light. In addition, the device modulation unit 208 changes the size or shape of the phase modulation region 397 of the second spatial light modulation device 396 based on the appropriate intensity distribution of illumination light.

[0052]

<Evaluation Function Q(f): Second Example>

FIG. 8 shows a second example of the evaluation function Q(f) stored in the storage unit 206. In FIG. 8, for example, the evaluation function Q(f) has a Fourier transform value (spatial frequency component) FT as a variable and is exemplified as the following Expression 2.

Q(f) = (surface integral of FT × (1 − cos(α₃r))) / N ... (2)

where α₃ is a coefficient, and r is a radius.

[0053]

The evaluation function Q(f) is a numerical value which is obtained by normalizing a value which is obtained by multiplying the Fourier transform value FT of the image of the test object by a filter ff3. The Fourier transform value FT as an image feature amount may be the absolute value of a spatial frequency component which is obtained through the Fourier transform or may be a value which is the square of a spatial frequency component obtained through the Fourier transform. FIG. 8 shows the absolute value of the Fourier transform.

[0054]

The filter ff3 is a filter which removes a DC component and a low frequency component of the image signal in the same manner as in the first embodiment. FIG. 8 schematically shows the filter ff3. Here, the black region indicates removal of the image signal. The filter ff3 removes the DC component of the image signal of the test object 60, and the rate at which it removes the image signal decreases as the frequency increases, so that the filter ff3 passes the high-frequency part of the image signal. By changing the magnitude of the coefficient α₃, the rate at which the image signal is removed with increasing frequency is changed. In the evaluation function Q(f) shown in FIG. 8, α₃ is in a range of 0 < α₃ < 1; for example, α₃ = 0.7 in FIG. 8.

[0055]

The evaluation function Q(f) is obtained by dividing a surface integral of a value obtained by multiplying the Fourier transform value FT as an image feature amount by the filter ff3, by a surface integral N of the Fourier transform value FT. With this, the evaluation function Q(f) is normalized, and the evaluation function Q(f) according to the second embodiment lies between 0 and 1.
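As an illustration of Expression 2, the sketch below evaluates Q(f) with the radial filter ff3 = 1 − cos(α₃r). The scaling of the radius r (so that the highest spatial frequency maps to π/2, which keeps the filter value below 1 and hence Q(f) between 0 and 1 for 0 < α₃ < 1) is an assumption; the patent does not state this scaling.

import numpy as np

def evaluation_q2(image, alpha3=0.7):
    # Q(f) of Expression 2: surface integral of |FT| x (1 - cos(alpha3 * r)),
    # divided by the surface integral N of |FT|.
    ft = np.abs(np.fft.fftshift(np.fft.fft2(image)))
    h, w = image.shape
    y, x = np.ogrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    rho = np.sqrt((x / (w / 2)) ** 2 + (y / (h / 2)) ** 2)
    r = (np.pi / 2) * rho / rho.max()     # assumed scaling of the radial spatial frequency
    ff3 = 1.0 - np.cos(alpha3 * r)        # 0 at DC, passes more of the higher frequencies
    return (ft * ff3).sum() / ft.sum()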

[0056]

Although the evaluation function Q(f) according to the second embodiment uses the cosine function, the present invention is not limited thereto, and a filter ff based on another equation may be prepared.

[0057]

<Operation of Microscope System 300>

FIG. 9 shows an example of the flowchart illustrating an operation of the microscope system 300.

In step S201, the first spatial light modulation device 390 and the second spatial light modulation device 396 are modulated to have two predetermined intensity distributions (first condition: circular uniform illumination, second condition: ring illumination (phase ring)). The test object 60 is imaged by the image sensor 80 in the respective conditions.

[0058]

In step S202, the determination unit 207 determines whether the test object 60 is an absorption object or a phase object based on the contrast of the image of the test object 60. For example, if the test object 60 is a phase object, the contrast of the image of the test object 60 captured in the first condition is low, and the contrast of the image of the test object 60 captured in the second condition is high. In this flowchart, a description will be made on the premise that the determination unit 207 determines that the test object 60 is a phase object.

[0059]

In step S203, the coefficient α₃ of the evaluation function Q(f) is set to an initial value. The initial value is set such that an image of the test object 60 which is universally preferred has a high value of the evaluation function. For example, the coefficient α₃ shown in FIG. 8 is set to α₃ = 0.8.

[0060]

In step S204, since the test object 60 was determined to be a phase object in step S202, the transmission region 391 of the first spatial light modulation device 390 and the phase modulation region 397 of the second spatial light modulation device 396 are set for a phase object. In the second embodiment, the optimization calculating unit 205 optimizes the intensity distribution of illumination light or the wavelength filter using the genetic algorithm. For this reason, two intensity distributions of illumination light of the current generation are initially prepared. For example, as shown in FIG. 10A, in one intensity distribution, the illumination region 391 and the phase modulation region 397 have a wide ring shape, and the other intensity distribution has a shape where four small openings are uniformly disposed from the center.

[0061]

In step S205, the optimization calculating unit 205 forms a plurality of intensity distributions of illumination light of the next generation according to a crossover or mutation method of the genetic algorithm.

In step S206, an image of the test object 60 is detected by the image sensor 80 using each of the plurality of intensity distributions of illumination light of the next generation. In addition, the detected two-dimensional image is processed by the image processing unit 201 and is displayed on the display unit 21. Further, the observer may select a partial region of the test object 60 from the two-dimensional image displayed on the display unit 21. The partial region may be one place or two or more places of the test object 60. Alternatively, the observer moves a region of the test object 60 which is desired to be observed to the center, and the central region of the image sensor 80 may be automatically selected as the partial region. That is, the partial region may be set by the observer or may be set automatically.

[0062]

In step S207, the Fourier analysis unit 202 performs the Fourier transform for image signals of the plurality of intensity distributions of illumination light, and the optimization calculating unit 205 calculates an evaluation function Q(f) including the Fourier transform value FT as an image feature amount for each of the plurality of intensity distributions of illumination light. When the partial region is selected, the Fourier analysis unit 202 may analyze an image signal of the selected partial region of the test object 60.

[0063]

In step S208, the optimization calculating unit 205 compares values of the evaluation function Q(f) of the image of the test object 60 obtained in step S207, and, among them, selects a first illumination light intensity distribution which is the intensity distribution of illumination light having the best value of the evaluation function Q(f), and a second illumination light intensity distribution which is the intensity distribution of illumination light having the second best value of the evaluation function Q(f). The selected first and second illumination light intensity distributions have, for example, a shape of the transmission region 391 as shown in FIG. 10B.

[0064]

In step S209, it is determined whether or not the crossover or mutation has been performed up to a predetermined generation, for example, the 1000th generation. If the crossover or the like has not been performed up to the predetermined generation, the flow returns to step S205. In this case, in step S205, illumination intensity distributions having a higher value of the evaluation function Q(f) are searched for based on the first and second illumination light intensity distributions selected in step S208. If the crossover or the like has been performed up to the predetermined generation, the flow proceeds to step S210.
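Steps S204 to S209 amount to a small genetic algorithm over illumination shapes. The sketch below treats each intensity distribution as a binary pupil-plane mask (1 = transmitting, 0 = light-shielding); capture_image and evaluation_q are hypothetical stand-ins for the imaging of step S206 and for Expression 2, and the particular crossover and mutation operators are illustrative choices, not those of the embodiment.

import numpy as np

rng = np.random.default_rng(0)

def crossover(mask_a, mask_b):
    # Each pupil-plane pixel of the child mask is inherited from one of the two parents.
    choose_a = rng.random(mask_a.shape) < 0.5
    return np.where(choose_a, mask_a, mask_b)

def mutate(mask, rate=0.01):
    # Flip a small fraction of pixels between transmitting (1) and light-shielding (0).
    flips = rng.random(mask.shape) < rate
    return np.where(flips, 1 - mask, mask)

def genetic_search(initial_masks, capture_image, evaluation_q,
                   generations=1000, children_per_generation=8):
    parents = list(initial_masks)                               # step S204: current generation
    for _ in range(generations):                                # step S209: generation limit
        children = [mutate(crossover(parents[0], parents[1]))   # step S205: crossover/mutation
                    for _ in range(children_per_generation)]
        scored = [(evaluation_q(capture_image(m)), m)           # steps S206-S207: image and Q(f)
                  for m in children + parents]
        scored.sort(key=lambda t: t[0], reverse=True)
        parents = [scored[0][1], scored[1][1]]                  # step S208: keep the two best
    return parents[0]                                           # best intensity distribution found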

[0065]

In step S210, it is determined whether or not the observer selects a preference.

For example, an image of the test object 60 which corresponds to the best value of the evaluation function Q(f) obtained in step S208 is displayed on the display unit 21. In addition, the selection unit 203 displays a button for selecting the preference so as to be adjacent to the image of the test object 60 on the display unit 21. If the observer is satisfied with the image displayed on the display unit 21, the best image obtained in step S208 is displayed and then the flow finishes. If the observer clicks the button for selecting a preference, the flow proceeds to step S211.

[0066]

In step S211, the selection unit 203 displays two or more images which are appropriate for observation of the test object 60 on the display unit 21. For example, the selection unit 203 displays four images in descending order of the values of the evaluation function Q(f) obtained according to the genetic algorithm. The observer selects one or two of the four images displayed on the display unit 21 using the input unit 26.

[0067]

In step S212, the changing unit 204 changes the coefficient α₃ such that the value of the evaluation function Q(f) of the selected image becomes the best value. For example, although α₃ = 0.8 in the evaluation function Q(f) hitherto, it is changed to α₃ = 0.7. In addition, the steps from step S204 to S209 are repeated again.

[0068]

As described above, when the test object 60 is observed, the microscope system 300 according to the second embodiment changes the evaluation function Q(f) including the Fourier transform value (spatial frequency component) which is an image feature amount, according to an observer's preference. In addition, the intensity distribution of illumination light is selected such that the evaluation function Q(f) has the best value. Therefore, the microscope system 300 can obtain an image of the test object 60 according to an observer's preference.

[0069]

The processes of the flowchart shown in FIG. 9 may be stored in a storage medium as a program. Further, the program stored in the storage medium is installed in a computer, thereby enabling the computer to perform a calculation or the like.

[0070]

(Third Embodiment)

In the first embodiment, the evaluation function Q(f) is applied in the hill-climbing method, an image obtained through the hill-climbing method is displayed, and then an observer's preference is specified. In the same manner, in the second embodiment, the evaluation function Q(f) is applied in the genetic algorithm, an image obtained through the genetic algorithm is displayed, and then an observer's preference is specified. In the third embodiment, an observer's preference is specified before the optimization calculation by the hill-climbing method or the genetic algorithm is performed.

[0071]

FIG. 11 is a flowchart for specifying an observer's preference before calculating optimization. In addition, FIGS. 12A, 12B, 12C and 12D are diagrams illustrating examples of the intensity distribution of illumination light for observing the test object 60 before calculating optimization and the spatial frequency component when the Fourier transform is performed. The third embodiment can be applied to both the first embodiment and the second embodiment; an example of applying it to the second embodiment will be described below.

[0072]

In step S301, the first spatial light modulation device 390 and the second spatial light modulation device 396 are modulated to have two predetermined intensity distributions (first condition: circular uniform illumination, second condition: ring illumination (phase ring)). The test object 60 is imaged by the image sensor 80 in the respective conditions. It is the same as step S201 in FIG. 9.

[0073]

In step S302, the determination unit 207 determines whether the test object 60 is an absorption object or a phase object based on the contrast of the image of the test object 60. It is the same as step S202 in FIG. 9.

[0074]

In step S303, a plurality of intensity distributions of illumination light are set in turn, and an image of the test object 60 is detected by the image sensor 80 for each. If the test object 60 is determined as being an absorption object in step S302, for example, the first spatial light modulation devices 390 (Ex1 and Ex2) shown in FIGS. 12A and 12B are used. For example, the first spatial light modulation device 390 (Ex1) has a large transmission region 391, and the first spatial light modulation device 390 (Ex2) has a small transmission region 391. For this reason, the contrast or brightness of the images of the test object is different.

[0075]

In the case of the first spatial light modulation device 390 (Ex1), as shown in the right part of FIG. 12A, the intensity (amplitude) of the spatial frequency component (the longitudinal axis: intensity, the transverse axis: frequency) which has undergone the Fourier transform decreases at a specific slope, for example, from the low frequency to the high frequency. In the case of the first spatial light modulation device 390 (Ex2), as shown in the right part of FIG. 12B, for example, the high frequency of the spatial frequency component is emphasized.

[0076]

If the test object 60 is determined as being a phase object in step S302, for example, the first spatial light modulation devices 390 (Ex3 and Ex4) shown in FIGS. 12C and 12D are used. In the case of a ring shape whose diameter is large and whose width is small, such as the first spatial light modulation device 390 (Ex3), as shown in the right part of FIG. 12C, the spatial frequency component is strengthened from the middle frequencies. In the case of the first spatial light modulation device 390 (Ex4), whose diameter is small and whose width is large, as shown in the right part of FIG. 12D, the intensity (amplitude) decreases at a specific slope from the low frequency to the high frequency.

[0077]

In step S304, if the test object is an absorption object, the selection unit displays two images of the test object captured by the first spatial light modulation devices 390 (Ex1 and Ex2) on the display unit 21. If the test object is a phase object, the selection unit displays two images of the test object captured by the first spatial light modulation devices 390 (Ex3 and Ex4) on the display unit 21.

[0078]

In step S305, the observer selects one of the two images.

In step S306, the changing unit 204 sets the coefficient α₃ of the evaluation function Q(f) so as to fulfill the observer's preference.

[0079]

In step S307, optimization is calculated according to the genetic algorithm using the evaluation function Q(f) in which the so-called initial value of the coefficient α₃ is the coefficient α₃ set in step S306.
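One way to realize steps S305 to S307 is to map the observer's pre-selection to the initial coefficient α₃ before the genetic algorithm starts. The mapping below is purely an illustrative assumption; the patent only requires that α₃ be set so as to fulfill the observer's preference, with 0 < α₃ < 1.

def initial_alpha3_from_preference(selected_index):
    # selected_index: 0 or 1, the image chosen by the observer in step S305
    # (captured with Ex1/Ex2 for an absorption object, Ex3/Ex4 for a phase object).
    # Each choice is mapped to a different starting value of alpha3 within 0 < alpha3 < 1;
    # the values themselves are illustrative assumptions, not taken from the patent.
    return 0.7 if selected_index == 0 else 0.9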

[0080]

The processes of the flowchart shown in FIG. 11 may be stored in a storage medium as a program. Further, the program stored in the storage medium is installed in a computer, thereby enabling the computer to perform calculation or the like.

[0081]

Although the description has been made from the first embodiment to the third embodiment, various modifications may be further applied thereto. For example, the embodiments may be applied to an imaging apparatus such as a camera which images a landscape or the like in addition to the bright-field microscope and the phase contrast microscope.

[0082]

For example, in the microscope system 100, the illumination light source 30 applies white illumination light, and the wavelength filter 44 transmits a specific range of the wavelength of the light beam transmitted therethrough. Instead of using the wavelength filter 44, an illumination light source 30 which has a plurality of LEDs applying light of different wavelengths (for example, red, green, and blue) may be used. For example, if white light is to be irradiated toward the test object 60, the red, green, and blue LEDs are simultaneously lighted, and if red light is to be irradiated toward the test object 60, only the red LED is lighted. In this manner, a wavelength of illumination light may be selected.

[0083]

Further, in the first embodiment, the example using the hill-climbing method has been shown with reference to FIG. 4. In addition, in the second embodiment, the example using the genetic algorithm has been shown with reference to FIG. 9. Optimization calculating methods also include a tabu search method, a simulated annealing method, and the like, and these optimization calculating methods may be used. In addition, although the microscope system 300 shown in FIGS. 6A, 6B and 6C does not include a wavelength filter, a wavelength filter may be included. Further, although the Fourier transform value (spatial frequency component) has been used as an image feature amount in the first and second embodiments, the present invention is not limited thereto. For example, a histogram (the longitudinal axis: frequency of appearance, the transverse axis: gray scale value) indicating the frequency of appearance of gray scale values may also be used as an image feature amount. Further, the contrast or the maximal gradient amount of an image may be used as an image feature amount. The maximal gradient amount is the maximal value of luminance value variations in a spatial luminance value profile (the transverse axis: for example, the X direction position, the longitudinal axis: luminance value).
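The alternative image feature amounts mentioned above (histogram, contrast, maximal gradient) could be computed along the following lines; the concrete definitions chosen here (normalized histogram, (max − min)/(max + min) contrast, X-direction luminance differences) are common conventions assumed for illustration, not values specified by the patent.

import numpy as np

def histogram_feature(image, bins=64):
    # Frequency of appearance of gray-scale values (normalized histogram).
    hist, _ = np.histogram(image, bins=bins)
    return hist / hist.sum()

def contrast_feature(image):
    # (max - min) / (max + min) contrast; one common definition, assumed here.
    mx, mn = float(np.max(image)), float(np.min(image))
    return (mx - mn) / (mx + mn) if (mx + mn) > 0 else 0.0

def maximal_gradient_feature(image):
    # Maximal luminance variation along the X-direction profile of the image.
    return float(np.abs(np.diff(np.asarray(image, dtype=float), axis=1)).max())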