

Title:
IMAGING OF AREAS
Document Type and Number:
WIPO Patent Application WO/2008/032106
Kind Code:
A2
Abstract:
A method and an apparatus for imaging of a sample on a sample platform is described. For each area that is imaged, preparation for acquisition of an image must take place before image data are acquired. The image data obtained from a first area of the sample during a particular acquiring step is transferred to memory whilst preparation for acquisition of an image of a second area of the sample is taking place.

Inventors:
LUEERSSEN DIETRICH WILHELM KAR (GB)
Application Number:
PCT/GB2007/003530
Publication Date:
March 20, 2008
Filing Date:
September 14, 2007
Assignee:
OXFORD GENE TECH IP LTD (GB)
LUEERSSEN DIETRICH WILHELM KAR (GB)
International Classes:
G02B21/36; G02B21/00
Domestic Patent References:
WO2001035325A12001-05-17
WO2001013640A12001-02-22
WO2003023482A12003-03-20
Foreign References:
US4647764A1987-03-03
US20060045388A12006-03-02
US20050243412A12005-11-03
Attorney, Agent or Firm:
MARSHALL, Cameron, John et al. (43-45 Bloomsbury Square, London WC1A 2RA, GB)
Claims:
CLAIMS

1. A method for imaging a sample on a sample platform with an imaging device, comprising:

(a) preparing for acquisition of an image of an area of the sample;

(b) acquiring, at the imaging device, whilst the sample platform is stationary with respect to the imaging device, image data for the image of the area of the sample; and

(c) repeating steps (a) and (b), wherein the image data obtained during a particular acquiring step (b) are transferred from the imaging device to memory during a subsequent preparing step (a).

2. The method of claim 1, wherein the imaging device comprises an imaging segment and the step of acquiring comprises acquiring the image data at the imaging segment.

3. The method of claim 2, wherein the preparing step (a) comprises moving the sample platform and the imaging device relative to each other.

4. The method of claim 3, wherein the sample platform or imaging device is repositioned in step (a) for acquisition of an image from a different area of the sample to an area for which an image has been previously acquired and steps (a) and (b) are repeated continuously until image data for the entire sample have been acquired.

5. The method of any one of claims 2 to 4, wherein an image is formed on the imaging segment along an optical axis of the imaging device and the sample platform has a surface on which the sample is located and the step of moving comprises moving the surface substantially perpendicular to the optical axis.

6. The method of any one of claims 2 to 5, wherein the preparing step (a) comprises:

(a1) focussing the imaging device on the area of the sample that is to be imaged, such that an image of an area of the sample is formed on the imaging segment.

7. The method of any one of claims 2 to 6, wherein the acquiring step (b) comprises:

(b1) illuminating the sample that is to be imaged.

8. The method of any one of claims 2 to 7, wherein the imaging device includes a charge coupled device.

9. The method of any one of claims 2 to 8, wherein the acquiring step (b) includes storing the image data for an image by photoelectric conversion of light in the imaging segment and transferring the image data from the imaging segment to a storage segment in the charge coupled device, wherein the storage segment is directly connected to the imaging segment.

10. The method of claim 9, wherein the image data is transferred from the storage segment to the memory during a subsequent preparing step (a) and acquiring step (b).

11. The method of claim 9 or claim 10, wherein the acquiring step (b) comprises storing in the storage segment of the charge coupled device only a quantity of image data corresponding to the area that was imaged immediately prior to the storing step without retaining image data in the storage segment for any other area that has been previously acquired.

12. A computer program product comprising computer-executable instructions for carrying out the method of any one of the preceding claims.

13. An apparatus for imaging a sample, comprising: a control unit; a memory; a sample platform adapted to support the sample; and an imaging device configured to acquire image data for an image of an area of the sample, wherein the imaging device is further configured to transfer, to the memory, first image data for a first image of a first area of the sample whilst preparation for acquisition of a second image of a second area occurs, wherein the sample platform and imaging device are stationary relative to each other during acquisition of image data.

14. The apparatus of claim 13, wherein the imaging device comprises an imaging segment at which the image data is acquired.

15. The apparatus of claim 14, further comprising a control unit connected to the imaging device and the sample platform, wherein the control unit is configured to move the sample platform relative to the imaging device from a first position to a second position to prepare for acquisition of the second image at the imaging segment whilst the first image data is being transferred from the imaging device to the memory.

16. The apparatus of claim 15, wherein the control unit is configured to control the imaging device to acquire, in the imaging segment, second image data for a second image of a second area of the sample corresponding to the second position immediately on completion of the relative movement of the sample platform or imaging device from the first position to the second position.

17. The apparatus of claim 16, wherein the imaging device further comprises imaging optics between the imaging segment and the sample platform, wherein the control unit is configured to control the imaging optics to prepare for acquisition of an image of an area of the sample on the imaging segment by focussing an image of an area of the sample onto the imaging segment.

18. The apparatus of claim 17, wherein the control unit is configured to control the imaging optics to focus the second image of the second area of the sample onto the imaging segment after movement of the sample platform or imaging device from the first position to the second position.

19. The apparatus of claim 18, wherein the control unit is further configured to control the imaging device to acquire second image data for a second image of a second area of the sample immediately on completion of focussing of the second image on the imaging segment.

20. The apparatus of any one of claims 17 to 19, wherein the imaging optics and imaging device are configured to image a diffraction-limited element in the sample onto more than one photo-responsive element in the imaging segment.

21. The apparatus of claim 20, wherein each photo-responsive element corresponds to a pixel in each image that is acquired.

22. The apparatus of any one of claims 17 to 21, wherein an image is formed on the imaging segment along an optical axis of the imaging device and wherein the sample platform has a surface on which the sample is located and which is configured to move laterally in a direction perpendicular to the optical axis of the imaging device.

23. The apparatus of claim 22, wherein the surface is configured to tilt about one or more axes, each of which is substantially perpendicular to the optical axis.

24. The apparatus of any one of claims 14 to 23, wherein the imaging device includes a charge coupled device which comprises the imaging segment.

25. The apparatus of claim 24, wherein the charge coupled device comprises a storage segment which is directly connected to the imaging segment, wherein the imaging segment is configured to acquire image data by photoelectric conversion of light and the storage segment is configured to store and receive the image data directly from the imaging segment.

26. The apparatus of claim 24 or claim 25, wherein the charge coupled device is a frame transfer charge coupled device.

27. The apparatus of claim 24 or claim 25, wherein the charge coupled device is an interline transfer charge coupled device.

28. The apparatus of any one of claims 25 to 27, wherein the storage segment is dimensioned to store only the first image data and not the second image data at any given time.

29. A method for imaging a sample on a sample platform with an imaging device substantially as hereinbefore described with reference to the accompanying drawings.

30. An apparatus for imaging a sample substantially as hereinbefore described with reference to the accompanying drawings.

Description:

IMAGING OF AREAS

All documents and on-line information cited herein are incorporated by reference in their entirety.

TECHNICAL FIELD

The present invention relates to an apparatus and method for sample imaging, for example fluorescence microscopy of few or single molecules in microarray samples.

BACKGROUND ART

Imaging apparatus and methods can be used to obtain detailed images of a sample which is to be analysed. This is often done by imaging small areas of the sample in detail and combining images of these small areas to obtain a single detailed image of the whole or a larger part of the sample. Some of these imaging techniques use single dye molecule spectroscopy, single quantum dot spectroscopy, and related types of ultra-sensitive microscopy and spectroscopy. These are techniques that are used in many laboratories worldwide. There are few approaches that apply these techniques to microarray analysis.

Microarray experiments take many forms, but a typical example would involve the determination of specific molecular events, e.g. by fluorescence microscopy, resulting from the application of a sample to the surface of a substrate, e.g. a glass slide, where the surface encompasses one or more entities that may react with a component of the sample being tested. A common microarray analysis method images the emission of two spectrally distinct dyes (e.g., Cy3 and Cy5, emitting around 570 nm and 670 nm, respectively). Most commercial fluorescence scanners are based on single-point detection, although increasingly there are also CCD-based systems. The typical linear pixel resolution is about 5 to 10 μm. Most commercial microarray scanners are operated in essentially an analogue reading mode, even though the data is digitally stored (16-bit TIFF files are the norm) and processed. This is because it is only the intensity of the signal that is interpreted, e.g., intensities between experiments carried out on the same microscope slide are compared.

However it is possible, using high-resolution optics and low densities of fluorescent molecules, to spatially discriminate and image single molecules. This comprises an entirely digital method, since intensities are quantised and comparable on an absolute basis between different slides. Apart from possible offset counts of the CCD (dark count, configured offsets, etc.), one molecule may result in a particular CCD count, whilst two molecules may result in a count of double that of one molecule. There exist alternative models that describe how the number of single molecules can be extracted from the fluorescence signal stemming from multiple single molecules, even within the same diffraction-limited location (Mutch, S., et al., "Deconvolving Single-Molecule Intensity Distribution for Quantitative Microscopy Measurements", Biophysical Journal, Vol. 92 (April 2007), pp. 2926-2943).

One of the key experimental considerations of single molecule spectroscopy is the use of a high spatial resolution, approaching the diffraction limit or even exceeding it. Techniques currently used to increase the spatial resolution include wide-field microscope optics, conventional as well as specialised confocal microscopy (e.g., 4Pi and stimulated emission depletion microscopy), scanning near-field optical microscopy (SNOM or NSOM), a method that uses a new Fundamental Resolution Measure (FREM) that is not the Rayleigh criterion (PNAS, March 21, 2006, Vol. 103, No. 12, 4457-4462) and Photoactivated Localization Microscopy (PALM, Science Express online publication, 10 August 2006).

A particular type of sample imaging employs a method referred to as "image tiling", whereby a succession of images of different areas of the sample, covering the entire sample area, is captured. These images can subsequently be "tiled" using processing apparatus to obtain an image of the entire sample. Such image "tiling" methods are described in U.S. Patent No. 4,760,385 and implemented in a Slide Scanner produced by Bacus Laboratories Inc. (BLISS).

For each image that is acquired, the sample must be repositioned relative to the imaging device so that a new area of the sample can be imaged, and the imaging optics must focus and tilt the sample platform to obtain an optimum image on the CCD. Generally, the sample is illuminated with light, which may be used for, among other things, reflection, transmission, or excitation of fluorescence; the light coming from the sample and incident on the CCD may originate from transmission, reflection, or fluorescence, to name only a few options. More advanced techniques can use coherent and incoherent Raman scattering, time-resolved fluorescence, optical frequency mixing (including up-conversion), etc. A conventional charge coupled device (CCD) comprises rows of photodiodes implemented as P-N junctions in a semiconductor substrate.[1] Charge accumulates in the photodiodes by photoelectric conversion from light incident on the photodiodes. Each photodiode thus represents a given pixel of an image which is being captured by the CCD. Since the photodiodes are arranged in rows, the charge is read out of the photodiodes into a column shift register along each row one pixel at a time over a period of time. This means that, if the image being captured by the CCD changes whilst the image data is being read out of the rows, the image data will become corrupted as it is read out along the row, because the photodiodes continue to receive charge by photoelectric conversion as the row is being read. Such a CCD arrangement is known as a full frame transfer (FFT) CCD and must be used with an electro-mechanical shutter mechanism to avoid the problem of blurring and image data corruption. Moreover, all of the image data for all of the rows of the CCD must be read out into memory before the next frame is acquired. This read-out step takes a relatively long period of time in comparison to the image capture step and limits the speed with which images can be captured. In a conventional sample imaging apparatus and method employing a full frame transfer CCD with image tiling, the following steps take place.

[1] "Characteristics and use of FFT-CCD area image sensor" (Hamamatsu Technical Information SD-25, http://sales.hamamatsu.com/assets/applications/SSD/Characteristics_and_use_of_FFT-CCD.pdf)

First, the sample platform is moved and possibly tilted so that a given area of the sample can be imaged. The imaging optics are adjusted so that an image of the area is focussed on the imaging section of the CCD. After this has occurred, the sample is illuminated and image data are acquired at the CCD from charge accumulating in the rows of photodiodes. At some point after illumination has stopped, the charge in the rows of photodiodes is transferred to the column shift register. Before the sample is re-exposed for imaging of a new area, the charge in all of the photodiodes must be read out into the shift register. The shift register transfers the image data via an output section into attached memory. The process of reading the image data out of each row of the CCD into the shift register takes a relatively long period of time, but it must be completed before the sample is moved. When the image data has been read out, the sample platform can be moved and tilted and focussing takes place. This then allows imaging of a subsequent area of the sample.
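The sequential timing just described can be sketched numerically. The step durations below are illustrative assumptions only, not figures from any particular scanner:

```python
# Hypothetical per-tile step durations (seconds) for a full frame transfer CCD scanner.
T_POSITION = 0.5   # move (and possibly tilt) the sample platform
T_FOCUS = 0.2      # adjust the imaging optics
T_EXPOSE = 0.1     # illuminate the sample and accumulate charge
T_READOUT = 0.4    # read every row of the CCD out into memory

def sequential_scan_time(n_tiles: int) -> float:
    """Total time when each step must finish before the next begins,
    as in the conventional image tiling sequence above."""
    return n_tiles * (T_POSITION + T_FOCUS + T_EXPOSE + T_READOUT)

def overlapped_scan_time(n_tiles: int) -> float:
    """Total time if the read-out of tile k ran concurrently with the
    positioning and focussing for tile k+1: between exposures, only the
    longer of the two concurrent branches contributes."""
    prepare = T_POSITION + T_FOCUS
    return (prepare + n_tiles * T_EXPOSE
            + (n_tiles - 1) * max(prepare, T_READOUT) + T_READOUT)
```

For these assumed values, a 100-tile scan drops from 120 s sequentially to roughly 80 s with the overlap, because the read-out time is hidden behind the preparation of the next tile.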

The above steps of the conventional image tiling method are illustrated in a timing diagram shown in Figure 1. Block 101 shows when positioning (including tilting) of the sample platform takes place. Block 102 shows when focussing of the sample area takes place.

Block 103 shows when illumination of the sample takes place. Block 104 shows when image acquisition in the CCD takes place and block 105 shows when image data is read out of the CCD into memory. As will be seen, illumination of the sample occurs only during the image acquisition step, so that the maximum possible exposure can be used during the image acquisition step to obtain an image.

By the term "image data", it is intended to mean any representation of the image, such as an electronic representation embodied in digital or analogue signals or a particular quantity of charge. By the term "memory", it is intended to mean any element in which the image representation is stored, such as the shift register of the CCD or other memory which is external to the CCD.

The timing values shown in Figure 1 are exemplary and should not be deemed to be limiting in any way. Anal. Chem. 76 (2004), 5960-5964 states that the time taken to image a sample using the conventional image tiling method outlined above is as follows:

T_seq = (A / δ²) [ t_readout + (t_line-shift / C) + ((t_ill + t_positioning) / (L·C)) ]

where A is the total area of the sample to be imaged, δ is the pixel size in the object plane, t_ill is the illumination time, t_positioning is the time taken to position the sample platform, t_line-shift is the time taken to shift one line of the CCD into a readout register and t_readout is the time taken to digitise each pixel from the readout register. L is the number of lines of the CCD being used and C is the number of pixels per line in the CCD. As will be appreciated, when images of a large number of areas of the sample are acquired successively, the total time taken to acquire all the images depends appreciably on the time taken to read the image data for each image out of the CCD (i.e. t_line-shift and t_readout). In addition, the paper argues, the time taken to position the sample is far greater than this time. To quote, "For example, sample shift by 20 μm with a precision according to the pixel size of 200 nm in the object plane takes about 0.2-2 s, depending on the motorized sample stage."
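As a numerical sketch of the formula above, with illustrative parameter values (these numbers are assumptions for demonstration, not figures taken from the cited paper):

```python
def t_seq(A, delta, t_readout, t_line_shift, t_ill, t_positioning, L, C):
    """Sequential tiling time: the number of pixels to be imaged (A / delta**2)
    multiplied by the effective time spent per pixel."""
    per_pixel = t_readout + t_line_shift / C + (t_ill + t_positioning) / (L * C)
    return (A / delta**2) * per_pixel

# Illustrative values: 1 cm^2 sample, 350 nm pixels in the object plane,
# a 1000 x 1000 pixel CCD, 1 us to digitise each pixel, 10 us per line shift,
# 100 ms illumination and 500 ms positioning per tile.
total = t_seq(A=1e-4, delta=350e-9,
              t_readout=1e-6, t_line_shift=10e-6,
              t_ill=0.1, t_positioning=0.5,
              L=1000, C=1000)
```

With these assumed values the per-pixel digitisation term dominates, while the per-tile positioning overhead is amortised over the million pixels of each tile.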

A rough estimation of scanning times for comparable properties of the scan result (1 cm² with single-molecule sensitivity and pixel resolution better than 350 nm) has been reported as 3.8 months for single-point detection methods, about 10 hours for image tiling methods, and only about 20 minutes using an altogether different method (time-delay and integration, or TDI) described below.[2]

For this reason, it is known to employ an imaging apparatus and method which is based on continuous scanning of a sample which is synchronised with shifting of charge in the rows of photodiodes, thereby permitting continuous scanning and imaging of a sample. An instrument which is designed to operate in this way is known as the CytoScout™, which is manufactured by Upper Austrian Research (Linz, Austria) and employs a special read-out mode (from the CCD and the scanning stage) known as the "time-delay and integration mode" (TDI mode). Another example of this type of scanner is Hamamatsu's NanoZoomer Digital Pathology System C9600. U.S. Patent No. 6,711,283 also describes a TDI sample imaging process.

[2] Sonnleitner et al., Proc. SPIE 5699 (2005), 202-210.

WO 00/25113 describes an apparatus and method of sample scanning that relies on a time delay and integration (TDI) imaging process. The described process utilises a conventional CCD in conjunction with continuous movement of the sample platform. Alternatively, there are specialised TDI CCD cameras such as Hamamatsu's C10000-201 or C10000-301. The sample platform is moved underneath the CCD at a speed that synchronises the movement of the charge on the CCD with the movement of the sample image on the sensor; for example, the linear speed of the sample motion, scaled by the magnification of the imaging optics, is identical to the speed of the line transfer. This avoids the blurring that would normally occur with a conventional CCD when the sample is moved. In addition, the time taken to image an entire sample is reduced because the image data is read out of the CCD at the same time as the sample platform is being moved. However, such an apparatus and method requires complex and accurate control hardware to maintain the image quality. In addition, image blurring may occur due to the continuous movement of the sample while the charge is being shifted in steps. The present invention aims to solve the aforementioned problems. In particular, it is an aim of the invention to maintain a large duty cycle, i.e., to keep the time required to image a large sample to a minimum.
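The TDI synchronisation condition described above (stage speed matched, through the optical magnification, to the CCD line-transfer speed) can be written out directly; the numbers below are hypothetical:

```python
def tdi_stage_speed(pixel_pitch_m, line_rate_hz, magnification):
    """Stage speed (m/s) at which the image of the sample crosses the sensor
    at exactly one pixel row per line-transfer period, so that the charge
    packets and the image features move together."""
    image_plane_speed = pixel_pitch_m * line_rate_hz  # speed of the charge packets
    return image_plane_speed / magnification          # back-projected to the stage

# Example: 6.5 um pixel pitch, 50 kHz line rate, 20x magnification.
v = tdi_stage_speed(6.5e-6, 50_000, 20)   # stage speed in m/s
```

Any mismatch between this speed and the actual stage speed smears the image along the scan direction, which is the synchronisation burden the present method avoids.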

In particular, it is an aim of the present invention to provide an apparatus and method for sample imaging that does not rely on complex and accurate control hardware. Moreover, it is an aim of the present invention to provide an apparatus and method that achieves faster scanning than is currently achieved from apparatus and methods employing an "image tiling" process.

By way of background information, it is known to use two areas of connected diode junctions in a CCD.[1] One of the areas is an active (imaging) area and the other area is covered, so that it is not exposed to incident light, and is used for storage and charge transfer. This type of CCD is known as a frame transfer CCD. In this type of CCD, for each row of photodiodes, there is an imaging section in the row and a storage section in the row. For each frame that is imaged by the CCD, the charge that accumulates in the photodiodes of the imaging section can be transferred at high speed to the storage section along the row. The image data in the storage section can then be read out whilst the imaging section is exposed to a new image.

Another type of CCD is known as an interline transfer CCD[1], which also contains P-N junction photodiodes arranged in rows. In contrast to the other types of CCD described above, each photodiode is connected to a cell of a shift register via a transfer gate. The shift registers are not exposed to the incident light. On exposure of the CCD, charge from photoelectric conversion accumulates in the photodiodes. After a period of time, the charge in each of the photodiodes in the rows is transferred to a corresponding cell in the shift register on activation of a transfer gate located between each row and its corresponding shift register. The shift registers for the entire CCD can then output the image data to memory whilst the photodiodes are exposed to a new image.
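The interline read-out overlap just described can be modelled as a toy pipeline; the function below is a conceptual sketch, not a driver for any real sensor:

```python
from collections import deque

def interline_frames(exposures):
    """Toy model of an interline transfer CCD: after each exposure, the
    photodiode charge moves through the transfer gates into the shielded
    shift registers in one fast step, so the next exposure can begin
    while the registers drain to memory."""
    registers = deque()          # shielded shift-register cells
    memory = []
    for frame in exposures:
        if registers:            # drain the previous frame to memory
            memory.append(registers.popleft())  # (overlaps the new exposure)
        registers.append(frame)  # fast gate transfer of the fresh charge
    memory.extend(registers)     # drain whatever remains at the end
    return memory
```

The model preserves frame order while never holding more than one frame in the shielded registers, mirroring the fact that the registers need only ever store the most recent frame.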

It is also known that a camera can signal the moment when the CCD is ready to be exposed to light, i.e. when the CCD is ready to begin acquisition of a new image frame. This may be communicated via an electrical signal (e.g., a TTL pulse), which can be used to strobe a light source (laser, light emitting diode, lamp), or alternatively to trigger an electro-mechanical shutter, an electro-optical shutter, or an acousto-optical modulator, in order to control either the light that illuminates the sample or the light that is emitted from the sample to strike the camera. Recent advances have made it possible to make such a signal available during the transfer of the previously acquired image to memory; one such implementation is the 'pipelined mode' of Opteon Corporation's interline scan CCDs, available in the Agility bundle. It is also known that a camera can signal the duration (i.e. the exposure time) for which the CCD is meant to acquire a new image frame.

DISCLOSURE OF THE INVENTION

It is an aim of the present invention to solve the aforementioned problems by providing a system and method that images sample areas with a "tiling" process by outputting image data from an imaging device for a given image of an area of a sample whilst the sample is being repositioned and/or whilst focussing is taking place in preparation for acquisition by the imaging device of an image of a subsequent area.

Although the apparatus and method are described here in conjunction with a single molecule scanner, it is clear that their application is not limited to single molecule scanners. In particular, the improved apparatus and method can be used in any optical imaging system in which multiple images are acquired and subsequently combined to obtain a single larger detailed image, for example in photography, microscopy, etc.

In view of the foregoing and in accordance with a first aspect of the present invention, there is provided a method for imaging a sample on a sample platform with an imaging device, comprising:

(a) preparing for acquisition of an image of an area of the sample;

(b) acquiring, at the imaging device, whilst the sample platform is stationary with respect to the imaging device, image data for an image of the area of the sample; and

(c) repeating steps (a) and (b), wherein at least some of the image data obtained during a particular acquiring step (b) are transferred from the imaging device to memory during a subsequent preparing step (a).

The step of preparing for acquisition of an image includes all the necessary preparatory steps in arranging the imaging device and sample platform to produce an image of a desired area at the imaging device before it is captured. In one sense, this means all the steps necessary to "generate" an image (i.e. set up the image for acquisition) before the image is actually acquired. This might include positioning of the sample platform and/or imaging device relative to each other, adjusting focussing optics in the imaging device, sending and receiving instructions between control units in order to set, e.g., exposure parameters, and even illumination of the sample for specialised microscopy such as Photoactivated Localization Microscopy.

Preferably, the image data is acquired at an imaging segment of the imaging device and subsequently transferred from the imaging segment to the memory. The acquisition of the image means the capturing of the image, i.e. its conversion from light to image data, commonly referred to as "exposure". The preparing step (and possibly the acquiring step) takes place in parallel with the reading out of image data from the imaging segment into memory. This reduces the overall time taken to obtain image data for a given sample area. Moreover, complex control hardware is not required to move the sample platform at the same rate as the transfer of image data from the imaging device to memory. The components of the instrument can still be considered independent modules whose interplay need not be finely tuned, whereas in TDI-based systems and methods, the CCD and the positioning stage need to be customised for each other and considered as a single unit. Imaging diffraction-limited features, which are ideally round, can easily result in elliptical features on a TDI system either if the charge transfer speed and the sample transfer speed are not exactly synchronised, or if the movement of the sample stage has a small lateral component.
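One way to realise the preparing/read-out overlap described above is a simple two-stage software pipeline. The `prepare`, `expose`, `read_out` and `store` callables below are placeholders for whatever stage, camera and memory interfaces an implementation actually has:

```python
import threading

def pipelined_scan(positions, prepare, expose, read_out, store):
    """Acquire one image per position.  prepare(pos) moves and focuses;
    expose() returns data held on the sensor; read_out(data) drains it
    and store() puts it in memory.  The read-out of tile k runs on a
    background thread, concurrently with the preparation for tile k+1."""
    reader = None
    for pos in positions:
        prepare(pos)                  # overlaps the previous read-out
        if reader is not None:
            reader.join()             # sensor storage must be drained before
        data = expose()               #   the next frame is shifted into it
        reader = threading.Thread(target=lambda d=data: store(read_out(d)))
        reader.start()
    if reader is not None:
        reader.join()                 # drain the final frame
```

No speed matching between stage and sensor is needed: the only ordering constraint is the `join()` before each exposure, which enforces that the previous frame has left the sensor before a new one is captured.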

Preferably, the sample platform is positioned in step (a) for acquisition of an image from a different area of the sample to an area for which an image has been previously acquired and steps (a) and (b) are repeated continuously, preferably until images for the entire sample have been obtained. A "different" area may be: an adjacent area to the area previously imaged, an area overlapping with the area previously imaged, or an area distinct and separated from the area previously imaged (e.g. by a known distance). Thus, steps (a) and (b) are repeated N or more times, where N is, for example, 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 100, 200, 300, 400, 500, 1000, 2000, 3000, 4000, 5000, 6000, 7000, 8000, 10000 or more.

Preferably, an image is formed on the imaging segment along an optical axis of the imaging device and the sample platform has a surface on which the sample is located and the step of moving comprises moving the surface substantially perpendicular to the optical axis. The surface of the sample platform can be arranged vertically, horizontally or at any angle with respect to the ground. Accordingly, the imaging device will be arranged relative to the platform along its optical axis in the way described above. It will, however, be appreciated that the actual optical path may not be linear. In one embodiment of the invention, there is the further step, after the positioning step (a), of focussing an image of the area of the sample that is to be imaged on the imaging segment. Preferably, during the acquiring step, the method may also include the step of illuminating the sample that is to be imaged. In this way, the sample is only illuminated when image acquisition is actually taking place, thereby ensuring that a maximum amount of light is transferred from the sample during exposure.

In one embodiment of the present invention, the imaging device includes a charge coupled device. Although the present application describes technical details of CCD sensors, it will be appreciated that this discussion does not limit the invention to this type of sensor. In particular, CMOS sensors and, in particular, active pixel sensors (APS) might be used instead.

The step of acquiring (b) may include storing the image data for an image by photoelectric conversion of light in the imaging segment and transferring the image data from the imaging segment to a storage segment in the charge coupled device, wherein the storage segment is directly connected to the imaging segment. Advantageously, the image data may be transferred from the storage segment to the memory during a subsequent preparing step (a) and acquiring step (b).

By the term "image data", it is intended to mean any representation of the image, such as an electronic representation embodied in digital or analogue signals or a particular quantity of charge. Since the imaging segment and storage segment are directly connected to each other in the charge coupled device, the transfer of data from the imaging segment into the storage segment is faster than the transfer of data from the storage segment into the memory. This means that the charge coupled device is capable of acquiring a new image in the imaging segment as soon as the image data has been transferred to the storage segment. Whilst the sample platform is being repositioned for acquisition of a second image, the image data for the first image can be transferred out of the storage segment into the memory, and this can continue to occur whilst the second image data are being acquired in the imaging segment. However, transfer of image data out of the storage segment into the memory must be completed before acquisition of the second image data is complete.
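The completion constraint stated above can be expressed as a simple inequality; the timings below are generic placeholders:

```python
def overlap_is_safe(t_prepare, t_expose, t_storage_readout):
    """True if the storage segment is fully drained into memory by the time
    the next frame's fast transfer into the storage segment occurs.  The
    drain starts when preparation starts and may run through the exposure."""
    return t_storage_readout <= t_prepare + t_expose

assert overlap_is_safe(0.5, 0.1, 0.4)        # drained before the next frame arrives
assert not overlap_is_safe(0.2, 0.1, 0.5)    # drain too slow: data would be overwritten
```

If the inequality fails, either the preparation and exposure must be lengthened or the storage read-out accelerated; otherwise the incoming frame would corrupt data still waiting in the storage segment.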

In one embodiment of the present invention, the charge coupled device is an interline charge coupled device. In an alternative embodiment of the present invention, the charge coupled device is a frame transfer charge coupled device but not a full frame transfer charge coupled device. Preferably, the imaging device can output, during transfer of a previous image into memory, a signal indicative of the time when the charge coupled device is available for exposure to a new image, and this signal preferably has the duration of the desired exposure time. The imaging device (including the CCD) may be able to respond to an external trigger signal during the transfer of the previous image to memory.

Preferably, the step of acquiring comprises storing in the storage segment of the imaging device only a quantity of image data corresponding to the sample area which was imaged immediately prior to the storing step, without retaining image data for any other sample area that has been imaged. This ensures that image data can be transferred from the storage segment to the memory quickly during the positioning step (a), or alternatively during the positioning step (a) and acquiring step (b) combined. Moreover, the size and complexity of the storage segment of the charge coupled device can be minimised.

In a second aspect of the present invention, there is provided a computer program comprising computer-executable instructions for carrying out the method described above.

In a third aspect of the present invention, there is provided apparatus for imaging a sample, comprising: a control unit; a sample platform adapted to support the sample; and an imaging device configured to acquire, in response to signals from the control unit, image data for an image of an area of the sample, wherein the imaging device is further configured to transfer, from the imaging device to a memory, first image data for a first image of a first area of the sample whilst preparation for acquisition of a second image of a second area of the sample occurs, wherein the sample platform and imaging device are stationary relative to each other during acquisition of image data.

In one embodiment of the present invention, the imaging device comprises an imaging segment at which the image data is acquired.

Preferably, the control unit is connected to the imaging device and the sample platform, and is configured to move the sample platform relative to the imaging device from a first position to a second position to prepare for acquisition of the second image at the imaging segment whilst the first image data is being transferred from the imaging device to the memory.

By the term "memory", it is intended to mean any element in which the image representation is stored, such as the shift register of the CCD or other memory which is external to the CCD.

Thus, movement of the sample platform takes place in parallel with the reading out of image data from the storage segment into memory, which reduces the overall time taken to obtain image data for a given sample area. Even more preferably, movement of the sample platform, and illumination and exposure of a second sample area onto the imaging segment, take place in parallel with the reading out of first image data from the storage segment into memory. Moreover, complex control hardware is not required to move the sample platform at the same rate as the transfer of image data from the imaging device to the memory.

Preferably, the control unit is configured to control the imaging device to acquire in the imaging segment second image data for a second image of a second area of the sample corresponding to the second position immediately on completion of the movement of the sample platform from a first position to a second position.

In this way, there is no redundant time in the sampling process during which acquisition of an image is being delayed by having to wait for transfer of image data from the imaging segment to the memory.

The sample platform may have an imaging surface, substantially perpendicular to an optical axis of the imaging device, wherein the sample is located on the imaging surface, which is configured to move laterally in a direction perpendicular to said optical axis. In addition, the sample platform may be configured to tilt around one or more axes perpendicular to said optical axis.

In one embodiment of the present invention, the apparatus, preferably the imaging device itself, further comprises imaging optics between the imaging segment and the sample platform, wherein the control unit is configured to control the imaging optics to focus an image of an area of the sample onto the imaging segment, and wherein the control unit is further configured to control the imaging device to acquire in the imaging segment second image data for a second image of a second area of the sample immediately on completion of focussing of the second image on the imaging segment.

The imaging optics may be advantageously configured to detect single molecules from the sample on the imaging segment of the imaging device by detection of the response of fluorescent dyes on molecules in the sample. This may mean that the imaging optics is configured to resolve an element in the sample at the diffraction limit, preferably less than 500nm for visible light. More preferably, the imaging optics' magnification is matched to the pixel size of the imaging sensor such that, in accordance with Nyquist's theorem, the effective pixel size is substantially smaller than the diffraction limit. In one embodiment of the present invention, the imaging device has an effective pixel resolution in the range of 50nm to 500nm. More particularly, the imaging device may have an effective pixel resolution in the range of 125nm to 250nm. The effective pixel resolution is calculated by dividing the physical pixel size of the imaging sensor by the lateral magnification of the imaging optics.
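The effective pixel resolution calculation can be sketched as follows. The 7.4 micrometre sensor pixel and 40x magnification used here are hypothetical illustrative values, not figures from the application:

```python
# Effective pixel resolution as defined above: physical pixel size of the
# imaging sensor divided by the lateral magnification of the imaging optics.

def effective_pixel_size_nm(physical_pixel_um, magnification):
    return physical_pixel_um * 1000.0 / magnification

# Hypothetical example: a 7.4 um sensor pixel behind 40x optics.
size_nm = effective_pixel_size_nm(7.4, 40.0)
print(size_nm)  # 185.0 nm, within the preferred 125nm-250nm range

# Nyquist criterion: the effective pixel should be no larger than half
# the diffraction limit (taken here as 500nm for visible light).
print(size_nm <= 500.0 / 2)  # True
```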

Preferably, completion of the focussing for the second image occurs after transfer of the first image data to the memory. This means that the imaging segment is ready to acquire the second image data for a second image of a second area of the sample as soon as focussing has been completed. In this way, there is no redundant time in the sampling process during which acquisition of an image is being delayed by having to wait for transfer of image data from the imaging segment to the memory.

The charge coupled device may comprise a storage segment which is directly connected to the imaging segment. The imaging segment may be configured to acquire the image data for a first image by photoelectric conversion of light, and the storage segment is configured to receive the image data directly from the imaging segment and store it. The storage segment is configured to transfer the image data to the memory.

Since the imaging segment and storage segment are directly connected to each other within the imaging device, the transfer of data from the imaging segment into the storage segment is faster than the transfer of data from the storage segment into the memory. This means that the charge coupled device is capable of acquiring a new image in the imaging segment as soon as the image data have been transferred to the storage segment. Whilst the sample platform is being repositioned for acquisition of the second image, the first image data can be transferred out of the storage segment into the memory and this can continue whilst the second image data are being acquired in the imaging segment. However, transfer out of the storage segment into the memory must be completed before acquisition of the second image data is complete. In one embodiment of the present invention, the charge coupled device is an interline charge coupled device. In an alternative embodiment of the present invention, the charge coupled device is a frame transfer charge coupled device.

Preferably, the storage segment is dimensioned to store only the first image data and not the second image data. This ensures that image data can be transferred from the storage segment to the memory quickly during the positioning step (a). Moreover, the size and complexity of the storage segment of the charge coupled device can be minimised.

The term "comprising" encompasses "including" as well as "consisting" e.g. a composition "comprising" X may consist exclusively of X or may include something additional e.g. X + Y.

BRIEF DESCRIPTION OF THE DRAWINGS

Figure 1 shows a timing diagram illustrating the steps undertaken in sample imaging according to the prior art;

Figure 2 shows a sample imaging apparatus according to the present invention;

Figure 3 shows the components of the sample imaging apparatus of Figure 2;

Figure 4 shows the components of a charge coupled device used in the imaging apparatus of Figure 2;

Figure 5 shows a flow diagram illustrating the method of sample imaging according to the present invention;

Figure 6 shows a timing diagram illustrating the steps undertaken in sample imaging according to the present invention; and

Figures 7a to 7e show experimental data of a positioning stage suitable for use with the imaging apparatus of the present invention.

MODES FOR CARRYING OUT THE INVENTION

Referring to figures 2 and 3, apparatus 200 of the present invention comprises a scanner 210 comprising an imaging device 220 which includes an interline charge coupled device (CCD) 222 connected to a control unit 240. The scanner 210 also comprises imaging platform 230 with an imaging surface 232 which is adapted to receive a sample 234 which is to be imaged. The sample 234 is located on the surface of a microscope slide 235. The

scanner 210 also comprises a platform positioning unit 236 which is able to move the imaging platform 230 horizontally and vertically in three dimensions X, Y and Z and tilt the imaging platform 230 in rotational directions θ and φ. In an alternative embodiment (not depicted), the platform positioning unit 236 is able to move the imaging platform 230 horizontally and vertically in two dimensions X and Y, and tilt the imaging platform 230 in rotational directions θ and φ, while an additional positioning unit moves the imaging optics, or alternatively part(s) of the imaging optics, in the Z direction.

The scanner 210 includes a control unit 240 which comprises an embedded microprocessor 310 and memory 326 connected thereto. The control unit 240 is connected to the platform positioning unit 236 to control movement of the imaging platform 230 in accordance with computer program instructions executed by the microprocessor 310. The control unit 240 is also connected to a processing device 330 which can access the memory 326 and control the scanner 210 remotely via the microprocessor 310.

The scanner 210 also includes imaging optics 250 included with the imaging device 220 and located between the charge coupled device 222 and the imaging surface 232. The imaging optics 250 are operable under the control of the control unit 240 to automatically focus an image of an area 234a of the sample 234 on the charge coupled device 222. The imaging device 220, including the imaging optics 250, has an optical axis A. The operation of the imaging optics 250 and the apparatus and method for automatic focussing are described in a co-pending United Kingdom patent application filed on even date under agent's reference P041316GB which is herein incorporated by reference.

Illumination means 260 also form part of the scanner 210 and is positioned so that, on activation, an excitation beam is incident on the imaging surface 232 to cause fluorescence or luminescence of the sample 234 for imaging by the imaging device 220. Both the imaging optics 250 and illumination means 260 are connected to the control unit 240 and operate under control of the microprocessor 310 in accordance with executed computer program instructions. The control unit 240 is also connected to the imaging device 220 to control the operation of the charge coupled device 222 and receive control and timing signals from the imaging device. As mentioned above, the charge coupled device 222 is an interline charge coupled device which has the structure shown in Figure 4.

The interline charge coupled device 222 contains P-N junction photodiodes 402 arranged in rows 404. Each photodiode is connected to a cell 405 of a row shift register 406 via a transfer gate 408. The row shift registers 406 are connected to a column shift register 410 along one end of the shift registers 406. The photodiodes 402 store signal charge by photoelectric conversion from light incident on the rows 404. The rows 404 of photodiodes 402 represent an imaging segment of the charge coupled device 222 and the row shift registers 406 represent a storage segment of the charge coupled device 222. The row shift registers 406 are not exposed to the incident light and do not receive signal charge by photoelectric conversion.

Each photodiode 402 corresponds to a pixel in image data that is subsequently output by the imaging device 220. Charge from photoelectric conversion accumulates in the photodiodes 402 as internal capacitance over a period of time. In this way, the rows 404 store pixel data for a frame of an image formed on the imaging segment of the charge coupled device 222. The image is of an area 234a of the sample 234. After a period of time, the charge in each of the photodiodes 402 in the rows 404 is transferred to the row shift registers 406 on activation of an adjacent transfer gate 408. Once this has occurred, the photodiodes 402 are able to receive signal charge for a subsequent frame. The charge coupled device 222 indicates to the microprocessor 310 that it is ready for exposure to an image by transmitting a TTL status signal. Moreover, the charge coupled device 222 indicates to the microprocessor 310, via a further status signal, an exposure time for acquisition of new image data. Thus, the control unit 240 has the information it needs to ensure that positioning of the sample platform 230 and adjustment of the imaging optics 250 are complete by the exposure time indicated by the charge coupled device 222.

The row shift registers 406 output the pixel data in series along each row to the column shift register 410 which transfers image data for an entire frame in series to the memory 326. The transfer of all the image data along the row and column shift registers takes significantly longer than the transfer of charge from each photodiode 402 into the row shift register 406.
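The two-speed readout path described above can be sketched as a toy model: a single parallel gate transfer moves every pixel's charge into the storage registers quickly, after which the whole frame is clocked out serially, pixel by pixel. This is a simplified sketch of the principle, not a model of any particular device:

```python
# Toy model of the interline readout path: a fast parallel transfer into
# the row shift registers, followed by a slow serial readout to memory.

def gate_transfer(photodiodes):
    """Parallel step: copy all charge into the storage registers in one
    operation and reset the photodiodes so a new exposure can begin."""
    registers = [row[:] for row in photodiodes]
    for row in photodiodes:
        for i in range(len(row)):
            row[i] = 0
    return registers

def serial_readout(registers):
    """Serial step: shift pixels out row by row, via the column shift
    register, into memory. This takes one clock per pixel."""
    memory = []
    for row in registers:
        memory.extend(row)
    return memory

frame = [[1, 2], [3, 4]]            # charge accumulated in the photodiodes
registers = gate_transfer(frame)
print(frame)                         # [[0, 0], [0, 0]]: ready to expose again
print(serial_readout(registers))     # [1, 2, 3, 4]
```

The parallel transfer is one operation regardless of frame size, whereas the serial readout scales with the number of pixels, which is why a new exposure can begin long before the previous frame has reached memory.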

The apparatus operates in accordance with the invention on the basis of computer program instructions executing in the microprocessor 310. These steps are illustrated in Figure 5.

An image counter, n, is maintained by microprocessor 310 and stored in memory 326 (or other register local to the microprocessor 310). In Figure 5, N represents the total number of images that are to be acquired.

In step 501, the control unit 240 positions the imaging platform 230 so that an area 234a of the sample 234 is imaged through the imaging optics 250 on the imaging segment of the charge coupled device 222 (i.e. rows 404 of the photodiodes 402). The imaging platform 230 is tilted and moved laterally to obtain the best image of the given area 234a of the sample 234, and the imaging optics 250 is focussed.

In step 502, the control unit 240 automatically focuses the image of the area 234a on the charge coupled device 222 by adjusting the imaging optics 250 in accordance with the method, as mentioned above and described in the co-pending PCT patent application filed on even date under agent's reference P044871WO and hereby incorporated by reference. In step 504, the control unit 240 signals to the charge coupled device 222 that an image of the area 234a should be acquired. In step 503 (which occurs during step 504), the control unit 240 directs the illumination means 260 to excite the sample 234 so that light reflected, transmitted or fluoresced (i.e. light emerging) from the surface of the sample 234 is incident on the charge coupled device 222. The light incident on the imaging segment of the charge coupled device 222 generates charge in the photodiodes 402.

After sufficient time for exposure of the charge coupled device 222 to the incident light to allow sufficient charge to build up in the photodiodes 402, the control unit 240, in step 505, operates the transfer gates 408, thereby transferring the charge from each photodiode 402 into a corresponding row shift register 406. As soon as step 505 has occurred, the charge coupled device is able to capture a new image (i.e. step 504 can take place) without affecting the previously acquired image data (now in the shift registers 406) which corresponds to the image of the previous area 234a. Generally, the subsequent area to be imaged will be adjacent to the previous area. However, this need not be the case and the scope of the present invention is not limited in this way.

Step 506 occurs in parallel with one or more of steps 501 to 504. The image data contained in the shift registers 406 for the area that was previously imaged are output into the memory 326. This occurs whilst the imaging platform 230 is being repositioned, the image is being focussed, and/or the sample is being illuminated and exposed. Step 507 also occurs in parallel with one or more of steps 501 to 506. The image data which were previously transferred into memory 326 are now output to an external memory, for example in processing device 330.
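The control flow of Figure 5 can be sketched as a loop in which the readout of the previous frame runs concurrently with the preparation of the next one. The callbacks below (`position`, `focus`, `expose`, `transfer_gates`, `read_out`) are hypothetical stand-ins for the hardware operations assigned to steps 501 to 506; this is an illustrative sketch, not the application's actual control software:

```python
import threading

# Sketch of the Figure 5 control flow: readout to memory (step 506) for
# frame n runs on a worker thread while frame n+1 is positioned, focussed
# and exposed (steps 501 to 504). All callbacks are stand-in stubs.

def acquire_all(n_images, position, focus, expose, transfer_gates, read_out):
    reader = None
    for n in range(n_images):
        position(n)               # step 501: move the platform
        focus(n)                  # step 502: adjust the imaging optics
        expose(n)                 # steps 503/504: illuminate and acquire
        if reader is not None:    # readout of frame n-1 must be finished
            reader.join()         # before the storage segment is reused
        transfer_gates(n)         # step 505: imaging -> storage segment
        reader = threading.Thread(target=read_out, args=(n,))
        reader.start()            # step 506 overlaps the next iteration
    if reader is not None:
        reader.join()             # drain the final frame
```

The single `join()` before the gate transfer is the whole synchronisation requirement: the storage segment may only be overwritten once its previous contents have reached memory.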

Steps 501 to 506 are repeated in the way described above until all areas of the sample 234 have been imaged, or until all areas which it is desired to image have been imaged.

The image data, which represents a succession of images obtained by the charge coupled device 222 (where each image corresponds to an area of the sample 234), are stored in memory 326. On completion of imaging of the desired amount of the sample 234, the image

data are transmitted to the processing device 330 which receives the image data for analysis and/or output on a display screen.

Figure 6 shows a timing diagram of steps 501 to 505 and 506 described above in accordance with the present invention. As will be seen from Figure 6, the time required for an image acquisition cycle in the present invention might be 95ms which is less than for the prior art cycle shown in Figure 1.

The timing values shown in Figure 6 are exemplary and should not be deemed to be limiting in any way.

Example of positioning with a micropositioning stage

The imaging stage used in this example was a Physik Instrumente (PI) M-663 with C-856 controller. Figures 7a to 7c show experimental data of the speed and precision of the stage. In this example, the step size was chosen to be constant, and the movement was carried out in a step-and-settle mode. The step size was 150 micrometres for each step, and the steps were carried out in the forward and backward directions. It is important to note that the steps were carried out with a positioning accuracy of better than 400nm (root mean square for 250 data points). In addition, the absolute position of the stage is known by means of an internal reference of the stage. The typical positioning time was about 25ms (root mean square for 250 data points; the first data point shows a longer time due to the nature of the measuring program).
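The 400nm figure above is a root-mean-square statistic over the measured positioning residuals. A minimal sketch of that calculation; the residual values below are made-up illustrative numbers, not the measured data set:

```python
import math

# Root-mean-square of a set of positioning errors, as quoted above
# (better than 400nm RMS over 250 data points in the experiment).

def rms(errors_nm):
    return math.sqrt(sum(e * e for e in errors_nm) / len(errors_nm))

residuals = [120.0, -80.0, 300.0, -150.0, 60.0]   # nm, hypothetical values
print(rms(residuals) < 400.0)  # True: within the quoted accuracy
```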

Figures 7d and 7e show experimental data of the stage position and the time taken to focus at each position. The focus mechanism used in this example is described in the co-pending PCT patent application filed on even date under agent's reference P044871WO; this data set is only used to illustrate that these short positioning and focus times can be achieved, and their precise implementation is not important for the present invention.

Example of image acquisition

An Opteon camera with the Agility bundle, used in accordance with the present invention, has a special readout mode called "pipelining". This mode allows special timing of images, which can be used for the rapid scheduling of images.

"Conventional" readout of the camera

In this mode, an image is exposed and transferred out of the CCD chip; the next exposure can be triggered only once the transfer is complete.

"Pipelined" readout of the camera

In this mode, an image is exposed and transferred out from the CCD chip; the next exposure is triggered before the transfer is complete, and the trigger time is timed so that the exposure and the transfer finish at the same time; once the first transfer is complete, the next transfer starts.
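The trigger time in "pipelined" mode therefore follows directly from the readout and exposure durations. A minimal sketch of that scheduling rule (the 85ms and 20ms values are example figures):

```python
# In "pipelined" mode the next exposure is triggered so that it finishes
# exactly when the outgoing transfer finishes:
#   trigger = transfer_start + readout_time - exposure_time

def pipelined_trigger_ms(transfer_start_ms, readout_ms, exposure_ms):
    return transfer_start_ms + readout_ms - exposure_ms

# e.g. an 85ms readout with a 20ms exposure: trigger 65ms after the
# transfer starts, so both complete at the 85ms mark.
t = pipelined_trigger_ms(0.0, 85.0, 20.0)
print(t)  # 65.0
```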

Where 3 exposures were used, the time saving for the pipelined mode was twice the exposure time, which is as expected. For systems with N+1 exposures, the potential time saving is N exposure times. In one example, the camera has a readout time of 85ms. We have shown above that sample positioning can be achieved in 25ms, and automatic focus can be done in less than 40ms (as shown in Figure 7e). When exposure times of 20ms or less are used, the potential time saving can approach (when used in accordance with the present invention):

$$t_{\text{saving}} = 1 - \lim_{N\to\infty}\frac{t_{501} + t_{502} + t_{503/504} + t_{505}\cdot N}{\left(t_{101} + t_{102} + t_{103/104} + t_{105}\right)\cdot N} = 1 - \lim_{N\to\infty}\frac{25\,\text{ms} + 39\,\text{ms} + 20\,\text{ms} + 85\,\text{ms}\cdot N}{\left(25\,\text{ms} + 39\,\text{ms} + 20\,\text{ms} + 85\,\text{ms}\right)\cdot N} \approx 50\%$$

where the values $t_x$ are the time periods shown in Figures 1 and 6.
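The approach to this 50% limit can be checked numerically, using the times quoted in the text (25ms positioning, 39ms focus, 20ms exposure, 85ms readout):

```python
from fractions import Fraction

# In the pipelined case the positioning, focus and exposure overheads are
# hidden behind the readout, so they are effectively paid once; in the
# conventional case every image pays the full sum.

def saving(n_images):
    pipelined = 25 + 39 + 20 + 85 * n_images        # overheads paid once
    conventional = (25 + 39 + 20 + 85) * n_images   # paid every image
    return 1 - Fraction(pipelined, conventional)

print(float(saving(1000)))            # approaches the limit as N grows
print(float(1 - Fraction(85, 169)))   # the limit itself, about 0.497 ("50%")
```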

The data logs for the two different modes are set out below and show the performance of the camera only. In this case, the theoretical time saving is

$$1 - \frac{20\,\text{ms} + 85\,\text{ms}\cdot 3}{\left(20\,\text{ms} + 85\,\text{ms}\right)\cdot 3} \approx 12.7\%$$

whereas the experimental value is

$$1 - \frac{294.236\,\text{ms}}{334.089\,\text{ms}} \approx 11.9\%$$

There is close agreement.
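This comparison can be reproduced from the totals reported in the two logs below (3 exposures of about 20ms each, with an 85ms readout per image):

```python
# Theoretical saving for 3 pipelined exposures versus the experimental
# value computed from the two log totals below (294.236ms pipelined,
# 334.089ms conventional).

theoretical = 1 - (20 + 85 * 3) / ((20 + 85) * 3)
experimental = 1 - 294.236 / 334.089

print(round(theoretical, 3))    # 0.127
print(round(experimental, 3))   # 0.119
```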

Log of "conventional" readout mode

Getting camera status
Open status completed in 0ns
Opening Camera.
Open completed in 7.021ms
Open status completed in 0ns
The camera's serial number is 12448
The camera's product code is B2L70C
Camera firmware version is 3.24
Camera firmware date is 2006/03/27
The camera's maximum frame rate is 15 frames per second.
The readout time for a 640x480 image is 19518us.
The readout time for a 2048x2048 image is 62559us.
The camera has 1 color plane(s) and 2 tap(s) per color plane.
The gain range on tap 0 is [0.596156, 37.2156].
The gain range on tap 1 is [0.574864, 35.8864].
The camera gain range is [0.596156, 35.8864].
The offset range on tap 0 is [-0.0625, 15.875].
The offset range on tap 1 is [-0.0625, 15.875].
The camera offset range is [-0.0625, 15.875].
The image width range is [4, 2048].
The image height range is [1, 2048].
The horizontal binning values are [1, 2, 4, 8].
The vertical binning values are [1, 2, 3, 4, 5, 6, 7, 8].
The image pixel depth is 12
Camera supports 8 bit grayscale images
Camera supports 16 bit grayscale images
The agility bundle is present
The fidelity bundle is present
The LUT bundle is present
The optiport bundle is present
The trigger bundle is present
Making image objects
Beginning acquisition of 3 16 bit grayscale images
Image acquired 107.644ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 2.000us
Exposure period was 20.095ms
Readout time was 85.090ms
Image armed at Mon Sep 03 15:01:55.943153 +0000 2007
Image triggered at Mon Sep 03 15:01:55.943169 +0000 2007
Image acquired at Mon Sep 03 15:01:56.048420 +0000 2007
Arm & Trigger timestamp error = 24.000us
Image acquired 196.583ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 2.000us
Exposure period was 20.095ms
Readout time was 85.106ms
Image armed at Mon Sep 03 15:01:55.959470 +0000 2007
Image triggered at Mon Sep 03 15:01:56.048403 +0000 2007
Image acquired at Mon Sep 03 15:01:56.153682 +0000 2007
Arm & Trigger timestamp error = 24.000us
Image acquired 203.275ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 2.000us
Exposure period was 20.094ms
Readout time was 85.107ms
Image armed at Mon Sep 03 15:01:56.057900 +0000 2007
Image triggered at Mon Sep 03 15:01:56.153659 +0000 2007
Image acquired at Mon Sep 03 15:01:56.258928 +0000 2007
Arm & Trigger timestamp error = 24.000us
Acquired 3 images in 334.089ms
Closing camera.
Close completed in 0ns
Open status completed in 0ns

Log of "pipelined" readout mode

Getting camera status
Open status completed in 0ns
Opening Camera.
Open completed in 7.022ms
Open status completed in 0ns
The camera's serial number is 12448
The camera's product code is B2L70C
Camera firmware version is 3.24
Camera firmware date is 2006/03/27
The camera's maximum frame rate is 15 frames per second.
The readout time for a 640x480 image is 19518us.
The readout time for a 2048x2048 image is 62559us.
The camera has 1 color plane(s) and 2 tap(s) per color plane.
The gain range on tap 0 is [0.596156, 37.2156].
The gain range on tap 1 is [0.574864, 35.8864].
The camera gain range is [0.596156, 35.8864].
The offset range on tap 0 is [-0.0625, 15.875].
The offset range on tap 1 is [-0.0625, 15.875].
The camera offset range is [-0.0625, 15.875].
The image width range is [4, 2048].
The image height range is [1, 2048].
The horizontal binning values are [1, 2, 4, 8].
The vertical binning values are [1, 2, 3, 4, 5, 6, 7, 8].
The image pixel depth is 12
Camera supports 8 bit grayscale images
Camera supports 16 bit grayscale images
The agility bundle is present
The fidelity bundle is present
The LUT bundle is present
The optiport bundle is present
The trigger bundle is present
Making image objects
Beginning acquisition of 3 16 bit grayscale images
Image acquired 108.274ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 2.000us
Exposure period was 20.094ms
Readout time was 85.105ms
Image armed at Mon Sep 03 15:01:48.863113 +0000 2007
Image triggered at Mon Sep 03 15:01:48.863128 +0000 2007
Image acquired at Mon Sep 03 15:01:48.968395 +0000 2007
Arm & Trigger timestamp error = 24.000us
Image acquired 177.118ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 128.000us
Exposure period was 20.094ms
Readout time was 85.105ms
Image armed at Mon Sep 03 15:01:48.879426 +0000 2007
Image triggered at Mon Sep 03 15:01:48.968378 +0000 2007
Image acquired at Mon Sep 03 15:01:49.053672 +0000 2007
Arm & Trigger timestamp error = 24.000us
Image acquired 161.601ms after launch.
Camera waited 0ns for trigger.
Sensor image dump took 128.000us
Exposure period was 20.094ms
Readout time was 85.105ms
Image armed at Mon Sep 03 15:01:48.980118 +0000 2007
Image triggered at Mon Sep 03 15:01:49.053665 +0000 2007
Image acquired at Mon Sep 03 15:01:49.138932 +0000 2007
Arm & Trigger timestamp error = 24.000us
Acquired 3 images in 294.236ms
Closing camera.
Close completed in 0ns
Open status completed in 0ns

It will be clear to the man skilled in the art that the present invention has been described by way of example only, and that modifications of detail can be made within the spirit and scope of the invention.