

Title:
METHOD OF STANDARDIZING IMAGE PIXEL VALUES IN A MICROSCOPE-BASED SYSTEM
Document Type and Number:
WIPO Patent Application WO/2024/020406
Kind Code:
A1
Abstract:
A computer implemented method for standardizing image pixel values in a microscope-based system for pattern illumination, the method comprising: obtaining an initial image of a target field of view (FOV) of a sample with an imaging assembly of the microscope-based system; selecting a standardizing method; applying the standardizing method to the initial image to transform the initial image into a standardized image having standardized image pixel values; generating a mask pattern for the target FOV; and controlling an illuminating assembly of the microscope-based system to illuminate the target FOV of the sample with the mask pattern.

Inventors:
LIAO JUNG-CHI (TW)
HUANG CHUN KAI (TW)
Application Number:
PCT/US2023/070437
Publication Date:
January 25, 2024
Filing Date:
July 18, 2023
Export Citation:
Assignee:
SYNCELL TAIWAN INC (CN)
LIAO JUNG CHI (CN)
International Classes:
G02B21/36; G02B21/06; G06T7/11; G06T7/33; G06V10/141; G06V10/22; G06V10/26; G06T3/40; G06V10/50
Foreign References:
US20190339456A12019-11-07
US20180203221A12018-07-19
US8923568B22014-12-30
US6697526B12004-02-24
US20210224954A12021-07-22
US20170154236A12017-06-01
US20210191094A12021-06-24
US20140152794A12014-06-05
US20110050745A12011-03-03
Attorney, Agent or Firm:
THOMAS, Justin (US)
Claims:
CLAIMS

What is claimed is:

1. A computer implemented method for standardizing image pixel values in a microscope-based system for pattern illumination, the method comprising: a. obtaining an initial image of a target field of view (FOV) of a sample with an imaging assembly of the microscope-based system; b. selecting a standardizing method; c. applying the standardizing method to the initial image to transform the initial image into a standardized image having standardized image pixel values; d. generating a mask pattern for the target FOV; and e. controlling an illuminating assembly of the microscope-based system to illuminate the target FOV of the sample with the mask pattern.

2. The method of claim 1, wherein the standardizing method further includes a normalization process for processing the initial image to the standardized image.

3. The method of claim 2, wherein the standardizing method includes a normalization process for processing the initial image to a normalized image.

4. The method of claim 3, wherein the standardizing method further includes an image enhancement process for processing the normalized image to the standardized image.

5. The method of claim 2, wherein the normalization process is performed based on an algorithm: Vout = Vin × (Vmaxout / Vmaxin), wherein Vout is an output pixel value of the normalized image, Vin is a pixel value of the initial image, Vmaxout is a maximum pixel value of the normalized image, and Vmaxin is a maximum pixel value of the initial image.

6. The method of claim 4, wherein the image enhancement process is selected from a group consisting of a high pass filter method, a low pass filter method, a band pass filter method, a band reject filter method, a pixel value transformation method, an image transformation method, a histogram-based pixel value transformation method and a combination of at least two methods thereof.

7. The method of claim 6, wherein the pixel value transformation method is selected from a group consisting of a linear transformation method, a log transformation method, an exponential transformation method, a gamma-correction method, a piecewise linear transformation method and a combination of at least two methods thereof.

8. The method of claim 6, wherein the image transformation method is selected from a group consisting of a Fourier spectrum transform, a Hartley transform, a discrete cosine transform, a discrete sine transform, a Walsh-Hadamard transform, a slant transform, a Haar transform, a discrete wavelet transform, a Laplacian transform, a Sobel transform, a homomorphic filter transfer function, and a combination of at least two methods thereof.

9. The method of claim 6, wherein the histogram-based pixel value transformation method is selected from a group consisting of a histogram specification method, a histogram equalization method, an adaptive histogram equalization method, a contrast limited adaptive histogram equalization method and a combination of at least two methods thereof.

10. The method of claim 4, further comprising a step of calculating an image histogram from the initial image, wherein the image histogram plots a number of pixels in the initial image (vertical axis) with a pixel value (horizontal axis).

11. The method of claim 10, further comprising a step of identifying a minimum pixel value and a maximum pixel value in the image histogram.

12. The method of claim 11, wherein the step of transforming the initial image is transforming the initial image by applying the standardizing method to the initial image based on the minimum pixel value and the maximum pixel value.

13. The method of claim 1, wherein each step of the method is processed by the microscope-based system automatically in real-time.

14. The method of claim 1, wherein more than one image is initially obtained from the microscope-based system, wherein each image obtained is based on a different field of view of the sample.

15. The method of claim 1, wherein an image obtained through a different field of view of the sample is transformed by the same standardizing method in order to have the standardized image for pattern illumination.

16. A method of standardizing image pixel values for a microscope-based system for pattern illumination, the method comprising: a. obtaining an initial image of a target field of view (FOV) of a sample with an imaging assembly of the microscope-based system, the initial image having a quantifiable pixel value for each pixel of the initial image; b. determining a maximum pixel value for a normalization process; c. normalizing the initial image into a normalized image based on the maximum pixel value, an original maximum pixel value of the initial image, and the pixel values of each pixel in the initial image; d. selecting a standardizing method; e. applying the standardizing method to the normalized image to transform the normalized image into a standardized image having standardized image pixel values; f. generating a mask pattern for the target FOV; and g. controlling an illuminating assembly of the microscope-based system to illuminate the target FOV with the mask pattern.

17. The method of claim 16, wherein the standardizing method further includes an image enhancement process for processing the normalized image to the standardized image.

18. The method of claim 17, wherein the image enhancement process is selected from a group consisting of a high pass filter method, a low pass filter method, a band pass filter method, a band reject filter method, a pixel value transformation method, an image transformation method, a histogram-based pixel value transformation method and a combination of at least two methods thereof.

19. The method of claim 18, wherein the pixel value transformation method is selected from a group consisting of a linear transformation method, a log transformation method, an exponential transformation method, a gamma-correction method, a piecewise linear transformation method and a combination of at least two methods thereof.

20. The method of claim 18, wherein the image transformation method is selected from a group consisting of a Fourier spectrum transform, a Hartley transform, a discrete cosine transform, a discrete sine transform, a Walsh-Hadamard transform, a slant transform, a Haar transform, a discrete wavelet transform, a Laplacian transform, a Sobel transform, a homomorphic filter transfer function, and a combination of at least two methods thereof.

21. The method of claim 18, wherein the histogram-based pixel value transformation method is selected from a group consisting of a histogram specification method, a histogram equalization method, an adaptive histogram equalization method, a contrast limited adaptive histogram equalization method and a combination of at least two methods thereof.

22. The method of claim 17, further comprising a step of calculating an image histogram from the initial image, wherein the image histogram plots a number of pixels in the initial image (vertical axis) with a pixel value (e.g., a particular brightness or tonal value) (horizontal axis).

23. The method of claim 22, further comprising a step of identifying a minimum pixel value and a maximum pixel value in the image histogram.

24. The method of claim 23, wherein the step of transforming the initial image is transforming the initial image by applying the standardizing method to the initial image based on the minimum pixel value and the maximum pixel value.

25. The method of claim 16, wherein the initial image is obtained using a microscope-based system comprising: a. a microscope comprising an objective and a stage, wherein the stage is configured to be loaded with the sample; b. an imaging assembly comprising a controllable camera and an imaging light source; c. an illuminating assembly comprising a pattern illumination device and an illuminating light source; and d. a processing module which is coupled to the microscope, the imaging assembly, and the illuminating assembly, wherein the processing module is configured to control the imaging assembly to acquire at least one image of a first field of view of the sample, process the at least one image automatically in real-time based on a predefined criterion, so as to determine coordinate information of an interested region in the first field of view of the sample, and e. after the interested region of the first field of view has been fully illuminated by the illuminating assembly, move the stage to subsequent fields of view of the sample to acquire images of the subsequent fields of view, determine coordinate information of interested regions in the subsequent fields of view automatically in real-time based on the predefined criterion, and illuminate the subsequent fields of view with light patterns corresponding to coordinate information of the interested regions in the subsequent fields of view, the light patterns varying through the first and subsequent fields of view.

26. The method of claim 18, wherein the image enhancement process is selected based on a slope of the range of the image pixel values of the initial image.

27. The method of claim 26, wherein the range of image pixel values of the initial image comprise image pixel values for each pixel in the initial image.

28. The method of claim 26, wherein the range of image pixel values of the initial image is a dataset with a slope and a peak.

29. The method of claim 26, wherein the range of image pixel values of the initial image are substantially linear between the initial minimum value and the initial maximum value.

30. The method of claim 26, wherein the range of image pixel values of the initial image are non-linear between the initial minimum pixel value and the initial maximum pixel value.

31. The method of claim 26, wherein the range of image pixel values of the initial image are unsigned integers. 32. The method of claim 26, wherein the range of image pixel values are scaled in the converted image.

33. The method of claim 18, wherein the image enhancement process is selected from the group consisting of linearity transformation, multi-order linear transformation, Sigmoid fit theory transformation, or modified Sigmoid fit theory transformation. 34. The method of claim 16, wherein obtaining the initial image comprises capturing an image with a microscope-based system configured to control an imaging assembly to acquire the initial image, wherein a sample is introduced to the microscope-based system and a field of view of the sample is established.

35. The method of claim 16, wherein each step of the method is processed by the microscope-based system automatically in real-time.

Description:
METHOD OF STANDARDIZING IMAGE PIXEL VALUES IN A

MICROSCOPE-BASED SYSTEM

CLAIM OF PRIORITY

[0001] This application claims priority to U.S. Provisional Patent Application No. 63/368,705, filed on July 18, 2022, titled “METHOD OF STANDARDIZING IMAGE PIXEL VALUES IN A MICROSCOPE-BASED SYSTEM,” which is herein incorporated by reference in its entirety.

INCORPORATION BY REFERENCE

[0002] All publications and patent applications mentioned in this specification are herein incorporated by reference in their entirety to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference.

BACKGROUND

[0003] The field of microscopy has benefitted greatly from advancements in associated imaging systems providing increased spatial and temporal resolution. Microscope-based imaging systems may provide high resolution images that can be processed for different real-world applications. Regional and object-based separation or segmentation of images can increase opportunities for identification of elements within an image or sample that may translate to improved opportunities in applicable fields. For example, the medical field relies heavily on microscope-based systems for evaluation, diagnosis, and therapeutic strategy development. Accordingly, processing images obtained from microscope-based systems requires the highest level of precision and accuracy. However, an image captured by a microscope commonly has some unforeseeable differences of a sample at different fields of view or different samples in the same field of view. Therefore, pre-processing of an image can be a crucial step in image segmentation and artificial intelligence-based processing systems.

[0004] Image processing of microscope-based images can be difficult and cumbersome for computer-based processing models such as artificial intelligence related machine learning and neural networks. For these reasons, it would be desirable to provide improved methods, systems, and tools for pre-processing images obtained from a microscope-based system. It would be particularly desirable to provide simplified systems and methods relating to pre-segmentation image processing that may be used in mask generating pattern development for photochemical illumination. At least some of these objectives will be met by the various embodiments that follow.

SUMMARY OF THE DISCLOSURE

[0005] Described herein are systems and methods for standardizing image pixel values of an image obtained by a microscope-based system.

[0006] A computer implemented method for standardizing image pixel values in a microscope-based system for pattern illumination is provided, the method comprising: obtaining an initial image of a target field of view (FOV) of a sample with an imaging assembly of the microscope-based system; selecting a standardizing method; applying the standardizing method to the initial image to transform the initial image into a standardized image having standardized image pixel values; generating a mask pattern for the target FOV; and controlling an illuminating assembly of the microscope-based system to illuminate the target FOV of the sample with the mask pattern.

[0007] In some aspects, the standardizing method includes an image enhancement process for processing the initial image to a temporary image.

[0008] In one aspect, the standardizing method further includes a normalization process for processing the temporary image to the standardized image.

[0009] In another aspect, the standardizing method includes a normalization process for processing the initial image to a normalized image.

[0010] In some aspects, the standardizing method further includes an image enhancement process for processing the normalized image to the standardized image.

[0011] In one aspect, the normalization process is performed based on an algorithm: Vout = Vin × (Vmaxout / Vmaxin), wherein Vout is an output pixel value of the normalized image, Vin is a pixel value of the initial image, Vmaxout is a maximum pixel value of the normalized image, and Vmaxin is a maximum pixel value of the initial image.
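The normalization algorithm above can be sketched in a few lines of NumPy (the function name and the guard against an all-dark image are illustrative additions, not from the application):

```python
import numpy as np

def normalize_image(initial: np.ndarray, v_max_out: int) -> np.ndarray:
    """Scale every pixel so the image maximum maps to v_max_out:
    Vout = Vin * (Vmaxout / Vmaxin)."""
    v_max_in = initial.max()
    if v_max_in == 0:  # all-dark image: avoid division by zero
        return np.zeros_like(initial, dtype=np.float64)
    return initial.astype(np.float64) * (v_max_out / v_max_in)

# Example: normalize a 12-bit image (max value 4095) into the 16-bit range.
img = np.array([[0, 1024], [2048, 4095]])
norm_out = normalize_image(img, 65535)
```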

[0012] In some aspects, the image enhancement process is selected from a group consisting of a high pass filter method, a low pass filter method, a band pass filter method, a band reject filter method, a pixel value transformation method, an image transformation method, a histogram-based pixel value transformation method and a combination of at least two methods thereof.

[0013] In one aspect, the pixel value transformation method is selected from a group consisting of a linear transformation method, a log transformation method, an exponential transformation method, a gamma-correction method, a piecewise linear transformation method and a combination of at least two methods thereof.

[0014] In some aspects, the image transformation method is selected from a group consisting of a Fourier spectrum transform, a Hartley transform, a discrete cosine transform, a discrete sine transform, a Walsh-Hadamard transform, a slant transform, a Haar transform, a discrete wavelet transform, a Laplacian transform, a Sobel transform, a homomorphic filter transfer function, and a combination of at least two methods thereof.

[0015] In another aspect, the histogram-based pixel value transformation method is selected from a group consisting of a histogram specification method, a histogram equalization method, an adaptive histogram equalization method, a contrast limited adaptive histogram equalization method and a combination of at least two methods thereof.

[0016] In one aspect, the method includes a step of calculating an image histogram from the initial image, wherein the image histogram plots a number of pixels in the initial image (vertical axis) with a pixel value (horizontal axis).

[0017] In another aspect, the method includes a step of identifying a minimum pixel value and a maximum pixel value in the image histogram.

[0018] In some aspects, the step of transforming the initial image is transforming the initial image by applying the standardizing method to the initial image based on the minimum pixel value and the maximum pixel value.
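The histogram calculation, min/max identification, and min/max-based transformation described in [0016]-[0018] might be sketched as follows (the function names, and the choice of a linear stretch as the standardizing method, are illustrative assumptions):

```python
import numpy as np

def histogram_min_max(image: np.ndarray, bit_depth: int = 8):
    """Histogram of pixel counts (vertical axis) versus pixel value
    (horizontal axis), plus the minimum and maximum occupied values."""
    counts = np.bincount(image.ravel(), minlength=2 ** bit_depth)
    occupied = np.nonzero(counts)[0]
    return counts, occupied[0], occupied[-1]

def stretch(image: np.ndarray, v_min: int, v_max: int, bit_depth: int = 8):
    """Linear contrast stretch based on the identified min/max values."""
    span = 2 ** bit_depth - 1
    out = (image.astype(np.float64) - v_min) * span / (v_max - v_min)
    return np.clip(out, 0, span).astype(image.dtype)

img = np.array([[50, 100], [150, 200]], dtype=np.uint8)
counts, v_min, v_max = histogram_min_max(img)
stretched = stretch(img, v_min, v_max)  # occupied range mapped to 0-255
```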

[0019] In other aspects, each step of the method is processed by the microscope-based system automatically in real-time.

[0020] In some aspects, more than one image is initially obtained from the microscope-based system, wherein each image obtained is based on a different field of view of the sample.

[0021] In one aspect, an image obtained through a different field of view of the sample is transformed by the same standardizing method in order to have the standardized image for pattern illumination.

[0022] A method of standardizing image pixel values for a microscope-based system for pattern illumination is provided, the method comprising: obtaining an initial image of a target field of view (FOV) of a sample with an imaging assembly of the microscope-based system, the initial image having a quantifiable pixel value for each pixel of the initial image; determining a maximum pixel value for a normalization process; normalizing the initial image into a normalized image based on the maximum pixel value, an original maximum pixel value of the initial image, and the pixel values of each pixel in the initial image; selecting a standardizing method; applying the standardizing method to the normalized image to transform the normalized image into a standardized image having standardized image pixel values; generating a mask pattern for the target FOV; and controlling an illuminating assembly of the microscope-based system to illuminate the target FOV with the mask pattern.

[0023] In some aspects, the standardizing method further includes an image enhancement process for processing the normalized image to the standardized image.

[0024] In another aspect, the image enhancement process is selected from a group consisting of a high pass filter method, a low pass filter method, a band pass filter method, a band reject filter method, a pixel value transformation method, an image transformation method, a histogram-based pixel value transformation method and a combination of at least two methods thereof.

[0025] In some aspects, the pixel value transformation method is selected from a group consisting of a linear transformation method, a log transformation method, an exponential transformation method, a gamma-correction method, a piecewise linear transformation method, and a combination of at least two methods thereof.

[0026] In one aspect, the image transformation method is selected from a group consisting of a Fourier spectrum transform, a Hartley transform, a discrete cosine transform, a discrete sine transform, a Walsh-Hadamard transform, a slant transform, a Haar transform, a discrete wavelet transform, a Laplacian transform, a Sobel transform, a homomorphic filter transfer function, and a combination of at least two methods thereof.

[0027] In some aspects, the histogram-based pixel value transformation method is selected from a group consisting of a histogram specification method, a histogram equalization method, an adaptive histogram equalization method, a contrast limited adaptive histogram equalization method and a combination of at least two methods thereof.

[0028] In another aspect, the method includes a step of calculating an image histogram from the initial image, wherein the image histogram plots a number of pixels in the initial image (vertical axis) with a pixel value (e.g., a particular brightness or tonal value) (horizontal axis).

[0029] In some aspects, the method includes a step of identifying a minimum pixel value and a maximum pixel value in the image histogram.

[0030] In some aspects, the step of transforming the initial image is transforming the initial image by applying the standardizing method to the initial image based on the minimum pixel value and the maximum pixel value.

[0031] In other aspects, the initial image is obtained using a microscope-based system comprising: a microscope comprising an objective and a stage, wherein the stage is configured to be loaded with the sample; an imaging assembly comprising a controllable camera and an imaging light source; an illuminating assembly comprising a pattern illumination device and an illuminating light source; and a processing module which is coupled to the microscope, the imaging assembly, and the illuminating assembly, wherein the processing module is configured to control the imaging assembly to acquire at least one image of a first field of view of the sample, process the at least one image automatically in real-time based on a predefined criterion, so as to determine coordinate information of an interested region in the first field of view of the sample, and after the interested region of the first field of view has been fully illuminated by the illuminating assembly, move the stage to subsequent fields of view of the sample to acquire images of the subsequent fields of view, determine coordinate information of interested regions in the subsequent fields of view automatically in real-time based on the predefined criterion, and illuminate the subsequent fields of view with light patterns corresponding to coordinate information of the interested regions in the subsequent fields of view, the light patterns varying through the first and subsequent fields of view.

[0032] In one aspect, the transformation method is selected based on a slope of the range of the image pixel values of the initial image.

[0033] In some aspects, the range of image pixel values of the initial image comprise image pixel values for each pixel in the initial image.

[0034] In other aspects, the range of image pixel values of the initial image is a dataset with a slope and a peak.

[0035] In some aspects, the range of image pixel values of the initial image are substantially linear between the initial minimum value and the initial maximum value.

[0036] In some implementations, the transformation method is selected from the group consisting of linearity transformation, multi-order linear transformation, or Sigmoid fit theory transformation.

[0037] In one aspect, the range of image pixel values of the initial image are non-linear between the initial minimum pixel value and the initial maximum pixel value.

[0038] In another aspect, the transformation method is Modified Sigmoid fit theory transformation.

[0039] In some aspects, the range of image pixel values of the initial image are unsigned integers.

[0040] In other aspects, the range of image pixel values are scaled in the converted image.

[0041] In one aspect, obtaining the initial image comprises capturing an image with a microscope-based system configured to control an imaging assembly to acquire the initial image, wherein a sample is introduced to the microscope-based system and a field of view of the sample is established.

[0042] In other aspects, each step of the method is processed by the microscope-based system automatically in real-time.

[0043] In some aspects, more than one image is initially obtained from the microscope-based system, wherein each image obtained is based on a different field of view of the sample.

[0044] All of the methods and apparatuses described herein, in any combination, are herein contemplated and can be used to achieve the benefits as described herein.

BRIEF DESCRIPTION OF THE DRAWINGS

[0045] A better understanding of the features and advantages of the methods and apparatuses described herein will be obtained by reference to the following detailed description that sets forth illustrative embodiments, and the accompanying drawings of which:

[0046] FIG. 1 is an example of a microscope-based system that may be used in performing a method according to one or more examples described herein.

[0047] FIG. 2 is a flow chart illustrating a process of generating a mask pattern for photo illumination according to one or more examples described herein.

[0048] FIG. 3 is a flow chart illustrating operator-selected image-pre-processing methods for photo illumination according to one or more examples described herein.

[0049] FIG. 4 is a flow chart showing an adjustable operation of image enhancement methodology selection with operator input according to one or more examples described herein.

[0050] FIG. 5 is a flow chart illustrating a method of transforming bit depth of an image in pre-processing for mask generation and photo illumination according to one or more examples described herein.

[0051] FIG. 6A shows an example of an original image of Target FOV of Sample S.

[0052] FIG. 6B shows an example of a transformation image of Target FOV of Sample S by one or more transformation or image enhancement methods according to one or more examples described herein.

DETAILED DESCRIPTION

[0053] A computer-implemented mask-pattern generating method generally involves an image processing algorithm that may include processing techniques, methodologies, artificial intelligence models, and/or microscope-based hardware. For example, a computer-implemented method may generate a mask used in photochemical illumination of a target substrate and be associated with a microscope-based system used to illuminate regions of interest within a sample. The masks may be generated based on the output or results of the mask-pattern generating method, including the AI model (machine learning that evaluates inputs against one or more layers of a deep learning neural network) or traditional image processing algorithms.

[0054] As images are processed through an AI model, they propagate through one or more layers of algorithms to provide an appropriate output. This process can take a long time and may involve detailed validation that can be cumbersome if the image input has not received sufficient pre-processing. Even a well-trained AI model may experience increased processing complexity that can introduce a greater margin of error based on a lack of image pre-processing. Additionally, in some examples, the image segmentation process may have capabilities limited to processing specific cases only. These cases or inputs may be negatively influenced by characteristics or attributes of the image such as clarity, contrast, and brightness.

[0055] Pre-processing of images may be crucial for accurate and efficient image segmentation by the AI model or traditional image processing algorithms. Image segmentation processes (equivalent to the mask-pattern generating method) may include one or more image processing algorithms, which may limit an image segmentation process to handling only specific cases. The input cases will be influenced by many conditions, such as clarity, contrast, or brightness. The methods described herein may optimize image segmentation processes. For example, optimizing image segmentation processes may increase image segmentation capabilities. Described herein are methods that improve image processing and pre-processing. For example, methods described herein may provide improved image processing and pre-processing compared to brightness- or contrast-based image analysis.

[0056] A method for standardizing image pixel values from an image obtained by a microscope-based system may include first using the microscope-based system to capture an image. Then, the process may include applying a histogram to the image. Pixel values may then be analyzed. For example, each pixel may have a value relating to the range of color within the pixel. The pixel values may be quantified, and a range of pixel values may have a minimum pixel value and a maximum pixel value. A transformation method may then be selected or established. The transformation method selection may be based on one or more characteristics of the pixel value data set. For example, the transformation method may be based on a slope and a peak of a trend defined by the range of values between the minimum value and the maximum value.
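One illustrative way to select a transformation method from the shape of the pixel-value distribution (this heuristic is an assumption for illustration, not a rule specified in the application) is to compare the accumulated histogram against a straight line between the minimum and maximum occupied values:

```python
import numpy as np

def select_transform(image: np.ndarray, tol: float = 0.1) -> str:
    """Illustrative selector: if the accumulated (cumulative) histogram is
    roughly linear between the minimum and maximum pixel values, suggest a
    linear transformation; otherwise suggest a sigmoid-style one."""
    counts = np.bincount(image.ravel(), minlength=int(image.max()) + 1)
    cdf = counts.cumsum() / counts.sum()
    occupied = np.nonzero(counts)[0]
    lo, hi = occupied[0], occupied[-1]
    line = np.linspace(cdf[lo], cdf[hi], hi - lo + 1)
    deviation = np.abs(cdf[lo:hi + 1] - line).max()
    return "linear" if deviation < tol else "sigmoid"

# A uniform ramp has a linear cumulative histogram; a clustered image does not.
ramp = np.arange(256, dtype=np.uint8)
clustered = np.full(100, 128, dtype=np.uint8)
clustered[0], clustered[1] = 0, 255
choice_ramp = select_transform(ramp)
choice_clustered = select_transform(clustered)
```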

[0057] An image, as described herein, may be processed for image enhancement using various processes and/or methods. For example, image enhancement may refer to a process of transforming one or more aspects of the image to enhance it. In some examples, the image enhancement may be sufficient to reduce additional processing through an AI model. For example, a process of transformation may refer to scaling pixel values obtained from an image (e.g., an initial image obtained from a microscope-based system). For example, the initial image obtained from the microscope-based system may be analyzed and a value may be provided for each pixel of the image. The image pixel values may be distributed between a maximum image pixel value and a minimum pixel value. The distribution of the initial image obtained from the microscope-based system may be of unsigned integers within a range based on the bit depth of the image. Processing these values may be complex and cumbersome for continued processing methods and systems. Accordingly, transforming the image to scale the image pixel values provides for a distribution of the image pixel values scaled to a range of unsigned integers that may provide for more efficient image processing.
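A minimal sketch of such a bit-depth rescaling, assuming unsigned-integer input and output ranges (the function name and the rounding choice are illustrative):

```python
import numpy as np

def scale_bit_depth(image: np.ndarray, in_bits: int, out_bits: int) -> np.ndarray:
    """Rescale unsigned-integer pixel values from an in_bits range
    (e.g., 16-bit, 0-65535) to an out_bits range (e.g., 8-bit, 0-255)."""
    in_max = 2 ** in_bits - 1
    out_max = 2 ** out_bits - 1
    scaled = image.astype(np.float64) * out_max / in_max
    return np.rint(scaled).astype(np.uint8 if out_bits <= 8 else np.uint16)

img16 = np.array([[0, 32768], [49152, 65535]], dtype=np.uint16)
img8 = scale_bit_depth(img16, 16, 8)  # values now span 0-255
```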

[0058] In some examples, image enhancement may include one or more filters, one or more filter methods, intensity (e.g., pixel value) transformations, one or more image transformations, one or more intensity transformations, intensity transformation by histogram, etc. In some examples, filter methods may include high pass/low pass factors, band pass/band rejection factors, Gaussian filter, Butterworth filter, Ideal filter, Median filter, min filter, max filter, Notch filter, spatial filtering, etc. In some examples, intensity transformation may include one or more of linear transformation, log transformation, exponential transformation, power law transformation (e.g., gamma correction), and piecewise linear transformation. In some examples, image transformations may include one or more of a Fourier spectrum transform, Hartley transform, discrete cosine transform, discrete sine transform, Walsh-Hadamard transform, slant transform, Haar transform, discrete wavelet transform, Laplacian transform, Sobel transform, and homomorphic filter transfer function. In some examples, intensity transformation by histogram may include one or more of histogram specification, histogram equalization, local histogram equalization (e.g., adaptive histogram equalization), and contrast limited adaptive histogram equalization.
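As a concrete instance of one listed technique, histogram equalization can be written in plain NumPy (this is the standard textbook formulation, not code from the application):

```python
import numpy as np

def equalize_histogram(image: np.ndarray, bit_depth: int = 8) -> np.ndarray:
    """Histogram equalization: map pixel values through the normalized
    cumulative histogram so intensities spread over the full range."""
    levels = 2 ** bit_depth
    counts = np.bincount(image.ravel(), minlength=levels)
    cdf = counts.cumsum().astype(np.float64)
    cdf /= cdf[-1]  # normalize cumulative histogram to [0, 1]
    lut = np.rint(cdf * (levels - 1)).astype(image.dtype)
    return lut[image]  # apply the lookup table to every pixel

img = np.array([[10, 10], [10, 200]], dtype=np.uint8)
eq = equalize_histogram(img)
```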

[0059] In some examples, the transformation method (e.g., image enhancement method) may be a linear transformation method. A linear transformation method (e.g., linear transformation) may include transformation of image pixel values where the values are distributed between a minimum image pixel value and a maximum image pixel value in a substantially linear order. In some examples, the image pixel values are linear between the minimum and maximum values. In some examples, the linear transformation method may be a linearity transformation method. In some examples, the linear transformation method may be a multi-order linear transformation. In some examples, the linear transformation method may be a Sigmoid fit theory transformation.

[0060] In some examples, the transformation method may be a non-linear transformation method. In some examples, the non-linear transformation method may be a Modified Sigmoid fit theory transformation. For example, the Modified Sigmoid fit theory transformation may be a hybrid transformation method between non-linear transformation and linear transformation.
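A minimal sketch of a linear (linearity) transformation, mapping the values between the minimum and maximum pixel value of the input onto a standard output range; the function name and the 0-255 output range are illustrative assumptions:

```python
def linear_stretch(pixels, out_max=255):
    """Linearly map values from [min, max] of the input onto [0, out_max]."""
    lo, hi = min(pixels), max(pixels)
    span = hi - lo
    return [round((v - lo) * out_max / span) for v in pixels]
```

For example, `linear_stretch([100, 300, 500])` maps the minimum 100 to 0 and the maximum 500 to 255, with intermediate values spread linearly between them.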

[0061] In some examples, a Sigmoid fit (e.g., Sigmoid fitting theory) may include a sub-process such as choosing a reversal point and a coefficient alpha of the Sigmoid fitting transformation. An accumulated histogram may then be generated from the image pixels of the image, and the reversal point may be set at the median of the accumulated histogram. The sigmoid function may then be fit to the shape of the accumulated histogram by adjusting the coefficient alpha, shifting the reversal point according to the valid maximum and minimum pixel values.
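The sub-process above can be sketched as follows: the accumulated (cumulative) histogram supplies the median, which becomes the reversal point, and alpha controls the steepness of the sigmoid mapping. The function names and the 0-255 output range are illustrative assumptions, not the source's implementation:

```python
import math
from collections import Counter

def reversal_point(pixels):
    """Set the reversal point at the median of the accumulated histogram."""
    counts = Counter(pixels)
    running, total = 0, len(pixels)
    for value in sorted(counts):
        running += counts[value]
        if 2 * running >= total:  # crossed half of the accumulated counts
            return value

def sigmoid_transform(pixels, reversal, alpha, out_max=255):
    """Map each pixel through a sigmoid centered on the reversal point."""
    return [round(out_max / (1 + math.exp(-alpha * (v - reversal))))
            for v in pixels]
```

A pixel at the reversal point maps to the middle of the output range; values far above or below saturate toward `out_max` or 0.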

[0062] In some examples, choosing a piecewise point and a coefficient alpha of the Modified Sigmoid fitting transformation may include a sub-process including generating an accumulated histogram from the image pixels of the image and calculating a median and an average from the accumulated histogram. The difference between the median and the average can be the difference between the reversal point of the sigmoid function and the piecewise point. The process may then include fitting the sigmoid function to the shape of the accumulated histogram by adjusting the coefficient alpha on each side of the piecewise point, respectively, shifting the reversal point according to the valid maximum and minimum pixel values.

[0063] In some examples, an image transformation (e.g., transformation method) may include one or more different processing algorithms, methods, or operations. For example, a transforming method may include image normalization and/or image enhancement. In some examples, the methods, processes, sub-methods, and sub-processes described herein may include a combination of one or more algorithms or processes described herein.

[0064] An image normalization process may include one or more algorithms. An example of an image normalization algorithm may include Vout = Vin * (Vmax_out / Vmax_in). In some examples, Vout may refer to the output pixel value; Vin may refer to the original pixel value; Vmax_out may refer to the maximum output pixel value; and Vmax_in may refer to the maximum original pixel value.
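The normalization algorithm of [0064] can be written directly; the helper name and the rounding to integer output values are illustrative assumptions:

```python
def normalize(pixels, v_max_out=255):
    """Apply Vout = Vin * (Vmax_out / Vmax_in) to every pixel value."""
    v_max_in = max(pixels)  # maximum original pixel value
    return [round(v * v_max_out / v_max_in) for v in pixels]
```

For a 16-bit input whose brightest pixel is 65535, this maps 65535 to 255 and scales all other values proportionally.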

[0065] In some examples, the methods described herein may include selecting a first transformation method. The transformed or converted image resulting from the first selected transformation method may be evaluated for pixel standardization, and one or more additional transformation methods may be applied to the transformed image. In some examples, the first transformation method may be selected and the converted image may be evaluated. Based on the evaluation, an alternative transformation method may be applied to the initially obtained image.

[0066] In some examples, the method for standardizing image pixel values from a microscope-based system may result in the initially obtained image being transformed through the selected transformation method from a first bit depth to a second bit depth. For example, the initial image obtained from a microscope-based system may be a 16-bit image. The selected transformation method may be applied to the initial image, thereby converting the 16-bit initial image to an 8-bit pre-processed image having standardized pixel values resulting from the selected transformation method.
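The 16-bit to 8-bit conversion described in [0066] can be sketched as integer rescaling by the respective maximum values of each bit depth; the function name is an illustrative assumption:

```python
def convert_bit_depth(pixels, in_bits=16, out_bits=8):
    """Rescale unsigned-integer pixel values from one bit depth to another."""
    in_max = (1 << in_bits) - 1    # 65535 for a 16-bit image
    out_max = (1 << out_bits) - 1  # 255 for an 8-bit image
    return [v * out_max // in_max for v in pixels]
```

The full 0-65535 input range collapses onto 0-255, so the output fits the smaller bit depth at the cost of intensity resolution.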

[0067] In some examples used herein, the term histogram may refer to a graphical representation of image pixel value intensities of an image. For example, a histogram may be applied to an image or derived from an image, whereby the histogram provides a graphical representation of the image pixel intensities. In some examples, the image described herein is a color image and the histogram may refer to a histogram for each color (e.g., red, green, blue). In some examples, the histogram may relate to a combination of the individual histogram layers (e.g., a histogram of a colored image). In other examples, the image may be in grayscale and have a single layer or single histogram based on the image.
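Per-channel histograms of a color image, as described in [0067], can be sketched as follows; the flat list-of-tuples image representation is an illustrative assumption:

```python
from collections import Counter

def channel_histograms(rgb_pixels):
    """One histogram (value -> count) per channel of an RGB image,
    given as a flat list of (r, g, b) tuples."""
    return [Counter(channel) for channel in zip(*rgb_pixels)]
```

A grayscale image would instead have a single layer, i.e., one `Counter` over its pixel values.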

[0068] FIG. 1 illustrates an example of a microscope-based system that may be used in performing any method described herein. Additional details of the microscope system may be found in U.S. Pat. No. 11,265,449, incorporated herein by reference in its entirety. The microscope-based system of this embodiment comprises a microscope 10, an imaging assembly 12, an illuminating assembly 11, and a processing module 13a. The microscope 10 comprises an objective 102 and a stage 101. The stage 101 is configured to be loaded with a sample S. The imaging assembly 12 may comprise a (controllable) camera 121, an imaging light source, and a focusing device 123.

[0069] The stage 101 can be moved to provide different fields of view of the sample S. The sample S may comprise, for example, a fluorescent sample, a reflective sample, or a sample that can be marked by the light projecting from the imaging subsystem. For example, the sample mark can be bleached, activated, physically damaged, or chemically converted. The mark can be analyzed by the imaging subsystem, and the position of the mark may be represented by the result of the projected light.

[0070] In some embodiments, as described in U.S. Pat. No. 11,265,449, images obtained by camera 121 can be processed in a processing subsystem 13a to identify regions of interest in the sample. For example, when the sample contains cells, particular subcellular areas of interest can be identified by their morphology. In some embodiments, the regions of interest identified by the processing module from the images can thereafter be selectively illuminated with a different light source for, e.g., photobleaching of molecules at certain subcellular areas, photoactivation of fluorophores at a confined location, optogenetics, light-triggered release of reactive oxygen species within a designated organelle, or photoinduced labeling of biomolecules in a defined structural feature of a cell, all of which require pattern illumination. The coordinates of the regions of interest identified by the processing subsystem 13a create a pattern for such selective illumination. The embodiment of FIG. 1 therefore has a pattern illumination assembly 11 which projects light onto sample S through a lens 3, mirror 4, lens 6, and mirror 8. In some embodiments, pattern illumination assembly 11 employs a laser to illuminate through the pattern of the region of interest in the sample S by moving a mirror within the pattern illumination assembly 11.

[0071] The microscope, stage, imaging subsystem, and/or processing subsystem can include one or more processors configured to control and coordinate operation of the overall system described and illustrated herein. In some embodiments, a single processor can control operation of the entire system. In other embodiments, each subsystem may include one or more processors. The system can also include hardware such as memory to store, retrieve, and process data captured by the system. Optionally, the memory may be accessed remotely, such as via the cloud. In some embodiments, the methods or techniques described herein can be computer implemented methods. For example, the systems disclosed herein may include a non-transitory computing device readable medium having instructions stored thereon, wherein the instructions are executable by one or more processors to cause a computing device to perform any of the methods described herein.

[0072] This disclosure can provide a process of standardizing image pixel values relating to an image obtained from a microscope-based system. In particular, the process may begin with obtaining an image from a microscope-based system (e.g., the system illustrated in FIG. 1). The obtained image may then be used in developing a histogram for and based on the image. The histogram may, for example, provide a representation of values relating to attributes of the individual pixels comprising the image. The pixel values may then be evaluated, and a dataset of pixel values determined having a minimum pixel value and a maximum pixel value. The dataset may be the remaining values for the pixels other than the minimum pixel value and the maximum pixel value. The remaining values may be distributed between the minimum value and the maximum value for a particular obtained image. The process may continue such that if the pixel values are linear (e.g., generally linear or in a linear trend) between the minimum and maximum values, a linear-based transformation method is selected. For example, the linear transformation method may be a linearity transformation method, a multi-order transformation method, a sigmoid function transformation method, or a combination thereof. In some examples, the linear-based transformation method may be a transformation method sufficient to convert the obtained image into a converted image having standardized image pixel values. If the dataset of pixel values from the obtained image is not linear (e.g., non-linear or substantially non-linear), a non-linear transformation method may be selected to process the image obtained from the microscope-based system. For example, if the dataset is non-linear, a modified Sigmoid fit theory transformation function may be applied to the obtained image. In some examples, a combination or modified version of the selected transformation may be applied to the obtained image. For example, a combination method may employ a combination of one or more linear transformation methods with a non-linear transformation method.
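The linear/non-linear decision described above can be sketched as a simple check of whether the sorted pixel values rise roughly linearly between the minimum and maximum; the tolerance threshold and function name are illustrative assumptions, not the source's criterion:

```python
def choose_transform(pixels, tolerance=0.1):
    """Return 'linear' if the sorted pixel values deviate from a straight
    line between min and max by at most tolerance * range, else 'non-linear'."""
    values = sorted(pixels)
    lo, hi = values[0], values[-1]
    n = len(values)
    # Expected value at rank i if the distribution were perfectly linear.
    max_dev = max(abs(v - (lo + (hi - lo) * i / (n - 1)))
                  for i, v in enumerate(values))
    return "linear" if max_dev <= tolerance * (hi - lo) else "non-linear"
```

An evenly spread distribution selects the linear path; a distribution bunched near one end selects the non-linear (e.g., modified sigmoid) path.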

[0073] In some examples used herein, the term bit may be used to describe the bit depth of the image. For example, bit depth may refer to the numerical range of possible color values attributed to a pixel of an image being preprocessed, as described herein. In some examples, the initially obtained image may be a 16-bit image. A 16-bit image may be initially captured (e.g., obtained) by a microscope-based system and each image pixel may contain a value of 0-65535. The pixel range of a 16-bit image may have an image pixel value of any value of the range 0-65535. The range may include 0 or 65535 such that the pixel value of an image pixel may be equal to 0 or may be equal to 65535 or may be equal to any value therebetween. In some examples, the transformed image (e.g., the converted image after the selected transformation method has been applied to the initially obtained image from the microscope-based system) may have an image pixel value of any value of the range 0-255. The range may include 0 or 255 such that the pixel value of an image pixel may be equal to 0 or may be equal to 255 or may be equal to any value therebetween.

[0074] In some examples, the initially obtained image from the microscope-based system may have a bit depth other than 16 bits. For example, the initially obtained image may be a 4-bit image, 8-bit image, 10-bit, 12-bit, 14-bit, 32-bit image, 64-bit image, or have a bit depth of any bit value based on the image capabilities of the microscope-based system being used. In any example described herein, the method may include initially obtaining an image from a microscope-based system having a quantifiable bit depth. Pixel values of the obtained image may then be evaluated and quantified relative to the bit depth of the image obtained. A transformation method, as described herein, may be selected and applied to the obtained image. A converted image may result from the obtained image being transformed by the selected transformation method.

[0075] In some examples, the transformed (e.g., the converted) image after the selected transformation method has been applied to the initially obtained image from the microscope-based system may have a bit depth other than 8-bit. For example, the transformed image may have a bit depth of 4-bit, 8-bit, 10-bit, 12-bit, 14-bit, 32-bit, 64-bit, or any bit value based on the image capabilities of the microscope and the transformation method used to transform or convert the image to have standardized image pixel values.

[0076] Some implementations provide a process flow for standardizing image pixel values for an image obtained from a microscope-based system. The process may include obtaining a 16-bit image from a microscope-based system. In some examples, the obtained image (e.g., the image obtained from the microscope-based system) has a 16-bit attribute with initial pixel values in a range from and including 0-65535. The 16-bit image is then processed according to any of the methods described herein. For example, a histogram is developed to determine the values of each pixel in the 16-bit image. The values may then be analyzed and a dataset having a distribution of values determined. The distribution of values may be between a minimum image pixel value and a maximum image pixel value. The distribution of image pixel values may then be determined to be linear or non-linear. A transformation method may be selected based on the distribution of image pixel values, and the transformation method is sufficient to standardize the image pixel values in a resulting converted image. An example of a method described herein may have a 16-bit obtained image and a converted image of a different bit depth (e.g., 8-bit).

[0077] In some examples, the methods described herein may or may not include a re-submit or validation process to qualify an image after a first pass or initial processing period. For example, examples can include an additional validation sub-process. The validation may include analysis of the converted image to ensure desired converted image attributes. If the converted image is acceptable and validation is approved, the image may be transmitted or input to a further mask-pattern generating method (e.g., further image processing with an AI model or traditional image processing algorithm). In some examples, if the converted image does not pass validation or is not approved for further image processing, the method may re-submit the first converted image for re-evaluation and processing that may include different transformation methods or redevelopment of a histogram of the obtained image from the microscope-based system. In some examples, the converted image is not subjected to validation.

[0078] The methods described herein may operate without a validation or re-submit system. In some examples, the initial processing or enhancement of an image as described herein may include a single pass or single processing for image enhancement. The resulting image may be incorporated into additional or subsequent aspects of the method in generating a mask pattern without validation (e.g., an un-validated processed and/or enhanced image).

[0079] In some examples, the methods described herein optimize an image for processing with a well-trained AI model. For example, the methods described herein may optimize the resulting converted images for a specific image, a specific sample, or a specific region of an image (e.g., a region of interest). In some examples, the initially obtained image may be an entire image obtained by the microscope-based system. In some embodiments, the initially obtained image may be less than the entire image initially obtained from the microscope-based system. For example, a specific region of an image may be selected, then the specific region may be evaluated and a range of pixel values of the specific region may be determined. The specific region may then be subjected to the selected transformation method according to the range of pixel values therein between a minimum pixel value of the specific region and a maximum pixel value of the specific region.

[0080] Methods described herein may include pre-processing of an image obtained from a microscope-based system for use with the well-trained AI model. The AI model may be a machine learning system having one or more layers of algorithms applied to the pre-processed image, as described herein, for mask pattern generation. In some examples, the method of standardizing image pixel values from a microscope-based system may include a method of pre-processing the image to standardize the image pixel values for optimized use of the image as an input into a neural network (e.g., a convolutional neural network).

[0081] In some examples, a method described herein may be automated for additional processing with the AI model. For example, a method described herein may be established and employed for a particular microscope-based system. The method, then, may not require adjustment once it is established for the microscope-based system. For example, once the method has been implemented on a microscope-based system, the method may be sufficient to transform an image to an image having standardized image pixel values across the image even if the field of view of the microscope-based system is adjusted. In some examples, the method is established and initiated on a microscope-based system at a first field of view. The microscope-based system may be adjusted to a second or subsequent field of view, and the implemented method, according to a method described herein, may not require adjustment as the standardization of image pixel values is similarly applicable to images obtained by the microscope-based system at different fields of view. In some examples, the converted image resulting from an example of any method described herein may be submitted to an additional mask-pattern generating method using the AI model or a traditional image processing algorithm.

[0082] According to some examples described herein, the microscope-based system initially captures a 16-bit image. After drawing the histogram from the image, the pixel values obtained may be distributed within a range of values between a minimum pixel value and a maximum pixel value. The distribution of the pixel values other than the minimum pixel value and the maximum pixel value may be smaller than half, smaller than one third, smaller than one fourth, or smaller than a fraction of the maximum image pixel value provided by the initially obtained 16-bit image. The distribution of the pixel values of the initially obtained image may be due to an intensity level of a signal for fluorescence excitation. In some examples, the level of intensity for fluorescence excitation may be high or low, and the level of intensity may impact the distribution of the pixel values throughout the range of pixel values of the initially obtained image. In some examples, the initially obtained image captured by the microscope-based system may be observed to be light or dark or a similar description regarding the image between light or dark.

[0083] In some examples, the distribution of image pixel values of the initially obtained image from the microscope-based system may have a non-uniform distribution, or a distribution shift. In some examples, the initial distribution of image pixel values may be adjusted or scaled through the transformation process described herein. In some examples, the transformation process scales the image pixel values to have a uniform distribution. For example, the transformation method may identify a first mean of the image pixel values of the initial image. The mean value of the image pixel values for the obtained image may be greater than 0. In some examples, the mean image pixel values of the obtained image are standardized through any of the methods described herein such that the mean of the image pixel values of the converted (e.g., transformed) image may be different than the mean image pixel values of the original image obtained from the microscope-based system. In some examples, the mean image pixel value of the converted (e.g., transformed) image is 0.
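Shifting the converted image so its mean pixel value is 0, as described in [0083], can be sketched as a simple mean subtraction; the function name is an illustrative assumption:

```python
def zero_mean(pixels):
    """Subtract the mean so the transformed values have mean 0."""
    mean = sum(pixels) / len(pixels)
    return [v - mean for v in pixels]
```

Note that the output values are no longer unsigned integers, so this step would typically precede input to a model rather than storage at a fixed bit depth.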

[0084] Any method described herein may operate as a computer-implemented method. For example, one or more of the steps of a method described herein may employ a computer system. For example, where the initially obtained image is a 16-bit image, the method may include the use of a high-performance computer. In some examples, the methods described herein may have processing times, defined from the initiation of the method to the completion of the transformed (e.g., converted) image, that are relative to the processing power of the computer being used by the method. The computer implemented method, according to any method described herein, may operate within a microscope-based system. The system may comprise a microscope, an imaging light source, a camera, a first processing module, a second processing module such as a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC), an illumination light source, a shutter, a pattern illumination device such as a pair of galvanometer scanning mirrors, a digital micromirror device (DMD), or a spatial light modulator (SLM), a microscope stage, and an autofocus device. A processing module is programmed to grab a camera image and to process the image on board in real time based on user-defined criteria to determine the locations of the sample to illuminate. It is then programmed to control a shutter and scanning mirrors to direct an illumination light source to these locations one point at a time. The integrated memory unit (e.g., DRAM), when available, provides the data storage space essential for fast processing and transfer of image data to enable a rapid process. One processing module (e.g., a computer) controls an imaging light source, an autofocus device, and a microscope stage for imaging, focus maintenance, and changes of fields of view, respectively. Imaging, image processing, illumination, and stage movement are coordinated by the program to achieve rapid high-content image-guided illumination. A femtosecond laser may be used as the illumination light source to generate a two-photon effect for high axial illumination precision. The image processing criteria may be the morphology, intensity, contrast, or specific features of a microscope image. The mask-pattern generating method is done with image processing techniques such as thresholding, erosion, filtering, or artificial-intelligence-trained image segmentation methods (e.g., semantic segmentation or instance segmentation). The speed and the high-content nature of the device may enable collection of a large amount of location-specific samples for photo-induced molecular tagging, photoconversion, or studies of proteomics, transcriptomics, and metabolomics. In some examples, any method described herein may include the use of a microscope-based system as described in U.S. Pat. No. 11,265,449.

[0085] In some examples, a microscope-based system may obtain an image associated with a field of view. FIG. 2 illustrates an example of a flowchart 200 of a method described herein. The method can be performed with the microscope-based system described above in FIG. 1. At an operation 202, the microscope-based system may select, identify, or provide a target field of view (FOV) of a sample S. The FOV may be associated with a target region or region of interest of the sample (e.g., a subset of the sample). In some embodiments, the target FOV is chosen by a user. In other embodiments, the target FOV is automatically identified by the microscope-based system (e.g., by one or more processors of the microscope-based system).

[0086] At an operation 204, a camera or image acquisition system associated with the microscope-based system (e.g., the camera and/or imaging assembly described above) may obtain or capture an image of the target FOV of the sample S.

[0087] Next, at an operation 206, the process to generate a mask pattern for the target FOV may be selected. In some examples, the process is selected by a user. In other examples, the process is selected automatically (e.g., by a processor of the microscope-based system).

[0088] The process may include traditional image processing to binarize the image (operation 208) or AI model inference (operation 210). For traditional image processing, the selection may be associated with one or more image attributes (e.g., pixel values, contrast, etc.). For AI model inference, the selection may be associated with the selected enhancement method. At operations 208 or 210, the image of the target FOV of sample S can be enhanced or transformed.

[0089] At an operation 212, a mask pattern is then generated based on the enhanced image from operations 208 or 210. The mask pattern may then be employed in illuminating the image (e.g., a region of interest or target) at operation 214. For example, the microscope-based system of FIG. 1 can employ the mask pattern to selectively illuminate regions of interest for, e.g., photobleaching of molecules at certain subcellular areas, photoactivation of fluorophores at a confined location, optogenetics, light-triggered release of reactive oxygen species within a designated organelle, or photoinduced labeling of biomolecules in a defined structure feature of a cell.

[0090] FIG. 3 provides another flowchart 300 that may be similar to the flowchart of FIG. 2 in the processing of an initial image that is subjected to one or more normalization methods and/or one or more image enhancement methods. For example, operations 304, 308, 310, 312, and 314 correspond to operations 204, 208, 210, 212, and 214 of flowchart 200 in FIG. 2. However, after obtaining the initial image at operation 304, any of the standardization operations 305a, 305b, 305c, or 305d may be employed. In some examples, any method herein may include subjecting the captured image (e.g., the initial image) to a normalization method followed by an image enhancement method (operation 305a). In some examples, the initial image may be subjected to an image enhancement method followed by a subsequent normalization method (operation 305c). In another example, only normalization methods are applied to the image (operation 305b), or alternatively, only image enhancement methods are applied to the image (operation 305d). The normalization and/or image enhancement process results in a standardized image at operation 307.

[0091] The normalization methods at operations 305a, 305b, or 305c may be any of the normalization methods described herein. For example, the normalization method may process the initial image to a normalized image. The normalization process may be performed based on an algorithm: Vout = Vin * (Vmax_out / Vmax_in), wherein Vout can be an output pixel value of the normalized image, Vin can be a pixel value of the initial image, Vmax_out can be a maximum pixel value of the normalized image, and Vmax_in can be a maximum pixel value of the initial image.

[0092] The remaining operations in flowchart 300 can proceed as described above in FIG. 2, including selecting a process to generate a mask pattern (traditional image processing in operation 308 or AI model inference in operation 310), generating a mask pattern at operation 312, and using pattern illumination with a microscope-based system in operation 314.

[0093] An operator or a processor of the system may select a standardization method to employ in the enhancement of an image. In some examples, as described herein, a microscope-based system may be used in capturing or obtaining an image of a biological sample; that image is then subjected to one or more sub-processes (e.g., standardization, normalization, image enhancement, etc.). The initial image may be subjected to an operator- or processor-selected standardization method. In some examples, the operator or processor may select a normalization method. In some examples, the operator or processor may select an image enhancement method. In some examples, the operator or processor may select more than one standardization, normalization, or image enhancement method. The operator- or processor-selected methods may be one or more methods described herein. In some examples, the operator- or processor-selected methods may be the same methods, may be different methods, or may be a combination of one or more methods.

[0094] After the image has been standardized, the operator or processor may also select which mask generating method to employ. For example, the operator may select a traditional image processing method and/or may select an AI model inference method to generate a mask for photo illumination. After the mask pattern has been established, the mask may be applied to the image and the image may be illuminated according to the mask pattern. For example, the image may be illuminated to highlight a region of interest or target within the image.

[0095] As illustrated in FIG. 4, a flowchart 400 is provided with another method of producing a mask pattern for pattern illumination with a microscope-based system. In flowchart 400, operations 404 and 406 can correspond with operations 204 and 206 of flowchart 200 in FIG. 2. At an operation 404, a camera or image acquisition system associated with the microscope-based system (e.g., the camera and/or imaging assembly described above) may obtain or capture an initial image of the target FOV of the sample S.

[0096] Next, at an operation 406, the process to enhance the initial image of the target FOV may be selected. In some examples, the process is selected by a user. In other examples, the process is selected automatically (e.g., by a processor of the microscope-based system).

[0097] Next, at an operation 409, the image may be subjected to a processor- or operator-selected image enhancement method (e.g., linearity transformation 409a, multi-order linear transformation 409b, sigmoid fitting transformation 409c, modified sigmoid fitting transformation 409d, etc.).

[0098] At an operation 411, the operator or processor may optionally adjust one or more parameters of the image enhancement. Adjustment may be associated with one or more image attributes. For example, if the image is too dark, the operator may need to adjust the selected enhancement method and/or the parameters of the image enhancement method. With or without the optional adjustment operation 411, the result is a transformed or enhanced image of the target FOV.

[0099] Still referring to flowchart 400 of FIG. 4, the remaining operations 406, 408, 410, 412, and 414 correspond to operations 206, 208, 210, 212, and 214 of FIG. 2. For example, at an operation 406, the process to generate a mask pattern for the target FOV may be selected. In some examples, the process is selected by a user. In other examples, the process is selected automatically (e.g., by a processor of the microscope-based system).

[0100] The process may include traditional image processing (operation 408) or AI model inference (operation 410). For traditional image processing, the selection may be associated with one or more image attributes (e.g., pixel values, contrast, etc.). For AI model inference, the selection may be associated with the selected enhancement method. At operations 408 or 410, the image of the target FOV of sample S can be enhanced or transformed.

[0101] At an operation 412, a mask pattern is then generated based on the enhanced image from operations 408 or 410. The mask pattern may then be employed in illuminating the sample (e.g., a region of interest or target) at operation 414. For example, the microscope-based system of FIG. 1 can employ the mask pattern to selectively illuminate regions of interest for, e.g., photobleaching of molecules at certain subcellular areas, photoactivation of fluorophores at a confined location, optogenetics, light-triggered release of reactive oxygen species within a designated organelle, or photoinduced labeling of biomolecules in a defined structural feature of a cell.
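The disclosure leaves the mask-generation technique of operation 412 open; one minimal, hypothetical sketch is simple thresholding of the enhanced image. The function name and the threshold value are assumptions for illustration.

```python
def generate_mask_pattern(image, threshold):
    """Binary mask pattern: 1 marks pixels to illuminate, 0 marks pixels to skip.

    `image` is a 2-D list of pixel values; `threshold` is an operator- or
    processor-chosen cutoff (an assumption for this sketch).
    """
    return [[1 if px >= threshold else 0 for px in row] for row in image]
```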

[0102] FIG. 5 illustrates additional details that may be associated with the image enhancement methods described herein. As with the previously described methods, an initial image of the target FOV of a sample on a microscope-based system may be obtained at an operation 504. At an operation 511, the captured or obtained image may be used to generate a histogram. For example, the histogram may be generated based on pixel values in the image. As is known in the art, a histogram is a summary of the image; for example, it can correlate the quantity of pixels with the value (e.g., intensity) of each pixel. The histogram can be used by the user or the system to determine which image processing technique to use (e.g., traditional vs. AI).
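The histogram of operation 511 can be sketched in a few lines; this pure-Python version simply counts pixels per intensity value (the function name and the 8-bit default are assumptions for illustration).

```python
def pixel_histogram(image, bit_depth=8):
    """Count how many pixels in a 2-D image take each intensity value.

    Returns a list `counts` where counts[v] is the number of pixels whose
    value equals v, for v in [0, 2**bit_depth - 1].
    """
    counts = [0] * (2 ** bit_depth)
    for row in image:
        for px in row:
            counts[px] += 1
    return counts
```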

[0103] At an operation 509, an image enhancement method can be chosen or selected by the user or a processor of the system. In one example, a linearity transformation at operation 509a chooses a valid minimum pixel value and a valid maximum pixel value. A multi-order linear transformation at operation 509b chooses a valid minimum pixel value and a valid maximum pixel value, followed by choosing a leading coefficient. A sigmoid fitting transformation at operation 509c chooses a valid minimum pixel value and a valid maximum pixel value, followed by choosing a reversal point and a coefficient alpha of a sigmoid function. A modified sigmoid fitting transformation at operation 509d chooses a valid minimum pixel value and a valid maximum pixel value, followed by choosing a piecewise point and a coefficient alpha of a sigmoid function for both sides of the piecewise point.
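The disclosure names the parameters of each transformation (valid minimum/maximum, leading coefficient, reversal point, coefficient alpha, piecewise point) but not their exact functional forms; the following sketch assumes common forms for each, so the equations themselves are illustrative assumptions rather than the claimed methods.

```python
import math

def linear_transform(px, vmin, vmax, out_max=255):
    """Linearity transformation (509a): stretch [vmin, vmax] onto [0, out_max]."""
    px = min(max(px, vmin), vmax)  # clip to the valid range
    return (px - vmin) / (vmax - vmin) * out_max

def multi_order_transform(px, vmin, vmax, leading=2.0, out_max=255):
    """Multi-order transformation (509b): assumed here to be a quadratic in
    the normalized value with an operator-chosen leading coefficient, built
    so the curve still maps vmin -> 0 and vmax -> out_max."""
    x = (min(max(px, vmin), vmax) - vmin) / (vmax - vmin)
    y = leading * x ** 2 + (1 - leading) * x
    return min(max(y, 0.0), 1.0) * out_max  # keep output in range

def sigmoid_transform(px, vmin, vmax, reversal, alpha, out_max=255):
    """Sigmoid fitting transformation (509c): logistic curve centered at the
    operator-chosen reversal point, with steepness alpha."""
    px = min(max(px, vmin), vmax)
    return out_max / (1 + math.exp(-alpha * (px - reversal)))

def modified_sigmoid_transform(px, vmin, vmax, piecewise,
                               alpha_lo, alpha_hi, out_max=255):
    """Modified sigmoid fitting transformation (509d): a separate alpha on
    each side of the operator-chosen piecewise point."""
    alpha = alpha_lo if px < piecewise else alpha_hi
    return sigmoid_transform(px, vmin, vmax, piecewise, alpha, out_max)
```

Note that a pixel at the reversal (or piecewise) point maps to half of the output range under the sigmoid forms, which is the usual behavior of a logistic curve centered there.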

[0104] As a result of the image enhancement from operation 509, at an operation 515, pixel values may be converted into new pixel values based on the chosen transformation method. Then, at an operation 516, the transformed image of the FOV (an 8-bit or 16-bit image) is obtained.
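The conversion of operations 515 and 516 amounts to rounding and clipping the transformed values into the target integer range; a minimal sketch (the function name is an assumption):

```python
def to_integer_image(values, bit_depth=8):
    """Quantize transformed pixel values into 8-bit or 16-bit integers by
    rounding and clipping to [0, 2**bit_depth - 1]."""
    full_scale = 2 ** bit_depth - 1
    return [min(max(int(round(v)), 0), full_scale) for v in values]
```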

[0105] An example of the initial image of the FOV of the sample S is shown in FIG. 6A, and a transformed image of the FOV is shown in FIG. 6B.

[0106] According to any method described herein, the standardization of image pixel values from a microscope-based system may provide enhanced image pre-processing techniques for use in a medical evaluation. For example, histopathology relates to observational analysis of patient samples. Often, techniques employ one or more stains or probes to highlight or identify the existence of a target or region of interest. The method described herein may be applied to patient samples in a microscope-based system, where an image of the sample is obtained and any of the methods described herein is applied. Accordingly, the image may be pre-processed to enhance subsequent processing using a well-trained AI model to evaluate cellular morphology, cellular activity, intracellular components, disease-driving elements such as fusion of genetic material or misshapen protein structures, xenobiotic materials, or bacterial or viral infectious elements.

[0107] The system and method provided by the various examples of the present disclosure may relate to processing, for example but not limited thereto, a high content of proteins, lipids, nucleic acids, or biochemical species, and may comprise an imaging light source, a photosensitizing light source, a pattern illumination device such as a set of dual-axis high-speed galvanometric scanning mirrors, a microscope body, a focusing device, a high-precision microscope stage, a high-sensitivity digital camera, a control workstation (or a personal computer), a processing module such as an FPGA chip, and application software for camera control, image processing, stage control, and optical path control. It is therefore an object to process the proteins, lipids, nucleic acids, or biochemical species in an area of interest specified by fluorescent signals or structural signatures of cell images. An additional object is to collect a large amount of proteins, lipids, or nucleic acids through high-content labeling and purification in order to identify biomolecules of interest in the area of interest by a mass spectrometer or a nucleic acid sequencer, followed by proteomic, metabolomic, or transcriptomic analyses.

[0108] The systems and methods according to some examples of the invention take fluorescent staining or brightfield images first. A mask-pattern generating method is then performed automatically on the images by a connected computer using image processing techniques such as thresholding, erosion, filtering, or artificial-intelligence-trained semantic segmentation methods to determine the points or areas to be processed based on the criteria set by the operating individual. A high-speed scanning system is used for pattern illumination to shine the photosensitizing light on the points or areas to induce processing of proteins, lipids, nucleic acids, or biochemical species in the illuminated regions. Alternatively, a digital micromirror device (DMD) or spatial light modulator (SLM) may be used for pattern illumination. Photo-induced processing is achieved by including a photosensitizer such as riboflavin, Rose Bengal, or a photosensitized protein (such as miniSOG, Killer Red, etc.) and chemical reagents such as phenol, aryl azide, benzophenone, Ru(bpy)32+, or their derivatives for labeling purposes. The labeling groups can be conjugated with tagging reagents like biotin that are used for protein or nucleic acid pulldown. The photosensitizer, labeling reagent, and tagging reagent can be separate molecules, or can be one molecule with all three functions. Spatially controlled illumination can induce covalent binding of the labeling reagents onto amino acids, lipids, nucleic acids, or biochemical species in the specific region. As examples, streptavidin is used to isolate biotinylated proteins, and the labeled proteins are then purified to be analyzed by a mass spectrometer; RNA may be isolated by associated-protein pulldown and then analyzed by RNA-seq or RT-PCR. Because a sufficient quantity of RNA or protein is needed to obtain low-background results, efficient high-content labeling is a major requirement for this system.
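Of the classical techniques listed above, binary erosion is straightforward to sketch; this pure-Python version (the 3x3 structuring element and function name are assumptions) shrinks a thresholded mask by one pixel, which can suppress single-pixel noise before illumination.

```python
def erode(mask):
    """One pass of 3x3 binary erosion: a pixel stays set only if its entire
    3x3 neighborhood is set; border pixels are cleared.

    `mask` is a 2-D list of 0/1 values, e.g. as produced by thresholding.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            out[i][j] = int(all(mask[i + di][j + dj]
                                for di in (-1, 0, 1)
                                for dj in (-1, 0, 1)))
    return out
```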

[0109] In some examples, the term “artificial intelligence” may refer to artificial intelligence, artificial intelligence platforms, artificial intelligence algorithms, artificial intelligence processes, artificial intelligence networks, neural networks, machine learning, deep learning, computer-based learning, algorithms, or established systems or networks employing computer-based methods or activities to convert, interpret, modify, alter, change, process, or otherwise interact with one or more data sets for a functional purpose, result, quantification, and/or qualification.

[0110] As described herein, the computing devices and systems described and/or illustrated herein broadly represent any type or form of computing device or system capable of executing computer-readable instructions, such as those contained within the modules described herein. In their most basic configuration, these computing device(s) may each comprise at least one memory device and at least one physical processor.

[0111] The term “memory” or “memory device,” as used herein, generally represents any type or form of volatile or non-volatile storage device or medium capable of storing data and/or computer-readable instructions. In one example, a memory device may store, load, and/or maintain one or more of the modules described herein. Examples of memory devices comprise, without limitation, Random Access Memory (RAM), Read Only Memory (ROM), flash memory, Hard Disk Drives (HDDs), Solid-State Drives (SSDs), optical disk drives, caches, variations or combinations of one or more of the same, or any other suitable storage memory.

[0112] In addition, the term “processor” or “physical processor,” as used herein, generally refers to any type or form of hardware-implemented processing unit capable of interpreting and/or executing computer-readable instructions. In one example, a physical processor may access and/or modify one or more modules stored in the above-described memory device. Examples of physical processors comprise, without limitation, microprocessors, microcontrollers, Central Processing Units (CPUs), Field-Programmable Gate Arrays (FPGAs) that implement softcore processors, Application-Specific Integrated Circuits (ASICs), portions of one or more of the same, variations or combinations of one or more of the same, or any other suitable physical processor.

[0113] Although illustrated as separate elements, the method steps described and/or illustrated herein may represent portions of a single application. In addition, in some embodiments one or more of these steps may represent or correspond to one or more software applications or programs that, when executed by a computing device, may cause the computing device to perform one or more tasks, such as the method step.

[0114] In addition, one or more of the devices described herein may transform data, physical devices, and/or representations of physical devices from one form to another. Additionally or alternatively, one or more of the modules recited herein may transform a processor, volatile memory, non-volatile memory, and/or any other portion of a physical computing device from one form of computing device to another form of computing device by executing on the computing device, storing data on the computing device, and/or otherwise interacting with the computing device.

[0115] The term “computer-readable medium,” as used herein, generally refers to any form of device, carrier, or medium capable of storing or carrying computer-readable instructions. Examples of computer-readable media comprise, without limitation, transmission-type media, such as carrier waves, and non-transitory-type media, such as magnetic-storage media (e.g., hard disk drives, tape drives, and floppy disks), optical-storage media (e.g., Compact Disks (CDs), Digital Video Disks (DVDs), and BLU-RAY disks), electronic-storage media (e.g., solid-state drives and flash media), and other distribution systems.

[0116] A person of ordinary skill in the art will recognize that any process or method disclosed herein can be modified in many ways. The process parameters and sequence of the steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed.