
Title:
A system and method for processing biology-related data, a system and method for controlling a microscope and a microscope
Document Type and Number:
WIPO Patent Application WO/2020/244779
Kind Code:
A1
Abstract:
A system (100) for processing biology-related data comprises one or more processors (110) coupled to one or more storage devices (120). The system (100) is configured to receive biology-related image-based search data (103) and configured to generate a first high-dimensional representation of the biology-related image-based search data (103) by a trained visual recognition machine-learning algorithm executed by the one or more processors (110). The first high-dimensional representation comprises at least 3 entries each having a different value. Further, the system (100) is configured to obtain a plurality of second high-dimensional representations (105) of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets. Additionally, the system (100) is configured to compare the first high-dimensional representation with each second high-dimensional representation (105) of the plurality of second high-dimensional representations by the one or more processors (110).

Inventors:
KAPPEL CONSTANTIN (DE)
Application Number:
PCT/EP2019/064978
Publication Date:
December 10, 2020
Filing Date:
June 07, 2019
Assignee:
LEICA MICROSYSTEMS (DE)
International Classes:
G16H30/40; G16H30/20
Foreign References:
EP1377865A1 (2004-01-07)
US8319829B2 (2012-11-27)
Attorney, Agent or Firm:
2SPL PATENTANWÄLTE PARTG MBB (DE)
Claims:
Claims

1. A system (100, 200, 300) comprising one or more processors (110) and one or more storage devices (120), wherein the system (100) is configured to: receive biology-related image-based search data (103); generate a first high-dimensional representation (260) of the biology-related image-based search data (103) by a trained visual recognition machine-learning algorithm executed by the one or more processors (110), wherein the first high-dimensional representation (260) comprises at least 3 entries each having a different value; obtain a plurality of second high-dimensional representations (105, 250) of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets; and compare the first high-dimensional representation (260) to each second high-dimensional representation of the plurality of second high-dimensional representations (105, 250).

2. The system of claim 1, wherein the biology-related image-based search data (103) is image data of an image of at least one of a biological structure comprising a nucleotide sequence, a biological structure comprising a protein sequence, a biological molecule, biological tissue, a biological structure with a specific behavior, or a biological structure with a specific biological function or a specific biological activity.

3. The system of one of the previous claims, wherein the values of one or more entries of the first high-dimensional representation (260) are proportional to a likelihood of a presence of a specific biological function or a specific biological activity.

4. The system of one of the previous claims, wherein the values of one or more entries of the second high-dimensional representations (105, 250) are proportional to a likelihood of a presence of a specific biological function or a specific biological activity.

5. The system of one of the previous claims, wherein the system (100) is configured to select a second high-dimensional representation of the plurality of second high-dimensional representations (105, 250) closest to the first high-dimensional representation (260) based on the comparison.

6. The system of claim 5, wherein the system (100) is configured to output at least one of the closest second high-dimensional representation, the biology-related image-based input data set of the plurality of biology-related image-based input data sets, which corresponds to the closest second high-dimensional representation, or the biology-related language-based input data set of the plurality of biology-related language-based input data sets, which corresponds to the closest second high-dimensional representation.

7. The system of one of the previous claims, wherein the comparison of the first high-dimensional representation (260) with each second high-dimensional representation of the plurality of second high-dimensional representations (105, 250) is based on a Euclidean distance function or an earth mover's distance function.

8. The system of one of the previous claims, wherein the first high-dimensional representation (260) and the second high-dimensional representations (105, 250) are numerical representations.

9. The system of one of the previous claims, wherein the first high-dimensional representation (260) and the second high-dimensional representations (105, 250) each comprise more than 100 dimensions.

10. The system of one of the previous claims, wherein the first high-dimensional representation (260) is a first vector and the second high-dimensional representations (105, 250) are second vectors.

11. The system of one of the previous claims, wherein more than 50% of the values of the entries of the first high-dimensional representation (260) and more than 50% of the values of the entries of the second high-dimensional representations (105, 250) are unequal to 0.

12. The system of one of the previous claims, wherein the values of more than 5 entries of the first high-dimensional representation (260) are larger than 10% of a largest absolute value of the entries of the first high-dimensional representation (260) and the values of more than 5 entries of each second high-dimensional representation of the plurality of second high-dimensional representations (105, 250) are larger than 10% of a respective largest absolute value of the entries of the second high-dimensional representations (105, 250).

13. The system of one of the previous claims, wherein the trained visual recognition machine-learning algorithm comprises a trained visual recognition neural network.

14. The system of claim 13, wherein the trained visual recognition neural network comprises more than 30 layers.

15. The system of claim 13 or 14, wherein the trained visual recognition neural network is a convolutional neural network or a capsule network.

16. The system of claim 13, 14 or 15, wherein the trained visual recognition neural network comprises a plurality of convolution layers and a plurality of pooling layers.

17. The system of one of the claims 13-16, wherein the trained visual recognition neural network uses a rectified linear unit activation function.

18. The system of one of the previous claims, wherein the system (100) is configured to obtain the second high-dimensional representations (105, 250) by generating the second high-dimensional representations (105, 250) of the plurality of second high-dimensional representations of the plurality of biology-related image-based input data sets or the plurality of biology-related language-based input data sets by the trained visual recognition machine-learning algorithm executed by the one or more processors, wherein each second high-dimensional representation of the plurality of second high-dimensional representations (105, 250) comprises at least 3 entries each having a different value.

19. The system of one of the previous claims, further comprising a microscope (501, 810) configured to obtain the plurality of biology-related image-based input data sets by taking images of a biological specimen.

20. The system of one of the previous claims, wherein the system (100) is configured to select the trained visual recognition machine-learning algorithm from a plurality of trained visual recognition machine-learning algorithms based on the biology-related image-based search data (103).

21. The system of one of the previous claims, wherein the system (100) is configured to: receive second biology-related image-based search data and information on a logical operator; generate a first high-dimensional representation of the second biology-related image-based search data by the trained visual recognition machine-learning algorithm executed by the one or more processors (110); determine a combined high-dimensional representation based on a combination of the first high-dimensional representation (260) of the first biology-related image-based search data (103) and the first high-dimensional representation of the second biology-related image-based search data according to the logical operator; and compare the combined high-dimensional representation to each second high-dimensional representation of the plurality of second high-dimensional representations (105, 250).

22. The system of claim 21, wherein the logical operator is an AND-operator and the combined high-dimensional representation is determined by adding the first high-dimensional representation (260) of the first biology-related image-based search data (103) and the first high-dimensional representation of the second biology-related image-based search data.

23. The system of one of the previous claims, wherein the system (100) is configured to control an operation of a microscope (501, 810).

24. A system (400, 500) comprising one or more processors (110) and one or more storage devices (120), wherein the system (100) is configured to: receive image-based search data (401); generate a first high-dimensional representation of the image-based search data (401) by a trained visual recognition machine-learning algorithm executed by the one or more processors (110), wherein the first high-dimensional representation comprises at least 3 entries each having a different value; obtain a plurality of second high-dimensional representations (405) of a plurality of image-based input data sets; select a second high-dimensional representation (405) from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each second high-dimensional representation (405) of the plurality of second high-dimensional representations; and provide a control signal (411) for controlling an operation of a microscope (501, 810) based on the selected second high-dimensional representation.

25. A system (600, 700, 790) comprising one or more processors (110) and one or more storage devices (120), wherein the system (100) is configured to: determine a plurality of clusters of a plurality of second high-dimensional representations (405) of a plurality of image-based input data sets by a clustering algorithm executed by the one or more processors (110); determine a first high-dimensional representation of a cluster center of a cluster of the plurality of clusters; select a second high-dimensional representation (405) from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each or a subset of the second high-dimensional representations (405) of the plurality of second high-dimensional representations; and provide a control signal (411) for controlling an operation of a microscope based on the selected second high-dimensional representation.

26. The system of claim 25, wherein the clustering algorithm comprises a k-means clustering algorithm or a mean shift clustering algorithm.

27. The system of one of the claims 24-26, wherein the system (100) is configured to determine a microscope target position based on the selected second high-dimensional representation, wherein the microscope target position is a position at which an image was taken, which was represented by the image-based input data, which corresponds to the selected second high-dimensional representation, wherein the control signal is configured to trigger the microscope to drive to the microscope target position.

28. The system of one of the claims 24-27, wherein the system (100) is configured to generate the plurality of second high-dimensional representations of the plurality of image-based input data sets by a visual recognition machine-learning algorithm executed by the one or more processors (110).

29. The system of one of the claims 24-28, wherein the system (100) is configured to select a second high-dimensional representation of the plurality of second high-dimensional representations closest to the first high-dimensional representation based on the comparison.

30. The system of one of the claims 24-29, further comprising the microscope configured to take a plurality of images of a specimen, wherein the plurality of image-based input data sets represents the plurality of images of the specimen.

31. A microscope comprising a system of one of the previous claims.

32. A method (900) for processing biology-related image-based search data, the method comprising: Receiving (910) biology-related image-based search data;

Generating (920) a first high-dimensional representation of the biology-related image-based search data by a trained visual recognition machine-learning algorithm, wherein the first high-dimensional representation comprises at least 3 entries each having a different value;

Obtaining (930) a plurality of second high-dimensional representations of a plurality of biology-related image-based input data sets or a plurality of biology-related language-based input data sets; and

Comparing (940) the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations.

33. A method (1000) for controlling a microscope, the method comprising: Receiving (1010) image-based search data;

Generating (1020) a first high-dimensional representation of the image-based search data by a trained visual recognition machine-learning algorithm, wherein the first high-dimensional representation comprises at least 3 entries each having a different value;

Obtaining (1030) a plurality of second high-dimensional representations of a plurality of image-based input data sets;

Selecting (1040) a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations; and

Controlling (1050) an operation of a microscope based on the selected second high-dimensional representation.

34. A method (1100) for controlling a microscope, the method comprising:

Determining (1110) a plurality of clusters of a plurality of second high-dimensional representations of a plurality of image-based input data sets by a clustering algorithm;

Determining (1120) a first high-dimensional representation of a cluster center of a cluster of the plurality of clusters;

Selecting (1130) a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each or a subset of the second high-dimensional representations of the plurality of second high-dimensional representations; and

Providing (1140) a control signal for controlling an operation of a microscope based on the selected second high-dimensional representation.

35. A computer program having a program code for performing a method according to one of claims 32 to 34 when the program is executed by a processor.

Description:
A system and method for processing biology-related data, a system and method for controlling a microscope and a microscope

Technical field

Examples relate to the processing of biology-related data and/or the control of a microscope.

Background

In many biological applications, a vast amount of data is generated. For example, images are taken of a huge number of biological structures and stored in databases. It is very time-consuming and expensive to analyze the biological data manually.

Summary

Hence, there is a need for an improved concept for processing biology-related data and/or the control of a microscope.

This need may be satisfied by the subject matter of the claims.

Some embodiments relate to a system comprising one or more processors coupled to one or more storage devices. The system is configured to receive biology-related image-based search data and configured to generate a first high-dimensional representation of the biology-related image-based search data by a trained visual recognition machine-learning algorithm executed by the one or more processors. The first high-dimensional representation comprises at least 3 entries each having a different value. Further, the system is configured to obtain a plurality of second high-dimensional representations of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets. Additionally, the system is configured to compare the first high-dimensional representation to each second high-dimensional representation of the plurality of second high-dimensional representations.

By using a visual recognition machine-learning algorithm, an image-based search request can be mapped to a high-dimensional representation. By allowing the high-dimensional representation to have entries with various different values (in contrast to one-hot encoded representations), semantically similar biological search terms can be mapped to similar high-dimensional representations. By obtaining high-dimensional representations of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets, high-dimensional representations can be found that are equal or similar to the high-dimensional representation of the search request. In this way, it may be possible to find images or text corresponding to the search request. The trained visual recognition machine-learning algorithm may thus enable a search for biology-related images among a plurality of biological images (e.g. a database of biological images) or a search for biology-related texts among a plurality of biology-related texts (e.g. a scientific paper collection or library) based on an image-based search input. A search within an already existing database or within images generated by a running experiment (e.g. images taken by a microscope of one or more biological specimens) may be enabled, even if the images were not labeled or tagged before.
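For illustration only, the following minimal Python sketch shows what such an embedding-based search could look like once the embeddings exist; the function and array names are assumptions and not part of the disclosed system.

import numpy as np

def find_closest_images(query_embedding: np.ndarray,
                        stored_embeddings: np.ndarray,
                        k: int = 5) -> np.ndarray:
    """Return indices of the k stored embeddings closest to the query.

    query_embedding: shape (d,), the first high-dimensional representation.
    stored_embeddings: shape (n, d), the second high-dimensional representations.
    """
    # Euclidean distance between the query and every stored embedding
    distances = np.linalg.norm(stored_embeddings - query_embedding, axis=1)
    # Indices sorted by ascending distance; keep the k closest hits
    return np.argsort(distances)[:k]

The returned indices could then be mapped back to the underlying images or texts.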

Some embodiments relate to a system comprising one or more processors and one or more storage devices. The system is configured to receive image-based search data and configured to generate a first high-dimensional representation of the image-based search data by a trained visual recognition machine-learning algorithm executed by the one or more processors. The first high-dimensional representation comprises at least 3 entries each having a different value. Further, the system is configured to obtain a plurality of second high-dimensional representations of a plurality of image-based input data sets and configured to select a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations. Additionally, the system is configured to provide a control signal for controlling an operation of a microscope based on the selected second high-dimensional representation.

By using a visual recognition machine-learning algorithm, an image-based search request can be mapped to a high-dimensional representation. By allowing the high-dimensional representation to have entries with various different values (in contrast to one-hot encoded representations), semantically similar search terms can be mapped to similar high-dimensional representations. By obtaining high-dimensional representations of a plurality of image-based input data sets, high-dimensional representations can be found that are equal or similar to the high-dimensional representation of the search term. In this way, it may be possible to find images corresponding to the search request. With this information, a microscope can be driven to the respective locations where the images were taken, in order to enable taking further images (e.g. with higher magnification, different light or filter) of the locations of interest. In this way, a specimen (e.g. a biological specimen or an integrated circuit) may be imaged at low magnification first to find locations corresponding to the search request, and afterwards the locations of interest may be analyzed in more detail.
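As a hedged sketch of this idea, the following Python fragment selects the best match and drives the stage to the corresponding position; the microscope interface (move_stage, acquire) is hypothetical, since actual control interfaces are vendor-specific.

import numpy as np

def revisit_best_match(query_embedding, stored_embeddings, positions, microscope):
    """positions[i] holds the (x, y, z) stage position where image i was taken."""
    distances = np.linalg.norm(stored_embeddings - query_embedding, axis=1)
    best = int(np.argmin(distances))
    x, y, z = positions[best]
    microscope.move_stage(x, y, z)   # control signal: drive to the target position
    # take a further, more detailed image at the location of interest
    return microscope.acquire(magnification="high")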

Some embodiments relate to a system comprising one or more processors coupled to one or more storage devices. The system is configured to determine a plurality of clusters of a plurality of second high-dimensional representations of a plurality of image-based input data sets by a clustering algorithm executed by the one or more processors. Further, the system is configured to determine a first high-dimensional representation of a cluster center of a cluster of the plurality of clusters and to select a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each or a subset of the second high-dimensional representations of the plurality of second high-dimensional representations. Additionally, the system is configured to provide a control signal for controlling an operation of a microscope based on the selected second high-dimensional representation.

By identifying clusters of second high-dimensional representations, second high-dimensional representations corresponding to semantically similar content can be combined into a cluster. By determining a cluster center and identifying, via the comparison, one or more second high-dimensional representations closest to the cluster center, one or more images may be found which represent a typical image of the cluster. For example, different clusters may comprise second high-dimensional representations corresponding to different characteristic parts (e.g. cytosol, nucleus, cytoskeleton) of a biological specimen. The system may be able to provide the control signal so that the microscope moves to the position where the typical image of one or more of the clusters was taken (e.g. to take more images at this position with varying microscope parameters).
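A minimal sketch of this clustering variant, assuming scikit-learn's k-means is used (the description also mentions mean-shift clustering, and the choice of k here is an assumption):

import numpy as np
from sklearn.cluster import KMeans

def cluster_representatives(embeddings: np.ndarray, k: int = 3) -> list:
    """Return, per cluster, the index of the embedding closest to its center."""
    km = KMeans(n_clusters=k, n_init=10).fit(embeddings)
    representatives = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # distance of each cluster member to the cluster center
        d = np.linalg.norm(embeddings[members] - km.cluster_centers_[c], axis=1)
        representatives.append(int(members[np.argmin(d)]))
    return representatives

The stage positions associated with the returned indices could then be used to generate the control signal.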

Short description of the Figures

Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which

Fig. 1 is a schematic illustration of a system for processing biology-related data;

Fig. 2 is a schematic illustration of another system for processing biology-related data;

Fig. 3 is a schematic illustration of another system for processing biology-related data;

Fig. 4 is a schematic illustration of a system for controlling a microscope;

Fig. 5 is a schematic illustration of a system for controlling a microscope based on biology- related image-based search data;

Fig. 6 is a schematic illustration of a system for controlling a microscope;

Fig. 7a is a schematic illustration of a system for controlling a microscope based on biology- related image-based search data by using a clustering algorithm;

Fig. 7b is a schematic illustration of a system for processing biology-related data by using a clustering algorithm;

Fig. 8 is a schematic illustration of a system for processing data;

Fig. 9 is a flow chart of a method for processing biology-related data;

Fig. 10 is a flow chart of a method for controlling a microscope; and

Fig. 11 is a flow chart of another method for controlling a microscope.

Detailed Description

Various examples will now be described more fully with reference to the accompanying drawings in which some examples are illustrated. In the figures, the thicknesses of lines, layers and/or regions may be exaggerated for clarity.

Accordingly, while further examples are capable of various modifications and alternative forms, some particular examples thereof are shown in the figures and will subsequently be described in detail. However, this detailed description does not limit further examples to the particular forms described. Further examples may cover all modifications, equivalents, and alternatives falling within the scope of the disclosure. Same or like numbers refer to like or similar elements throughout the description of the figures, which may be implemented identically or in modified form when compared to one another while providing for the same or a similar functionality.

It will be understood that when an element is referred to as being "connected" or "coupled" to another element, the elements may be directly connected or coupled or connected via one or more intervening elements. If two elements A and B are combined using an "or", this is to be understood to disclose all possible combinations, i.e. only A, only B, as well as A and B, if not explicitly or implicitly defined otherwise. An alternative wording for the same combinations is "at least one of A and B" or "A and/or B". The same applies, mutatis mutandis, for combinations of more than two elements.

The terminology used herein for the purpose of describing particular examples is not intended to be limiting for further examples. Whenever a singular form such as "a," "an" and "the" is used and using only a single element is neither explicitly nor implicitly defined as being mandatory, further examples may also use plural elements to implement the same functionality. Likewise, when a functionality is subsequently described as being implemented using multiple elements, further examples may implement the same functionality using a single element or processing entity. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used, specify the presence of the stated features, integers, steps, operations, processes, acts, elements and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, processes, acts, elements, components and/or any group thereof.

Unless otherwise defined, all terms (including technical and scientific terms) are used herein in their ordinary meaning of the art to which the examples belong.

Fig. 1 shows a schematic illustration of a system 100 for processing biology-related data according to an embodiment. The system 100 comprises one or more processors 110 coupled to one or more storage devices 120. The system 100 is configured to receive (first) biology-related image-based search data 103 and configured to generate a first high-dimensional representation of the (first) biology-related image-based search data 103 by a trained visual recognition machine-learning algorithm executed by the one or more processors 110. The first high-dimensional representation comprises at least 3 entries each having a different value (or at least 20 entries, at least 50 entries or at least 100 entries having values different from each other). Further, the system 100 is configured to obtain a plurality of second high-dimensional representations 105 of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets. Additionally, the system 100 is configured to compare the first high-dimensional representation with each second high-dimensional representation 105 of the plurality of second high-dimensional representations by the one or more processors 110.

The biology-related image-based search data 103 may be image data (e.g. pixel data of an image) of an image of a biological structure comprising a nucleotide or a nucleotide sequence, a biological structure comprising a protein or a protein sequence, a biological molecule, a biological tissue, a biological structure with a specific behavior, and/or a biological structure with a specific biological function or a specific biological activity. The biological structure may be a molecule, a viroid or virus, artificial or natural membrane-enclosed vesicles, a subcellular structure (like a cell organelle), a cell, a spheroid, an organoid, a three-dimensional cell culture, a biological tissue, an organ slice or part of an organ in vivo or in vitro. For example, the image of the biological structure may be an image of the location of a protein within a cell or tissue or an image of a cell or tissue with endogenous nucleotides (e.g. DNA) to which labeled nucleotide probes bind (e.g. in situ hybridization). The image data may comprise a pixel value for each pixel of an image for each color dimension of the image (e.g. three color dimensions for an RGB representation). For example, depending on the imaging modality, other channels may apply, related to excitation or emission wavelength, fluorescence lifetime, light polarization, stage position in three spatial dimensions or different imaging angles. The biology-related image-based search data 103 may be an XY pixel map, volumetric data (XYZ), time series data (XY+T) or combinations thereof (XYZT). Moreover, additional dimensions depending on the kind of image source may be included, such as channel (e.g. spectral emission bands), excitation wavelength, stage position, logical position as in a multi-well plate or multi-positioning experiment, and/or mirror and/or objective position as in lightsheet imaging. For example, the user may input, or a database may provide, an image as a pixel map or pictures of higher dimensions. The biology-related image-based search data 103 may be received from the one or more storage devices, a database stored by a storage device or may be input by a user.
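As an illustration of such multi-dimensional image data (the axis order and sizes below are assumptions, not prescribed by the disclosure):

import numpy as np

# XYZT data with channels: a time series (T) of z-stacks (Z) with three
# channels (C), each plane being a 512 x 512 pixel map (Y, X)
image_data = np.zeros((10, 25, 3, 512, 512), dtype=np.uint16)  # (T, Z, C, Y, X)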

A high-dimensional representation (e.g. first and second high-dimensional representation) may be a hidden representation, a latent vector, an embedding, a semantic embedding and/or a token embedding, and/or may also be called a hidden representation, a latent vector, an embedding, a semantic embedding and/or a token embedding.

The first high-dimensional representation and/or the second high-dimensional representations may be numerical representations (e.g. comprising numerical values only). The first high-dimensional representation and/or the second high-dimensional representations may comprise more than 100 dimensions (or more than 300 dimensions or more than 500 dimensions) and/or less than 10000 dimensions (or less than 3000 dimensions or less than 1000 dimensions). Each entry of a high-dimensional representation may be a dimension of the high-dimensional representation (e.g. a high-dimensional representation with 100 dimensions comprises 100 entries). For example, using high-dimensional representations with more than 300 dimensions and less than 1000 dimensions may enable a suitable representation for biology-related data with semantic correlation. The first high-dimensional representation may be a first vector and each second high-dimensional representation may be a respective second vector. If a vector representation is used for the entries of the first high-dimensional representation and the entries of a second high-dimensional representation, an efficient comparison and/or other calculations (e.g. normalization) may be implemented, although other representations (e.g. as a matrix) may be possible as well. For example, the first high-dimensional representation and/or the second high-dimensional representations may be normalized vectors. The first high-dimensional representation and the second high-dimensional representations may be normalized to the same value (e.g. 1). For example, the last layer of the trained language recognition machine-learning algorithm may represent a non-linear operation, which may perform the normalization in addition. The first high-dimensional representation and/or the second high-dimensional representations may be generated by a trained visual recognition machine-learning algorithm, which may have been trained by a loss function that causes the trained visual recognition machine-learning algorithm to output normalized high-dimensional representations. However, other approaches for the normalization of the first high-dimensional representation and the second high-dimensional representations may be applicable as well.
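A minimal sketch of such a normalization step in Python, assuming L2-normalization to length 1 (other normalization approaches are possible, as noted above):

import numpy as np

def l2_normalize(v: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """Scale a high-dimensional representation so its Euclidean norm is 1."""
    return v / max(float(np.linalg.norm(v)), eps)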

For example, the first high-dimensional representation and/or the second high-dimensional representations may comprise various entries (at least three) with values unequal to 0, in contrast to one-hot encoded representations. Corresponding to the first high-dimensional representation, each second high-dimensional representation of the plurality of second high-dimensional representations may comprise at least 3 entries each having a different value (or at least 20 entries, at least 50 entries or at least 100 entries having values different from each other). By using high-dimensional representations, which are allowed to have various entries with values unequal to 0, information on a semantic relationship between the high-dimensional representations can be reproduced. For example, more than 50% (or more than 70% or more than 90%) of the values of the entries of the first high-dimensional representation and/or more than 50% (or more than 70% or more than 90%) of the values of the entries of the second high-dimensional representations may be unequal to 0. Sometimes one-hot encoded representations also have more than one entry unequal to 0, but there is only one entry with a high value and all other entries have values at noise level (e.g. lower than 10% of the one high value). In contrast, the values of more than 5 entries (or more than 20 entries or more than 50 entries) of the first high-dimensional representation may be larger than 10% (or larger than 20% or larger than 30%) of a largest absolute value of the entries of the first high-dimensional representation, for example. Further, the values of more than 5 entries (or more than 20 entries or more than 50 entries) of each second high-dimensional representation of the plurality of second high-dimensional representations may be larger than 10% (or larger than 20% or larger than 30%) of a respective largest absolute value of the entries of the second high-dimensional representations. For example, the values of more than 5 entries (or more than 20 entries or more than 50 entries) of one second high-dimensional representation of the plurality of second high-dimensional representations may be larger than 10% (or larger than 20% or larger than 30%) of a largest absolute value of the entries of the one second high-dimensional representation. For example, each entry of the first high-dimensional representation and/or the second high-dimensional representations may comprise a value between -1 and 1.

The first high-dimensional representation may be generated by applying at least a part (e.g. an encoder) of the trained visual recognition machine-learning algorithm with a trained set of parameters to the biology-related image-based search data 103. For example, generating the first high-dimensional representation by the trained visual recognition machine-learning algorithm may mean that the first high-dimensional representation is generated by an encoder of the trained visual recognition machine-learning algorithm. The trained set of parameters of the trained visual recognition machine-learning algorithm may be obtained during training of the visual recognition machine-learning algorithm as described below.

The values of one or more entries of the first high-dimensional representation and/or the values of one or more entries of the second high-dimensional representations may be proportional to a likelihood of a presence of a specific biological function or a specific biological activity. By using a mapping that generates high-dimensional representations preserving the semantic similarities of the input data sets, semantically similar high-dimensional representations may have a closer distance to each other than semantically less similar high-dimensional representations. Further, if two high-dimensional representations represent input data sets with the same or a similar specific biological function or specific biological activity, one or more entries of these two high-dimensional representations may have the same or similar values. Due to the preservation of the semantics, one or more entries of the high-dimensional representations may be an indication of an occurrence or presence of a specific biological function or a specific biological activity. For example, the higher a value of one or more entries of the high-dimensional representation, the higher the likelihood of a presence of a biological function or a biological activity correlated with these one or more entries may be.

The trained visual recognition machine-learning algorithm may also be called an image recognition model or visual model. The trained visual recognition machine-learning algorithm may be or may comprise a trained visual recognition neural network. The trained visual recognition neural network may comprise more than 20 layers (or more than 40 layers or more than 80 layers) and/or less than 400 layers (or less than 200 layers or less than 150 layers). The trained visual recognition neural network may be a convolutional neural network or a capsule network. Using a convolutional neural network or a capsule network may provide a trained visual recognition machine-learning algorithm with high accuracy for biology-related image-based data. However, other visual recognition algorithms may be applicable as well. For example, the trained visual recognition neural network may comprise a plurality of convolution layers and a plurality of pooling layers. However, pooling layers may be avoided, for example, if a capsule network is used and/or stride=2 is used instead of stride=1 for the convolution. The trained visual recognition neural network may use a rectified linear unit activation function. Using a rectified linear unit activation function may provide a trained visual recognition machine-learning algorithm with high accuracy for biology-related image-based input data, although other activation functions (e.g. a hard tanh activation function, a sigmoid activation function or a tanh activation function) may be applicable as well. For example, the trained visual recognition neural network may comprise a convolutional neural network and/or may be a ResNet or a DenseNet of a depth depending on the size of the input images.
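For illustration, a toy PyTorch encoder with the named ingredients (convolution layers, pooling, ReLU activation, an embedding output); real models would be far deeper, e.g. ResNet- or DenseNet-like, and this sketch is an assumption, not the disclosed architecture:

import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    def __init__(self, embedding_dim: int = 512):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                 # pooling layer
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),         # global pooling to a 64-d feature
        )
        self.head = nn.Linear(64, embedding_dim)  # embedding (latent vector)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x).flatten(1)
        # normalize so embeddings can be compared by Euclidean distance
        return F.normalize(self.head(h), dim=1)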

The plurality of second high-dimensional representations 105 of the plurality of biology-related image-based input data sets or of the plurality of biology-related language-based input data sets may be obtained by receiving the second high-dimensional representations 105 from a database (e.g. stored by the one or more storage devices) or by generating the plurality of second high-dimensional representations 105 based on the plurality of biology-related image-based input data sets or the plurality of biology-related language-based input data sets. For example, the system 100 may be configured to obtain the second high-dimensional representations by generating the second high-dimensional representations of the plurality of second high-dimensional representations by the trained visual recognition machine-learning algorithm executed by the one or more processors, if the plurality of second high-dimensional representations is based on a plurality of biology-related image-based input data sets. For example, the trained visual model may be able to represent an image in the semantic embedding space (e.g. as a second high-dimensional representation). Alternatively, the system 100 may be configured to obtain the second high-dimensional representations by generating the second high-dimensional representations of the plurality of second high-dimensional representations by a trained language recognition machine-learning algorithm executed by the one or more processors, if the plurality of second high-dimensional representations is based on a plurality of biology-related language-based input data sets. Optionally, the second high-dimensional representations may be clustered as described in conjunction with Fig. 6, 7a and/or 7b, and then the first high-dimensional representation may be compared to each second high-dimensional representation of a cluster center or to second high-dimensional representations closest to cluster centers.

Similar to the biology-related image-based search data 103, each biology-related image-based input data set of the plurality of biology-related image-based input data sets may be image data (e.g. pixel data of an image) of an image of a biological structure comprising a nucleotide or a nucleotide sequence, a biological structure comprising a protein or a protein sequence, a biological molecule, a biological tissue, a biological structure with a specific behavior, and/or a biological structure with a specific biological function or a specific biological activity. The trained visual recognition machine-learning algorithm may convert the image data of these images into semantic embeddings (e.g. second high-dimensional representations). The plurality of biology-related image-based input data sets may be received from the one or more storage devices or a database stored by a storage device.

Each biology-related language-based input data set of the plurality of biology-related language-based input data sets may be a textual input being related to a biological structure, a biological function, a biological behavior or a biological activity. For example, a biology-related language-based input data set may be a nucleotide sequence, a protein sequence, a description of a biological molecule or biological structure, a description of a behavior of a biological molecule or biological structure, and/or a description of a biological function or a biological activity. The textual input may be natural language which is descriptive of the biological molecule (e.g. polysaccharide, poly/oligo-nucleotide, protein or lipid) or its behavior in the context of the experiment or data set. For example, the biology-related language-based search data 101 may be a nucleotide sequence, a protein sequence or a coarse-grained search term of a group of biological terms.

A group of biological terms may comprise a plurality of coarse-grained search terms (alternatively called molecular biological subject heading terms) belonging to the same biological topic. A group of biological terms may be catalytic activity (e.g. as some sort of reaction equation using words for educts and products), pathway (e.g. which pathway is involved, for example, glycolysis), sites and/or regions (e.g. binding site, active site, nucleotide binding site), GO gene ontology (e.g. molecular function, for example, nicotinamide adenine dinucleotide NAD binding, microtubule binding), GO biological function (e.g. apoptosis, gluconeogenesis), enzyme and/or pathway databases (e.g. unique identifiers for a basic function, for example, in BRENDA/EC number or UniPathways), subcellular localization (e.g. cytosol, nucleus, cytoskeleton), family and/or domains (e.g. binding sites, motifs, e.g. for posttranslational modification), open-reading frames, single-nucleotide polymorphisms, restriction sites (e.g. oligonucleotides recognized by a restriction enzyme) and/or biosynthesis pathway (e.g. biosynthesis of lipids, polysaccharides, nucleotides or proteins). For example, the group of biological terms may be the group of subcellular localizations and the coarse-grained search terms may be cytosol, nucleus and cytoskeleton.

A biology-related language-based input data set of the plurality of biology-related language-based input data sets may comprise a length of less than 50 characters (or less than 30 characters or less than 20 characters), if a coarse-grained search term is used as the biology-related language-based input data set, and/or more than 20 characters (or more than 40 characters, more than 60 characters or more than 80 characters), if a nucleotide sequence or a protein sequence is used as the biology-related language-based input data set. For example, nucleotide sequences (DNA/RNA) are often about three times longer than polypeptide sequences (e.g. peptide, protein), since three base pairs code for one amino acid. For example, the biology-related language-based input data set may comprise a length of more than 20 characters, if the biology-related language-based input data set is a protein sequence or an amino acid. The biology-related language-based input data set may comprise a length of more than 60 characters, if the biology-related language-based input data set is a nucleotide sequence or descriptive text in natural language. For example, the biology-related language-based input data set may comprise at least one non-numerical character (e.g. an alphabetical character).

The trained language recognition machine-learning algorithm may also be called a textual model, text model or language model. The language recognition machine-learning algorithm may be or may comprise a trained language recognition neural network. The trained language recognition neural network may comprise more than 30 layers (or more than 50 layers or more than 80 layers) and/or less than 500 layers (or less than 300 layers or less than 200 layers). The trained language recognition neural network may be a recurrent neural network, for example, a long short-term memory network. Using a recurrent neural network, for example a long short-term memory network, may provide a language recognition machine-learning algorithm with high accuracy for biology-related language-based data. However, other language recognition algorithms may be applicable as well. For example, the trained language recognition machine-learning algorithm may be an algorithm able to handle input data of variable length (e.g. the Transformer-XL algorithm). For example, the length of a first biology-related language-based input data set may differ from the length of a second biology-related language-based input data set. Protein sequences, for example, are typically tens to hundreds of amino acids long (with one amino acid represented as one letter in the protein sequence). The "semantics", e.g. the biological function of substrings from the sequence (called polypeptides, motifs or domains in biology), may vary in length. Thus, an architecture capable of receiving input of variable length may be used.
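A hedged sketch of such a recurrent encoder in PyTorch, mapping an integer-encoded sequence (e.g. one token per amino acid) of arbitrary length to a fixed-size embedding; the vocabulary and layer sizes are assumptions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class SequenceEncoder(nn.Module):
    def __init__(self, vocab_size: int = 26, embedding_dim: int = 512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, 64)
        self.lstm = nn.LSTM(64, embedding_dim, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, seq_len) integer-encoded characters, any seq_len
        x = self.embed(tokens)
        _, (h_n, _) = self.lstm(x)  # final hidden state summarizes the sequence
        return F.normalize(h_n[-1], dim=1)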

The one or more processors 110 may be configured to compare the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations. The first high-dimensional representation may be compared to a second high-dimensional representation by calculating a distance between the first high-dimensional representation and the second high-dimensional representation. The distance (e.g. Euclidean distance or earth mover's distance) between the first high-dimensional representation and the second high-dimensional representation may be calculated with low effort if the first high-dimensional representation and the second high-dimensional representation are represented by vectors (e.g. normalized vectors). The calculation of the distance may be repeated for every second high-dimensional representation of the plurality of second high-dimensional representations. For example, the comparison of the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations is based on a Euclidean distance function or an earth mover's distance function. Based on the calculated distances, the system 100 may select one or more second high-dimensional representations based on a selection criterion (e.g. the one or more second high-dimensional representations with the closest distance or within a distance threshold). For example, the system 100 may be configured to select a second high-dimensional representation of the plurality of second high-dimensional representations closest to the first high-dimensional representation based on the comparison. The system 100 may output or store the one or more second high-dimensional representations fulfilling the selection criterion, the one or more biology-related image-based input data sets of the plurality of biology-related image-based input data sets, which correspond to the one or more second high-dimensional representations, and/or the one or more biology-related language-based input data sets of the plurality of biology-related language-based input data sets, which correspond to the one or more second high-dimensional representations. For example, the system 100 may output and/or store the closest second high-dimensional representation, the biology-related image-based input data set of the plurality of biology-related image-based input data sets, which corresponds to the closest second high-dimensional representation, and/or the biology-related language-based input data set of the plurality of biology-related language-based input data sets, which corresponds to the closest second high-dimensional representation.
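The comparison step could look like the following Python sketch; scipy's cdist computes the pairwise Euclidean distances, and scipy.stats.wasserstein_distance is shown only as a simple one-dimensional stand-in for an earth mover's comparison (the disclosure does not prescribe a particular implementation):

import numpy as np
from scipy.spatial.distance import cdist
from scipy.stats import wasserstein_distance

def select_matches(query: np.ndarray, stored: np.ndarray, threshold=None):
    """Select the closest hit, or all hits within a distance threshold."""
    distances = cdist(query[None, :], stored, metric="euclidean")[0]
    if threshold is not None:
        return np.where(distances <= threshold)[0]  # all hits within threshold
    return np.array([int(np.argmin(distances))])    # single closest hit

# one-dimensional earth mover's style comparison of two embeddings
def emd(a: np.ndarray, b: np.ndarray) -> float:
    return float(wasserstein_distance(a, b))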

Due to the usage of high-dimensional representations with several entries unequal to 0, a combination of two or more high-dimensional representations may be possible in order to search for a logical combination of two or more search terms. For example, the user may input two or more search images and one or more logical operators (e.g. an AND-operator or a NOT-operator), and the corresponding generated first high-dimensional representations may be combined based on the logical operator. For example, the system 100 may be configured to receive second biology-related image-based search data and information on a logical operator. Further, the system 100 may generate a first high-dimensional representation of the second biology-related image-based search data by the trained visual recognition machine-learning algorithm executed by the one or more processors. Additionally, the system 100 may determine a combined high-dimensional representation based on a combination of the first high-dimensional representation of the first biology-related image-based search data and the first high-dimensional representation of the second biology-related image-based search data according to the logical operator. The combined high-dimensional representation may be a normalized high-dimensional representation (e.g. a normalized vector).

Further, the system 100 may compare the combined high-dimensional representation to each second high-dimensional representation of the plurality of second high-dimensional representations. Based on the comparison of the combined high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations, one or more second high-dimensional representations may be selected based on a selection criterion (e.g. the one or more second high-dimensional representations with the closest distance or within a distance threshold).

The system 100 may output or store the one or more second high-dimensional representations fulfilling the selection criterion, the one or more biology-related image-based input data sets of the plurality of biology-related image-based input data sets, which correspond to the one or more second high-dimensional representations, and/or the one or more biology-related language-based input data sets of the plurality of biology-related language-based input data sets, which correspond to the one or more second high-dimensional representations. The selected one or more biology-related image-based input data sets (e.g. biological images) or the selected one or more biology-related language-based input data sets (e.g. biological texts) may show or describe biological structures comprising the logical combination of search terms as represented by the first biology-related image-based search data, the second biology-related image-based search data and the information on the logical operator. In this way, a search for a logical combination of two or more search images may be enabled. The logical operator may be an AND-operator, an OR-operator or a NOT-operator. The NOT-operator may suppress undesired hits. The NOT-operation may be determined by a search for the negated search term. For example, the embedding (e.g. first high-dimensional representation) of the negated search term may be generated and inverted. Then the k embeddings closest to the embedding of the negated search term may be determined among the plurality of embeddings associated with the images (the plurality of second high-dimensional representations) and removed from the plurality of embeddings. Optionally, the mean (e.g. medoid or arithmetic mean) of the remaining plurality of embeddings may be determined. This newly computed second high-dimensional representation may serve for a new query in the embedding space to obtain more precise hits. The OR-operation may be implemented by determining the closest or the k closest elements (second high-dimensional representations) for each search term, with k being an integer number between 2 and N. For example, all OR-linked search terms may be searched in a loop and the closest or the k closest hits may be output. Further, a combination of several of the logical operators may be possible by parsing the expressions and working on the searches one after the other or from the inside out.

For example, the logical operator is an AND-operator and the combined high-dimensional representation is determined by adding and/or averaging the first high-dimensional representation of the first biology-related image-based search data and the first high-dimensional representation of the second biology-related image-based search data. For example, the arithmetic mean of the first high-dimensional representation of the first biology-related image-based search data and the first high-dimensional representation of the second biology-related image-based search data may be determined. For example, the arithmetic mean may be determined by:

$\bar{y} = \frac{1}{N} \sum_{i=1}^{N} y_i$

with $y_i$ being a first high-dimensional representation and $N$ being the number of vectors to be averaged (e.g. the number of logically combined search terms). The determination of the arithmetic mean may result in a normalized high-dimensional representation. Alternatively, the geometric mean, the harmonic mean, the quadratic mean or the medoid may be used. The medoid may be used to avoid large errors for distributions with a hole (e.g. an enclosed area with no data points). The medoid may find the element which is closest to the mean. The medoid $m$ may be defined as:

$m = \operatorname{arg\,min}_{y_i \in Y} d(y_i, \bar{y})$

with $Y$ being the whole set of embeddings (the plurality of second high-dimensional representations), $y_i$ being one of the second high-dimensional representations, $\bar{y}$ being the embedding corresponding to the search term (first high-dimensional representation) and $d$ being a distance metric (e.g. Euclidean distance or L2-norm). For example, the element of $Y$ closest to the mean may be found and afterwards the k elements closest to the medoid may be determined (e.g. by a quicksort algorithm).
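The following Python sketch summarizes the logical combinations described above (AND as an arithmetic mean, OR as the union of the k closest hits per term, NOT by discarding the k hits closest to the negated term); it is a simplified reading of the description, not a verbatim implementation:

import numpy as np

def topk(query, stored, k):
    """Indices of the k stored embeddings closest to the query."""
    return np.argsort(np.linalg.norm(stored - query, axis=1))[:k]

def and_query(q1, q2, stored, k=5):
    combined = (q1 + q2) / 2.0                      # arithmetic mean of the terms
    combined = combined / np.linalg.norm(combined)  # renormalize
    return topk(combined, stored, k)

def or_query(q1, q2, stored, k=5):
    return np.union1d(topk(q1, stored, k), topk(q2, stored, k))

def not_query(q, q_negated, stored, k=5):
    # drop the k hits closest to the negated term, then search the rest
    keep = np.setdiff1d(np.arange(len(stored)), topk(q_negated, stored, k))
    return keep[topk(q, stored[keep], min(k, len(keep)))]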

As mentioned above, the biology-related image-based search data 103 may be of various types (e.g. images of biological structures comprising nucleotide sequences or protein sequences, or biological structures representing a coarse-grained search term of a group of biological terms). A single visual recognition machine-learning algorithm may be trained to handle one type of input only. Therefore, the system 100 may be configured to select the trained visual recognition machine-learning algorithm from a plurality of trained visual recognition machine-learning algorithms based on the biology-related image-based search data 103. For example, a plurality of trained visual recognition machine-learning algorithms may be stored by the one or more storage devices 120, and the system 100 may select one of the trained visual recognition machine-learning algorithms depending on the type of input received as biology-related image-based search data 103. For example, the trained visual recognition machine-learning algorithm may be selected from a plurality of trained visual recognition machine-learning algorithms by a classification algorithm (e.g. a visual recognition machine-learning algorithm) configured to classify the biology-related image-based search data 103.

The system 100 may be implemented in a microscope, may be connected to a microscope or may comprise a microscope. The microscope may be configured to obtain the biology-related image-based search data 103 and/or the plurality of biology-related image-based input data sets by taking images of one or more biological specimens. The plurality of biology-related image-based input data sets may be stored by the one or more storage devices 120 and/or may be provided for the generation of the plurality of second high-dimensional representations.

More details and aspects of the system 100 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 2-7). The system 100 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 2 shows a schematic illustration of a system 200 for processing biology-related data according to an embodiment. A user may start a query using an image 201 (e.g. biology-related image-based search data), for example, an image of a biological structure comprising a specific protein sequence or nucleotide sequence. For example, the system 200 comprises a visual model 220 (e.g. a CNN), which was trained on the semantic embeddings of a textual model, which in turn was trained on a large body of protein sequences (e.g. a protein sequence database), nucleotide sequences (e.g. a nucleotide sequence database), scientific publications (e.g. a database of biology-related publications) or other texts describing the role and/or biological function of the object of interest, such as blog posts, home pages of research groups, online articles, discussion forums or social media posts. For example, the visual model 220 has learned to predict these semantic embeddings during training as described below, but other ways of training the model may be possible. The user input 201 (e.g. query image) may first be classified by a visual model 210 into the respective class (e.g. image of a biological structure comprising a protein sequence or nucleotide sequence) and the system 200 may find the correct second visual model 230 for this class from a repository of such models containing one or more visual models necessary to process the classes of input images. The query image 201 is then transformed into its respective embedding 260 (first high-dimensional representation) using a forward pass through the respective pre-trained visual model 230 (trained visual recognition machine-learning algorithm). The image data in the database 240 (e.g. stored by the one or more storage devices) or as part of a running experiment in a microscope may be transformed into their respective embeddings 250 (plurality of second high-dimensional representations) via a forward pass through a pre-trained visual model 220. The pre-trained visual model 220 and the second visual model 230 may be the same visual model (trained visual recognition machine-learning algorithm). For example, for performance reasons this part could be done prior to the user query and stored in a suitable database 255 (e.g. stored by the one or more storage devices) or, for example, along with the image data. The database 240 and the database 255 may be identical or the same, but they could be different databases as well. However, for single or small numbers of images, as in a running experiment, the forward pass of the images can be done on-the-fly, thus bypassing 257 the intermediate storage 255 of the visual embeddings 250. For example, the image repository 240 can represent a public or private database or it can represent the storage medium of a microscope during a running experiment. The two kinds of generated embeddings, one embedding for the query image 260 and the embeddings for the images 250, can be compared 270 in embedding space (e.g. their relative distances can be computed). Different distance metrics can be used for this comparison, such as Euclidean distance or Earth mover's distance. Other distance metrics may be used as well (e.g. distance metrics used in clustering). For example, the closest embeddings 280 may be determined and the respective images 290 may be looked up in the repository 240 and returned to the user. The number of images to return may be pre-determined by the user or computed according to a distance threshold or another criterion.
For example, the search for the one or more closest embeddings may provide the k closest elements out of the plurality of embeddings 250 (plurality of second high-dimensional representations), with k being an integer number. For example, the Euclidean distance (L2 norm) between the embedding of the search query and all elements of the plurality of embeddings 250 may be determined. The resulting distances (e.g. the same number as elements in the plurality of embeddings) may be sorted and the element with the smallest distance or the k elements with the k smallest distances may be output.
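The retrieval step just described reduces to a distance computation and a sort. A minimal sketch, assuming the stored embeddings form a NumPy array and image_ids is a parallel list of identifiers (both names are illustrative):

```python
import numpy as np

def retrieve(query_embedding, stored_embeddings, image_ids, k=5):
    # Euclidean distance (L2 norm) between the query and every stored embedding.
    distances = np.linalg.norm(stored_embeddings - query_embedding, axis=1)
    # Sort the distances and output the k elements with the smallest ones.
    order = np.argsort(distances)[:k]
    return [(image_ids[i], float(distances[i])) for i in order]
```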

More details and aspects of the system 200 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1, 3-7). The system 200 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below. Fig. 3 shows a schematic illustration of a system 300 for processing biology-related data according to an embodiment. A user may start a query using an image 201 (e.g. biology-related image-based search data), for example, an image of a biological structure comprising a specific protein sequence or nucleotide sequence. Optionally, a pre-classification of the query 201 using a suitable classifier 210 (e.g. a neural network or a statistical machine-learning algorithm, depending on the input type) may be performed. The pre-classification can be skipped 315 in some embodiments. The results of the pre-classification may be used to select a suitable model 230 which can transform the user query 201 into its related semantic embedding 260 by a pre-trained model 230 which serves as a feature extractor.

User inputs and images coming from a data source 240 are connected and processed in this semantic embedding space. The data source 240 can be a private or public data repository or an imaging device such as a microscope. The data may be images, text, coarse-grained search terms or instrument-specific data recorded by the data source. For example, a visual model 220 (e.g. a CNN) may be included, which was trained on the semantic embeddings of a textual model, which in turn was trained on a large body of protein sequences (e.g. a protein sequence database), nucleotide sequences (e.g. a nucleotide sequence database), scientific publications (e.g. a database of biology-related publications) or other texts describing the role and biological function of the object of interest, such as blog posts, home pages of research groups, online articles, discussion forums or social media posts. The visual model 220 may have been pre-trained to predict these semantic embeddings during training. Both the first visual model 220 and the input feature extractor 230 (e.g. second visual model) are trained on the same embedding space, for example. The first visual model 220 and the feature extractor 230 may be the same visual model (trained visual recognition machine-learning algorithm). The query 201 is then transformed into its respective embedding 260 using a forward pass through the input feature extractor 230. The data from the data source 240, which is either a database or part of a running experiment in a microscope, may be transformed into its respective embeddings 250 via a forward pass through a pre-trained model 220 (visual model). For example, for performance reasons this procedure could be done prior to the user query and the semantic embeddings stored in a suitable database 255 or, for example, along with the image data. The database 240 and the database 255 may be identical or the same, but they could be different databases as well. However, for single or small numbers of images, as in a running experiment, the forward pass of the images can be done on-the-fly, thus bypassing 257 the intermediate storage 255 of the visual embeddings. The two kinds of generated embeddings, one embedding for the query 260 and the embeddings for the data source 250, can now be compared 270 in embedding space (e.g. their relative distances can be computed). Different distance metrics can be used for this comparison, such as Euclidean distance or Earth mover's distance. Other distance metrics may be used as well. For example, distance metrics used in clustering might work.
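For the pairwise comparison 270, off-the-shelf distance functions suffice for the common cases. A short sketch using SciPy (Euclidean and cosine distances; Earth mover's distance would additionally require a ground metric over the embedding dimensions). The 128-dimensional random vectors stand in for real embeddings:

```python
import numpy as np
from scipy.spatial.distance import euclidean, cosine

query = np.random.rand(128)   # embedding 260 (illustrative 128-dim vector)
stored = np.random.rand(128)  # one embedding out of the embeddings 250

d_l2 = euclidean(query, stored)  # Euclidean distance (L2 norm of the difference)
d_cos = cosine(query, stored)    # cosine distance = 1 - cosine similarity
```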

The system 300 may determine the closest embeddings 280, may look up the respective data (e.g. images) in the repository 240 or running experiment and may return them 381. The last step can result in different downstream process steps depending on the exact purpose of the embodiment. In some cases, it may be necessary to feed back data 383, such as coordinates of the discovered object in terms of sample and stage coordinates, to the image source (e.g. microscope), which can change the course of the running experiment. In some embodiments the respective data can be output to the user 385, who may decide to adjust the running experiment or process the data further. Other embodiments may archive the respective data in a database 387 for future searches. Alternatively, the respective data, still in semantic embedding space, may be converted back to any of the input data types and may be used to query public databases 389 to retrieve scientific publications, social media entries or blog posts 390, images of that same biomolecule 393 or biological sequences as identified through a sequence alignment 395. All of the found information can be returned to the user 385 and/or written to a database 387 as functional annotations of the images recorded in the currently running experiment or the repository the retrieved data originated from.

Fig. 3 may show an example of an image-to-image search using an image query. In one embodiment, the image repository 240 can represent a public or private database; in another embodiment it can represent the storage medium of a microscope during a running experiment.

More details and aspects of the system 300 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-2 and 4-7). The system 300 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below. Fig. 4 shows a schematic illustration of a system 400 for controlling a microscope according to an embodiment. The system 400 comprises one or more processors 110 and one or more storage devices 120. The system 400 is configured to receive image-based search data 401 and configured to generate a first high-dimensional representation of the image-based search data 401 by a trained visual recognition machine-learning algorithm executed by the one or more processors 110. The first high-dimensional representation comprises at least 3 entries each having a different value (or at least 20 entries, at least 50 entries or at least 100 entries having values different from each other). Further, the system 400 is configured to obtain a plurality of second high-dimensional representations 405 of a plurality of image-based input data sets and configured to select a second high-dimensional representation 405 from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each second high-dimensional representation 405 of the plurality of second high-dimensional representations performed by the one or more processors 110. Additionally, the system 400 is configured to provide a control signal 411 for controlling an operation of a microscope based on the selected second high-dimensional representation 405.

The image-based search data 401 may be image data (e.g. pixel data of an image) of an image of a specimen which is to be analyzed. The specimen to be analyzed may be a biological specimen, an integrated circuit or any other specimen which can be imaged by a microscope. For example, if the specimen is a biological specimen, the image-based search data 401 may be an image of a biological structure comprising a nucleotide or a nucleotide sequence, a biological structure comprising a protein or a protein sequence, a biological molecule, a biological tissue, a biological structure with a specific behavior, and/or a biological structure with a specific biological function or a specific biological activity. For example, if the specimen is an integrated circuit, the image-based search data 401 may be an image of a sub-circuit (e.g. memory cell, converter cell, ESD protection circuit), a circuit element (e.g. transistor, capacitor or coil) or a structural element (e.g. gate, via, pad or spacer).

The plurality of second high-dimensional representations 405 may be obtained from a database or may be generated by a visual recognition machine-learning algorithm. For example, the system 400 may be configured to generate the plurality of second high-dimensional representations 405 of the plurality of image-based input data sets by the visual recognition machine-learning algorithm executed by the one or more processors 110. A microscope may be configured to take a plurality of images of a specimen. The plurality of image-based input data sets may represent the plurality of images of the specimen. The plurality of image-based input data sets may be image data of images taken by the microscope from a specimen. For example, a plurality of images may be taken from the specimen at different positions to cover the whole specimen or a region of interest of the specimen which is too large to be captured by a single image at a desired magnification. The image data of each image of the plurality of images may represent one image-based input data set of the plurality of image-based input data sets. The system 400 may be configured to store the positions the images were taken from. The positions may be stored together with the corresponding images or together with the corresponding second high-dimensional representations 405. The system 400 may comprise the microscope, or the microscope may be connected to the system 400 or may comprise the system 400.

The system 400 may select a second high-dimensional representation of the plurality of second high-dimensional representations which fulfills a selection criterion (e.g. the second high-dimensional representation which is closest to the first high-dimensional representation). The comparison of the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations may provide one or more second high-dimensional representations closest to the first high-dimensional representation. The system 400 may be configured to select one or more second high-dimensional representations of the plurality of second high-dimensional representations closest to the first high-dimensional representation based on the comparison.

The system 400 may be configured to determine a microscope target position based on the selected second high-dimensional representation. The microscope target position may be the position from which the image corresponding to the selected second high-dimensional representation was taken. For example, the microscope target position may be the position stored together with the selected second high-dimensional representation or together with the image which corresponds to the selected second high-dimensional representation. The microscope target position may be the position at which an image was taken which was represented by the image-based input data corresponding to the selected second high-dimensional representation. The system 400 may be configured to provide the control signal for controlling an operation of a microscope based on the determined microscope target position. The control signal 411 may be an electrical signal provided to the microscope to control a movement, a magnification, a light source selection, a filter selection and/or another microscope functionality. For example, the control signal 411 may be configured to trigger the microscope to drive to the microscope target position. For example, the optics and/or the specimen table of the microscope may be moved to the microscope target position in response to the control signal 411. In this way, further images can be taken of the specimen at the position which was the result of the search. For example, images with higher magnification, a different light source and/or a different filter could be taken of a region of interest. For example, the image-based search data 401 may represent the search for cell nuclei in a large biological specimen and the system 400 may provide a control signal 411 for driving the microscope to the position of a cell nucleus. If several cell nuclei are found, the system 400 may be configured to provide the control signal 411 so that the microscope is driven to the different positions one after the other in order to take more images at these positions.
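The control loop can be pictured as follows. This is a minimal sketch only: the Microscope class and its move_stage/acquire methods are hypothetical stand-ins for a real microscope control interface, not an actual API.

```python
class Microscope:
    # Hypothetical stand-in for a real microscope control interface.
    def move_stage(self, x, y):
        print(f"control signal: move stage to ({x}, {y})")

    def acquire(self, magnification):
        print(f"acquire image at {magnification}x")

def revisit_hits(microscope, selected_indices, positions, magnification=63):
    # Drive the microscope to the stored target position of each selected hit
    # and take a further image there (e.g. with higher magnification).
    for idx in selected_indices:
        x, y = positions[idx]  # position stored with the second representation
        microscope.move_stage(x, y)
        microscope.acquire(magnification)
```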

More details and aspects of the system 400 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-3 and 5-7). The system 400 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 5 shows a schematic illustration of a system 500 for controlling a microscope based on biology-related image-based search data according to an embodiment. The system 500 may be implemented similar to the system described in conjunction with Fig. 4. The system 500 may be able to find images which are similar to a user-provided query image and may change the running experiment. The microscope 501 can move the stage back to all positions of similar images found.

For example, the user may start a query using an image as input 550 (e.g. biology-related image-based search data) and start the experiment. The user input may be passed through a pre-trained visual model 220 as described above or below. A forward pass through this visual model 220 may create a semantic embedding of the image 260 (first high-dimensional representation). A microscope 501 may create a series of images 510 (e.g. a type of series as defined above or below). The images 510 may be forward-passed through the same visual model 220 as before to create the respective embeddings 250 (plurality of second high-dimensional representations). The distances between these latter embeddings and the embedding(s) from the user query may be computed 270. Similar images, as defined by thresholding this distance or by a pre-determined or automatically found number of search results, may be found among the recorded embeddings 250. Their respective coordinates may be found 580 and passed back 590 to the microscope, which in turn can alter the experiment to record those new coordinates 595. Details on the types of coordinates and alterations to the experiment are described above or below, for example. Instead of querying only one image, the user may send multiple images for query at the same time.

In a variation of this embodiment, the query images 550 might not be entered manually by the user, but may be the result of another experiment of the same or another imaging device, which triggers the query to this experiment automatically. In another variation of this embodiment, the query images 550 may come from a database (e.g. as the result of a search query, which in turn can have been entered manually or by an imaging or laboratory device) and trigger the query to this experiment automatically.

Fig. 5 may show an example of an image-to-image search for querying a running experiment based on user-defined input images.

More details and aspects of the system 500 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-4 and 6-11). The system 500 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 6 shows a schematic illustration of a system for controlling a microscope according to an embodiment. The system 600 comprises one or more processors 110 coupled to one or more storage devices 120. The system 600 is configured to determine a plurality of clusters of a plurality of second high-dimensional representations 405 of a plurality of image-based input data sets by a clustering algorithm executed by the one or more processors 110. Further, the system 600 is configured to determine a first high-dimensional representation of a cluster center of a cluster of the plurality of clusters and to select a second high-dimensional representation 405 from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each or a subset of the second high-dimensional representations 405 of the plurality of second high-dimensional representations. Additionally, the system 600 is configured to provide a control signal 411 for controlling an operation of a microscope based on the selected second high-dimensional representation.

A cluster of second high-dimensional representations 405 may represent a plurality of second high-dimensional representations 405 having a small distance to each other. For example, the second high-dimensional representations 405 of a cluster may have a smaller distance to each other than to second high-dimensional representations 405 of other clusters and/or may have a smaller distance to the cluster center of their own cluster than to any other cluster center of the plurality of clusters. Each cluster of the plurality of clusters may comprise at least 5 second high-dimensional representations 405 (or at least 10, at least 20 or at least 50).

The clustering algorithm may be or may comprise a machine-learning algorithm, for example, a k-means clustering algorithm, a mean shift clustering algorithm, a k-medoid clustering algorithm, a support vector machine algorithm, a random forest algorithm or a gradient boosting algorithm.

The system 600 may determine a first high-dimensional representation of a cluster center for each cluster of the plurality of clusters. The system 600 may determine a first high-dimensional representation of a cluster center by calculating a linear combination of the second high-dimensional representations of the cluster, a second high-dimensional representation with the smallest overall distance to all second high-dimensional representations of the cluster, or a non-linear combination of the second high-dimensional representations of the cluster, for example.
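A minimal sketch of this step using scikit-learn's k-means; the function name and the choice of n_clusters are illustrative. The k-means center is one possible linear combination (the per-cluster mean), and the medoid variant picks the member embedding closest to that center:

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_centers_and_medoids(embeddings, n_clusters=8):
    # Cluster the second high-dimensional representations.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(embeddings)
    medoids = []
    for c, center in enumerate(km.cluster_centers_):
        members = embeddings[km.labels_ == c]
        # Medoid: the member with the smallest distance to the cluster center.
        medoids.append(members[np.argmin(np.linalg.norm(members - center, axis=1))])
    return km.cluster_centers_, np.stack(medoids)
```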

The system 600 may be configured to generate the plurality of second high-dimensional representations of the plurality of image-based input data sets by a visual recognition machine-learning algorithm executed by the one or more processors 110.

The system 600 may be configured to select one or more second high-dimensional representations of the plurality of second high-dimensional representations closest to the first high-dimensional representation based on the comparison. The system 600 may be configured to determine a microscope target position based on the selected second high-dimensional representation. The microscope target position may be a position at which an image was taken which was represented by the image-based input data corresponding to the selected second high-dimensional representation. The control signal may be configured to trigger the microscope to drive to the microscope target position.

The system 600 may further comprise the microscope configured to take a plurality of images of a specimen. The plurality of image-based input data sets may represent the plurality of images of the specimen.

More details and aspects of the system 600 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-5 and 7a-11). The system 600 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 7a shows a schematic illustration of a system 700 for controlling a microscope based on biology-related image-based search data by using a clustering algorithm according to an embodiment. The system 700 may be implemented similar to the system described in conjunction with Fig. 6. A microscope 501 may produce a series of images 510, the coordinates of which are stored. A visual model 220, pre-trained as described below, may compute the respective embeddings 250 (e.g. latent vectors, plurality of second high-dimensional representations) by means of a forward pass. The resulting set of embeddings 250 may be clustered by a suitable clustering algorithm 740, such as k-means clustering, mean shift clustering or others. For each cluster the center 750 may be determined by computing a combination of the respective latent vectors 250. For example, a linear combination (of the second high-dimensional representations of the cluster) may be used. Other combinations, including non-linear ones, may be applied alternatively. In this way, the cluster centers 760, which are latent vectors themselves, may be obtained. By applying a suitable distance metric as described above or below, an image search may be performed 770 on the acquired series of images 510 to obtain those images whose embeddings are most similar to the found cluster centers. A similarity threshold may be computed automatically, provided by the user and/or obtained and/or refined by displaying the search results to the user and letting the user select the desired images. The coordinates of the refined search results may be obtained 580 and passed back 590 to the microscope, which in turn can alter the experiment to record new images at those coordinates 595. Those new images can have the same instrument settings as before or different ones regarding any hardware parameter (e.g. one or more or all parameters) available to the microscope (e.g. different illumination or detection settings, different objectives, zoom and more). At all the steps 580, 590 and 595 user interaction may be possible, where the user can optionally refine the search results or make decisions on which coordinates are acquired, which imaging modalities to use, or which class of images to acquire and which to disregard.
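The threshold-based image search 770 against the cluster centers can be sketched as follows, reusing the centers from the clustering sketch above; the threshold value is an assumption to be tuned, or refined interactively by the user:

```python
import numpy as np

def images_near_centers(embeddings, centers, threshold=1.0):
    # For each cluster center, return the indices of those images whose
    # embeddings lie within the similarity threshold (L2 distance).
    return [np.where(np.linalg.norm(embeddings - c, axis=1) < threshold)[0]
            for c in centers]
```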

Coordinates in the sense described above may be stage positions (lateral positions), timestamps, z-positions (axial positions), illumination wavelength, detection wavelength, mirror position (e.g. as in lightsheet microscopy), iteration number in a loop, logical positions in the sample (such as wells in a multi-well plate or defined positions in a multi-positioning experiment), time gates in time-gated recordings, nanosecond timestamps in a fluorescence lifetime image and/or any other hardware parameter available to the microscope along the dimension of which it can record image series.

Fig. 7a may show an example of an image-to-image search for querying a running experiment by using unsupervised clustering of semantic embeddings.

More details and aspects of the system 700 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-6 and 7b-11). The system 700 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 7b shows a schematic illustration of a system 790 for processing biology-related data by using a clustering algorithm according to an embodiment. The system 790 may be implemented similar to the system described in conjunction with Fig. 6 and/or Fig. 7a.

A microscope 501 may produce a series of images 510, which are passed through a pre-trained visual model 220 to compute semantic embeddings 250. The latter are clustered, similarly as described in conjunction with Fig. 6 and/or Fig. 7a. Any new clusters, or outliers, as defined by an item-number threshold or a distance-measure threshold, may be identified by a suitable clustering algorithm 740, such as k-means clustering, mean shift clustering or others. For example, one out of four actions or combinations thereof may be taken afterwards. The coordinates of the new clusters may be sent to the microscope to alter the currently running experiment and change 791 image modalities, for example, as described in conjunction with Fig. 7a. Additionally or alternatively, the images corresponding to the newly found cluster of semantic embeddings may be returned to the user 792, who in turn can alter the currently running experiment or make decisions on other actions to take. Additionally or alternatively, the newly found embeddings and their corresponding images and metadata may be stored in a repository 793 as annotations for future searches. Additionally or alternatively, the semantic embeddings of the newly found cluster can be converted to biological sequences, natural language or coarse-grained search terms and may be used to query public databases 794 to retrieve scientific publications, social media entries or blog posts 795, images of that same biomolecule 796 or biological sequences as identified through a sequence alignment 797. All of the found information may be returned to the user and/or written to a database as functional annotations of the images recorded in the currently running experiment.
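One way to flag such new clusters is the item-number threshold mentioned above: clusters with very few members are candidate outliers or previously unseen structures. A minimal sketch (the function name and both thresholds are illustrative):

```python
import numpy as np
from sklearn.cluster import KMeans

def find_small_clusters(embeddings, n_clusters=8, min_items=5):
    # Clusters below the item-number threshold are flagged as candidate
    # outliers or previously unseen structures (e.g. new phenotypes).
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(embeddings)
    counts = np.bincount(labels, minlength=n_clusters)
    return [c for c in range(n_clusters) if counts[c] < min_items]
```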

The system 790 may enable the identification of interesting new structures (e.g. phenotypes).

According to an aspect, the clustering may be done during the recording. In this way, various classes of images may be recognized, which may correspond to biological phenotypes. Examples of these images (e.g. determined by k-means or k-medoid clustering) may be presented to the user. The user may recognize which phenotypes are included in the specimen. The user may save time searching for these phenotypes manually and may additionally get a descriptive statistic on how often they occur. Further, irrelevant classes of phenotypes or experimental artifacts may be detected and omitted from the detailed recordings (e.g. with higher resolution or as time series). In this way, time for the recording and time for the following data analysis may be saved.

According to an aspect, already available data (e.g. instead of using images of a running experiment) may be analyzed by unsupervised clustering based on their stored semantic embeddings. In this way, existing classes may be detected. These classes may be added to the database as annotations and may be further used for future searches.

According to an aspect, data of a running experiment may be classified by unsupervised clustering and further processed (e.g. as in Fig. 7a).

More details and aspects of the system 790 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-7a and 8-11). The system 790 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

The system described in conjunction with one of the Figs. 1-7b may comprise or may be a computer device (e.g. personal computer, laptop, tablet computer or mobile phone) with the one or more processors and one or more storage devices located in the computer device, or the system may be a distributed computing system (e.g. a cloud computing system with the one or more processors and one or more storage devices distributed at various locations, for example, at a local client and one or more remote server farms and/or data centers). The system may comprise a data processing system that includes a system bus to couple the various components of the system. The system bus may provide communication links among the various components of the system and may be implemented as a single bus, as a combination of busses, or in any other suitable manner. An electronic assembly may be coupled to the system bus. The electronic assembly may include any circuit or combination of circuits. In one embodiment, the electronic assembly includes a processor which can be of any type. As used herein, processor may mean any type of computational circuit, such as but not limited to a microprocessor, a microcontroller, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a graphics processor, a digital signal processor (DSP), a multiple core processor, a field programmable gate array (FPGA) of the microscope or of a microscope component (e.g. camera), or any other type of processor or processing circuit. Other types of circuits that may be included in the electronic assembly may be a custom circuit, an application-specific integrated circuit (ASIC), or the like, such as, for example, one or more circuits (such as a communication circuit) for use in wireless devices like mobile telephones, tablet computers, laptop computers, two-way radios, and similar electronic systems. The system includes one or more storage devices, which in turn may include one or more memory elements suitable to the particular application, such as a main memory in the form of random access memory (RAM), one or more hard drives, and/or one or more drives that handle removable media such as compact disks (CD), flash memory cards, digital video disks (DVD), and the like. The system may also include a display device, one or more speakers, and a keyboard and/or controller, which can include a mouse, trackball, touch screen, voice-recognition device, or any other device that permits a system user to input information into and receive information from the system.

Additionally, the system may comprise a microscope connected to a computer device or a distributed computing system. The microscope may be configured to generate the biology-related image-based input data sets by taking images from one or more specimens.

The microscope may be a light microscope (e.g. a diffraction-limited or sub-diffraction-limit microscope, for example, a super-resolution microscope or nanoscope). The microscope may be a stand-alone microscope or a microscope system with attached components (e.g. confocal scanners, additional cameras, lasers, climate chambers, automated loading mechanisms, liquid handling systems, attached optical components like additional multiphoton light paths, optical tweezers and more). Other image sources may be used as well, if they can take images of objects which are related to biological sequences (e.g. proteins, nucleic acids, lipids) or other specimens, for example. For example, a microscope according to an embodiment described above or below may enable deep discovery microscopy.

More details and aspects of the system are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-11). The system may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Some embodiments relate to a microscope comprising a system as described in conjunction with one or more of the Figs. 1-7b. Alternatively, a microscope may be part of or connected to a system as described in conjunction with one or more of the Figs. 1-7b. Fig. 8 shows a schematic illustration of a system 800 for processing data according to an embodiment. A microscope 810 configured to take images of one or more specimens (e.g. biological specimens or integrated circuits) is connected to a computer device 820 (e.g. personal computer, laptop, tablet computer or mobile phone) configured to process data. The microscope 810 and the computer device 820 may be implemented as described in conjunction with one or more of the Figs. 1-7b.

Fig. 9 shows a flow chart of a method for processing biology-related image-based search data according to an embodiment. The method 900 comprises receiving 910 biology-related image-based search data and generating 920 a first high-dimensional representation of the biology-related image-based search data by a trained visual recognition machine-learning algorithm. The first high-dimensional representation comprises at least 3 entries each having a different value. Further, the method 900 comprises obtaining 930 a plurality of second high-dimensional representations of a plurality of biology-related image-based input data sets or a plurality of biology-related language-based input data sets. Additionally, the method 900 comprises comparing 940 the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations.

By using a visual recognition machine-learning algorithm, an image-based search request can be mapped to a high-dimensional representation. By allowing the high-dimensional representation to have entries with various different values (in contrast to one-hot encoded representations), semantically similar biological search terms can be mapped to similar high-dimensional representations. By obtaining high-dimensional representations of a plurality of biology-related image-based input data sets or of a plurality of biology-related language-based input data sets, high-dimensional representations can be found that are equal or similar to the high-dimensional representation of the search request. In this way, it may be enabled to find images or text corresponding to the search request. In this way, the trained visual recognition machine-learning algorithm may enable a search for biology-related images among a plurality of biological images (e.g. a database of biological images) or a search for biology-related texts among a plurality of biology-related texts (e.g. a scientific paper collection or library) based on an image-based search input. A search within an already existing database or within images generated by a running experiment (e.g. images taken by a microscope of one or more biological specimens) may be enabled, even if the images were not labeled or tagged before.

More details and aspects of method 900 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-7b). The method 900 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 10 shows a flow chart of a method for controlling a microscope according to an embodiment. The method 1000 comprises receiving 1010 image-based search data and generating 1020 a first high-dimensional representation of the image-based search data by a trained visual recognition machine-learning algorithm. The first high-dimensional representation comprises at least 3 entries each having a different value. Further, the method 1000 comprises obtaining 1030 a plurality of second high-dimensional representations of a plurality of image-based input data sets and selecting 1040 a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each second high-dimensional representation of the plurality of second high-dimensional representations. Additionally, the method 1000 comprises controlling 1050 an operation of a microscope based on the selected second high-dimensional representation.

By using a visual recognition machine-learning algorithm, an image-based search request can be mapped to a high-dimensional representation. By allowing the high-dimensional representation to have entries with various different values (in contrast to one-hot encoded representations), semantically similar search terms can be mapped to similar high-dimensional representations. By obtaining high-dimensional representations of a plurality of image-based input data sets, high-dimensional representations can be found that are equal or similar to the high-dimensional representation of the search term. In this way, it may be enabled to find images corresponding to the search request. With this information, a microscope can be driven to the respective locations where the images were taken, in order to enable taking further images (e.g. with higher magnification, different light or a different filter) of the locations of interest. In this way, a specimen (e.g. a biological specimen or an integrated circuit) may be imaged at low magnification first to find locations corresponding to the search request, and afterwards the locations of interest may be analyzed in more detail.

More details and aspects of method 1000 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-7b). The method 1000 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

Fig. 11 shows a flow chart of another method for controlling a microscope according to an embodiment. The method 1100 comprises determining 1110 a plurality of clusters of a plurality of second high-dimensional representations of a plurality of image-based input data sets by a clustering algorithm and determining 1120 a first high-dimensional representation of a cluster center of a cluster of the plurality of clusters. Further, the method 1100 comprises selecting 1130 a second high-dimensional representation from the plurality of second high-dimensional representations based on a comparison of the first high-dimensional representation with each or a subset of the second high-dimensional representations of the plurality of second high-dimensional representations. Additionally, the method 1100 comprises providing 1140 a control signal for controlling an operation of a microscope based on the selected second high-dimensional representation.

By identifying clusters of second high-dimensional representations, second high-dimensional representations corresponding to semantically similar content can be combined into a cluster. By determining a cluster center and identifying one or more second high-dimensional representations closest to the cluster center by the comparison, one or more images may be found which represent a typical image of the cluster. For example, different clusters may comprise second high-dimensional representations corresponding to different characteristic parts (e.g. cytosol, nucleus, cytoskeleton) of a biological specimen. The system may be able to provide the control signal so that the microscope moves to the position at which the typical image of one or more of the clusters was taken (e.g. to take more images at this position with varying microscope parameters).

More details and aspects of the method 1100 are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-10). The method 1100 may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept and/or of one or more examples described above or below.

In the following, some examples of applications and/or implementation details for one or more of the embodiments described above (e.g. in conjunction with one or more of the Figs. 1-11) are described. According to an aspect, an image-to-image search functionality in databases or running microscopy experiments is proposed. This type of image-to-image search may be based on semantic embeddings of the query created by a first-stage textual model. A second-stage image model may relate these semantic embeddings to images, thus connecting the image domain to the text domain. Relevance of the hits may be scored according to a distance metric in semantic embedding space. This may allow retrieval not only of exact matches, but also of similar images with related semantics. In the context of biology, related semantics may mean similar biological function. An aspect may allow searching through a specimen in a running experiment and retrieving images similar to a query image, or previously unknown objects in the specimen.

Biology in general and microscopy in particular may generate vast amounts of data, which often get poorly annotated or not annotated at all. For example, it may only become apparent in retrospect which annotations might have been useful, or new biological discoveries are made that were not known at the time of the experiment. The emphasis may be on image data, but the proposed concept might not necessarily be restricted to image data. Images may go beyond 2D pixel maps and rather encompass multidimensional image tensors with three spatial dimensions, a time dimension and further dimensions related to physical properties of the fluorescence dyes used or to properties of the imaging system, for example. According to an aspect, such data may be made accessible by allowing semantic searching of large bodies of image data stored in a database or as part of a running experiment in a microscope. The experiment may be a single one-time experiment or part of a long-term experiment such as a screening campaign.

The image-to-image search may enable the search for images similar to the input query not only in a database but also in a running experiment (e.g. the current specimen), which may turn a specimen into a searchable data resource. Additionally or alternatively, the image-to-image search may enable automatic clustering of images in a running experiment and retrieval of all related images from the current specimen, a future specimen or an image repository. This may represent a knowledge discovery tool which can find rare events or previously unknown (e.g. not specified by the user) objects in the specimen. According to an aspect, image-to-image search may be used to query microscopes as image sources, optionally during a running experiment, and change this running experiment.

Other realizations of an image-to-image search using statistical machine learning (e.g. support vector machines, random forests or gradient boosting) may have to rely on image features curated or engineered by a human expert. The high dimensionality of images may reduce the accuracy of such a classical machine-learning approach. According to an aspect of the proposed concept, deep learning (e.g. CNNs, capsule networks) may be used to extract image features, for example, automatically allowing a larger number of image features on multiple scales to be utilized, which may increase the accuracy of recognizing images. Further, images may be mapped to semantic embeddings instead of one-hot encoded vectors, which may allow finding previously unseen, but similar images. Due to the high variability in morphology found in images of biological specimens, the proposed concept may have a higher hit rate than other approaches, which may be a hit-or-miss affair.

An example of an image-to-image search may be based on the following steps:

1. A visual model trained to predict semantic token embeddings from images may convert a query image to its related semantic embedding.

2. The same visual model may also create the respective embeddings of a series of images coming from an imaging device or a database.

3. According to a distance metric in the embedding space between the semantic embeddings of the query and the image, the relevant closest hits may be searched and scored.

4. Optionally, in a running experiment the physical coordinates of the hits can be used to change the experiment and start an alternative recording of images at those coordinates. The model may be trained as described below, but can also be trained in a different way.

For example, four alternative ways to obtain semantic embeddings of the query (e.g. as described in step 1 above) may be: a) Manual input by the user.

b) Outcome of experiment with (same or other) imaging device.

c) From a database (e.g. a manual query or an automatic query by an imaging or laboratory device).

d) Unsupervised clustering and arithmetic combination of image embeddings produced by an imaging device and model.

According to an aspect, the user may query a database with an image instead of text. All images in the database may have been transformed into embeddings using one or more pre-trained visual models as described above or below (e.g. CNNs). The embeddings may be stored along with the image data in the same or a different database. The user query may be transformed into an embedding by a forward pass through the same visual model. Using a suitable distance metric, the semantically (in embedding space) closest image(s) may be retrieved and returned. Different distance metrics can be used for this comparison, such as Euclidean distance or Earth mover's distance, but other distance metrics may be used as well. Most distance metrics used in clustering might work.
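The forward pass that turns an image into its embedding can be sketched with a standard CNN backbone. This is a sketch under assumptions: the ResNet-18 with its classifier head removed merely stands in for the pre-trained visual model, whereas in the proposed concept the model would carry weights trained to predict the semantic embeddings of the textual model.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Stand-in feature extractor; a real system would load weights trained
# to predict the semantic embeddings of the textual model.
backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()  # drop the classifier, keep 512-d features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def embed(image):
    # Forward pass of a PIL image through the visual model -> embedding.
    with torch.no_grad():
        return backbone(preprocess(image).unsqueeze(0)).squeeze(0)
```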

For example, any image, either provided by the user or just acquired by the microscope during a running experiment, can be used to discover semantically related images in the entire specimen. The transformation and similarity search may be performed in a similar fashion as described before. The data acquired by the microscope may be arranged such that the logical coordinates of each image within, e.g., a mosaic (e.g. a set of images covering an area larger than the current field of view) or, e.g., the physical stage coordinates may be associated with the image data.

Image-to-image search may be useful for querying existing databases or data from a running experiment with any image. In the context of a running experiment, any image acquired by the microscope can be used to query databases to find similar images. Through other annotations of this image, further information can be retrieved and new insight can be gained about the structure and function of the image in question. This may turn the microscope into an intelligent lab assistant which may augment the image data with semantic and functional information, thus helping with the interpretation of the data.

According to an aspect, one can find images similar to a user-provided or recorded image in the entire specimen. The searchable body of images can be recorded by the microscope by using a pre-scan. The pre-scan may cover an area or volume larger than the current field of view. The query image can be provided by the user, selected by the user from the current experiment or automatically selected by a pre-trained visual model. This may save time, because only the interesting positions may be recorded in detail with different imaging conditions and modalities (e.g. more colors, different magnification, additional lifetime information and more). This may also save storage, because only the interesting images may actually get stored. The others may be discarded.

Alternatively or additionally, automatic clustering may be performed and the microscope may aid the user in gaining new insights by pointing out which different semantic classes are present in the specimen. By automating the pre-scan and the clustering steps, the user may save a lot of time for finding, identifying and characterizing all the objects (e.g. single cells, organs, tissues, organoids and parts thereof) manually. Moreover, bias may be removed because the semantic embedding space may serve as an objective similarity measure, which may directly relate the images to meaningful biology due to the creation of the embeddings from biologically relevant textual data.

In effect, a specimen may get converted into a searchable data resource by the proposed microscope.

Applications of a proposed image-to-image search may be basic biological research (e.g. helping to find relevant data and reduce experimental recording time) and/or hit validation and toxicology assays in drug discovery.

A trained language recognition machine-learning algorithm and/or a trained visual recognition machine-learning algorithm may be obtained by a training described in the following. A system for training machine-learning algorithms for processing biology-related data may comprise one or more processors and one or more storage devices. The system may be configured to receive biology-related language-based input training data. Additionally, the system may be configured to generate a first high-dimensional representation of the biology-related language-based input training data by a language recognition machine-learning algorithm executed by the one or more processors. The first high-dimensional representation comprises at least three entries each having a different value. Further, the system may be configured to generate biology-related language-based output training data based on the first high-dimensional representation by the language recognition machine-learning algorithm executed by the one or more processors. In addition, the system may be configured to adjust the language recognition machine-learning algorithm based on a comparison of the biology-related language-based input training data and the biology-related language-based output training data. Additionally, the system may be configured to receive biology-related image-based input training data associated with the biology-related language-based input training data. Further, the system may be configured to generate a second high-dimensional representation of the biology-related image-based input training data by a visual recognition machine-learning algorithm executed by the one or more processors. The second high-dimensional representation comprises at least three entries each having a different value. Further, the system may be configured to adjust the visual recognition machine-learning algorithm based on a comparison of the first high-dimensional representation and the second high-dimensional representation.

The biology-related language-based input training data may be a textual input related to a biological structure, a biological function, a biological behavior or a biological activity. For example, the biology-related language-based input training data may be a nucleotide sequence, a protein sequence, a description of a biological molecule or biological structure, a description of a behavior of a biological molecule or biological structure, and/or a description of a biological function or a biological activity. The biology-related language-based input training data may be a first biology-related language-based input training data set (e.g. a sequence of input characters, for example, a nucleotide sequence or a protein sequence) of a training group. The training group may comprise a plurality of biology-related language-based input training data sets.

The biology-related language-based output training data may be of the same type as the biology-related language-based input training data, optionally including a prediction of a next element. For example, the biology-related language-based input training data may be a biological sequence (e.g. a nucleotide sequence or a protein sequence) and the biology-related language-based output training data may be a biological sequence (e.g. a nucleotide sequence or a protein sequence) as well. The language recognition machine-learning algorithm may be trained so that the biology-related language-based output training data is equal to the biology-related language-based input training data, optionally including a prediction of a next element of the biological sequence. In another example, the biology-related language-based input training data may be a biological class of a coarse-grained search term and the biology-related language-based output training data may be a biological class of the coarse-grained search term as well. The biology-related image-based input training data may be image training data (e.g. pixel data of a training image) of an image of a biological structure comprising a nucleotide or a nucleotide sequence, a biological structure comprising a protein or a protein sequence, a biological molecule, a biological tissue, a biological structure with a specific behavior, and/or a biological structure with a specific biological function or a specific biological activity. The biology-related image-based input training data may be a first biology-related image-based input training data set of a training group. The training group may comprise a plurality of biology-related image-based input training data sets.

The biology-related language-based input training data may be a biology-related language-based input training data set (e.g. a sequence of input characters, for example, a nucleotide sequence or a protein sequence) of a training group. The training group may comprise a plurality of biology-related language-based input training data sets. The system may repeat generating a first high-dimensional representation for each of a plurality of biology-related language-based input training data sets of a training group. Further, the system may generate biology-related language-based output training data for each generated first high-dimensional representation. The system may adjust the language recognition machine-learning algorithm based on each comparison of biology-related language-based input training data of the plurality of biology-related language-based input training data sets of the training group with the corresponding biology-related language-based output training data. In other words, the system may be configured to repeat generating a first high-dimensional representation, generating biology-related language-based output training data, and adjusting the language recognition machine-learning algorithm for each biology-related language-based input training data of a training group of biology-related language-based input training data sets. The training group may comprise enough biology-related language-based input training data sets so that a training target (e.g. variation of an output of a loss function below a threshold) can be fulfilled.
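
As a minimal sketch of such a training target, one might stop once the variation of the loss over a window of recent values falls below a threshold; the window size and threshold below are assumptions, not values from the text.

```python
from collections import deque

recent_losses = deque(maxlen=5)   # assumed window of recent loss values
THRESHOLD = 1e-4                  # assumed variation threshold

def training_target_reached(loss_value: float) -> bool:
    """True once the variation of the loss over the window drops below the threshold."""
    recent_losses.append(loss_value)
    if len(recent_losses) < recent_losses.maxlen:
        return False
    return max(recent_losses) - min(recent_losses) < THRESHOLD
```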

The plurality of all first high-dimensional representations generated during training of the language recognition machine-learning algorithm may be called latent space or semantic space.

The system may repeat generating a second high-dimensional representation for each of a plurality of biology-related image-based input training data sets of a training group. Further, the system may adjust the visual recognition machine-learning algorithm based on each comparison of a first high-dimensional representation with the corresponding second high-dimensional representation. In other words, the system may repeat generating a second high-dimensional representation and adjusting the visual recognition machine-learning algorithm for each biology-related image-based input training data of a training group of biology-related image-based input training data sets. The training group may comprise enough biology-related image-based input training data sets so that a training target (e.g. variation of an output of a loss function below a threshold) can be fulfilled.

For example, the system 100 uses a combination of a language recognition machine-learning algorithm and a visual recognition machine-learning algorithm (e.g. also called a visual-semantic model). The language recognition machine-learning algorithm and/or the visual recognition machine-learning algorithm may be deep learning algorithms and/or artificial intelligence algorithms.

Using the cross entropy loss function for training the language recognition machine-learning algorithm may make the training converge quickly and/or provide a well-trained algorithm for biology-related data, although other loss functions could be used as well.
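
For illustration, a minimal sketch of the cross entropy loss on a next-element prediction over a nucleotide alphabet, assuming PyTorch; the logits and target below are made-up values.

```python
import torch
import torch.nn as nn

# Next-element prediction over the nucleotide alphabet A, C, G, T (classes 0..3).
logits = torch.tensor([[2.0, 0.1, 0.2, 0.1]])  # the model strongly favours "A"
target = torch.tensor([0])                     # the true next element is "A"
print(nn.CrossEntropyLoss()(logits, target).item())  # small loss for a good prediction
```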

The visual recognition machine-learning algorithm may be trained by adjusting parameters of the visual recognition machine-learning algorithm based on the comparison of a high-dimensional representation generated by the language recognition machine-learning algorithm with a high-dimensional representation generated by the visual recognition machine-learning algorithm for corresponding input training data. For example, network weights of a visual recognition neural network may be adjusted based on the comparison. The adjustment of the parameters (e.g. network weights) of the visual recognition machine-learning algorithm may be done under consideration of a loss function. For example, the comparison of the first high-dimensional representation and the second high-dimensional representation for the adjustment of the visual recognition machine-learning algorithm may be based on a cosine similarity loss function. Using the cosine similarity loss function for training the visual recognition machine-learning algorithm may make the training converge quickly and/or provide a well-trained algorithm for biology-related data, although other loss functions could be used as well.

For example, the visual model may learn how to represent an image in the semantic embedding space (e.g. as a vector). So, a measure for the distance of two vectors may be used, which may represent the prediction A (the second high-dimensional representation) and the ground-truth B (the first high-dimensional representation). For example, a measure is the cosine similarity as defined in

similarity = cos(θ) = (A · B) / (‖A‖ · ‖B‖)

with the dot product of the prediction A and the ground-truth B divided by the product of their respective magnitudes (e.g. the L2 norm or Euclidean norm).
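
A minimal numeric sketch of this measure (NumPy; the example vectors are made up, with at least three entries each):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """cos(theta) = (A . B) / (||A|| * ||B||), as in the formula above."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

prediction = np.array([0.9, 0.1, 0.3])    # A: second high-dimensional representation
ground_truth = np.array([1.0, 0.0, 0.2])  # B: first high-dimensional representation
sim = cosine_similarity(prediction, ground_truth)
print(sim)        # close to 1 for well-aligned embeddings
print(1.0 - sim)  # a corresponding loss value to be minimised
```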

More details with respect to non-training-specific aspects of the system for training machine-learning algorithms are mentioned in conjunction with the proposed concept and/or the one or more examples described above or below (e.g. Figs. 1-11).

Embodiments may be based on using a machine-learning model or machine-learning algorithm. Machine learning may refer to algorithms and statistical models that computer systems may use to perform a specific task without using explicit instructions, instead relying on models and inference. For example, in machine learning, instead of a rule-based transformation of data, a transformation of data may be used that is inferred from an analysis of historical and/or training data. For example, the content of images may be analyzed using a machine-learning model or using a machine-learning algorithm. In order for the machine-learning model to analyze the content of an image, the machine-learning model may be trained using training images as input and training content information as output. By training the machine-learning model with a large number of training images and/or training sequences (e.g. words or sentences) and associated training content information (e.g. labels or annotations), the machine-learning model "learns" to recognize the content of the images, so the content of images that are not included in the training data can be recognized using the machine-learning model. The same principle may be used for other kinds of sensor data as well: by training a machine-learning model using training sensor data and a desired output, the machine-learning model "learns" a transformation between the sensor data and the output, which can be used to provide an output based on non-training sensor data provided to the machine-learning model.

Machine-learning models may be trained using training input data. The examples specified above use a training method called "supervised learning". In supervised learning, the machine-learning model is trained using a plurality of training samples, wherein each sample may comprise a plurality of input data values and a plurality of desired output values, i.e. each training sample is associated with a desired output value. By specifying both training samples and desired output values, the machine-learning model "learns" which output value to provide based on an input sample that is similar to the samples provided during the training. Apart from supervised learning, semi-supervised learning may be used. In semi-supervised learning, some of the training samples lack a corresponding desired output value. Supervised learning may be based on a supervised learning algorithm, e.g. a classification algorithm, a regression algorithm or a similarity learning algorithm. Classification algorithms may be used when the outputs are restricted to a limited set of values, i.e. the input is classified to one of the limited set of values. Regression algorithms may be used when the outputs may have any numerical value (within a range). Similarity learning algorithms may be similar to both classification and regression algorithms, but are based on learning from examples using a similarity function that measures how similar or related two objects are. Apart from supervised or semi-supervised learning, unsupervised learning may be used to train the machine-learning model. In unsupervised learning, (only) input data might be supplied, and an unsupervised learning algorithm may be used to find structure in the input data, e.g. by grouping or clustering the input data and finding commonalities in the data. Clustering is the assignment of input data comprising a plurality of input values into subsets (clusters) so that input values within the same cluster are similar according to one or more (predefined) similarity criteria, while being dissimilar to input values that are included in other clusters.
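
For illustration, a minimal clustering sketch, assuming scikit-learn is available; the synthetic embeddings and the number of clusters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

# Unsupervised clustering as described above: group unlabelled embeddings
# into subsets whose members are mutually similar. Data are synthetic.
rng = np.random.default_rng(0)
embeddings = np.vstack([rng.normal(0, 0.1, (20, 3)),
                        rng.normal(1, 0.1, (20, 3))])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
print(labels)  # two clusters corresponding to the two generating distributions
```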

Reinforcement learning is a third group of machine-learning algorithms. In other words, reinforcement learning may be used to train the machine-learning model. In reinforcement learning, one or more software actors (called "software agents") are trained to take actions in an environment. Based on the taken actions, a reward is calculated. Reinforcement learning is based on training the one or more software agents to choose the actions such that the cumulative reward is increased, leading to software agents that become better at the task they are given (as evidenced by increasing rewards).
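
For illustration, a minimal sketch of this reward-driven idea using a two-armed bandit; this toy agent is an assumption made for the sketch, not a method taken from the text.

```python
import random

# A software agent picks actions, receives rewards, and shifts towards
# the action with higher estimated reward (epsilon-greedy strategy).
random.seed(0)
values = [0.0, 0.0]   # estimated reward per action
counts = [0, 0]
for step in range(200):
    a = random.randrange(2) if random.random() < 0.1 else values.index(max(values))
    reward = random.gauss(0.3 if a == 0 else 0.7, 0.1)   # action 1 pays better
    counts[a] += 1
    values[a] += (reward - values[a]) / counts[a]        # incremental mean update
print(values)  # the agent learns that action 1 yields the higher reward
```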

Furthermore, some techniques may be applied to some of the machine-learning algorithms. For example, feature learning may be used. In other words, the machine-learning model may at least partially be trained using feature learning, and/or the machine-learning algorithm may comprise a feature learning component. Feature learning algorithms, which may be called representation learning algorithms, may preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. Feature learning may be based on principal components analysis or cluster analysis, for example.
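
A minimal sketch of feature learning via principal component analysis, assuming scikit-learn; shapes and data are made up.

```python
import numpy as np
from sklearn.decomposition import PCA

# Transform the input into a lower-dimensional representation, e.g. as a
# pre-processing step before classification. Data are synthetic.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 10))
features = PCA(n_components=3).fit_transform(x)
print(features.shape)  # (100, 3)
```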

In some examples, anomaly detection (i.e. outlier detection) may be used, which is aimed at providing an identification of input values that raise suspicions by differing significantly from the majority of input or training data. In other words, the machine-learning model may at least partially be trained using anomaly detection, and/or the machine-learning algorithm may comprise an anomaly detection component.
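
For illustration, a minimal anomaly detection sketch; IsolationForest is one possible method assumed here, not one mandated by the text.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Flag inputs that differ significantly from the majority of the data.
rng = np.random.default_rng(0)
normal = rng.normal(0, 0.5, (50, 2))
outlier = np.array([[5.0, 5.0]])
data = np.vstack([normal, outlier])
pred = IsolationForest(random_state=0).fit_predict(data)
print(pred[-1])  # -1 marks the outlier
```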

In some examples, the machine-learning algorithm may use a decision tree as a predictive model. In other words, the machine-learning model may be based on a decision tree. In a decision tree, observations about an item (e.g. a set of input values) may be represented by the branches of the decision tree, and an output value corresponding to the item may be represented by the leaves of the decision tree. Decision trees may support both discrete values and continuous values as output values. If discrete values are used, the decision tree may be denoted a classification tree; if continuous values are used, the decision tree may be denoted a regression tree.
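
A minimal classification tree sketch, assuming scikit-learn; the toy data are made up.

```python
from sklearn.tree import DecisionTreeClassifier

# A classification tree (discrete output values), as described above.
# Toy data: two input values per item, binary class label.
X = [[0, 0], [0, 1], [1, 0], [1, 1]]
y = [0, 0, 1, 1]  # the class depends only on the first input value
clf = DecisionTreeClassifier().fit(X, y)
print(clf.predict([[1, 0.5]]))  # -> [1]
```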

Association rules are a further technique that may be used in machine-learning algorithms. In other words, the machine-learning model may be based on one or more association rules. Association rules are created by identifying relationships between variables in large amounts of data. The machine-learning algorithm may identify and/or utilize one or more relational rules that represent the knowledge that is derived from the data. The rules may e.g. be used to store, manipulate or apply the knowledge.

Machine-learning algorithms are usually based on a machine-learning model. In other words, the term "machine-learning algorithm" may denote a set of instructions that may be used to create, train or use a machine-learning model. The term "machine-learning model" may denote a data structure and/or set of rules that represents the learned knowledge, e.g. based on the training performed by the machine-learning algorithm. In embodiments, the usage of a machine-learning algorithm may imply the usage of an underlying machine-learning model (or of a plurality of underlying machine-learning models). The usage of a machine-learning model may imply that the machine-learning model and/or the data structure/set of rules that is the machine-learning model is trained by a machine-learning algorithm.

For example, the machine-learning model may be an artificial neural network (ANN). ANNs are systems that are inspired by biological neural networks, such as can be found in a retina or a brain. ANNs comprise a plurality of interconnected nodes and a plurality of connections, so-called edges, between the nodes. There are usually three types of nodes: input nodes that receive input values, hidden nodes that are (only) connected to other nodes, and output nodes that provide output values. Each node may represent an artificial neuron. Each edge may transmit information from one node to another. The output of a node may be defined as a (non-linear) function of the sum of its inputs. The inputs of a node may be used in the function based on a "weight" of the edge or of the node that provides the input. The weight of nodes and/or of edges may be adjusted in the learning process. In other words, the training of an artificial neural network may comprise adjusting the weights of the nodes and/or edges of the artificial neural network, i.e. to achieve a desired output for a given input.
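
For illustration, a single artificial neuron as described above, as a minimal NumPy sketch; the tanh non-linearity and all values are assumptions.

```python
import numpy as np

# One artificial neuron: the output is a non-linear function of the
# weighted sum of its inputs.
def neuron(inputs, weights, bias):
    return np.tanh(np.dot(inputs, weights) + bias)  # tanh as the non-linearity

print(neuron(np.array([0.5, -1.0, 2.0]),
             np.array([0.8, 0.2, -0.5]),
             bias=0.1))
```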

Alternatively, the machine-learning model may be a support vector machine, a random forest model or a gradient boosting model. Support vector machines (i.e. support vector networks) are supervised learning models with associated learning algorithms that may be used to analyze data, e.g. in classification or regression analysis. Support vector machines may be trained by providing an input with a plurality of training input values that belong to one of two categories. The support vector machine may be trained to assign a new input value to one of the two categories. Alternatively, the machine-learning model may be a Bayesian network, which is a probabilistic directed acyclic graphical model. A Bayesian network may represent a set of random variables and their conditional dependencies using a directed acyclic graph. Alternatively, the machine-learning model may be based on a genetic algorithm, which is a search algorithm and heuristic technique that mimics the process of natural selection.
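
A minimal two-category support vector machine sketch, assuming scikit-learn; the toy data are made up.

```python
from sklearn.svm import SVC

# Assign a new input value to one of two categories, as described above.
X = [[0.0], [0.2], [0.9], [1.1]]   # training input values
y = [0, 0, 1, 1]                   # the two categories
print(SVC().fit(X, y).predict([[1.0]]))  # -> [1]
```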

As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items and may be abbreviated as "/".

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a processor, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.

Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a non-transitory storage medium such as a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.

Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine-readable carrier. For example, the computer program may be stored on a non-transitory storage medium. Some embodiments relate to a non-transitory storage medium including machine-readable instructions which, when executed, implement a method according to the proposed concept or one or more examples described above.

Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier.

In other words, an embodiment of the present invention is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

A further embodiment of the present invention is, therefore, a storage medium (or a data carrier, or a computer-readable medium) comprising, stored thereon, the computer program for performing one of the methods described herein when it is performed by a processor. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory. A further embodiment of the present invention is an apparatus as described herein comprising a processor and the storage medium.

A further embodiment of the invention is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.

A further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.

A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.

In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.

List of reference signs

100 system for processing biology-related data

103 biology-related image-based search data

105 second high-dimensional representation

110 one or more processors

120 one or more storage devices

200 system for processing biology-related data

201 query, search query, biology-related image-based search data

210 visual model, classifier

220 trained visual recognition machine-learning algorithm, visual model

230 trained visual recognition machine-learning algorithm, visual model

240 database

250 embeddings, plurality of second high-dimensional representations

255 database, intermediate storage

257 bypass

260 embedding, first high-dimensional representation

270 comparison in embedding space

280 closest embedding

290 respective image

300 system for processing biology-related data

315 skipped pre-classification

381 return image corresponding to closest embedding

383 feedback of data to the image source

385 user

387 database

389 public database

390 scientific publications, social media entries or blog posts

393 image of biomolecule

395 biological sequence

400 system for controlling a microscope

401 image-based search data

405 second high-dimensional representation

411 control signal

500 system for controlling a microscope

501 microscope

510 images

550 query, search query, image-based search data

580 find respective coordinates

590 respective coordinates passed back to the microscope

595 respective coordinates, new coordinates

600 system for controlling a microscope

700 system for controlling a microscope

740 clustering algorithm

750 determining centers of clusters

760 latent vectors of cluster centers

770 applying a distance metric

790 system for processing biology-related data by using a clustering algorithm

791 change image modalities

792 user

793 repository

794 public database

795 scientific publications, social media entries or blog posts

796 image of biomolecule

797 biological sequence

800 system for training machine-learning algorithms

810 microscope

820 computer device

900 method for processing biology-related image-based search data

910 receiving biology-related image-based search data

920 generating a first high-dimensional representation

930 obtaining a plurality of second high-dimensional representations

940 comparing the first high-dimensional representation with each second high-dimensional representation

1000 method for controlling a microscope

1010 receiving image-based search data

1020 generating a first high-dimensional representation

1030 obtaining a plurality of second high-dimensional representations

1040 selecting a second high-dimensional representation

1050 controlling an operation of a microscope

1100 method for controlling a microscope

1110 determining a plurality of clusters

1120 determining a first high-dimensional representation

1130 selecting a second high-dimensional representation

1140 providing a control signal