

Title:
EFFICIENT INFORMATION CODING IN LIVING ORGANISMS
Document Type and Number:
WIPO Patent Application WO/2023/100188
Kind Code:
A1
Abstract:
Methods of storing information in a living organism are provided. Systems and computer program products for performing the methods are also provided.

Inventors:
TULLER TAMIR (IL)
TAMO ITZHAK (IL)
AKIVA ALON (IL)
Application Number:
PCT/IL2022/051291
Publication Date:
June 08, 2023
Filing Date:
December 05, 2022
Assignee:
UNIV RAMOT (IL)
International Classes:
G11C13/00; B82Y10/00; C12N15/00; G06N3/123
Domestic Patent References:
WO2021056167A12021-04-01
Foreign References:
US20060024811A12006-02-02
US20190362814A12019-11-28
Other References:
MEISER LINDA C.; ANTKOWIAK PHILIPP L.; KOCH JULIAN; CHEN WEIDA D.; KOHLL A. XAVIER; STARK WENDELIN J.; HECKEL REINHARD; GRASS ROBE: "Reading and writing digital data in DNA", NATURE PROTOCOLS, NATURE PUBLISHING GROUP, GB, vol. 15, no. 1, 29 November 2019 (2019-11-29), GB , pages 86 - 101, XP036977351, ISSN: 1754-2189, DOI: 10.1038/s41596-019-0244-5
SONG XIN, REIF JOHN: "Nucleic Acid Databases and Molecular-Scale Computing", ACS NANO, AMERICAN CHEMICAL SOCIETY, US, vol. 13, no. 6, 25 June 2019 (2019-06-25), US , pages 6256 - 6268, XP093068651, ISSN: 1936-0851, DOI: 10.1021/acsnano.9b02562
Attorney, Agent or Firm:
KESTEN, Dov et al. (IL)
Claims:
CLAIMS:

1. A method of storing information in a living organism, the method comprising:
a. receiving a genomic sequence of said living organism;
b. selecting a fragment of a coding region within said received genomic sequence in which to store said information, wherein said selecting comprises:
i. estimating indel probability across coding regions of said genomic sequence and selecting fragments of said coding regions with a probability below a predetermined threshold for all nucleotides within said fragment;
ii. receiving coding regions of orthologs of said organism and selecting fragments of said coding regions of said living organism with an intermediate nucleotide sequence identity probability across said orthologs; and
iii. selecting a fragment of a coding region that was selected in both (i) and (ii); and
c. inducing genetic mutations in said selected fragment of a coding region in a cell of said living organism such that said genetic mutations are convertible to said information, wherein said mutations are not substantially detrimental to the health of said living organism;
thereby storing information in a living organism.

2. The method of claim 1, wherein said fragment is between 150-350 codons in length.

3. The method of claim 2, wherein said fragment is between 180 and 200 codons in length.

4. The method of any one of claims 1 to 3, wherein said living organism is a single celled organism.

5. The method of any one of claims 1 to 4, wherein said living organism is a bacterium.

6. The method of any one of claims 1 to 5, wherein said selected fragment comprises an indel probability variance below a predetermined threshold for all nucleotides within said fragment.

7. The method of claim 6, wherein said indel probability variance threshold is 0.05.

8. The method of any one of claims 1 to 7, wherein coding regions from at least 100 orthologs are received.

9. The method of any one of claims 1 to 8, wherein an intermediate identity probability is between 0.55 and 0.75.

10. The method of any one of claims 1 to 9, wherein said not substantially detrimental mutation is a synonymous mutation.

11. The method of any one of claims 1 to 10, wherein said not substantially detrimental mutation is a mutation to a codon that appears within at least a predetermined threshold percentage of orthologs.

12. The method of claim 11, wherein said predetermined threshold percentage is 5%.

13. The method of any one of claims 1 to 12, wherein said fragment is devoid of at least 2 consecutive nucleotides which cannot be mutated without being substantially detrimental to the health of said living organism.

14. The method of any one of claims 1 to 13, wherein said inducing genetic mutations comprises optimizing for said living organism the codon adaptation index (CAI) within said fragment.

15. The method of claim 14, wherein an optimized CAI is a CAI with the smallest change from the unmutated fragment sequence.

16. The method of any one of claims 1 to 15, further comprising producing genetic mutations convertible to a numeric identifier which identifies the order in which the information was stored in a plurality of fragments.

17. The method of claim 16, wherein a first or second codon of said fragment is mutated to include information of said numeric identifier.

18. The method of claim 16 or 17, comprising storing information in a plurality of different organisms and wherein said numeric identifier identifies the order of the information across said plurality of different organisms.

19. The method of any one of claims 1 to 18, wherein said stored information retains at least 90% fidelity after at least 50 generations of said genetically mutated living organism.

20. The method of any one of claims 1 to 19, wherein information is encoded into said mutations using an error correcting code.

21. The method of claim 20, wherein said error correcting code is a Reed-Solomon (RS) code.

22. The method of claim 20 or 21, wherein said information is encoded using a modulation algorithm.

23. The method of any one of claims 20 to 22, wherein said fragment comprises mutations convertible to a numeric identifier and wherein said mutations convertible to said information and said mutations convertible to said numeric identifier are encoded separately.

24. The method of any one of claims 1 to 23, further comprising reading said stored information, wherein said reading comprises:
d. receiving by a reader, sequences of coding regions of said genetically mutated living organism or a descendant of said genetically mutated living organism;
e. identifying the fragments comprising the stored information;
f. correcting any indels and any point mutations within said fragments that arose since said inducing; and
g. extracting said information from said induced genetic mutations.

25. The method of claim 24, wherein said identifying fragments comprises comparison to the native genome of said living organism.

26. The method of claim 24, wherein said fragment locations are known and said identifying comprises locating said fragments within said received sequences.

27. The method of any one of claims 24 to 26, further comprising between step (e) and (f) a step comprising reading a numeric identifier in a plurality of fragments and ordering said fragments based on said read numeric identifiers.

28. The method of claim 27, further comprising removing said numeric identifiers from said fragments and concatenating said fragments in an order established by said read numeric identifiers.

29. The method of claim 27 or 28, wherein said reading a numeric identifier in a plurality of fragments comprises decoding said numeric identifiers separately from said stored information.

30. The method of any one of claims 27 to 29, wherein said reading comprises maximum likelihood decoding which ensures all ordinal numbers up to the total number of identifiers are represented in said numeric identifiers by selecting the numeric identifier with the highest probability of being each ordinal.

31. The method of any one of claims 24 to 30, wherein said correcting comprises employing an error correcting code decoder.

32. The method of claim 31, wherein said error correcting code decoder is a RS decoder.

33. A system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program code, the program code executable by the at least one hardware processor to perform a method of any one of claims 1 to 32.

34. A computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to perform a method of any one of claims 1 to 32.

Description:
EFFICIENT INFORMATION CODING IN LIVING ORGANISMS

CROSS REFERENCE TO RELATED APPLICATIONS

[001] This application claims the benefit of priority of U.S. Provisional Patent Application No. 63/286,073, filed December 5, 2021, entitled "EFFICIENT INFORMATION CODING IN LIVING ORGANISMS", the contents of which are incorporated herein by reference in their entirety.

REFERENCE TO AN ELECTRONIC SEQUENCE LISTING

[002] The contents of the electronic sequence listing (RMT-P-015-PCT.xml; Size: 5,206 bytes; Date of Creation: December 4, 2022) are herein incorporated by reference in their entirety.

FIELD OF INVENTION

[003] The present invention is in the field of information coding in living organisms.

BACKGROUND OF THE INVENTION

[004] DNA can clearly be used as a storage medium: each nucleotide carries two information bits, and DNA can store vast amounts of data for very long periods of time with high reliability, as over time humans remain human just as cats remain cats. However, it is also a very complex storage medium affected by evolutionary processes, which create various data corruption mechanisms as well as complex constraints on the stored information. Even a slight change in the nucleotide sequence of the DNA may profoundly affect the behavior of the whole organism, decrease its growth rate, increase its mutation rate, and may even cause the death of the organism. To store information successfully in a living organism, all of the above processes must be considered. A new model for storing information in this way is greatly needed.
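As a minimal illustration of the two-bits-per-nucleotide observation above, the following Python sketch packs a bit string into bases and back. The 00→A, 01→C, 10→G, 11→T mapping is an arbitrary assumption for illustration only, not one prescribed herein:

```python
# Map 2-bit values to nucleotides: 00->A, 01->C, 10->G, 11->T (arbitrary choice).
NUC = "ACGT"

def bits_to_dna(bits):
    """Pack a bit string (length divisible by 2) into nucleotides,
    two bits per base."""
    return ''.join(NUC[int(bits[i:i + 2], 2)] for i in range(0, len(bits), 2))

def dna_to_bits(seq):
    """Inverse mapping: recover the bit string from a nucleotide sequence."""
    return ''.join(format(NUC.index(b), '02b') for b in seq)

print(bits_to_dna("0011"))     # -> "AT"
print(dna_to_bits("GATTACA"))  # -> "10001111000100"
```

The two functions are exact inverses, reflecting the lossless two-bit capacity of each base under this mapping.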

SUMMARY OF THE INVENTION

[005] The present invention provides methods of storing information in a living organism. Systems and computer program products for performing the methods of the invention are also provided.

[006] According to a first aspect, there is provided a method of storing information in a living organism, the method comprising: a. receiving a genomic sequence of the living organism; b. selecting a fragment of a coding region within the received genomic sequence in which to store the information, wherein the selecting comprises: i. estimating indel probability across coding regions of the genomic sequence and selecting fragments of the coding regions with a probability below a predetermined threshold for all nucleotides within the fragment; ii. receiving coding regions of orthologs of the organism and selecting fragments of the coding regions of the living organism with an intermediate nucleotide sequence identity probability across the orthologs; and iii. selecting a fragment of a coding region that was selected in both (i) and (ii); and c. inducing genetic mutations in the selected fragment of a coding region in a cell of the living organism such that the genetic mutations are convertible to the information, wherein the mutations are not substantially detrimental to the health of the living organism; thereby storing information in a living organism.

[007] According to some embodiments, the fragment is between 150-350 codons in length.

[008] According to some embodiments, the fragment is between 180 and 200 codons in length.

[009] According to some embodiments, the living organism is a single celled organism.

[010] According to some embodiments, the living organism is a bacterium.

[011] According to some embodiments, the selected fragment comprises an indel probability variance below a predetermined threshold for all nucleotides within the fragment.

[012] According to some embodiments, the indel probability variance threshold is 0.05.

[013] According to some embodiments, coding regions from at least 100 orthologs are received.

[014] According to some embodiments, an intermediate identity probability is between 0.55 and 0.75.

[015] According to some embodiments, the not substantially detrimental mutation is a synonymous mutation.

[016] According to some embodiments, the not substantially detrimental mutation is a mutation to a codon that appears within at least a predetermined threshold percentage of orthologs.

[017] According to some embodiments, the predetermined threshold percentage is 5%.

[018] According to some embodiments, the fragment is devoid of at least 2 consecutive nucleotides which cannot be mutated without being substantially detrimental to the health of the living organism.

[019] According to some embodiments, the inducing genetic mutations comprises optimizing for the living organism the codon adaptation index (CAI) within the fragment.

[020] According to some embodiments, an optimized CAI is a CAI with the smallest change from the unmutated fragment sequence.

[021] According to some embodiments, the genetic mutations also include information of a numeric identifier which identifies the order in which the information was stored in a plurality of fragments.

[022] According to some embodiments, a first or second codon of the fragment is mutated to include information of the numeric identifier.

[023] According to some embodiments, the stored information retains at least 90% fidelity after at least 50 generations of the genetically mutated living organism.

[024] According to some embodiments, information is encoded into the mutations using an error correcting code.

[025] According to some embodiments, the error correcting code is a Reed-Solomon (RS) code.

[026] According to some embodiments, the information is encoded using a modulation algorithm.

[027] According to some embodiments, the fragment comprises mutations convertible to a numeric identifier and the mutations convertible to the information and the mutations convertible to the numeric identifier are encoded separately.

[028] According to some embodiments, the method further comprises reading the stored information, wherein the reading comprises: d. receiving by a reader, sequences of coding regions of the genetically mutated living organism or a descendant of the genetically mutated living organism; e. identifying the fragments comprising the stored information; f. correcting any indels and any point mutations within the fragment that arose since the inducing; and g. extracting the information from the induced genetic mutations.

[029] According to some embodiments, the identifying fragments comprises comparison to the native genome of the living organism.

[030] According to some embodiments, the fragment locations are known and the identifying comprises locating the fragments within the received sequences.

[031] According to some embodiments, the method further comprises between step (e) and (f), a step comprising reading a numeric identifier in a plurality of fragments and ordering the fragments based on the read numeric identifiers.

[032] According to some embodiments, the method further comprises removing the numeric identifiers from the fragments and concatenating the fragments in an order established by the read numeric identifiers.

[033] According to some embodiments, the reading a numeric identifier in a plurality of fragments comprises decoding the numeric identifiers separately from the stored information.

[034] According to some embodiments, the reading comprises maximum likelihood decoding which ensures all ordinal numbers up to the total number of identifiers are represented in the numeric identifiers by selecting the numeric identifier with the highest probability of being each ordinal.
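The maximum likelihood ordering described above can be sketched as a greedy assignment: each ordinal, in turn, takes the not-yet-used identifier with the highest probability of being that ordinal, so every ordinal up to the total number of identifiers is represented. The probability matrix below is a hypothetical input (in practice it would come from the identifier decoder); this is an illustrative sketch, not the claimed decoder itself:

```python
def order_fragments(prob):
    """Greedy maximum-likelihood ordering.

    prob[i][j] = estimated probability that the identifier read from
    fragment i encodes ordinal j.  Returns `order` such that order[j]
    is the index of the fragment assigned to ordinal j; each fragment
    is used exactly once, so all ordinals are represented.
    """
    n = len(prob)
    unused = set(range(n))
    order = []
    for j in range(n):  # every ordinal 0..n-1 gets exactly one fragment
        best = max(unused, key=lambda i: prob[i][j])
        unused.discard(best)
        order.append(best)
    return order

# Toy example: 3 fragments whose identifiers were read with some noise.
p = [
    [0.1, 0.8, 0.1],  # fragment 0 most likely holds ordinal 1
    [0.7, 0.1, 0.2],  # fragment 1 most likely holds ordinal 0
    [0.2, 0.1, 0.7],  # fragment 2 most likely holds ordinal 2
]
print(order_fragments(p))  # -> [1, 0, 2]
```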

[035] According to some embodiments, the correcting comprises employing an error correcting code decoder.

[036] According to some embodiments, the error correcting code decoder is a RS decoder.

[037] According to some embodiments, the method comprises storing information in a plurality of different organisms and wherein the numeric identifier identifies the order of the information across the plurality of different organisms.

[038] According to another aspect, there is provided a system comprising: at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program code, the program code executable by the at least one hardware processor to perform a method of the invention.

[039] According to another aspect, there is provided a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to perform a method of the invention.

[040] Further embodiments and the full scope of applicability of the present invention will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the invention, are given by way of illustration only, since various changes and modifications within the spirit and scope of the invention will become apparent to those skilled in the art from this detailed description.

BRIEF DESCRIPTION OF THE DRAWINGS

[041] Figure 1: Diagram of the Noisy Shuffling Sampling Channel.

[042] Figure 2: Diagram of the DNA writing process. Information bits are stored over multiple DNA sequences together by changing their original content. Specific nucleotides (depicted in red letters) are changed only in a fixed area of the DNA, which we call a site (shown in green).

[043] Figure 3: Diagram of the Living Organism Channel model. DNA sequences may erase, permute, and specific nucleotides may substitute.

[044] Figure 4: Diagram of the Living Organism Channel, represented as a concatenation of the Random Subsets Channel and the Noisy Shuffling Sampling Channel.

[045] Figure 5: Diagram of the DNA reading process. Multiple DNA sequences are read, and the information stored over them is reconstructed. The read DNA sequences are corrupted by the Living Organism Channel, which may erase sequences, permute sequences and substitute nucleotides (as shown in red letters).

[046] Figure 6: Diagram of a system model of DNA storage over living organisms. An information message is written by changing the DNA contents of multiple DNA sequences of multiple microorganisms. The information is corrupted over time by the Living Organism Channel. Finally, the DNA sequences are read from a pool of unordered microorganisms, and the information message is reconstructed.

[047] Figure 7: Chart showing the Noisy Shuffling Sampling Channel Capacity Bounds, as a function of the erasure and substitution probabilities, for 180 DNA sequences of 144 bits each. The classic capacity term (in the asymptotic regime) gives the upper bound; the lower bound is given by the derivation provided herein.

[048] Figure 8: Chart showing the Noisy Shuffling Sampling Channel Capacity Bounds Degradation, showing the difference between the upper and lower bounds, normalized by the upper bound.

[049] Figure 9: Charts showing the Noisy Shuffling Sampling Channel Capacity Bounds Degradation, as a function of the nucleotides substitution probability (top) and the DNA sequence erasure probability (bottom).

[050] Figure 10: Diagram of the Random Subsets Channel setting, defined over a discrete and finite alphabet and with an i.i.d state vector.

[051] Figure 11: Diagram of the Random Subsets Channel with side information at the transmitter is the relevant scenario to our DNA storage problem.

[052] Figure 12: Line graph of the lower bound for the Completely Random Subset Channel Capacity. The lower bound is given in bits per channel use and as a function of the alphabet size. Each plotted graph is for a different subset size.

[053] Figure 13: Graphs showing the Modulo Random Subsets Code Rate. The rate is given in bits per channel use and as a function of the subset size. The alphabet size is assumed to be large compared to the subset size. The lower graph shows the coding rate gap, defined as the ratio between the construction's code rate to the achievable rate in Theorem 6.

[054] Figure 14: Graphs showing the Monte Carlo evaluation of the code rate of the Linear Random Subsets Code Construction, compared to the Completely Random Subsets achievable rate (Theorem 6) and the Modulo Random Subsets Code Construction, for an alphabet size much bigger than the subsets size.

[055] Figures 15A-C: (15A) Graphs of the gene ALA1 identity (top) and indel estimations (bottom) after DNA alignment. The identity probability is calculated as the relative number of appearances of the most likely codon among all orthologs in each location. The indel probability is calculated as the relative number of gap insertions among all orthologs in each location. (15B) Graph of the ALA1 indel estimation along the site of interest. There is a high estimation correlation along all nucleotides. (15C) Graph of the ALA1 identity probability along the site of interest.

[056] Figure 16: Graph of dictionary mapping set sizes for a site of interest. The subset sizes were calculated as the number of codons that follow one out of 3 different criteria in each DNA location and among all the orthologs over which the DNA alignment was performed.

[057] Figure 17: Diagram of the Encoder. The whole information message is encoded using Reed-Solomon code and then divided into message blocks. An identifier is attached to each message block.

[058] Figure 18: Diagram of the Modulator. Each message block is translated into nucleotides, and the Linear Random Subsets encoder is performed over each message block.

[059] Figure 19: Modulator scheme for a single synthesized site. Each subsite must obey the dictionary constraint, an algebraic equation, and finally, the CAI optimal option is chosen.

[060] Figure 20: Diagram of the Demodulator. Each read site is demodulated using the Linear Random Subsets decoder and retranslated into a binary vector.

[061] Figure 21: Diagram of the Decoder. All the binary vectors are sorted using the identifiers and then decoded using a Reed-Solomon decoder.

[062] Figure 22: Graph of Encoder-Decoder simulation result, compared to theoretical bounds. Only the encoder, Noisy Shuffling Sampling Channel and decoder were simulated.

[063] Figure 23: Graph of Encoder-Decoder rate degradation compared to the achievable lower bound.

[064] Figure 24: Graph of Encoder-Decoder rate degradation compared to the achievable lower bound as a function of the substitution probability (first) and the erasure probability (second).

[065] Figure 25: Graph of Encoder-Decoder coding rates comparison. The green-edged graph shows the performance of our coding scheme, and the blue-edged graph shows the performance of the scheme proposed in Shomorony and Heckel.

[066] Figure 26: Graph of the failure probability of both Encoder-Decoder schemas when fixing the first schema's failure rate.

[067] Figure 27: Graph of the failure probability of both Encoder-Decoder schemas when fixing the second schema's failure rate.

[068] Figure 28: Graph of coding rates of both Encoder-Decoder schemas, when fixing both failure probabilities.

[069] Figure 29: Flow chart of an embodiment of a method of the invention.

[070] Figure 30: Graph of the Noisy Shuffling Sampling Capacity bounds, M=64, L=144. The higher bound is given by the classic capacity and the lower bound by the derivation provided herein.

[071] Figure 31: Diagram of the Modulator scheme for a single synthesized site. Each subsite must obey the dictionary constraint, an algebraic equation, and finally, the CAI optimal option is chosen.

DETAILED DESCRIPTION OF THE INVENTION

[072] The present invention, in some embodiments, provides methods of storing information in a living organism. Systems and computer program products for performing the methods are also provided.

[073] DNA synthesis in living cells imposes different mechanisms of data corruption and requires new, sophisticated analysis methods compared to artificial gene synthesis. The invention is based on surprising findings made when performing information coding over living organisms from a digital communication system perspective. Living organisms were modeled as a discrete communication channel that captures the significant biological phenomena, and the channel capacity of the model was derived as a function of those major biological parameters. Then, computational molecular evolution approaches were used to estimate those important biological parameters. Next, a coding scheme that allows reliable communication over this channel was constructed and its performance was analyzed. The design goals of the coding scheme are efficiency (in terms of storage density), flexibility (in terms of scalability and minimization of environment dependency), and decoder simplicity (in terms of the minimal side information needed). Numerical simulations were performed over actual genomic data based on microorganism models to evaluate the approach. The simulations include random channel behavior based on parameter estimations from evolutionary models and genomic data. In addition, an experimental framework for evaluating the model was designed, and the final method was found to have high fidelity over hundreds of generations of the organism.

[074] The modern era has created demands for data storage for many different applications and reasons. The search for efficient storage media is therefore perpetual, and DNA was not skipped over. Much theoretical research exploring the mathematical framework and analysis of artificial DNA synthesis has been done.
In Shomorony and Heckel, “DNA-based storage: models and fundamental limits”, arxiv.org/abs/2001.06311, herein incorporated by reference in its entirety, the authors introduced a new communication channel called the Noisy Shuffling Sampling Channel, dealing with artificial DNA strands that suffer from erasures, substitutions, and permutations. Other works that discuss in detail coding over multiple synthesized DNA strands show that the strands suffer from substitutions of nucleotides and random permutation of DNA strands. It has been shown that simple indexing is not optimal for reordering permuted DNA strands; however, this was demonstrated in a channel model that does not include DNA strand erasures.

[075] By a first aspect, there is provided a method of storing information in a living cell, the method comprising: a. receiving a genomic sequence of the cell; b. selecting a fragment of a coding region within the received genomic sequence in which to store the information; and c. inducing genetic mutations in the selected fragment of a coding region in a living cell such that the genetic mutations are convertible to the information; thereby storing information in a living cell.

[076] In some embodiments, the method is an in vitro method. In some embodiments, the method is an in vivo method. In some embodiments, the cell is alive. In some embodiments, the cell is a prokaryotic cell. In some embodiments, the cell is a eukaryotic cell. In some embodiments, the cell is an organism. In some embodiments, the organism is a single cell organism. In some embodiments, the organism is a microorganism. In some embodiments, the cell is a bacterial cell. In some embodiments, the organism is a bacterium. In some embodiments, the cell is a part of an organism. In some embodiments, the cell is in culture. In some embodiments, the cell is in a laboratory. In some embodiments, the cell is in the wild.

[077] In some embodiments, a genomic sequence is a whole genome. In some embodiments, a genomic sequence is a portion of a genome. In some embodiments, the portion is at least 10, 15, 20, 25, 30, 35, 40, 45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 97, 99 or 100%. Each possibility represents a separate embodiment of the invention. In some embodiments, the genomic sequence is a coding region. In some embodiments, the genomic sequence comprises a coding region. In some embodiments, the portion is all coding regions. In some embodiments, the portion is a portion of coding regions. In some embodiments, the genomic sequence is a gene. In some embodiments, the genomic sequence is all genes. In some embodiments, the genomic sequence is a portion of all genes. In some embodiments, the genomic sequence is of the cell. In some embodiments, the genomic sequence is of the organism.

[078] In some embodiments, a fragment is at least 4, 5, 10, 20, 25, 30, 40, 50, 60, 70, 80, 90, 100, 120, 130, 140, 150, 160, 170, 180, 190, or 200 codons. Each possibility represents a separate embodiment of the invention. In some embodiments, a fragment is at least 4 codons. In some embodiments, a fragment is at least 100 codons. In some embodiments, a fragment is at least 180 codons. In some embodiments, a fragment is at least 200 codons. In some embodiments, a fragment is at most 180, 190, 200, 210, 220, 230, 240, 250, 260, 270, 280, 290, 300, 350, 400, 450, 500, 600, 700, 800, 900 or 1000 codons. Each possibility represents a separate embodiment of the invention. In some embodiments, a fragment is between 150-350 codons. In some embodiments, a fragment is between 180-350 codons. In some embodiments, a fragment is between 200-350 codons. In some embodiments, a fragment is between 150-200 codons. In some embodiments, a fragment is between 150-180 codons. In some embodiments, a fragment is between 180-200 codons.

[079] In some embodiments, selecting comprises estimating indel probability across coding regions of the genomic sequence. As used herein, the term “indel” refers to an insertion or deletion of nucleic acid bases. In some embodiments, coding regions is at least 2, 3, 4, 5, 6, 7, 8, 9, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 600, 700, 800, 900 or 1000 coding regions. Each possibility represents a separate embodiment of the invention. In some embodiments, coding regions is all coding regions in the genomic sequence. In some embodiments, coding regions is all coding regions in the genome of the cell. In some embodiments, selecting comprises selecting a fragment of the coding regions with an indel probability below a predetermined threshold. In some embodiments, all fragments with an indel probability below the threshold are selected. In some embodiments, the probability is for the whole fragment. In some embodiments, the probability is for all nucleotides within the fragment. In some embodiments, the probability is calculated for the whole fragment. In some embodiments, the method further comprises estimating or calculating the indel probability. In some embodiments, the indel probability is calculated using equation (27). In some embodiments, the indel probability is calculated for all fragments with an intermediate identity probability. In some embodiments, the average indel probability for the whole fragment is below the threshold. In some embodiments, all nucleotides of the fragment are below the indel threshold. In some embodiments, the indel probability variance across all the nucleotides within the fragment is below a predetermined threshold.
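For illustration, the per-location indel estimation described above (the relative number of gap insertions among all orthologs at each alignment column, as in Figure 15A) may be sketched as follows; the alignment input and function name are illustrative assumptions:

```python
def indel_probability(alignment):
    """Per-column indel estimate from a multiple alignment of ortholog
    coding regions: the fraction of sequences showing a gap ('-') at
    each alignment column."""
    n_seqs = len(alignment)
    n_cols = len(alignment[0])
    return [sum(seq[c] == '-' for seq in alignment) / n_seqs
            for c in range(n_cols)]

# Toy alignment of four orthologous sequences.
aln = [
    "ATG-CG",
    "ATGACG",
    "ATG-CG",
    "ATGACG",
]
print(indel_probability(aln))  # -> [0.0, 0.0, 0.0, 0.5, 0.0, 0.0]
```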

[080] In some embodiments, the indel probability threshold is 0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, or 0.1. Each possibility represents a separate embodiment of the invention. In some embodiments, the probability is a percent probability. In some embodiments, the indel probability threshold is 0.05. In some embodiments, the indel variance threshold is 0.05. In some embodiments, the indel probability threshold is 5%.
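Combining the indel criterion with the intermediate identity criterion of paragraph [014], a hedged sketch of window selection under the example thresholds (indel probability below 0.05 at every nucleotide, mean identity between 0.55 and 0.75) might look like this; the function and variable names are illustrative assumptions:

```python
def select_fragment(indel_prob, identity_prob, frag_len,
                    indel_thr=0.05, ident_lo=0.55, ident_hi=0.75):
    """Return the start index of the first window of `frag_len` positions
    in which (i) every position's estimated indel probability is below
    `indel_thr` and (ii) the mean ortholog identity probability lies in
    the intermediate range [ident_lo, ident_hi]; None if none qualifies.
    Default thresholds are the example values given in the text."""
    n = len(indel_prob)
    for s in range(n - frag_len + 1):
        if max(indel_prob[s:s + frag_len]) < indel_thr:
            mean_ident = sum(identity_prob[s:s + frag_len]) / frag_len
            if ident_lo <= mean_ident <= ident_hi:
                return s
    return None

# Toy per-nucleotide estimates (a real run would use 150-350 codon windows).
indel = [0.10, 0.01, 0.01, 0.01, 0.20]
ident = [0.90, 0.60, 0.65, 0.70, 0.50]
print(select_fragment(indel, ident, frag_len=3))  # -> 1
```

The window starting at index 0 is rejected by its indel probability, and the window at index 1 satisfies both criteria.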

[081] In some embodiments, selecting further comprises receiving coding regions of orthologs of the cell. In some embodiments, selecting further comprises receiving orthologous coding regions to the coding regions of the cell. In some embodiments, selecting further comprises receiving orthologous coding regions to the received coding regions of the cell. In some embodiments, selecting further comprises receiving coding regions of orthologs of the organism. In some embodiments, the coding regions of the ortholog correspond to the coding regions of the cell. In some embodiments, the orthologous coding regions are received. It will be understood by a skilled artisan that if only a portion of coding regions of the cell are received then the corresponding/orthologous coding regions from the orthologs must be received. In some embodiments, an ortholog is a homolog. As used herein, “orthologs” and “orthologous” refer to the same genes in different species. In some embodiments, the different species are species that have evolved through speciation events only. In some embodiments, orthologs evolved from the same ancestral gene. In some embodiments, orthologs comprise the same function. In some embodiments, orthologs comprise at least 30, 40, 50, 55, 60, 65, 70, 75, 80, 85, 90, 92, 95, 97 or 99% identity. Each possibility represents a separate embodiment of the invention. In some embodiments, identity is on the amino acid level. In some embodiments, identity is on the nucleotide level. In some embodiments, orthologs comprise at least 30, 40, 50, 55, 60, 65, 70, 75, 80, 85, 90, 92, 95, 97 or 99% homology. Each possibility represents a separate embodiment of the invention. In some embodiments, homology is on the amino acid level. In some embodiments, homology is on the nucleotide level. In some embodiments, at least 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 125, 150, 175, 200, 250, 300, 350, 400, 450, or 500 orthologs are received.
Each possibility represents a separate embodiment of the invention. In some embodiments, at least 100 orthologs are received.

[082] In some embodiments, the region comprises an average of allowed changes of at least a predetermined threshold number. In some embodiments, allowed changes are allowed codon changes. In some embodiments, the average is the average number of allowable substitute codons to which a given codon may be mutated. In some embodiments, the average is the average for a codon of the region. In some embodiments, the average is the average of the region. In some embodiments, the predetermined threshold is at least 4 codons. In some embodiments, the predetermined threshold is at least about 4.5 codons. In some embodiments, the region is devoid of at least 2 consecutive nucleotides which cannot be mutated. It will be understood that mutations can be made to a nucleotide only if they will result in an allowable codon change. Some nucleotides will be invariable, in that they cannot be mutated without violating the rules for allowable codons.

[083] In some embodiments, selecting fragments comprises calculating for fragments from the cell a nucleotide sequence identity probability across the orthologs. In some embodiments, an identity probability is an identity probability of all nucleotides in a site. In some embodiments, an identity probability is the probability of substitution over a predetermined number of generations. In some embodiments, the predetermined number of generations is at least 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 75, 80, 90 or 100. Each possibility represents a separate embodiment of the invention. In some embodiments, selecting fragments comprises calculating for fragments from the cell a nucleotide sequence identity probability across the orthologous sequences to a given fragment. It will be understood by a skilled artisan that for a given fragment sequence, each nucleotide base can be compared to all corresponding nucleotide bases in all the orthologs and the percentage of orthologs that have that exact base (the identity score) can be calculated. For example, for a given T nucleotide, 100 orthologous sequences are examined. A T base is found at this exact position in 75 of the orthologs and so for this base the identity probability is 75%. The average identity probability for the fragment is the average of the scores for each base. In some embodiments, the identity is calculated for all fragments. In some embodiments, the identity is calculated for all fragments with an indel probability below the threshold.
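The per-base identity calculation described above can be sketched as follows. This is a minimal illustration, assuming the orthologous sequences have already been aligned to the fragment (equal lengths, no gaps); function names are illustrative:

```python
def identity_probabilities(fragment, ortholog_seqs):
    """For each position in the fragment, the fraction of orthologous
    sequences carrying the same base at that position (the identity score)."""
    probs = []
    for i, base in enumerate(fragment):
        matches = sum(1 for seq in ortholog_seqs if seq[i] == base)
        probs.append(matches / len(ortholog_seqs))
    return probs

def average_identity(fragment, ortholog_seqs):
    """Average identity probability of the fragment: the mean of the
    per-base identity scores."""
    probs = identity_probabilities(fragment, ortholog_seqs)
    return sum(probs) / len(probs)
```

In the worked example above, a T conserved in 75 of 100 orthologs at a given position would contribute an identity score of 0.75 at that position.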

[084] In some embodiments, selecting fragments comprises selecting fragments with a nucleotide identity probability above a predetermined threshold. In some embodiments, selecting fragments comprises selecting fragments with a nucleotide identity probability below a predetermined threshold. In some embodiments, selecting fragments comprises selecting fragments with an intermediate nucleotide identity probability. In some embodiments, intermediate is above a first predetermined threshold and below a second predetermined threshold. In some embodiments, the first threshold is the lower threshold. In some embodiments, the second threshold is the upper threshold. The lower threshold is the level below which we are concerned about random mutation due to a lack of evolutionary pressure on the sequence. The upper threshold is the level above which we are concerned that an induced genetic mutation would damage the function of the protein and/or the viability of the cell. An intermediate identity probability meets both criteria, as the region is sufficiently conserved that random mutations are unlikely, but not so conserved that no mutations can be made without damaging the cell's health.

[085] In some embodiments, the lower threshold is 0.2, 0.25, 0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65 or 0.7. Each possibility represents a separate embodiment of the invention. In some embodiments, the lower threshold is 0.55. In some embodiments, threshold is a percent. In some embodiments, the lower threshold is 55%. In some embodiments, the lower threshold is 0.7. In some embodiments, the upper threshold is 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.97 or 0.99. Each possibility represents a separate embodiment of the invention. In some embodiments, the upper threshold is 0.75. In some embodiments, threshold is a percent. In some embodiments, the upper threshold is 75%. In some embodiments, the upper threshold is 0.7. In some embodiments, intermediate is between 0.55-0.75. In some embodiments, intermediate is about 0.7.

[086] In some embodiments, selecting fragments comprises selecting fragments that meet both the indel and identity criteria. In some embodiments, at least one fragment is selected. In some embodiments, a plurality of fragments is selected. In some embodiments, a sufficient number of fragments are selected to encode the information.
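The combined selection step can be sketched as follows, using the exemplary thresholds recited herein (indel probability below 0.05 at every nucleotide, average ortholog identity between 0.55 and 0.75); the function name and data layout are illustrative assumptions:

```python
def select_fragments(fragments, indel_probs, identity_avgs,
                     indel_threshold=0.05, lower=0.55, upper=0.75):
    """Keep fragments whose every nucleotide has an indel probability below
    the threshold AND whose average ortholog identity is intermediate
    (above the lower threshold, below the upper threshold)."""
    selected = []
    for frag, indels, ident in zip(fragments, indel_probs, identity_avgs):
        if all(p < indel_threshold for p in indels) and lower < ident < upper:
            selected.append(frag)
    return selected
```

In practice, enough fragments would be retained to encode the full message.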

[087] In some embodiments, the information is a message. In some embodiments, the information is text. In some embodiments, the information is encoded. In some embodiments, the information is binary information. In some embodiments, the information is transformed into bits. In some embodiments, a code is used to convert the information into bits. In some embodiments, the code is ASCII encoding. In some embodiments, 1 character of the information is converted into 1 bit. In some embodiments, the information is encoded into bits. In some embodiments, two bits are encoded into a single nucleotide. For example, 00=A, 01=T, 10=C and 11=G. In some embodiments, the encoding is with an error correcting code. In some embodiments, the error correcting code is a Reed-Solomon code. In some embodiments, the Reed-Solomon code is a shortened Reed-Solomon code.

[088] In some embodiments, the mutations are not detrimental to the health of the cell. In some embodiments, detrimental is substantially detrimental. In some embodiments, detrimental is measurably detrimental. In some embodiments, the health of the cell is survival of the cell. In some embodiments, health of the cell is dividing time of the cell. In some embodiments, health of the cell is doubling time of the cell. In some embodiments, health of the cell is robustness of the cell. In some embodiments, health of the cell is the ability of the cell to compete with other cells in the environment. Cell health, doubling time and survival can be measured by any method known in the art. These include measuring OD of the cells in culture, counting cells, and staining cells with a viability dye, to name but a few.
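The example bit-to-nucleotide mapping of paragraph [087] (00=A, 01=T, 10=C, 11=G), applied to 8-bit ASCII characters, can be sketched as follows; the error correcting layer is omitted for brevity and the function names are illustrative:

```python
# Hypothetical mapping of bit pairs to nucleotides, per the example above.
BIT_PAIR_TO_NT = {"00": "A", "01": "T", "10": "C", "11": "G"}
NT_TO_BIT_PAIR = {v: k for k, v in BIT_PAIR_TO_NT.items()}

def text_to_nucleotides(text):
    """ASCII characters -> 8-bit strings -> 2 bits per nucleotide."""
    bits = "".join(f"{ord(c):08b}" for c in text)
    return "".join(BIT_PAIR_TO_NT[bits[i:i + 2]]
                   for i in range(0, len(bits), 2))

def nucleotides_to_text(nts):
    """Inverse mapping: nucleotides -> bits -> ASCII characters."""
    bits = "".join(NT_TO_BIT_PAIR[nt] for nt in nts)
    return "".join(chr(int(bits[i:i + 8], 2))
                   for i in range(0, len(bits), 8))
```

Each ASCII character thus occupies four nucleotides before any error correcting redundancy is added.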

[089] In some embodiments, a nondetrimental mutation is a mutation to a synonymous codon. In some embodiments, a nondetrimental mutation is a mutation that is found in a dictionary of allowable mutations. In some embodiments, the dictionary is all synonymous mutations. In some embodiments, a nondetrimental mutation is a mutation to a codon that appears in at least a predetermined threshold percentage of orthologs. In some embodiments, the dictionary is the mutations found in orthologs at a frequency of at least a predetermined threshold. In some embodiments, the ortholog threshold is 0.00001, 0.00005, 0.0001, 0.0005, 0.001, 0.005, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, or 0.1. Each possibility represents a separate embodiment of the invention. In some embodiments, the probability is a percent probability. In some embodiments, the ortholog threshold is 0.05. In some embodiments, the ortholog threshold is 5%. Thus, for example, if a mutation is found in at least 5% of orthologs it is considered a nondetrimental mutation and so may be made as part of the encoding of the information.
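The ortholog-frequency dictionary can be sketched for a single codon position as follows; the 5% default mirrors the exemplary threshold above, and the function name is an illustrative assumption:

```python
from collections import Counter

def allowed_codons(position_codons_in_orthologs, threshold=0.05):
    """Build the 'dictionary' of allowable codons for one position: the
    codons observed at this position in at least `threshold` (fraction)
    of the orthologs."""
    counts = Counter(position_codons_in_orthologs)
    total = len(position_codons_in_orthologs)
    return {codon for codon, n in counts.items() if n / total >= threshold}
```

A mutation to any codon in the returned set would then be treated as nondetrimental for encoding purposes.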

[090] In some embodiments, inducing genetic mutation comprises optimizing for the cell the codon usage bias (CUB). In some embodiments, the CUB is calculated by the codon adaptation index (CAI). In some embodiments, the optimizing is within the fragment. In some embodiments, the optimizing is for the genomic region. In some embodiments, the optimizing is for the genome. In some embodiments, the CUB is the cell's CUB. In some embodiments, the CUB is the organism's CUB. In some embodiments, the CAI is the cell's CAI. In some embodiments, the CAI is the organism's CAI. In some embodiments, CUB optimization is by CAI. In some embodiments, an optimized CAI is a CAI with the smallest change from the unmutated fragment sequence. Optimization of CAI when encoding greatly improves the method of the invention as it optimizes organism survival and decreases loss of message information. This is similar to the effect produced by the use of a dictionary of disallowed codon changes. Together, both of these elements help to ensure organism survival (decreasing organism loss), decrease message loss/mutation and generally improve the fidelity of the message and allow for longer storage times.

[091] These differences are quantified by various “codon usage bias (CUB) indices” which represent the selective pressure acting on synonymous codons and are therefore correlated with translation efficiency. The codons are scored independently and given 3 different measurements.

[092] CAI: Codon Adaptation Index: This measurement is the most broadly used CUB analysis tool - ideally, a set of highly expressed ORFs is composed and is assumed to have higher selective pressure for translation efficiency due to their expression levels.

[093] For each group of synonymous codons, the frequency of each codon (f_i) is normalized by the highest codon frequency in that group, computed over the entire reference set:

w_i = f_i / max_j(f_j)

where the maximum is taken over all codons j in the same synonymous group.
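A sketch of the CAI computation: each codon's weight is its frequency in the reference set divided by the highest frequency among its synonymous codons, and the CAI of a sequence is commonly taken as the geometric mean of those weights. Names and the synonym-group layout are illustrative, and zero-frequency codons would need smoothing in practice:

```python
import math
from collections import Counter

def relative_adaptiveness(reference_codons, synonym_groups):
    """w_i = f_i / max f_j within each synonymous group, counted over the
    reference set of (ideally highly expressed) coding sequences."""
    counts = Counter(reference_codons)
    w = {}
    for group in synonym_groups:
        top = max(counts.get(c, 0) for c in group)
        for c in group:
            w[c] = counts.get(c, 0) / top if top else 0.0
    return w

def cai(sequence_codons, w):
    """CAI as the geometric mean of the weights of the sequence's codons.
    Codons with zero weight would need smoothing before taking logs."""
    logs = [math.log(w[c]) for c in sequence_codons]
    return math.exp(sum(logs) / len(logs))
```

Among the allowable codon substitutions, the encoder would then prefer the choice whose CAI deviates least from that of the unmutated fragment.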

[094] In some embodiments, the information is encoded into the mutations. In some embodiments, the mutations encode the information. In some embodiments, the information is encoded with an error correcting code. In some embodiments, the error correcting code is a Reed-Solomon (RS) code. Error correcting codes are well known in the art and any such code may be employed. In some embodiments, the information is encoded using a modulation algorithm. Modulation algorithms are well known in the art and described hereinbelow. Any such algorithm may be employed.
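A full Reed-Solomon implementation is too long to sketch here, so the classic Hamming(7,4) code below illustrates the same principle: redundancy is added at write time so that a substitution can be corrected on readout. This is an illustrative substitute, not the RS code of the invention:

```python
def hamming_encode(d):
    """Encode 4 data bits as a 7-bit Hamming(7,4) codeword
    [p1, p2, d1, p3, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """Recover the 4 data bits, correcting any single flipped bit."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # nonzero syndrome = error position
    if syndrome:
        c[syndrome - 1] ^= 1         # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]
```

Reed-Solomon codes work over larger symbol alphabets and correct bursts of symbol errors, which is why they suit this application better than the toy code shown here.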

[095] In some embodiments, the genetic mutations also include information of an identifier. In some embodiments, the method further comprises producing genetic mutations convertible to an identifier. In some embodiments, the method further comprises encoding an identifier into the fragment. In some embodiments, the identifier is a numeric identifier. In some embodiments, the identifier is an ordinal identifier. In some embodiments, the identifier identifies the order in which the information was stored in the fragments. In some embodiments, the fragments are a plurality of fragments. In some embodiments, the mutations convertible to the information and the mutations convertible to the identifier are encoded separately. In some embodiments, the information and the identifier are encoded separately. In some embodiments, each identifier is encoded separately. In some embodiments, all identifiers are encoded together. In some embodiments, the identifier is encoded with greater fidelity than the information. In some embodiments, the identifier is encoded with a lower coding rate than the information. In some embodiments, the identifier is encoded with a higher coding rate than the information. It will be understood by a skilled artisan that the identifiers and the message are each separately protected by their own error correcting code. That is, an additional RS code is used to encode the identifiers. This has the advantage of allowing the identifiers to be decoded successfully even if the message cannot be (e.g., due to mutation or indels). The use of separate codes greatly improves the method and ensures proper ordering as well as greater messaging accuracy.

[096] In some embodiments, the fragments are from a plurality of organisms. In some embodiments, the organisms are different organisms. It will be understood by a skilled artisan that the identifiers allow the ordering of the fragments by a reader into the order in which they were encoded. In some embodiments, the identifier is encoded in the same manner as the information. In some embodiments, the identifier is encoded by a different code than the information. In some embodiments, the identifiers and information have different coding matrixes. In some embodiments, the identifier is incorporated into the information. In some embodiments, the identifier is encoded with an error correcting code. In some embodiments, a first codon of the fragment is mutated to include information of the identifier. In some embodiments, a second codon of the fragment is mutated to include information of the identifier. In some embodiments, a first or second codon of the fragment is mutated to include information of the identifier. In some embodiments, the information of the identifier is encoded into the 5' end of the fragment. In some embodiments, the 5' end is within the first 1, 2, 3, 4, 5, 6, 7, 8, 9 or 10 codons. Each possibility represents a separate embodiment of the invention. In some embodiments, the identifier comprises a coding rate of at most 0.5. It will be understood that such a coding rate is intended to provide maximum protection for the identifier and a lost identifier will result in complete loss of the fragment of the message it was identifying. In some embodiments, the identifier comprises 6 bits. In some embodiments, the coding rate is 2 bits per nucleotide.

[097] In some embodiments, the stored information retains a high fidelity. In some embodiments, high is at least 50, 55, 60, 65, 70, 75, 80, 85, 90, 92, 95, 97, 99 or 100% identity. Each possibility represents a separate embodiment of the invention. In some embodiments, high is at least 90%. In some embodiments, the fidelity is maintained for at least 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 150, 200, 250, 300, 350, 400, 450, 500, 600, 700, 800, 900 or 1000 generations. Each possibility represents a separate embodiment of the invention. In some embodiments, a generation is a cell division. In some embodiments, a generation is a cell population doubling. In some embodiments, a generation is an offspring of the cell. In some embodiments, a generation is an offspring of the organism. In some embodiments, a generation is a generation of the mutated cell. In some embodiments, a generation is a generation of the mutated organism. In some embodiments, the fidelity is maintained for at least 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 15, 17, 20, 22, 25, 27, 30, 35, 40, 45 or 50 years. Each possibility represents a separate embodiment of the invention. In some embodiments, fidelity is maintained for at least 1 year. In some embodiments, fidelity is maintained for at least 5 years. In some embodiments, fidelity is maintained for at least 10 years.

[098] In some embodiments, the method further comprises reading the stored information. In some embodiments, the reading is by a different entity than the one that performed the storing. In some embodiments, the reading is by an entity that does not know the code. In some embodiments, the reading is by an entity that knows the code. In some embodiments, the reading is by an entity possessing the passcode. In some embodiments, the reading is by an entity that knows the codeword. In some embodiments, the reading is by an entity that knows the keyword. In some embodiments, the reading is by an entity that does not know the keyword/passcode/codeword. In some embodiments, the reading is by an entity that knows the locations of the fragments. In some embodiments, the reading is by an entity that does not know the location of the fragments.

[099] In some embodiments, the reading comprises receiving by a reader sequences of genomic sequences of the genetically mutated cell or a descendant of the genetically mutated cell. In some embodiments, the reading comprises receiving by a reader sequences of coding regions of the genetically mutated cell or a descendant of the genetically mutated cell. In some embodiments, the sequences are reads of whole genome sequencing. In some embodiments, the sequences are a genome of the cell. The reader must receive the stored information in the form of genomic sequences in order to read out the information. In some embodiments, the method comprises identifying the fragments comprising the stored information. In some embodiments, the method comprises correcting any indels within the fragments. In some embodiments, the reading comprises correcting any indels within the fragments. In some embodiments, the method comprises correcting any point mutation within the fragments. In some embodiments, the reading comprises correcting any point mutation within the fragments. In some embodiments, the indels are indels that arose since the inducing. In some embodiments, the indels are indels that arose since the storing. In some embodiments, the mutations are mutations that arose since the inducing. In some embodiments, the mutations are mutations that arose since the storing. In some embodiments, the reading further comprises extracting the information from the induced genetic mutations. In some embodiments, the reading further comprises extracting the information from the fragments. In some embodiments, the extracting is by means of a decoder. In some embodiments, the correcting is by means of a decoder. In some embodiments, the decoder is a decoder of the error correcting code. In some embodiments, the correcting comprises employing a RS decoder. In some embodiments, the reading is without knowing the location of the fragments. In some embodiments, the reading is with knowledge of the location of the fragments. In some embodiments, the decoder has only the coding matrix. In some embodiments, the coding matrix is the inner coding matrix.

[0100] In some embodiments, identifying fragments comprises comparison to a native genome of the cell. In some embodiments, identifying fragments comprises comparison to a non-modified genome of the cell. In some embodiments, the genome is of a cell of the same type or species as the modified cell. In some embodiments, the fragment locations are known to the reader. In some embodiments, identifying comprises locating the fragments within the received sequences.

[0101] In some embodiments, the reading further comprises reading an identifier. In some embodiments, an identifier is read in a plurality of fragments. In some embodiments, an identifier is read in all fragments. In some embodiments, the reading of an identifier is performed between the identifying of fragments and the correcting. In some embodiments, reading the fragments comprises correcting an indel, point mutation or both. In some embodiments, identifiers are corrected by means of a decoder. In some embodiments, the reader comprises the coding matrix of the identifiers. In some embodiments, the identifiers and information have different coding matrixes. In some embodiments, the identifier is read separately from the stored information. In some embodiments, the identifier is decoded separately from the information. In some embodiments, the reading comprises maximum likelihood decoding. In some embodiments, maximum likelihood decoding ensures all ordinal numbers are represented in the identifiers. In some embodiments, all ordinal numbers are all numbers up to the total number of identifiers. In some embodiments, all ordinal numbers are all numbers up to the total number of fragments. For example, if there are 20 fragments each with an identifier, then the fragments must be assigned the numbers 1 through 20 (or the ordinals 1st through 20th). The maximum likelihood decoding ensures that each number is represented once in the identifiers. This is done by selecting the identifier most likely to be each ordinal number. This can include starting with the ordinal number with the lowest probability of being encoded by a particular identifier and then progressing through the ordinals by increasing probability. The probability is determined by the error correcting code decoder. The use of maximum likelihood decoding greatly improves the overall performance of the method as it guarantees an ordering of the fragments.
Coupled with the separate coding of the identifiers to improve fidelity, these two aspects greatly improve the overall quality of the received message and allow for longer storage times than methods previously known in the art.

[0102] In some embodiments, the reading further comprises ordering the fragments based on the read identifiers. In some embodiments, the reading further comprises producing an order of the fragments based on the read identifiers. In some embodiments, extracting is in the order of the fragments. In some embodiments, the method further comprises combining all extracted information into the stored information. In some embodiments, the combining is combining in order. In some embodiments, the method further comprises removing the identifiers from the fragments. In some embodiments, the method further comprises removing the identifiers from the information. In some embodiments, the combining comprises concatenating the fragments in order. In some embodiments, the order is established by the read identifiers.

[0103] In some embodiments, the identifiers are read/decoded separately. In some embodiments, the identifiers are read/decoded together. In some embodiments, the identifiers are read/decoded together for the maximum likelihood of all identifiers, such that there must be at least one identifier for each numerical value of the fragments. In some embodiments, the decoding of the identifiers is a maximum likelihood decoding. In some embodiments, the likelihood is determined by the error correcting code decoder.
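One simple greedy realization of the maximum likelihood assignment of identifiers to ordinal numbers is sketched below. The probability matrix is assumed to come from the error correcting code decoder, and this particular commit-most-confident-first heuristic is an illustrative choice rather than the decoder of the invention:

```python
def assign_ordinals(probs):
    """probs[i][k]: decoder's probability that fragment i carries ordinal k.
    Commit the most confident (fragment, ordinal) pairs first, so that every
    ordinal 0..n-1 is assigned to exactly one fragment."""
    n = len(probs)
    pairs = sorted(((probs[i][k], i, k) for i in range(n) for k in range(n)),
                   reverse=True)
    order = [None] * n        # order[k] = fragment index assigned ordinal k
    used_fragments = set()
    for p, i, k in pairs:
        if i not in used_fragments and order[k] is None:
            order[k] = i
            used_fragments.add(i)
    return order
```

Unlike a naive per-fragment argmax, this joint assignment cannot give the same ordinal to two fragments, which is the guarantee the decoding described above relies on.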

[0104] By another aspect, there is provided a system comprising at least one hardware processor; and a non-transitory computer-readable storage medium having stored thereon program code, the program code executable by the at least one hardware processor to perform a method of the invention.

[0105] By another aspect, there is provided a computer program product comprising a non-transitory computer-readable storage medium having program code embodied therewith, the program code executable by at least one hardware processor to perform a method of the invention.

[0106] The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, a mechanically encoded device such as punch-cards or raised structures in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

[0107] Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

[0108] Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

[0109] These computer readable program instructions may be provided to a processor of a general-purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

[0110] As used herein, the term "about" when combined with a value refers to plus and minus 10% of the reference value. For example, a length of about 1000 nanometers (nm) refers to a length of 1000 nm ± 100 nm.

[0111] It is noted that as used herein and in the appended claims, the singular forms "a," "an," and "the" include plural referents unless the context clearly dictates otherwise. Thus, for example, reference to "a polynucleotide" includes a plurality of such polynucleotides and reference to "the polypeptide" includes reference to one or more polypeptides and equivalents thereof known to those skilled in the art, and so forth. It is further noted that the claims may be drafted to exclude any optional element. As such, this statement is intended to serve as antecedent basis for use of such exclusive terminology as "solely," "only" and the like in connection with the recitation of claim elements, or use of a "negative" limitation.

[0112] In those instances where a convention analogous to "at least one of A, B, and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" will be understood to include the possibilities of "A" or "B" or "A and B".

[0113] It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable sub-combination. All combinations of the embodiments pertaining to the invention are specifically embraced by the present invention and are disclosed herein just as if each and every combination was individually and explicitly disclosed. In addition, all sub-combinations of the various embodiments and elements thereof are also specifically embraced by the present invention and are disclosed herein just as if each and every such sub-combination was individually and explicitly disclosed herein.

[0114] Additional objects, advantages, and novel features of the present invention will become apparent to one ordinarily skilled in the art upon examination of the following examples, which are not intended to be limiting. Additionally, each of the various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below finds experimental support in the following examples.

[0115] Various embodiments and aspects of the present invention as delineated hereinabove and as claimed in the claims section below find experimental support in the following examples.

EXAMPLES

[0116] Generally, the nomenclature used herein and the laboratory procedures utilized in the present invention include molecular, biochemical, microbiological and recombinant DNA techniques. Such techniques are thoroughly explained in the literature. See, for example, "Molecular Cloning: A laboratory Manual" Sambrook et al., (1989); "Current Protocols in Molecular Biology" Volumes I-III Ausubel, R. M., ed. (1994); Ausubel et al., "Current Protocols in Molecular Biology", John Wiley and Sons, Baltimore, Maryland (1989); Perbal, "A Practical Guide to Molecular Cloning", John Wiley & Sons, New York (1988); Watson et al., "Recombinant DNA", Scientific American Books, New York; Birren et al. (eds) "Genome Analysis: A Laboratory Manual Series", Vols. 1-4, Cold Spring Harbor Laboratory Press, New York (1998); methodologies as set forth in U.S. Pat. Nos. 4,666,828; 4,683,202; 4,801,531; 5,192,659 and 5,272,057; "Cell Biology: A Laboratory Handbook", Volumes I-III Cellis, J. E., ed. (1994); "Culture of Animal Cells - A Manual of Basic Technique" by Freshney, Wiley-Liss, N. Y. (1994), Third Edition; "Current Protocols in Immunology" Volumes I-III Coligan J. E., ed. (1994); Stites et al. (eds), "Basic and Clinical Immunology" (8th Edition), Appleton & Lange, Norwalk, CT (1994); Mishell and Shiigi (eds), "Strategies for Protein Purification and Characterization - A Laboratory Course Manual" CSHL Press (1996); all of which are incorporated by reference. Other general references are provided throughout this document.

Example 1: The Noisy Shuffling Sampling Channel

[0117] The channel model discussed herein was initially presented in Shomorony and Heckel (referenced hereinabove) and is referred to as the Noisy Shuffling Sampling Channel. Originally, Shomorony and Heckel applied the Noisy Shuffling Sampling Channel to the case of artificial DNA strand storage in vitro. Hereinbelow it is applied to actual in vivo living organisms, which affects the actual parameters of the model.

[0118] Shomorony and Heckel considered the asymptotic regime of infinitely many microorganisms (DNA strands, M → ∞ in their notation) and infinitely large DNA sequence length (DNA strand length, L → ∞ in their notation).

[0119] The channel model is as follows. Data is written on M synthesized microorganisms (or DNA sequences) by the transmitter, where each synthesized microorganism is represented as a sequence of bits. The number of bits written on each microorganism is L. After a certain storage time, the synthesized microorganisms are read by the receiver, who wishes to reconstruct the original information stored onto all microorganisms. The biological medium, which is referred to as the Noisy Shuffling Sampling Channel, may introduce errors in the time between writing and reading. Notice that both channel inputs and output are over the binary alphabet (Fig. 1).

[0120] First, a portion of the microorganisms may not survive and thus is erased, depicted in Figure 1 as microorganism erasure. This may happen due to fitness degradation of some of the microorganisms, caused by the code of the invention, which changed their original content. Second, no particular spatial ordering of the microorganisms is kept over time. Therefore, the surviving microorganisms are read in random order with respect to their original order at the transmitter's hands. This phenomenon is captured in Figure 1 as a permutation. Lastly, the channel imposes corruption on specific bits, meaning that some individual bits may be substituted (flipped) between the two parties. This may happen due to mutations at the single nucleotide level, depicted as nucleotide substitution in Figure 1.

[0121] The data stored over the channel is a binary vector of m = M · L bits, divided into M binary vectors of length L each, which we call message blocks. Each message block is stored over a different microorganism. After some storage time, the receiver reads the surviving (non-erased) message blocks. Given that N microorganisms have survived, we end up with a binary vector of n = N · L bits.

[0122] The channel input is x = X_0 ... X_{M−1}, which is a sequence of m = |x| bits (denoted by lowercase letters x_i), or a sequence of M message blocks (denoted by capital letters X_i), each defined as a binary vector of L bits, where m = M · L. The message block X_i corresponds to the data written onto microorganism number i, i ∈ {0, ..., M − 1}. The channel input alphabet is the binary alphabet {0,1}, meaning that x_i ∈ {0,1} and X_i ∈ {0,1}^L.

[0123] The channel output is y = y_0 ... y_{n−1} = Y_0 ... Y_{N−1}, which is a sequence of n = |y| bits (denoted by lowercase letters y_i), or alternatively a sequence of N message blocks (denoted by capital letters Y_i), each of length L bits. The Noisy Shuffling Sampling Channel assumes that erasures occur at the entire message block level only (for instance, caused by the extinction of a whole microorganism); therefore, also n = N · L. The channel output alphabet is the binary alphabet {0,1}, meaning that y_i ∈ {0,1} and Y_i ∈ {0,1}^L.

[0124] Next, a mathematical formulation of the channel model is presented. The first phenomenon captured by the model, as seen in Figure 1, is a constant erasure probability equal to P_e, meaning that each message block X_i is erased with probability P_e and survives with probability 1 − P_e, independently of all other message blocks. N is the (random) number of surviving (non-erased) message blocks, defined as the number of output message blocks. Clearly, N is a sum of M i.i.d. Bernoulli random variables, each with parameter 1 − P_e, and thus is a binomial random variable N ~ B(M, 1 − P_e).

[0125] The second phenomenon captured by the model, as seen in Figure 1, is a permutation of the surviving message blocks. The microorganisms are numbered from 0 to M − 1, where the i-th message block is stored over the i-th microorganism. Given that the number of surviving microorganisms is N, the permutation vector π is defined as a random injective function from the set {0, ..., N − 1} to the set {0, ..., M − 1}, chosen uniformly at random from the set Π_N, which is the set of all injective functions from the set {0, ..., N − 1} to the set {0, ..., M − 1} (alternatively, Π_N is the set of all N-permutations of the set {0, ..., M − 1}). For instance, if M = 3, N = 2, then Π_N = {{(0 → 0), (1 → 1)}, {(0 → 0), (1 → 2)}, {(0 → 1), (1 → 0)}, {(0 → 1), (1 → 2)}, {(0 → 2), (1 → 0)}, {(0 → 2), (1 → 1)}}, and π is one of Π_N's members. The number of N-permutations of {0, ..., M − 1} is |Π_N| = M!/(M − N)!, so each permutation is chosen with probability (M − N)!/M! (a discrete probability distribution).
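The count of N-permutations in the example above can be checked directly. The sketch below (Python, standard library only; the function name is illustrative) enumerates all injective maps from {0, ..., N−1} to {0, ..., M−1} and verifies that their number is M!/(M−N)!:

```python
from itertools import permutations
from math import factorial

def n_permutations(M, N):
    # All injective maps pi: {0..N-1} -> {0..M-1}, each represented
    # as the tuple of images (pi(0), ..., pi(N-1)).
    return list(permutations(range(M), N))

# The M = 3, N = 2 example from the text: six possible permutations.
perms = n_permutations(3, 2)
print(len(perms))  # -> 6
assert len(perms) == factorial(3) // factorial(3 - 2)
```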

[0126] The last phenomenon captured by the model, as seen in Figure 1, is a constant bit substitution probability equal to P_s, meaning that each output bit y_i has suffered a substitution with probability P_s, independently of all other bits. Let s = S_0 ... S_{N−1} be the random binary vector corresponding to the individual bit substitution pattern, whose bits are distributed s_i ~ Ber(P_s) in an i.i.d. manner.

[0127] Finally, one can write Y_i = X_{π(i)} ⊕ S_i, where ⊕ denotes the element-wise XOR operation, meaning that each channel output message block is a corrupted version of one of the input message blocks, if the latter survived.
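A minimal end-to-end sketch of the three channel effects (erasure, permutation, substitution) is given below in Python; the parameter names follow the model above, while the function name itself is illustrative:

```python
import random

def noisy_shuffling_sampling(blocks, p_e, p_s, rng):
    """Apply the three channel effects to a list of M message blocks.

    blocks: list of M lists of 0/1 bits (each of length L).
    Returns the N surviving, permuted, bit-flipped blocks.
    """
    # 1) Erasure: each block survives independently with probability 1 - p_e.
    survivors = [b for b in blocks if rng.random() >= p_e]
    # 2) Permutation: the spatial order of the survivors is lost.
    rng.shuffle(survivors)
    # 3) Substitution: each bit flips independently with probability p_s.
    return [[bit ^ (rng.random() < p_s) for bit in b] for b in survivors]

rng = random.Random(0)
M, L = 6, 8
blocks = [[rng.randint(0, 1) for _ in range(L)] for _ in range(M)]
out = noisy_shuffling_sampling(blocks, p_e=0.2, p_s=0.05, rng=rng)
assert len(out) <= M and all(len(b) == L for b in out)
```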

[0128] Before quoting the main result of Shomorony and Heckel, namely the Noisy Shuffling Sampling Channel capacity in the infinite block-length regime of M, L → ∞, the notion of channel capacity is quantified for the present storage scenario.

[0129] An (M, L) microorganisms storage code C is defined as a set of |C| codewords of the form x = X_0 ... X_{M−1}, where each codeword is a binary vector of length M · L bits, or equivalently M message blocks of length L bits each, together with an encoding function from the possible messages to their corresponding channel inputs. A decoding function is a mapping which accepts the channel output and assigns it one of the possible messages. The corresponding coding rate is R = log₂|C| / (M · L), which is the total number of information bits stored relative to the available channel resources (for instance, the total number of codons).

[0130] Reliable storage is defined as a vanishing decoding error probability as the codewords' length grows to infinity, M · L → ∞. Reliable storage is possible whenever R < C*, where C* is the classic channel capacity defined as C* = sup_{p(x)} I(x; y) / (M · L), and I(x; y) is the mutual information between the channel input and output. The supremum is taken over all possible input distributions. In Shomorony and Heckel, the authors derived C* for the Noisy Shuffling Sampling Channel in the asymptotic regime of M, L → ∞. Their result is quoted in the following theorem.

[0131] Theorem 1. The capacity of the noisy shuffling-sampling channel is C* = (1 − P_e)(1 − H(P_s) − 1/β), where β = L / log₂ M, as long as 1 − H(P_s) − 1/β > 0. In particular, if β < 1 / (1 − H(P_s)), then the capacity is 0. Here, H(p) = −p log₂ p − (1 − p) log₂(1 − p) is the binary entropy function.

[0132] This result gives an upper bound on the achievable coding rate of any coding scheme over the Noisy Shuffling Sampling Channel model in the asymptotic regime, and thus also an upper bound on the achievable coding rate of any (M, L) microorganisms storage code in the non-asymptotic regime of finite M and L.
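Assuming Theorem 1 takes the form C* = (1 − P_e)(1 − H(P_s) − 1/β) with β = L/log₂ M, as in Shomorony and Heckel's result (the displayed equation is not reproduced legibly in this text), the bound can be evaluated numerically; function names here are illustrative:

```python
import math

def binary_entropy(p):
    # H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0.
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def nss_capacity_upper(p_e, p_s, M, L):
    # Assumed form of Theorem 1; capacity is zero when beta is too small.
    beta = L / math.log2(M)
    c = 1.0 - binary_entropy(p_s) - 1.0 / beta
    return (1.0 - p_e) * c if c > 0 else 0.0

# Noiseless case with the parameters of Example 6 (M = 180, L = 144):
# the rate loss is just log2(M)/L, spent on resolving the permutation.
print(round(nss_capacity_upper(0.0, 0.0, 180, 144), 3))  # -> 0.948
```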

Example 2: Writing process

[0133] A chunk of k information bits, generated by an information source, is stored over the living organism's DNA by the transmitter. The storage will change the DNA contents of the organism, where M different synthesized DNA sequences will be written onto M microorganisms, altogether carrying the information message.

[0134] The Encoder, discussed hereinbelow, encodes the information bits into M message blocks, binary vectors of length L bits each, using some error correction code. The overall encoder output is a binary vector of m = M · L bits. Next, the Modulator changes the DNA sequence of the i-th microorganism (i ∈ {0, ..., M − 1}) in a specific area, called a site, that carries the information of the message block created by the Encoder. A site S is defined as a sequence (string) of codons at a given and fixed location among all the microorganisms' DNA sequences, as depicted in the box in Figure 2. It should be emphasized that one starts with a single microorganism with a given DNA sequence and creates M different versions of it by changing its content within the site S. The overall modulator output is M synthesized sites.
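As an illustration of the Modulator step, the sketch below maps two information bits onto the third (wobble) position of each codon in a site. This particular bits-to-nucleotide mapping is hypothetical and is used only to make the site-rewriting idea concrete; it assumes every codon in the site tolerates all four third-position nucleotides:

```python
# Hypothetical modulator: 2 bits per codon, written into the third codon
# position. Assumes four-fold degenerate codons (e.g., Leu CTN, Gly GGN).
BITS_TO_BASE = {(0, 0): 'T', (0, 1): 'C', (1, 0): 'A', (1, 1): 'G'}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def modulate(site_codons, bits):
    # Rewrite the wobble base of each codon according to two message bits.
    assert len(bits) == 2 * len(site_codons)
    out = []
    for i, codon in enumerate(site_codons):
        pair = (bits[2 * i], bits[2 * i + 1])
        out.append(codon[:2] + BITS_TO_BASE[pair])
    return out

def demodulate(site_codons):
    # Recover the bits from the wobble positions.
    bits = []
    for codon in site_codons:
        bits.extend(BASE_TO_BITS[codon[2]])
    return bits

site = ['CTT', 'GGT', 'GTT']   # all four-fold degenerate codons
written = modulate(site, [1, 0, 0, 1, 1, 1])
assert demodulate(written) == [1, 0, 0, 1, 1, 1]
```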

[0135] Therefore, the writing (or synthesis) process involves overwriting M parallel DNA sequences of a living organism, starting at a given and fixed position among all of them, with a length of 3|S| nucleotides. Only specific nucleotides (underlined ones in Figure 2) are changed at each site compared to the original sequence and bear the information stored over the living organism in a way that will be explained hereinbelow.

Example 3: Living organism channel

[0136] The synthesized microorganisms carry the programmed information for some time. The biological medium, which is referred to as the Living Organism Channel, may introduce errors during the time between writing and reading according to evolution, as depicted in Figure 3.

[0137] The first biological phenomenon captured in the Living Organism Channel is that a portion of the microorganisms may not survive and thus are erased, depicted in Figure 3 as microorganism erasure. This may happen due to two main reasons, which are captured by the parameter P_e and a dictionary D (Fig. 3). The first reason is fitness degradation of some of the microorganisms, a consequence of the coding, which changed their original genetic content and may have overloaded the organism with too many changes that decreased the growth rate, or fitness, of the microorganism. This behavior is modeled by a globally constant microorganism erasure probability equal to P_e; a method for estimating this erasure probability is presented hereinbelow. The second reason for a microorganism erasure is disallowed changes of codons, defined as an accidental change of a codon crucial to the evolution of the microorganism. The simplest example may be a non-synonymous codon change, causing a protein change. From now on, it is assumed that nature holds, for each organism, a dictionary D of such allowed changes, which is a mapping function from each codon location in the site S in the organism's DNA to a set of allowed codons. The dictionary D dictates the constraint that each specific codon in each particular location in the DNA may be changed only into a particular set of codons. Any other change (i.e., a codon change into another codon which is not a member of the set mapped by D) will cause the microorganism's death and, finally, an erasure of the relevant DNA sequence. Hereinbelow are provided an estimate of the dictionary D and a method of designing a code not violating those constraints. It is also assumed that only the transmitter holds D, but not the receiver.
The reason for such an assumption is that in a practical storage application, the encoder has computational resources and operates offline on a computer, while the decoder may be implemented in vivo over some organism and operate in real time. Therefore, an appropriate coding strategy is needed.

[0138] The second phenomenon captured by the Living Organism Channel is that no particular spatial ordering of the microorganisms is kept over time. Therefore, the surviving microorganisms are read in random order with respect to their original order at the transmitting party's hands. This phenomenon is captured in Figure 3 as permutation and is caused mainly by current reading technologies, in which a random DNA sequence is read each time from all available sequences.

[0139] Lastly, the channel imposes corruption on specific nucleotides, meaning that some individual nucleotides may be substituted between encoding and decoding. This may happen due to mutations at the single nucleotide level, depicted as nucleotide substitution in Figure 3. It is assumed that each nucleotide may substitute with probability P_s, meaning that the original nucleotide may substitute into any one of the three other possible nucleotides with equal probability, and overall remains unchanged with probability 1 − P_s.

Hereinbelow a method for estimating this substitution probability is provided.

[0140] The Living Organism Channel may be presented as the Noisy Shuffling Sampling Channel, preceded by the Random Subsets Channel, which is defined hereinbelow and constrains the synthesized microorganisms to follow the allowed codon changes according to the dictionary D (Fig. 4). P_e and P_s are the parameters of the Noisy Shuffling Sampling Channel as discussed hereinabove; π is the random permutation and is not an actual parameter of the model that can be estimated a priori.

Example 4: Reading process

[0141] After a certain period, the receiver wishes to read the stored information on the synthesized microorganisms. The reading process, depicted in Figure 5, starts with reading the microorganisms' sites, which are known to both the transmitter and the receiver. As illustrated in Figure 5, some nucleotide substitutions may occur (depicted as underlined letters), a permutation occurs (for example, microorganism number 0 upon writing is read as microorganism number N − 1), and some microorganisms may be erased, if N < M.

[0142] The Demodulator retranslates each read site into an encoded (and possibly corrupted) message block (a binary vector of length L) in a way that will be explained hereinbelow. Next, the decoder sorts the permuted message blocks, applies error correction, and reconstructs the k information bits.

[0143] The overall system is depicted in Figure 6. The “Pool” terminology is used to emphasize the loss of microorganisms' spatial ordering at the receiver.

Example 5: Finite Block-Length analysis of the Noisy Shuffling Sampling Channel

[0144] Next, the Finite Block-Length capacity of the Noisy Shuffling Sampling Channel is derived.

[0145] The goal here is to analyze the actual realistic case of the non-asymptotic regime, since current synthetic biology technologies limit both the number of microorganisms that can be synthesized in parallel (M) and the length of the synthesized nucleotide sequence in each microorganism (L) to the order of a few hundred. For this purpose, the finite block length (FBL) analysis will be used. It is briefly summarized hereinbelow.

[0146] Consider a communication channel with input x and output y, both over some alphabets, with some conditional probability distribution p(y|x). The following theorem gives a lower bound on the achievable coding rate of any coding scheme over this communication channel, under some allowed decoding error probability.

[0147] Theorem 2. For any distribution p(x) on the input alphabet, there exists a code C with |C| codewords and an average probability of error ε at most ε ≤ E[2^(−[i(x; y) − log₂((|C| − 1)/2)]⁺)], where [a]⁺ = max(a, 0).

Here, i(x; y) = log₂(p(y|x)/p(y)) is the information density between the channel input and output, which is a function of the random variables x, y. Therefore, i(x; y) is also a random variable by itself. Also, the expectation is over the joint distribution of x, y.

[0148] The above result (called the Dependence Testing (DT) bound) provides a lower bound on the codebook size |C|, which is also a lower bound on the achievable coding rate of an (M, L) microorganisms storage code C, defined by R = log₂|C| / (M · L).

[0149] The storage reliability in the DT bound is quantified by the average probability of a decoding error, indicating the average fraction of cases in which the decoder fails to reconstruct the original message. Here, the codewords' length is kept constant and does not grow asymptotically; therefore, unlike Shannon's original channel capacity theorem, the metric ε must be taken into account.

[0150] Next, Theorem 2 is applied to the Noisy Shuffling Sampling Channel, introduced hereinabove. The channel's input x is a binary vector of length m = M · L, consisting of M message blocks of length L bits, each one stored on a distinct microorganism, indexed 0 to M − 1. The N surviving microorganisms (after the erasures) are permuted randomly, modeled by a random injective function π from the set {0, ..., N − 1} to the set {0, ..., M − 1}. Lastly, each bit in the N surviving messages is randomly flipped in an i.i.d. manner with probability P_s. This results in the channel's output y, which is a binary vector of length n = N · L.

[0151] To apply Theorem 2, an expression for the information density i(x; y) must be derived. First, the conditional channel distribution p(y|x) is derived, from which a lower bound on the information density follows. Note that p(y|x) is a random variable which is a function of the random variables x, y.

[0152] Lemma 1. The conditional probability of the Noisy Shuffling Sampling Channel satisfies

p(y|x) = |Π_N|^(−1) · Σ_{π ∈ Π_N} P_s^(d_H(x_π, y)) · (1 − P_s)^(NL − d_H(x_π, y)),

where d_H(x_π, y) is the Hamming distance between x_π and y, and x_π is the vector obtained by permuting x by π.

Proof. Recall from above that Y_i = X_{π(i)} ⊕ S_i, where s = S_0 ... S_{N−1} is the random binary vector that corresponds to the individual bit substitution pattern and is a vector of NL Bernoulli random variables with parameter P_s. Then, given y, the conditional distribution satisfies the chain of equalities leading to the lemma, in which: (i) follows from the fact that y depends on x only through the surviving message blocks; (ii) follows from the law of total probability over π; (iii) results from the fact that p(y|x, π) depends only on the random substitution pattern vector s, whose entries are i.i.d. Also, given π, x can be divided into a set of M − N erased message blocks and a set of the remaining N non-erased message blocks; (iv) follows since no information is available from the first group. Next, π is uniformly distributed over Π_N (v). In (vi), the distribution of s is plugged in; (vii) follows since, by definition, d_H(x_π, y) equals the Hamming weight of s.

[0154] To calculate the DT bound, one needs an expression for the distribution of the random variable p(y|x), which is a function of the random Hamming distance d_H(x_π, y) for any π. Intuitively, assuming the channel chose the permutation π̂, the expression for p(y|x) is dominated by the term defined by the permutation π̂, since this permutation minimizes the Hamming distance. The following lemma makes this intuition more formal.

[0155] Lemma 2. The conditional distribution of the Noisy Shuffling Sampling Channel can be written as

p(y|x) = |Π_N|^(−1) · (P_s^T · (1 − P_s)^(NL − T) + A),

where T ~ B(NL, P_s) is a binomial random variable and A is some nonnegative random variable.

[0156] Proof. Let π̂ be the random permutation chosen by the channel; therefore p(y|x) = |Π_N|^(−1) · (P_s^(d_H(x_π̂, y)) · (1 − P_s)^(NL − d_H(x_π̂, y)) + A), where A is a nonnegative random variable collecting the remaining terms of Lemma 1. Define the function w_H(Z) = Σ_i 1{Z_i = 1}, where Z is a message block, as the sum of indicator functions over Z's elements; namely, w_H(Z) is the Hamming weight of Z. Now, one notices that d_H(x_π̂, y) = w_H(s) = Σ_i s_i ~ B(NL, P_s), where (i) follows from the definition of the Hamming weight, (ii) from the connection y = x_π̂ ⊕ s, and (iii) from the fact that the s_i are i.i.d. Bernoulli random variables and the sum of NL i.i.d. Bernoulli random variables is a binomial random variable.

[0157] Now one can explicitly derive the DT bound (Theorem 2) for the Noisy Shuffling Sampling Channel.

[0158] Lemma 3. The DT bound (Theorem 2) for the Noisy Shuffling Sampling Channel is given by

ε ≤ E_{N,T}[2^(−[NL + T · log₂ P_s + (NL − T) · log₂(1 − P_s) − log₂|Π_N| − log₂((2^k − 1)/2)]⁺)]

for codebook size |C| = 2^k.

[0159] Proof. Let |C| = 2^k be the size of the code; then the average error probability is bounded by Theorem 2 as in the lemma, where (i) follows from the dependency of the information density on the random variables N, T, and (ii) from Lemma 2 and the nonnegativity of the random variable A.

[0160] From now on, a code whose average error probability satisfies the DT bound (Lemma 3) is referred to as an (M, L, ε) microorganisms storage code C_FBL with |C_FBL| codewords and rate R_FBL = log₂|C_FBL| / (M · L).

[0161] The channel capacity of the Noisy Shuffling Sampling Channel in the non-asymptotic regime of M microorganisms of length L and m = M · L, which we denote by C*_FBL, is therefore guaranteed to be bounded by R*_FBL ≤ C*_FBL ≤ C*, where the upper bound is given by Theorem 1 and the lower bound by Lemma 3.

[0162] The main result is summarized in the following theorem.

[0163] Theorem 3. The channel capacity of the Noisy Shuffling Sampling Channel, in the Finite Block-Length regime of M microorganisms of length L each, such that m = M · L, denoted by C*_FBL, is bounded by

R*_FBL ≤ C*_FBL ≤ C*,

where C* is the asymptotic capacity of Theorem 1 and R*_FBL is the highest rate satisfying the DT bound of Lemma 3.

[0164] Proof. It was already shown that C*_FBL is upper bounded by Theorem 1 and lower bounded by Lemma 3. First, the actual value of C* was plugged in. Second, the highest possible rate from Lemma 3 was plugged in, which is the maximal value of k such that the right-hand side of the DT bound (Theorem 2) does not exceed ε.

Example 6: Bounds evaluation

[0165] Numerical simulations were performed to evaluate the FBL capacity of the channel. The parameters M = 180, L = 144, ε = 10⁻⁶ were fixed, and the parameters P_e, P_s were varied.
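The lower-bound evaluation can be sketched numerically. Since the exact expression of Lemma 3 is not reproduced legibly in this text, the sketch below assumes a DT-type bound of the form ε ≤ E_{N,T}[2^(−[NL + T·log₂P_s + (NL−T)·log₂(1−P_s) − log₂(M!/(M−N)!) − (k−1)]⁺)], with N ~ B(M, 1 − P_e) and T ~ B(NL, P_s), and searches for the largest message size k meeting ε = 10⁻⁶. Treat it as an illustration of the computation, not as the patent's exact bound:

```python
import math

M, L, EPS = 180, 144, 1e-6

def log2_binom_pmf(n, k, p):
    # log2 of the Binomial(n, p) pmf at k; requires 0 < p < 1.
    lc = math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
    return (lc + k * math.log(p) + (n - k) * math.log(1 - p)) / math.log(2)

def dt_error(k_bits, p_e, p_s):
    # Deterministic truncated sum of the assumed DT expectation over N, T.
    err = 0.0
    for N in range(M + 1):
        pN = 2.0 ** log2_binom_pmf(M, N, 1 - p_e)
        if pN < 1e-18 or N == 0:
            err += pN          # bound the negligible / degenerate terms by pN
            continue
        nl = N * L
        log_perm = (math.lgamma(M + 1) - math.lgamma(M - N + 1)) / math.log(2)
        mean, sd = nl * p_s, math.sqrt(nl * p_s * (1 - p_s))
        lo, hi = max(0, int(mean - 12 * sd)), min(nl, int(mean + 12 * sd))
        inner = 0.0
        for T in range(lo, hi + 1):
            pT = 2.0 ** log2_binom_pmf(nl, T, p_s)
            i_xy = (nl + T * math.log2(p_s)
                    + (nl - T) * math.log2(1 - p_s) - log_perm)
            inner += pT * 2.0 ** (-max(i_xy - (k_bits - 1.0), 0.0))
        err += pN * inner
    return err

# Coarse search for the largest k with estimated average error below EPS.
p_e, p_s = 0.1, 0.01
k = 0
while dt_error(k + 1024, p_e, p_s) < EPS:
    k += 1024
rate_lower = k / (M * L)
assert 0.0 < rate_lower < 1.0
```

The resulting rate_lower plays the role of the achievable rate R*_FBL for one (P_e, P_s) point; sweeping P_e and P_s over a grid reproduces the kind of surface shown in Figure 7.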

[0166] As can be seen in Figure 7, the upper bound is the classic capacity C* (the upper bound in equation (9)), and the lower bound is the achievable coding rate R*_FBL (the lower bound in equation (9)).

[0167] Next, the normalized degradation rate, defined as the relative gap between the lower and upper bound, was simulated. Intuitively, an erasure of a message causes more information loss than a single substitution. This is reflected in Figure 8: a growing erasure probability causes a more substantial degradation in the achievable rate compared to a growing substitution probability. Therefore, for high erasure probability, the actual optimal coding rate may be significantly worse than the one implied by Theorem 1. Moreover, there are ML possible locations for substitution errors, whereas there are only M possible erasures, as substitutions occur at a single nucleotide and erasures occur for whole microorganisms. This is another explanation for the mild gap between the two bounds along the substitutions axis, but not along the erasures axis: the actual codeword length for the erasures part of the entire error mechanism is much smaller than that of the substitutions; therefore, the degradation caused by growing erasure probability is much more significant.

Example 7: Random subsets channel

[0168] Motivated by the Living Organism Channel given hereinabove, a new communication channel called the Random Subsets Channel is provided and analyzed. The channel model assumes that nature holds a “dictionary” of allowed codon changes for each organism, which may be location dependent. In other words, in each codon in the DNA sequence of the organism, only specific codon changes are permitted by nature. The allowed changes are described by a dictionary that maps a particular location in the DNA into a set of allowed codons. Any change of a codon to a codon that is not one of the allowed changes would lead to the microorganism's death.

[0169] The simplest example of such a dictionary is the Amino Acid (AA) dictionary, allowing only synonymous (codon) changes. For instance, if the codon in the organism's DNA sequence is from the set L = {CTT, CTC, CTA, CTG}, which includes codons that translate into the AA "Leu", then the allowed changes in that codon are changes to any of the codons from the set L.
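The AA dictionary can be represented directly as a mapping from each codon to its set of allowed (synonymous) replacements; the sketch below includes only two illustrative synonymous groups, not the full genetic code table:

```python
# Two illustrative synonymous-codon groups (a subset of the genetic code):
# the four-fold degenerate Leu codons from the text, and the Gly codons.
SYNONYM_GROUPS = [
    {'CTT', 'CTC', 'CTA', 'CTG'},   # Leu (CTN)
    {'GGT', 'GGC', 'GGA', 'GGG'},   # Gly (GGN)
]

def allowed_changes(codon):
    # Returns the set of codons the given codon may be changed into.
    for group in SYNONYM_GROUPS:
        if codon in group:
            return group
    return {codon}   # codon not listed: conservatively allow no change

def is_allowed(old_codon, new_codon):
    return new_codon in allowed_changes(old_codon)

assert is_allowed('CTT', 'CTG')       # synonymous Leu change
assert not is_allowed('CTT', 'GGA')   # non-synonymous change: disallowed
```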

[0170] The Random Subsets Channel generalizes the AA preservation constraint in the following ways. First, both the receiver and the transmitter may not know the dictionary (constraints or subsets) in advance. Second, the constraints can be arbitrary, as explained in more detail hereinbelow.

[0171] The Random Subsets Channel: Consider the Random Subsets Channel, given in Figure 10, defined by a discrete finite input alphabet X. For example, in the DNA storage setting, X is the natural codon alphabet, the set of all |X| = 64 nucleotide triplets. Upon transmitting a vector x ∈ X^n over n channel usages, the channel output is the input vector x or ?, where ? represents a complete vector erasure. According to a probability distribution p(S), the channel, upon channel usage i, picks in an i.i.d. fashion a subset S_i ⊆ X. The subset S_i represents the set of allowed symbols in the i-th coordinate. The vector s whose i-th coordinate is S_i is called the state vector. The channel erases the vector x, i.e., it outputs ?, if there is at least one index i such that x_i ∉ S_i. Otherwise, the channel outputs x, namely the channel introduces no errors. Note that the state vector s may or may not be known to the encoder or the decoder in general. Information transmission is possible only if the encoder does not violate any subset constraint S_i, for any i; therefore, we assume that the encoder knows all the channel states non-causally and thus satisfies all the subset constraints. This assumption is motivated by the DNA setting, since we assume that the genomic data and constraints are known in advance at the time of encoding. Hereinabove there was presented a method to estimate the state vector for a given organism.

[0172] The encoder maps one of the possible messages, m, into the channel input x ∈ X^n, and the decoder maps a channel output y into a decoded message m̂. The code rate is, therefore, R bits per channel use.

[0173] Next, several settings of the Random Subsets Channel were considered and analyzed.

[0174] Channel analysis: First, the simplest scenario was considered, in which both the encoder and decoder know the state vector. The channel capacity in this setting is clearly C = log K, since the channel boils down to a perfectly clean channel with input and output alphabets of size K. Indeed, the encoder and decoder can agree in advance on an injective mapping from the possible subsets S_i of allowed symbols to the set of integers {1, ..., K}. Therefore, in each channel usage, one can transmit log K bits.
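The injective-mapping argument can be made concrete: if both parties know the state subset S_i, they can agree to order its elements and index them, transmitting log₂ K bits per use. A minimal sketch (function names are illustrative):

```python
import math

def encode_symbol(index, state):
    # Both parties know the state; agree on the sorted order of its elements.
    return sorted(state)[index]

def decode_symbol(x, state):
    return sorted(state).index(x)

K = 4
state = {7, 2, 9, 5}                  # a size-K subset of the alphabet
for u in range(K):                    # K messages -> log2(K) bits per use
    assert decode_symbol(encode_symbol(u, state), state) == u
print(math.log2(K))  # -> 2.0
```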

[0175] Next, the more challenging scenario was considered, where only the encoder knows the state vectors, as shown in Figure 11.

[0176] The capacity of a channel with side information at the encoder side is given by the well-known Gelfand-Pinsker (GP) Theorem, given next. Note that capital letter variables represent symbols: for example, S stands for a state symbol (a subset containing K unique elements), whereas s stands for the state vector.

[0177] Theorem 4. The capacity of a discrete, memoryless channel with a state S which is i.i.d. distributed according to the probability distribution p(S) and is available non-causally at the encoder is

C = max_{p(u|s), X = f(U,S)} [I(U; Y) − I(U; S)],

where U is some auxiliary random variable and X = f(U, S) is a deterministic function of U, S. Also, the channel input alphabet is X, the output alphabet is Y = X ∪ {?}, the state alphabet is the set of all subsets of X containing K distinct elements, and U denotes the alphabet of U.

[0178] Without loss of generality, it is assumed that X = {1, 2, ..., |X|}, and S_i(j) denotes the j-th smallest element of the state S_i, for j ∈ {1, ..., K}.

[0179] Next, two interesting cases of channel state probability distributions are considered. The first case is called the Partial Random Subsets Channel; it is a direct extension of the AA dictionary. In this case, each random variable S_i is independently and uniformly distributed over a fixed collection of subsets of X, each of size K. This case models a genome in which mutations are DNA location independent; the allowed codon changes depend only on the codon itself and not on its location in the DNA. Mutation here means any change of codon.

[0180] In the second case, which is called the Completely Random Subsets Channel, the subsets are drawn independently and uniformly at random from all possible subsets of X of size K. This case models a genome in which mutations are location-dependent: the allowed changes of a codon depend both on the codon itself and its exact location in the DNA.

Example 8: Partial random subsets channel

[0181] In this channel model, the random variable S_i is uniformly distributed over |X|/K disjoint sets of size K each, which form a partition of X, and the channel state probability distribution is p(S) = K/|X| for each set S in the partition and zero otherwise. This alphabet partition (i.e., the support of the random variable S_i) is unknown to the decoder. It models a genome in which the mutations are location-independent.

[0182] Next, the capacity of this channel is derived.

[0183] Theorem 5. The Partial Random Subsets Channel capacity is C = log K.

[0184] Proof. Since the encoder does not violate the subsets constraint, and the effective output alphabet size is K, the capacity is at most log K. Next, it is shown that the capacity is at least log K.

[0185] By Theorem 4 and the fact that X = Y, the capacity is

C = max_{p(u|s), X = f(U,S)} [H(U|S) − H(U|X)].

Let U be a uniform random variable (independent of S) distributed over the set {1, ..., K}, and define the random variable X = f(U, S) = S(U); then H(U|S) = H(U) = log K. Furthermore, since the sets in the support of S are disjoint, given X one can recover the value of U, i.e., U is a function of X and thus H(U|X) = 0. To conclude, by the random variable U and the function f, the channel's capacity is at least H(U|S) − H(U|X) = log K, and the result follows.
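The construction in this proof can be sketched as follows, for a hypothetical partition of X: the encoder sends X = S(U), the U-th smallest element of the current state, and the decoder, which knows the partition but not the individual states, recovers U from X alone:

```python
# Hypothetical partition of the alphabet X = {1,...,9} into |X|/K = 3
# disjoint subsets of size K = 3 (the support of the state variable S).
PARTITION = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]

def f(u, state):
    # X = S(u): the u-th smallest element of the state (0-indexed here).
    return sorted(state)[u]

def recover_u(x):
    # The decoder knows only the partition: locate x's block, take its rank.
    for block in PARTITION:
        if x in block:
            return sorted(block).index(x)
    raise ValueError("symbol outside the alphabet")

# Since the blocks are disjoint, U is recoverable from X for every state.
for state in PARTITION:
    for u in range(3):
        assert recover_u(f(u, state)) == u
```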

[0186] Theorem 5 shows no rate loss compared with the general Random Subsets Channel, where states are known both at the decoder and encoder. Indeed, although the exact partition of the alphabet is unknown to the decoder, it can learn it from a fixed, long enough, and known pilot (training) signal sent by the encoder at the beginning of the communication, after which the decoder, with high probability, will recover the partition, and the channel will become a perfectly clean channel with an alphabet of size K. When the codeword length tends to infinity, the training signal's length is negligible, and therefore it does not incur any rate loss.

Example 9: Completely random subsets channel

[0187] In this channel model, the subsets are drawn uniformly at random from all subsets of size K over the alphabet X.

[0188] This channel models a genome in which the mutations are DNA location-dependent.

[0189] Here, Theorem 4 is invoked to derive an achievable coding rate for this channel, which is a lower bound on the channel's capacity. The following theorem gives an expression for the channel capacity lower bound as a constrained optimization problem.

[0190] Theorem 6. A lower bound on the Completely Random Subsets Channel capacity is given by

[0191] Proof. The channel capacity is given by Theorem 4. It is assumed here that the random variable U is defined over the alphabet {1, ..., K}, in such a way that X = S(U). This assumption simplifies the following derivations and means that we end up with a lower bound on the capacity.

[0192] Assume some bijective mapping function between all possible sets s containing K distinct elements from X and a set of integers. The optimized probability distribution, according to our assumption on U, is of the form p(u|s).

First, notice that whenever the encoder obeys the subsets constraint, X = Y, and therefore we are left with maximizing H(U|S) − H(U|X).

Notice that

Plugging in (17), (18) into the conditional entropy term one gets

[0193] As mentioned, Theorem 6 gives an expression for an achievable coding rate (capacity lower bound) for the Completely Random Subsets Channel as a closed-form constrained optimization problem. When the optimal random variable U in Theorem 4 indeed satisfies our assumptions (defined over the alphabet {1, ..., K}, in such a way that X = S(U)), the lower bound is tight, and Theorem 6 gives the channel capacity. The lower bound in Theorem 6 can be solved numerically, as shown in Figure 12.

[0194] Notice that each graph starts at a point where the channel capacity is log K, as expected for a perfectly clean channel. The capacity lower bound decreases as the alphabet size increases while K is fixed, due to the growing number of K-subsets.

[0195] Although Theorem 6 and Figure 12 shed some light on the achievable coding rate (or perhaps even on the channel's capacity), one needs explicit code constructions to be able to transmit information at a rate close to capacity. Hereinbelow, two code constructions for the Random Subsets Channel are provided. But first, two examples of simple coding strategies are examined that will provide some intuition for future constructions. The first coding strategy is causal, whereas the second utilizes the non-causality of the states to achieve a larger code rate.

[0196] Example 9i. Assume that any subset S of size 3 satisfies min(S) ∈ {1,2} and max(S) ∈ {3,4}. Then, given a bit b to be transmitted, the encoder sets x accordingly. It is clear that upon receiving x, the decoder can recover the value of b, as needed.

[0197] This coding strategy is causal, as the current bit encoding does not depend on future channel states. Compared to the channel capacity lower bound C = 1.293, the scheme's rate is R = 1 bit per channel use.
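As a sketch, the causal strategy of Example 9i can be made concrete under the assumed reading X = {1, 2, 3, 4} and K = 3, with the encoder sending min(S) for b = 0 and max(S) for b = 1; the exact encoding rule is elided in the text, so this rule is an illustrative assumption:

```python
from itertools import combinations

def encode_bit(b, S):
    """Causal one-bit encoder (hypothetical concrete rule for Example 9i):
    every state S of size 3 over {1,2,3,4} has min(S) in {1,2} and
    max(S) in {3,4}, so min/max carry the bit unambiguously."""
    return min(S) if b == 0 else max(S)

def decode_bit(x):
    # min(S) in {1,2} and max(S) in {3,4} never overlap,
    # so the bit is recovered from x alone
    return 0 if x in (1, 2) else 1

# Exhaustive check over all four size-3 subsets of {1,2,3,4}
for S in combinations(range(1, 5), 3):
    for b in (0, 1):
        assert decode_bit(encode_bit(b, set(S))) == b
print("one bit per channel use, decoded correctly for every state")
```

The rate is 1 bit per channel use, below the C = 1.293 lower bound quoted in the text, as expected for a causal scheme.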

[0198] Example 9ii. Assume that K = 2 and X = {1,2,3}. Given two consecutive channel states and a bit b to be transmitted, the encoder sets x according to Table 1. For example, assume that S 1 = {1,2} and S 2 = {1,3}; then to encode the bit 0, the encoder sends the vector (1,1); otherwise, it sends the vector (2,3). Notice that there is no ambiguity in the encoded symbols (the two right columns do not share a common pair of encoded symbols). Therefore, the decoder can recover the transmitted bit. Compared to the channel capacity lower bound C = 0.723, the scheme's rate is R = 0.5 bits per channel use.

[0199] The coding scheme is non-causal, as the encoding of the current symbol depends on the current and the following channel states. Notice that there is no coding scheme that encodes one bit of information using one channel state (as expected, since the channel capacity in this case is C = 0.723 < 1). (See Table 1)

[0200] Table 1: Random Subsets Channel: simple non-causal coding strategy for an alphabet of 3 symbols and a subset size of 2 symbols.

Example 10: Code constructions for the random subsets channel

[0201] Herein are provided code constructions for the Random Subsets Channel that are independent of the subset probability distribution p(s), and therefore applicable to both the Partial and the Completely Random Subsets Channels. However, the focus is on the Completely Random Subsets model.

[0202] Modulo Random Subsets Code: The key idea of the construction relies on the following. If the encoder were always guaranteed that each state contains both even and odd integers, it could transmit at a rate of one bit per channel use by encoding each bit in the parity of the transmitted symbol. However, such a guarantee does not hold in the model; hence the idea is to force some structure.

[0203] Encoding. Let q be an integer that divides |X| and let C be a code of length n and rate

(20) over the alphabet {0, ..., q - 1} that achieves the capacity of the q-ary symmetric channel (21), where the symbol error probability is to be determined. Let the codeword of C be the encoding of the information to be transmitted. The symbol to be transmitted over the channel is set to be an element of the current state whose residue modulo q equals the current code symbol. Otherwise, if there is no such symbol, it is set to be an arbitrary element of the state. The resulting transmitted codeword never violates the subsets constraint.

[0204] Decoding. Since the codeword x does not violate the subset constraints, the decoding of the received codeword is straightforward. The receiver first computes the vector of residues modulo q, which is then passed through a decoder of the code C.

[0205] Code rate analysis. The received vector contains an error at a coordinate if and only if there is no symbol in the corresponding state with the required residue. The probability of this event is easily calculated. Indeed, since for each residue i there are exactly |X|/q symbols s ∈ X with s mod q = i, the required probability of error is

[0206] It is clear by symmetry that, given that an error has occurred, each of the q - 1 other symbols is equally likely. To conclude, the channel boils down to a q-SC with the probability of error derived above. By choosing the code C to be capacity-achieving, it is clear that for large values of n the decoding error tends to zero, and the overall rate (number of bits per channel use) equals the rate of C. Plugging (22) and (21) into (20) results in the code rate of the construction.
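The modulo modulation step and its symbol error probability can be sketched as follows; the uniform-state model and the parameters |X| = 64, K = 4, q = 2 are illustrative choices, not values fixed by the text:

```python
import random

def modulo_transmit(c, S, q):
    """Send a q-ary code symbol c over one channel use with state S:
    pick any x in S with x % q == c; if none exists, the channel use
    is in error and an arbitrary element of S is sent."""
    matches = [x for x in S if x % q == c]
    return random.choice(matches) if matches else random.choice(sorted(S))

def symbol_error_rate(alphabet_size, K, q, trials=20000):
    # Empirical probability that a uniform K-subset state contains no
    # element of a given residue class: the q-SC error probability.
    errors = 0
    for _ in range(trials):
        S = set(random.sample(range(alphabet_size), K))
        c = random.randrange(q)
        if modulo_transmit(c, S, q) % q != c:
            errors += 1
    return errors / trials

# |X| = 64, K = 4, q = 2: an error needs all K elements of the state
# to fall in one residue class, i.e. C(32,4)/C(64,4) ~ 0.057
print(symbol_error_rate(64, 4, 2))
```

The empirical value agrees with the combinatorial count C(|X| - |X|/q, K) / C(|X|, K) for the event that a state misses a residue class entirely.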

[0207] The integer q is a free parameter of the code and should be chosen to maximize the code rate R. The optimal coding rate as a function of the subset length K is shown in Fig. 13, compared to the completely random channel achievable coding rate derived earlier in this section, for |X| = 64 and K ∈ {2, 3, 4, 5}. Also depicted is the coding rate gap, defined as the ratio of the construction's code rate to the derived channel capacity lower bound (Theorem 6). For all numerical evaluations, q = 2 was the optimal choice (Fig. 13).

[0208] Linear Random Subsets Code: This section presents another code construction for the Completely Random Subsets Channel with a higher rate than the previous construction, achieved at the expense of higher complexity.

[0209] Let one mapping be the inner code and another the outer code; a symbol of the outer code's alphabet is referred to as a super-symbol. Furthermore, we assume that the super-symbol alphabet size is a power of a prime, and therefore there exists a finite field of that order.

[0210] Encoding. An information vector to be transmitted is encoded into a vector as follows. Using the outer code, encode v (viewed as a vector of Q super-symbols) into a vector w. Using the inner code, encode each of the N super-symbols of w into a vector of length n over X. The final codeword is of length n · N over X. The outer code is chosen to be a code over the super-symbol alphabet with rate

[0211] The inner encoding step is now described. To be mathematically precise, the inner code is a function of both the input vector and the states vector. Hence, each information vector can be encoded into several different codewords, and the precise codeword is chosen according to the state vector. The formal definition is as follows. Let H be a predetermined fixed matrix of order q × n with rank(H) = q, known both to the encoder and the decoder. It is explained hereinbelow how H is chosen. Given a vector u to be encoded and a state vector, the inner encoding of u is a vector x that satisfies

(23)

If there are several such vectors, then the encoder picks one of them arbitrarily. If there are no such vectors, then the encoder arbitrarily sets x to be one of the vectors satisfying the subset constraints.

[0212] Decoding. The decoder first applies the inner decoder to each of the N channel output vectors and decodes it into a super-symbol using (23), which amounts to a simple matrix multiplication. Note that the decoder does not need to know the channel states (subsets). Then, it applies the outer decoder to the N super-symbols output by the inner decoder to recover the information symbols.

[0213] Code analysis. Errors can occur only at the inner code's encoding step, whenever there is no solution to (23). In such a case, the inner code randomly picks one of the vectors that satisfy the subset constraints. Since the states are uniformly distributed among all possible subsets, each error is uniformly distributed among all possible super-symbols. Hence, as in the previous construction, the channel boils down to a q-SC defined over the super-symbol alphabet with the corresponding error probability. Accordingly, the outer code is chosen to have a coding rate equal to the capacity of this q-SC,

so that the outer decoder can correct all super-symbol errors introduced by the inner encoder and perfectly reconstruct the original information message. Transmission of one super-symbol involves the transmission of q information symbols over n channel uses of the original Completely Random Subsets Channel; the overall coding rate of the above construction, in bits per channel use, is given by (25).

[0214] To derive the coding rate of this code construction, a numerical simulation was performed to evaluate the super-symbol error probability.

[0215] Numerical simulation: The following Monte Carlo simulation was performed. First, each entry of the matrix H was picked independently and uniformly at random from the finite field, such that rank(H) = q. Next, the information vector u was picked to be an iid uniform random vector of q elements over the field. Finally, n subsets (states) of size K of X were picked independently and uniformly at random, and it was verified whether (23) holds. The random selection of the subsets was repeated for a fixed H to estimate the error probability, and the code rate was calculated using (25). Such a simulation numerically finds the code rate for a specific H. The simulation was repeated for random selections of H, and the resulting code rates of all evaluations were averaged to finally obtain a numerical value for the construction's code rate.
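A simplified version of this Monte Carlo procedure can be sketched over the prime field GF(5) (the actual construction works over the codon field; GF(5) is an assumption here only so that field arithmetic reduces to integers modulo 5). The rank condition on H is omitted, since a random matrix is full rank with high probability; the brute force over the K^n candidate vectors mirrors the text:

```python
import itertools
import random

def has_solution(H, u, states, p):
    """Return True iff some x with x[i] in states[i] satisfies Hx = u (mod p).
    Brute force over the K^n candidates, as in the construction."""
    for x in itertools.product(*[sorted(S) for S in states]):
        if all(sum(h * xi for h, xi in zip(row, x)) % p == ui
               for row, ui in zip(H, u)):
            return True
    return False

def estimate_error(p, n, q, K, trials=500):
    """Monte Carlo estimate of the inner-code failure probability:
    random q x n matrix H over GF(p), random message u, random K-subset
    states; failure when (23) has no solution."""
    errors = 0
    for _ in range(trials):
        H = [[random.randrange(p) for _ in range(n)] for _ in range(q)]
        u = [random.randrange(p) for _ in range(q)]
        states = [set(random.sample(range(p), K)) for _ in range(n)]
        if not has_solution(H, u, states, p):
            errors += 1
    return errors / trials

print(estimate_error(p=5, n=4, q=2, K=2))
```

In the real construction the alphabet is the 64-codon set, each codon is split into GF(4) symbols, and the error estimate feeds the rate formula (25).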

[0216] Figure 14 depicts the simulation results for |X| = 64 and variable subset length K. One can see in this figure that the Linear Random Subsets Code construction performs better than the Modulo Random Subsets construction from the previous section, at the cost of encoding complexity. In the Linear Random Subsets Code construction, one needs to go over K^n possible vectors over X^n and check whether (23) holds in order to transmit q information symbols, whereas in the Modulo Random Subsets construction only a simple modulo calculation of q elements is performed.

Example 11: Error probabilities

[0217] The following simple assumptions are made to calculate the error probabilities P s , P e . It is assumed that all orthologs are time-separated from each other by N G generations; that is, the number of generations passed between each pair of orthologs is N G . Such an assumption simplifies the calculation of the statistical measures. Also, it is assumed that interest lies in the error probabilities after M G generations, which is the number of generations passed along the storage time (the time elapsed between encoding and decoding). Finally, it is assumed that each generation may cause a nucleotide substitution with constant probability, independent of past generations.

[0218] Following these assumptions, after performing the DNA alignment one obtains the identity probabilities of all nucleotides in the site of interest S (the site in which the code will operate), i.e., the probabilities that each nucleotide remains unchanged over N G generations. Averaging the identity probabilities over all nucleotides in the site of interest S, one gets

According to the model assumptions, identity corresponds to an even number of substitutions after N G generations. Denote by X the number of substitutions after N G generations. X is the sum of iid Bernoulli trials with parameter p, where p is the probability of a substitution in a single generation. The probability of an even or odd value of X is

Identity is equivalent to an even value of X; therefore one can solve for p. Finally, we are interested in the probability of a substitution event after M G generations, which equals the probability of an odd number of successes in iid Bernoulli trials with parameter p. Recall that one wants the average identity probability, so that

(26) gives the substitution probability for DNA storage over M G generations. An average identity probability of P id is assumed, calculated over all nucleotides in the site of interest and based on an alignment of orthologs all separated by N G generations.
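Under the stated Bernoulli model, the probability of an even number of successes in n trials with parameter p is (1 + (1 - 2p)^n)/2. This allows inverting the averaged identity probability for p and then evaluating the substitution probability after M G generations. A sketch, using the representative values from the ALA1 example below (P id ≈ 0.7, N G = 5.84 · 10^9, M G = 5.84 · 10^6):

```python
def substitution_probability(P_id, N_G, M_G):
    """Parity-model sketch: identity after N_G generations means an even
    number of per-generation substitutions, so
        P_id = (1 + (1 - 2p)**N_G) / 2.
    Invert for (1 - 2p), then the substitution probability after M_G
    generations is the odd-parity probability
        P_s = (1 - (1 - 2p)**M_G) / 2."""
    base = (2 * P_id - 1) ** (1.0 / N_G)   # this is (1 - 2p)
    return (1 - base ** M_G) / 2

# Orthologs separated by ~5.84e9 generations, storage over ~5.84e6
# generations, average identity ~0.7 (values from the example below)
print(substitution_probability(0.7, 5.84e9, 5.84e6))
```

As a sanity check, when the storage time equals the ortholog separation (M G = N G), the formula collapses to P s = 1 - P id, as it must.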

[0219] An erasure event is assumed to occur at the whole-site level, meaning the entire site S is erased or not. As mentioned, the actual biological phenomenon causing an erasure can be the extinction of a species, but it can also be an indel, in which a short sequence of nucleotides is deleted.

[0220] The following simple estimate for the erasure probability is used. Given that all indel probabilities estimated from the alignment are correlative along all nucleotides in the site of interest S (as defined in the sequel), the maximal indel probability among the nucleotides in the site is taken as the erasure probability,

By correlation of the indel probability along the whole site, we mean a small variance of the indel estimates over the entire site of interest. The reasoning behind such a definition is that all the codons in the site have about the same probability of suffering an indel; therefore, an indel is likely to affect the whole site, an event similar to an erasure of the site.

[0221] As a proof of principle, focus is placed on the gene ALA1 in S. cerevisiae. The estimated identity and indel probabilities along the whole gene are given in Figure 15A. The site of interest S is defined to include codons 521-841, or nucleotides 1561-2521. First, one can see a correlative indel probability along that site, with a maximal value of

[0222] It is clear that an indel is not very likely to occur (as all probabilities are small), but if one does occur, then the whole site is expected to suffer it, as the nucleotides' indel probabilities are correlative.

[0223] Next, the average identity probability along all the site's nucleotides is computed. As depicted in Figure 15C, every third nucleotide is more likely to substitute, as most codons obey the amino acid dictionary constraint: codons may suffer synonymous changes only, and in most cases only the third nucleotide may substitute.

[0224] One generation of S. cerevisiae lasts approximately 90 minutes. If it is assumed that all orthologs in the alignment are separated by 1 million years, or N G = 5.84 · 10^9 generations, and the storage time is 1 thousand years, or M G = 5.84 · 10^6 generations, then the substitution probability according to the model is

[0225] The allowed codon changes dictionary D is defined as a mapping from each codon's location in the site of interest to a subset of codons. This subset imposes the constraint that one can change the original codon only into a member of this subset; otherwise, the whole site will be erased (microorganism death).

[0226] The following estimation for D is suggested. The codon distributions at each location k ∈ S are used, where for each DNA location there is a vector of 64 coordinates, estimated from the DNA alignment as the distribution of all 64 possible codons at that DNA location. Then, each DNA location k ∈ S is mapped into a subset of codons, including only codons with an associated probability value higher than a threshold T. Therefore, D is defined as the mapping function between each DNA location and its subset of codons D(k), given by

This way, the set of allowed changes at each DNA location is chosen such that each member of the set appears in at least T percent of all aligned orthologs, and a change caused by the storage is justified in the sense that evolution has also performed such a change in at least T percent of the orthologs.

[0227] An interesting measure is the average subset size mapped by D along the whole site of interest S, given by

Figure 16 compares the sizes of the subsets of allowed changes along the site of interest (codons 521-841 in ALA1) as a function of T in (28). Also depicted is the case in which the criterion for building D is based on amino acid preservation (only synonymous changes are allowed). A higher threshold is more conservative, as it allows fewer changes; indeed, one can see in Figure 16 that the average subset size is then smaller.
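The dictionary estimate (28) and the average subset size (29) can be sketched directly; the codon distributions here are hypothetical two-location toy data, not values from the alignment:

```python
def build_dictionary(codon_dists, T):
    """Allowed-changes dictionary sketch: codon_dists maps each DNA
    location k in the site to a dict {codon: frequency across aligned
    orthologs}; D(k) keeps only codons whose frequency exceeds T."""
    return {k: {c for c, p in dist.items() if p > T}
            for k, dist in codon_dists.items()}

def average_subset_size(D):
    # The measure in (29): mean size of D(k) over the site of interest
    return sum(len(s) for s in D.values()) / len(D)

# Hypothetical two-location site (codon frequencies across orthologs)
dists = {
    521: {"GCT": 0.6, "GCC": 0.3, "GCA": 0.08, "TCT": 0.02},
    522: {"AAA": 0.9, "AAG": 0.1},
}
D = build_dictionary(dists, T=0.05)
print(D[521], average_subset_size(D))
```

With T = 0.05, location 521 keeps three codons (the 2% codon is dropped) and location 522 keeps both, for an average subset size of 2.5.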

[0228] Other definitions of the dictionary D may be considered, involving other biological aspects, such as information encoded in the mRNA that affects gene expression and fitness. A fusion of the method suggested above (based on DNA alignment) with amino acid preservation is introduced herein.

Example 12: Coding scheme

[0229] There is now provided a more detailed explanation of the scheme presented in Figure 6.

[0230] Encoder: The encoder scheme is summarized in Figure 17. The encoder input is a chunk of k information bits, and the outputs are M message blocks of length L bits each. Each message block will later be encoded into a synthesized site.

[0231] The encoder generates M identifiers, namely the set {0, ..., M - 1}. Each identifier is a binary vector and is encoded separately into an encoded identifier using a Reed-Solomon (RS) code with the appropriate message length and codeword length, all in bits. The actual symbol length for that RS code is an integer such that

[0232] The information bits are encoded using another RS code with message length k and a suitable codeword length. The actual symbol length for that RS code is an integer m such that

[0233] Finally, the information RS codeword is divided into M equal blocks, and an appropriate encoded identifier is attached at the beginning of each block to finally create M message blocks of length L bits each. We note that the identifiers' role is to handle the channel permutation, while the RS codes' role is to address both erasures and substitutions, as explained herein. The total coding rate of the encoder is
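The block layout of the encoder (identifier prefix plus information payload) can be sketched as follows; the RS protection of both the identifiers and the information is omitted here, so this shows only the partitioning and identifier attachment:

```python
import math

def make_message_blocks(info_bits, M, L):
    """Encoder-layout sketch (RS protection omitted): split the encoded
    information into M equal blocks and prepend each block's identifier
    (its index in {0, ..., M-1}) as a fixed-width bit string."""
    id_len = math.ceil(math.log2(M))
    payload_len = L - id_len
    assert len(info_bits) == M * payload_len
    blocks = []
    for i in range(M):
        ident = format(i, f"0{id_len}b")            # fixed-width identifier
        payload = info_bits[i * payload_len:(i + 1) * payload_len]
        blocks.append(ident + payload)
    return blocks

# 4 blocks of length 8 bits each: 2-bit identifier + 6 payload bits
blocks = make_message_blocks("0" * 24, M=4, L=8)
print(blocks)
```

In the full scheme each identifier would itself be RS-encoded before attachment, so that a corrupted identifier can still be resolved at the decoder.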

[0234] Modulator: The modulator scheme is summarized in Figure 18. The modulator inputs are M message blocks of length L bits each, and its outputs are M synthesized sites, which are vectors of nucleotides.

[0235] The modulator's goal is to generate the synthesized sites in a way that stores the information, does not violate the allowed codon changes dictionary constraints (abbreviated as D-constraints), and optimizes the Codon Adaptation Index (CAI) measure.

[0236] As depicted in the modulator scheme of Figure 18, the modulator first translates each pair of bits into a single nucleotide using a fixed mapping. It then takes each vector of nucleotides and generates a synthesized site using the Linear Random Subsets Encoder defined hereinabove.

[0237] The modulator holds the original site S and the allowed codon changes dictionary D. The dictionary is either given to it or estimated using DNA alignment, as discussed above. Based on the dictionary D, the modulator calculates the mean subset length along the whole site S as in (29), rounding the result to obtain an estimate for K, the subset size. Together with the codon alphabet with |X| = 64 and the assumption of a Completely Random Subsets Channel, the modulator determines the achievable channel coding rate according to the empirical value of the Linear Random Subsets Code (LRSC) rate in Figure 14.

[0238] The actual LRSC setting here is as follows. First, the outer block code is the information Reed-Solomon code, presented earlier as a part of the encoder. From now on, only the inner code is discussed.

[0239] The alphabet used in the inner code is the codon alphabet X, with |X| = 64. Instead of performing operations over X, the Galois field over which the code operates is the nucleotide field, which under the bits-to-nucleotides mapping is equivalent to GF(4), with the operations as in the following Table 2.

[0240] Table 2

[0241] Each codon is equivalent to three symbols from GF(4), so that the D-constraints (given over X) can be translated to GF(4). From now on, only the nucleotide field GF(4) is considered.
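Since Table 2 itself is not reproduced here, the following is a standard GF(4) implementation under the usual polynomial representation {0, 1, x, x+1} with reduction modulo x^2 + x + 1; the particular assignment of {A, C, G, T} to these four elements is an assumption:

```python
# GF(4) arithmetic sketch: elements encoded as integers 0..3, read as
# 2-bit polynomials over GF(2). Addition is bitwise XOR; multiplication
# is carry-less polynomial multiplication reduced modulo x^2 + x + 1.
ADD = [[a ^ b for b in range(4)] for a in range(4)]

def gf4_mul(a, b):
    # carry-less multiply of 2-bit polynomials
    prod = 0
    for i in range(2):
        if (b >> i) & 1:
            prod ^= a << i
    # reduce: x^2 = x + 1 under the modulus x^2 + x + 1
    if prod & 0b100:
        prod ^= 0b111
    return prod

# x * x = x + 1, (x+1) * (x+1) = x, x * (x+1) = 1 (mutual inverses)
print(gf4_mul(2, 2), gf4_mul(3, 3), gf4_mul(2, 3))
```

Any table consistent with GF(4) is equivalent to this one up to relabeling of the elements, so the sketch suffices for checking the algebraic constraint (23) over nucleotides.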

[0242] Next, the modulator chooses the integers defining the inner code and the inner coding matrix H, such that the coding rate following (24) does not exceed the achievable channel rate found empirically as described above, and such that the resulting complexity is realizable (as inner encoding will require going over candidate vectors and checking whether (23) holds). The inner coding matrix H is a matrix with elements in GF(4) = {A, C, G, T}.

[0243] The modulator informs the receiver of the actual H it uses for encoding. Aside from H, the receiver has no other side information.

[0244] The resulting site length, or actual number of nucleotides used to modulate the information,

[0245] The modulating process is the same for all synthesized sites (microorganisms), so from now on only one of them is considered.

[0246] As explained hereinabove, the modulator (Linear Random Subsets Encoder) takes a group of nucleotides (2 bits per nucleotide) and encodes them into a sub-site, which we define as a sequence of n LRSC nucleotides. Here, to handle both the D-constraints and CAI optimization, the following algorithm is employed. The original site S is divided into sub-sites, each n LRSC nucleotides long, such that their concatenation is S. Also, the message string m to be stored over the current synthesized site is divided into sub-messages of equal length, such that their concatenation is m.

Each sub-message will be encoded into its corresponding sub-site according to the procedure described next.

[0247] The modulator scheme for one synthesized site is depicted schematically in Figure 19.

[0248] The i th sub-message m(i) will be encoded into its corresponding sub-site S(i) in the following way. First, all D-constraint equivalents of the sub-site S(i) are generated: a set of nucleotide vectors, each of length n LRSC nucleotides, where each vector does not violate the D-constraints with respect to S(i). Then, only the vectors u ∈ {A, C, G, T}^n LRSC which satisfy (30) are kept, where H is the LRSC inner matrix. The resulting set contains only sub-sites that hold both the D-constraints and the algebraic equation defined by (30), and each is therefore a valid encoding following the Linear Random Subsets Coding method.

[0249] Instead of picking one of the possible solutions (members of this set) at random, the following optimality criteria are proposed.

[0250] The single vector from the set with the smallest CAI change with respect to the original sub-site is chosen. From a mathematical perspective, CAI preservation is defined as follows. First, each codon's frequency is calculated as the number of repetitions of that codon c over the whole genome, and then a weight metric for each codon c is calculated as

where AA c is the set of all codons synonymous with c. The CAI of a sub-site vector is defined as the geometric mean of the weight metrics of all codons in the sub-site,

Finally, the optimality criterion by which a single sub-site is chosen from the candidate set is

[0251] It is noted that this is a greedy approach to the CAI preservation problem of the whole site S; solving it over all the sub-sites together is a computationally demanding problem.
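The codon weights and the geometric-mean CAI can be sketched as follows; the synonymous-codon table is a tiny hypothetical fragment of the genetic code, used only for illustration:

```python
import math
from collections import Counter

# Hypothetical fragment of the genetic code (not the full 64-codon table)
SYNONYMS = {"GCT": "Ala", "GCC": "Ala", "AAA": "Lys", "AAG": "Lys"}

def codon_weights(genome_codons):
    """w_c = freq(c) / max frequency among c's synonymous codons."""
    freq = Counter(genome_codons)
    weights = {}
    for c, aa in SYNONYMS.items():
        group_max = max(freq[s] for s, a in SYNONYMS.items() if a == aa)
        weights[c] = freq[c] / group_max if group_max else 0.0
    return weights

def cai(subsite_codons, weights):
    # Geometric mean of the weights of all codons in the sub-site
    logs = [math.log(weights[c]) for c in subsite_codons]
    return math.exp(sum(logs) / len(logs))

# Toy genome: GCT dominates its synonym group, AAA/AAG are balanced
w = codon_weights(["GCT"] * 8 + ["GCC"] * 2 + ["AAA"] * 5 + ["AAG"] * 5)
print(cai(["GCT", "AAA"], w), cai(["GCC", "AAA"], w))
```

A candidate sub-site using the rare synonym GCC gets a lower CAI, so the greedy criterion would prefer the candidate closest to the original codon usage.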

[0252] Finally, the synthesized site chosen for the storage of the message block m is the concatenation of all synthesized sub-sites chosen as explained above (splitting the original site into sub-sites and finding, for each relevant sub-message, a synthesized sub-site that holds the D-constraints, the algebraic constraint, and the CAI criterion).

[0253] Demodulator: The demodulator scheme is summarized in Figure 20. The demodulator inputs are N read sites, and its outputs are N message blocks of length L bits each.

[0254] As seen in Figure 20, the demodulator first uses an LRSC decoder. The LRSC decoder accepts the read sites and extracts the nucleotides it believes are coded over each one of the sites, according to the shared LRSC inner matrix H and the algebraic structure it dictates. Namely, for each read site, the read sub-sites are decoded into the extracted sub-messages m(i). The decoded message string for a read site is their concatenation, and finally, each message string of nucleotides is translated into L bits to form the demodulated message blocks.

[0255] Decoder: The decoder scheme is summarized in Figure 21. The decoder inputs are N erroneous message blocks, and its output is the estimated original information chunk of k information bits.

[0256] The decoder first sorts the messages according to their original order using the encoded identifiers. Each encoded identifier is read from the first bits of each message block and corrected using its protecting RS code together with the side information that the set of expected identifiers is {0, ..., M - 1}, via an optimal Maximum Likelihood approach. That is, instead of a regular RS decoder operating on each identifier separately, all identifiers are decoded together using a Maximum Likelihood algorithm that allocates each expected identifier in the set {0, ..., M - 1} to the most likely channel output identifier.
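The joint Maximum Likelihood identifier decoding amounts to an assignment problem: match each expected identifier to a received identifier so as to minimize the total distance. A brute-force sketch under a Hamming-distance likelihood and an assumed 3-bit identifier width (practical only for small M; an efficient assignment algorithm would replace the permutation search):

```python
from itertools import permutations

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def sort_blocks(read_identifiers, M):
    """Joint ML identifier decoding sketch: assign each expected
    identifier in {0, ..., M-1} to the most likely received identifier,
    i.e. minimize total Hamming distance over all assignments."""
    expected = [format(i, "03b") for i in range(M)]   # assumed 3-bit width
    best = min(
        permutations(range(M)),
        key=lambda perm: sum(hamming(expected[i], read_identifiers[perm[i]])
                             for i in range(M)),
    )
    return list(best)   # best[i] = index of the block carrying identifier i

# Blocks received shuffled, with one corrupted id bit ("001" -> "011");
# joint decoding still recovers the order
print(sort_blocks(["010", "011", "000"], 3))
```

Decoding the identifiers jointly exploits the constraint that each expected identifier appears exactly once, which a per-identifier RS decoder cannot use.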

[0257] Once the messages are sorted, the identifiers are removed from each message block, and all message blocks are concatenated in the order recovered by the identifiers. Then, an RS decoder with message length k and the corresponding block length is used to correct both message block erasures (corresponding to bursts of RS symbol erasures) and nucleotide substitutions (corresponding to RS symbol substitutions).

Example 13: Encoder-Decoder performance evaluation

[0258] To test the performance of the coding scheme, a simulation was performed comparing the performance of the coding scheme of the invention to the bounds derived in Figures 7-9. The simulation includes only the encoder, channel, and decoder, excluding the modulator and demodulator. The channel simulated was the Noisy Shuffling Sampling Channel.

[0259] One may recall that the upper bound on the coding rate is the classical capacity, while the lower bound is an achievability lower bound for coding in the finite block-length regime.

[0260] The simulation's fixed parameters are the block parameters M and L, and the varying parameters are P e and P s . The performance metric is the code rate, defined as the ratio between the number of information bits k and the total codeword length m = M · L, optimized to the highest possible value resulting in no decoding errors. Here, C FBL denotes the finite block-length lower bound and C* the channel capacity in "DNA-Based Storage: Models and Fundamental Limits", Ilan Shomorony and Reinhard Heckel, IEEE Transactions on Information Theory, vol. 67, no. 6, June 2021, herein incorporated by reference in its entirety (or (1)). The result is shown in Figure 22.

[0261] Next, the normalized degradation rate was simulated, defined as the relative gap between the scheme's coding rate and the lower bound. The results are shown in Figure 23.

[0262] As can be seen in Figure 24, the encoder-decoder scheme is close to the achievable lower bound C FBL , with a gap of less than 10%, except in the case of very high substitution rates, namely P s > 0.05. Such high substitution rates are not relevant to the DNA storage problem at hand.

[0263] Next, the performance of the coding scheme of the invention was compared with the coding scheme suggested in Shomorony and Heckel (and, for example, used in "Robust chemical preservation of digital information on DNA in silica with error-correcting codes", Grass et al., Angew. Chem. Int. Ed. Engl. 2015 Feb 16; 54(8):2552-5, herein incorporated by reference in its entirety). Again, the modulating part is not simulated; the goal of the comparison is to compare the error correction codes.

[0264] In the scheme suggested in Shomorony and Heckel, the entire information is first protected using an outer Reed-Solomon code. Then, the outer-encoded information is partitioned into message blocks, and an identifier is attached to each message block. Finally, each message block is protected independently with an inner Reed-Solomon code. Both schemes have a similar first coding step (an RS code protecting the whole information chunk) but a different second step (identifier coding).

[0265] The following simulation was performed. The fixed parameters are M = 180 and L = 144, and the varying parameters are P e and P s . The coding rates of the two schemes (the former for the coding scheme of the invention, the latter for the one in Shomorony and Heckel) were optimized to the highest possible rates resulting in no decoding errors (Fig. 25). Their ratio is given in Table 3.

[0266] Table 3: Encoder-Decoder code rate: the ratio between the coding rate of the coding scheme of the invention and the coding rate of the proposed scheme in Shomorony and Heckel.

[0267] It can be seen that the scheme of the invention is superior to the scheme in Shomorony and Heckel, except in the case of very high substitution rates, namely P s > 0.1 (which are less interesting for DNA storage).

[0268] Next, the theoretical computation of the probability of a decoding failure event is considered for both coding schemes. For simplicity, one can assume iid random codewords and a simplified model of iid substitution errors only (P e = 0). A constant bit error probability is denoted by p, resulting in a constant symbol error probability of q = 1 - (1 - p)^m, where m is the relevant Reed-Solomon symbol size.

[0269] Starting with the first code (of the invention), failure will occur when the decoder is fed with more erroneous symbols than it can fix. Thus,

where all parameters refer to the Reed-Solomon code of the first scheme, namely m 1 is the symbol size, K 1 is the information size in symbols, N 1 is the codeword length, and q 1 is the symbol error probability.
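The failure probability of the first scheme is a binomial tail over the RS correction radius t = (N 1 - K 1)/2: the code fails exactly when more than t of the N 1 symbols are erroneous. A sketch with hypothetical RS parameters:

```python
from math import comb

def rs_failure_probability(p_bit, m, N, K):
    """Decoding-failure sketch for an RS code over m-bit symbols:
    symbol error probability q = 1 - (1 - p_bit)**m, and failure when
    more than t = (N - K) // 2 of the N codeword symbols are wrong."""
    q = 1 - (1 - p_bit) ** m
    t = (N - K) // 2
    return sum(comb(N, i) * q**i * (1 - q)**(N - i)
               for i in range(t + 1, N + 1))

# Hypothetical parameters: 8-bit symbols, N = 255, K = 223 (t = 16)
print(rs_failure_probability(1e-3, 8, 255, 223))
```

The same tail expression, applied first to the inner code and then (with the inner failure probability as the symbol error rate) to the outer code, yields the failure probability of the second, concatenated scheme.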

[0270] As for the second code, which consists of an outer and an inner code, a decoding failure occurs when the outer code is fed with more erroneous symbols than it can fix. An inner code block decoding failure causes an incorrect outer code symbol. The probability of an inner code failure event is given analogously, where all parameters refer to the inner Reed-Solomon code of the second scheme, namely m 2 is the symbol size, K 2 is the information size in symbols, and N 2 is the codeword length, with the corresponding symbol error probability.

[0271] An inner code decoding failure causes errors in several outer code symbols (each inner codeword, protecting a single message block, consists of outer code symbols). Thus, the probability of an outer code decoding failure follows, where all parameters refer to the outer Reed-Solomon code of the second scheme, and P is given above. One can assume that the symbol size is the same as in the first scheme, m 1 .

[0272] One can now numerically compare the two schemes for parameters achieving the same coding rates. First, one fixes the failure probability of the first scheme, finds the coding rate achieving this failure probability, fixes the same rate for the second scheme, and calculates its failure probability (Fig. 26). One sees that the first scheme performs much better than the second, except for bit error rates greater than 0.09.

[0273] Second, one can fix the failure probability of the second scheme, find the coding rate achieving this failure probability, fix the same rate for the first scheme, and calculate its failure probability (Fig. 27). Again, one sees that the first scheme performs much better than the second (all failure probabilities of the first scheme are numerically zero except for the last two simulated bit error probabilities), again except in the case of bit error rates greater than 0.09.

[0274] Last, one can fix a target failure probability and find the rates achieving it for each scheme (Fig. 28). Again, one sees that the first scheme achieves a higher coding rate for all bit error rates, except in the case of bit error rates greater than 0.09.

[0275] The overall method of encoding information in a living organism is summarized in Figure 29. At step 101, the genome of the organism in which the information is to be stored is received. Next, at step 102, regions within the coding regions of the genome that meet the desired criteria are selected. These criteria include: the regions being cumulatively of sufficient length to encode the full message together with identifiers that give the order of the various fragments of the information (102a); a low indel probability (102b); and an intermediate nucleotide sequence identity (102c). Optionally, the regions are selected such that they are devoid of two consecutive nucleotides that cannot be mutated without substantially diminishing the health of the organism (102d). Mutations are then generated in the regions selected in step 102 (step 103). The mutations are convertible to the information; that is, the mutations encode the information. As used herein, the terms "convertible to" and "encoding" are used synonymously and interchangeably. Generally, bits of data are converted into nucleotides. The mutations generated are not substantially detrimental to the health of the organism, as detrimental mutations would lead to the death of the organism over time and thereby to the erasure of part or all of the information. Identifiers that identify the order of the fragments producing the whole information can optionally be included (step 103a).

[0276] Figure 29 also includes optional steps related to decoding the information at a later time point. The decoder does not necessarily have the information of the location of the fragments, but it does have the coding matrix used for encoding the information in the first place (the mutations made). At step 104, the Reader receives the sequences of the coding regions of the mutated organism's descendants. This can be done by sequencing the genome of the organism via methods such as deep sequencing, whole exome sequencing, next-generation sequencing, and the like. The fragments are identified (step 105). At step 106, the Reader corrects any errors introduced into the fragments, including indels and point mutations. Optionally, the fragments are put in order based on the identifier sequences, if present; the identifiers are then removed, and the fragments are concatenated (step 106a). Finally, at step 107, the information is extracted from the genomic fragments.

Example 14: Numerical simulation

[0277] In order to evaluate the results, a numerical simulation was performed. First, a set of DNA sites optimal for storage had to be chosen; for that, unsuitable genes were filtered out. Following the definitions provided hereinabove, the criteria for a suitable gene (or, more precisely, a site) for DNA storage are as follows.

[0278] First, a long sequence of codons with a low and correlative indel probability is found. Mathematically, one looks for a site S such that (27) gives a value smaller than some threshold while |S| is big enough. The reason for that is to minimize the chance of microorganism erasure or an indel in some part of the site of interest. A representative example is the site in the ALA1 gene defined in Example 11, following Figures 15A-B, in which all codons are associated with an indel probability lower than a threshold. Also, the site length is long enough (about 300 codons).
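The search for long runs of codons below the indel-probability threshold can be sketched as a simple sliding scan. The probabilities are assumed to come from an upstream estimator such as (27); the function name and values are illustrative.

```python
# Sketch: find codon ranges whose per-codon indel probability stays below
# a threshold for at least min_len consecutive codons, per criterion (27).
# indel_prob values are illustrative placeholders for the estimator output.
def low_indel_windows(indel_prob, min_len, threshold):
    """Yield (start, end) half-open codon ranges where every codon has
    indel probability < threshold and the run holds >= min_len codons."""
    start = None
    for i, p in enumerate(indel_prob + [float("inf")]):  # sentinel closes last run
        if p < threshold:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                yield (start, i)
            start = None
```

For example, a profile of five low-probability codons, one spike, then three more low codons yields the two ranges (0, 5) and (6, 9) for min_len = 3.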

[0279] Next, a mid-range average identity probability is selected. Mathematically, one looks for a site S such that (26) gives a value smaller than some high threshold and bigger than some low threshold. The reason is the following: on the one hand, a high identity probability is preferred in order to avoid mutations; on the other hand, a low identity is preferred so that the changes made by the storage scheme are not too harmful to the creature. Again, Figure 15A gives a good idea of a site of interest, with a mid-range identity probability of about 0.7.

[0280] The last criterion at this point is that the average size of the subset of allowed codon changes is large enough. Mathematically, it is required that (29) gives a value higher than some threshold when the dictionary D is chosen according to (28). This way, a change of codon into another codon is one that appeared in enough orthologs, which is an indicator that evolution prefers such a change. Figure 16 is an example of the size of the subset along the site of interest. For a threshold of 0.05, meaning allowing a change into a codon that appeared in at least 5% of aligned orthologs, one sees that the average subset size is about 4.5, which was found to be a relatively large subset size, allowing coding using the Linear Random Subsets code construction designed hereinabove.
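The allowed-subset computation of (28)–(29) for a single alignment column can be sketched as below. The column data and the 5% threshold follow the text; the function name is hypothetical.

```python
# Sketch: for one alignment column, keep codons appearing in at least a
# given fraction of orthologs (dictionary rule (28)); the subset size is
# the quantity averaged in (29). Column data below is illustrative.
from collections import Counter

def allowed_codons(column, threshold=0.05):
    """column: list of codons observed at one position across orthologs.
    Returns the set of codons meeting the appearance threshold."""
    counts = Counter(column)
    total = len(column)
    return {codon for codon, n in counts.items() if n / total >= threshold}

# Illustrative column: 100 aligned orthologs, all encoding alanine.
column = ["GCT"] * 60 + ["GCC"] * 25 + ["GCA"] * 10 + ["GCG"] * 5
subset = allowed_codons(column, threshold=0.05)
# All four alanine codons clear the 5% bar here, so the subset size is 4.
```

Averaging such subset sizes along a candidate site gives the quantity compared against the threshold of (29).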

[0281] The site filtering constraints are now summarized: a site S with at least |S| = 200 codons, with indel probability (27) not exceeding 0.05, average identity probability (26) in the range 0.55-0.75, and average subset size (29) (where the dictionary is chosen according to (28)) of at least four codons. Also, site sequences with more than two consecutive DNA locations in which no changes at all are possible ((28) results in a value smaller than 1) are not allowed, in order to avoid situations where no encoding is possible using the Linear Random Subsets method. Imposing these constraints on 3960 different S. cerevisiae genes with full alignment data yielded 40 possible sites in 38 different genes.
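The combined filter summarized above can be sketched as one predicate over per-codon statistics. This is a schematic assumption of how the filters compose; the function name, input representation, and the treatment of "frozen" positions (subset size below 1) are illustrative.

```python
# Sketch of the combined site filter: length >= 200 codons, per-codon
# indel probability <= 0.05, average identity in [0.55, 0.75], average
# allowed-subset size >= 4, and no run of more than two consecutive
# positions where no change is possible (subset size < 1), per [0281].
def site_passes(indel_prob, identity_prob, subset_sizes,
                min_len=200, max_indel=0.05,
                id_range=(0.55, 0.75), min_avg_subset=4.0):
    """All three inputs are per-codon lists for one candidate site."""
    if len(indel_prob) < min_len:
        return False
    if max(indel_prob) > max_indel:
        return False
    avg_id = sum(identity_prob) / len(identity_prob)
    if not (id_range[0] <= avg_id <= id_range[1]):
        return False
    if sum(subset_sizes) / len(subset_sizes) < min_avg_subset:
        return False
    run = 0  # reject runs of more than two frozen positions
    for s in subset_sizes:
        run = run + 1 if s < 1 else 0
        if run > 2:
            return False
    return True
```

A 200-codon site with uniform indel probability 0.01, identity 0.65 and subset size 4 passes; inserting three consecutive frozen positions makes it fail.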

[0282] Next, it was decided to store the information over M = 64 synthesized sites. With a maximal site length of |S| = 200 codons, the message block size in bits (the number of bits stored in each synthesized site), a quantity denoted by L, was fixed. The relation between the site length |S|, the message block length L, and the Linear Random Subsets Code (LRSC) inner matrix follows from the fact that the whole message block is encoded over the whole site using the LRSC, operating over the nucleotide Galois field GF(4) and modulating the information nucleotides of the message block into site nucleotides. It was decided to work with sub-sites of n_LRSC = 15 nucleotides, as a larger n_LRSC proved too demanding in complexity for this simulation, and this was the maximal integer allowing coding without failures among all tested sites. Fixing L = 144 bits came out to |S| = 540 nucleotides, or 180 codons.

[0283] The LRSC inner matrix H used in the simulation is given by SEQ ID NO: 6-7. At first glance, it seems like a non-ideal matrix for encoding using the LRSC, as the first column is the zero column, which actually prevents the first nucleotide in each sub-site from participating in the linear equations (recall from above that a sub-site is defined as a sequence of n_LRSC nucleotides which participate in the same algebraic equation for inner encoding). However, it was found empirically that such a choice is actually advantageous, as sometimes a codon appears with a very small subset size allowed for changes (28), in which case it should be kept out of the algebraic equation.

[0284] Each identifier was chosen to be k_id = log M = 6 bits long, with a coding rate of r_id = 0.5, such that the encoded identifier length is n_id = 12 bits. The Reed-Solomon symbol length is chosen as the minimal possible, m_RS = 3 bits. The reason for that code rate is to allow maximal protection of the identifiers, as a decoding error in an identifier means the loss of the entire message block.

[0285] The code-word length of the information Reed-Solomon code is, therefore, n = M(L − n_id) = 8448 bits, as it covers all the message blocks together, without the identifiers. For such a code-word length, the symbol length chosen was m_RS = 12 bits, and a shortened Reed-Solomon code is used. It should be noted that the symbol length could be shorter (10 bits), which is better in terms of code rate performance, but m_RS = 12 was chosen as it enables more flexibility for bigger code-word lengths (up to (2^m_RS − 1) · m_RS = 49140 bits). The number of information bits is k = r · n, where r is the code rate of the encoder-decoder part of the coding scheme. In order to determine r, one first calculates the bounds in (9) for M = 64, L = 144, ε = 10^-6. The results are given in Figure 30.
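The sizing arithmetic above can be checked numerically; this snippet only restates the parameters quoted in the text.

```python
# Checking the Reed-Solomon sizing quoted above with the stated parameters:
# M sites, L bits per message block, n_id identifier bits per block.
M, L, n_id = 64, 144, 12
n = M * (L - n_id)                        # information code-word length in bits
m_RS = 12                                 # chosen Reed-Solomon symbol length
max_codeword_bits = (2**m_RS - 1) * m_RS  # capacity of a full-length RS code

print(n)                  # 8448 bits, matching the text
print(max_codeword_bits)  # 49140 bits; 8448 fits, so a shortened code is used
```

Since 8448 is well below 49140, the shortened code leaves headroom for larger messages, which is the flexibility argument for choosing m_RS = 12 over 10.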

[0286] Note that, in comparison to the earlier evaluation, the gap between the bounds is higher here, a consequence of the lower value of M.

[0287] Now, for given error probabilities P_e, P_s, one can adjust r in a way that ends up without any simulated decoding failures, with a good starting point of r equal to the lower bound value. The actual metric of such an experiment is the number of erroneous Reed-Solomon symbols at the decoder, compared to the decoding capability of the code, which equals d = n_RS − k_RS, where n_RS, k_RS are the code-word length and the message length of the code, measured in symbols.

[0288] At this point, the error probabilities are fixed to P_s = 0.005, P_e = 0.1. These are relatively high error rates for a real biological environment. It was chosen to focus on such high error rates both for maximal data protection and as preparation for an in vivo experiment, which includes a mutant strain simulating a high mutation rate.

[0289] After fixing P_s = 0.005, P_e = 0.1, the experiment was run to find the highest value of r without any decoding failures. The value found was r = 0.655 times the lower bound in (9), evaluated at M = 64, L = 144, ε = 10^-6. Such a value seems like a big gap from the achievability bound, but the following reason reveals why. Note that the Random Subset Channel plays a role here, as an outer error-correcting code was not designed for the Linear Random Subsets code, as the construction demands. In effect, the Reed-Solomon code protecting the information from message block erasures and substitutions is also protecting it from cases in which the Linear Random Subsets decoder fails. As mentioned, the actual Linear Random Subsets code parameters were fixed for an average subset size of about 4 legal codon changes, as suggested by Figure 14.

[0290] Lastly, the allowed codon changes dictionary D used for the Linear Random Subsets code was chosen. Three different dictionaries were designed, each for a different reason.

[0291] The first dictionary is the most conservative one, allowing only synonymous changes of codons. Also, a codon change is allowed only if the codon appeared in at least 5% of all aligned orthologs at the same location, meaning it is favored by evolution in some sense. The second dictionary is the most liberal one, allowing both synonymous and non-synonymous changes of codons. For synonymous changes, the minimal appearance threshold was set to 5% as before, and for non-synonymous changes, the threshold was set to 25%. The reason is to allow such a change, which alters the protein content of the microorganism, only if evolution favors such a change in many cases. The third dictionary, a middle ground, is designed exactly as the second dictionary, just with a higher threshold for synonymous changes of 8%. The reason is to restrict the allowed changes to highly evolution-favored ones while still allowing both synonymous and non-synonymous changes of codons. Simulations were performed with all three dictionaries.

[0292] The abstract of an article summarizing this work was taken as the information message; it was transformed into bits using ASCII encoding (1 byte per character) and truncated to the message length in bits, which in this case is k = r · n. Then, the encoder took the information bits, protected them with a shortened Reed-Solomon code (with message length of k bits, code-word length of n = 8448 bits and symbol size m_RS = 12 bits), split the code-word into M = 64 chunks of L − n_id = 132 bits each, and attached to each chunk an encoded identifier of length n_id = 12 bits (which is itself a Reed-Solomon code-word, generated by a shortened Reed-Solomon encoder with message length of k_id = log M = 6 bits and symbol size of m_RS = 3 bits), to generate all M = 64 message blocks of length L = 144 bits each. The modulator operated over all M = 64 message blocks and generated synthesized sites of length 540 nucleotides (or 180 codons) each, one site per message block. Each modulation operated according to the LRSC inner matrix H (SEQ ID NO: 6-7), with sub-sites of n_LRSC = 15 nucleotides, each sub-site modulating q_LRSC = 2 information nucleotides (or 4 bits). Also, the legal codon changes dictionary used was one of the three dictionaries defined above. The actual Linear Random Subsets encoder was implemented according to Figure 31, minimizing the Codon Adaptation Index.
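The block-splitting step of the encoder can be sketched as below. This is a simplified assumption: the identifier here is a plain binary index, whereas the text prepends a Reed-Solomon-coded identifier; the function name is hypothetical.

```python
# Sketch of splitting the outer code-word into M message blocks and
# prefixing each with an identifier. Simplification: plain binary
# identifiers instead of the Reed-Solomon-coded ones used in the text.
def make_message_blocks(codeword_bits: str, M: int, n_id: int):
    """Split codeword_bits into M equal chunks, each prefixed with its
    n_id-bit index, yielding M message blocks."""
    chunk = len(codeword_bits) // M
    blocks = []
    for i in range(M):
        ident = format(i, f"0{n_id}b")
        payload = codeword_bits[i * chunk:(i + 1) * chunk]
        blocks.append(ident + payload)
    return blocks

# With an 8448-bit code-word, M = 64 and n_id = 12, each block is
# 12 + 132 = 144 bits, matching L above.
blocks = make_message_blocks("01" * 4224, M=64, n_id=12)
```

Each of the 64 resulting blocks is 144 bits long, consistent with the L = 144 bit message block size used throughout this example.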

[0293] Next, the Living Organism Channel was simulated. The subset constraints of the Random Subsets Channel are always obeyed, so the actual channel simulated is the Noisy Shuffling Sampling Channel. First, each synthesized site is erased with a probability of P_e = 0.1, in an i.i.d. manner. Then, all sites are permuted randomly. Finally, each nucleotide suffers a substitution with a probability of P_s = 0.005. The demodulator accepts N ≤ M sites, performing Linear Random Subsets decoding with the LRSC inner matrix H, thus transforming each read site into a sequence of information nucleotides and finally into L bits. The decoder is fed with N message blocks, and its first mission is to sort them back into their original order. For that purpose, it decodes the identifiers in a maximum-likelihood fashion, as explained hereinabove, exploiting the knowledge that it expects to see all integers in the range {0, . . . , M − 1}, and thus maps each identifier to the closest integer in {0, . . . , M − 1} in the sense of Hamming distance. After the message blocks are sorted, the identifiers are disregarded, zero bits are inserted in place of erased message blocks, and a Reed-Solomon decoder is run over all the corrupted data. At this point, the stored data was checked to see whether it was reconstructed successfully.

[0294] The following tables summarize the results for all three dictionaries and the 40 possible coding sites that passed the initial filtering described hereinabove. All sites in each gene are 540 nucleotides (180 AA) long, starting at the nucleotide given in the second column. Code rate is the actual number of information bits stored per codon, while the achievable rate is the product of the lower bound of the Random Subsets channel capacity and the lower bound of the Noisy Shuffling Sampling channel capacity. The number of Reed-Solomon symbol errors indicates how many Reed-Solomon symbols were corrupted at the receiver, and the error correction capability of the code is given too. The average modulator failure rate is the fraction of cases in which no encoding was possible for some sub-site using the Linear Random Subsets method. The CAI (codon adaptation index) is given before and after the writing process, as is the average number of cases in which an amino acid was changed (a non-synonymous change).
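The Noisy Shuffling Sampling Channel described in [0293] can be sketched as below: i.i.d. site erasure with probability P_e, random permutation of the surviving sites, and i.i.d. nucleotide substitution with probability P_s. The function name is hypothetical and a fixed seed is used only to make the sketch reproducible.

```python
# Sketch of the Noisy Shuffling Sampling Channel: erase each site i.i.d.
# with probability P_e, shuffle the survivors, then substitute each
# nucleotide i.i.d. with probability P_s.
import random

def noisy_shuffling_sampling(sites, P_e=0.1, P_s=0.005, rng=None):
    """sites: list of DNA strings. Returns the corrupted, shuffled reads."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility only
    bases = "ACGT"
    survivors = [s for s in sites if rng.random() >= P_e]   # erasures
    rng.shuffle(survivors)                                  # random order
    out = []
    for s in survivors:
        noisy = [
            rng.choice([b for b in bases if b != c]) if rng.random() < P_s else c
            for c in s
        ]
        out.append("".join(noisy))
    return out
```

Every surviving read keeps its original length (substitutions never insert or delete), which is what lets the demodulator decode each read site independently.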

[0295] Table 4: Simulation results, first dictionary (allowing only synonymous changes with at least 5% appearance among all aligned orthologs).

[0296] Table 5: Simulation results, second dictionary (allowing synonymous changes with at least 5% appearance and non-synonymous changes with at least 25% appearance among all aligned orthologs).

[0297] Table 6: Simulation results, third dictionary (allowing synonymous changes with at least 8% appearance and non-synonymous changes with at least 25% appearance among all aligned orthologs).

[0298] Table 6 includes only the sites with non-zero code rates in Table 5.

[0299] The achievable rate in all tables above refers to the product of the achievable finite block-length coding rate of the Noisy Shuffling Sampling channel and the achievable coding rate of the Completely Random Subsets channel, with appropriate parameters (M = 64, L = 144, ε = 10^-6 are fixed, and K is calculated according to the actual dictionary estimated for each site, with observed values of K = 3, 4 in all cases).

[0300] As one can see, the sites in genes CDC48, RPT3, RPT2 and ALA1 can be used with all dictionary options, all with a code rate very close to the achievable rate, except for RPT3, for which further parameter optimization (such as a higher value of n_LRSC) can be performed.

[0301] Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.