


Title:
MACHINE LEARNING MODEL TRAINING USING AN ANALOG PROCESSOR
Document Type and Number:
WIPO Patent Application WO/2022/115704
Kind Code:
A1
Abstract:
Described herein are techniques of training a machine learning model and performing inference using an analog processor. Some embodiments mitigate the loss in performance of a machine learning model resulting from a lower precision of an analog processor by using an adaptive block floating-point representation of numbers for the analog processor. Some embodiments mitigate the loss in performance of a machine learning model due to noise that is present when using an analog processor. The techniques involve training the machine learning model such that it is robust to noise.

Inventors:
NAIR LAKSHMI (US)
WIDEMANN DAVID (US)
WALTER DAVID (US)
BUNANDAR DARIUS (US)
LAZOVICH TOMO (US)
LEVKOVA LUDMILA (US)
DRONEN NICHOLAS (US)
FORSYTHE MARTIN (US)
BASUMALLIK AYON (US)
HARRIS NICHOLAS (US)
Application Number:
PCT/US2021/061013
Publication Date:
June 02, 2022
Filing Date:
November 29, 2021
Assignee:
LIGHTMATTER INC (US)
NAIR LAKSHMI (US)
WIDEMANN DAVID (US)
WALTER DAVID (US)
International Classes:
G06N3/04; G06N3/063; G06N3/067; G06N3/08
Foreign References:
US20200272795A1 (2020-08-27)
US62631194P
US62631605P
Attorney, Agent or Firm:
ALAM, Saad et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A system comprising: circuitry comprising an analog processor; wherein the circuitry is configured to train a machine learning model, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using the analog processor.

2. The system of claim 1, wherein performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor.

3. The system of claim 2, wherein performing the matrix operation further comprises: determining a scaling factor for a first portion of a second matrix involved in the matrix operation; scaling the first portion of the second matrix using the scaling factor for the first portion of the second matrix to obtain a scaled first portion of the second matrix; programming the analog processor using the scaled first portion of the second matrix; and performing, by the analog processor programmed using the scaled first portion of the second matrix, the matrix operation to generate the first output.

4. The system of claim 2, wherein performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a second portion of the first matrix; scaling the second portion of the first matrix using the scaling factor for the second portion of the first matrix to obtain a scaled second portion of the first matrix; programming the analog processor using the scaled second portion of the first matrix; performing, by the analog processor programmed using the scaled second portion of the first matrix, the matrix operation to generate a second output; and determining the result of the matrix operation using the second output generated by the analog processor.

5. The system of claim 4, wherein the first scaling factor is different from the second scaling factor.

6. The system of claim 2, wherein the first portion of the first matrix is a first vector of the first matrix.

7. The system of claim 2, wherein determining the result of the matrix operation using the first output generated by the analog processor comprises: determining an output scaling factor for the first output generated by the analog processor using the first scaling factor; scaling the first output using the output scaling factor to obtain a scaled first output; and determining the result of the matrix operation using the scaled first output.

8. The system of claim 2, wherein: determining the first scaling factor for the first portion of the first matrix comprises determining a maximum absolute value of the first portion of the first matrix; and scaling the first portion of the first matrix using the first scaling factor comprises scaling values of the first portion of the first matrix using the maximum absolute value of the first portion of the first matrix.

9. The system of claim 2, wherein: the analog processor is configured to operate using a fixed-point representation of values; and programming the analog processor using the scaling of the first portion of the first matrix comprises converting values of the scaled first portion of the first matrix into the fixed-point representation.

10. The system of claim 9, wherein: the circuitry further comprises a digital controller configured to operate using a floating-point representation of values; and a dynamic range of the floating-point representation is greater than a dynamic range of the fixed point representation.

11. The system of claim 1, wherein performing the one or more matrix operations to learn parameters of the machine learning model using the analog processor comprises amplifying or attenuating at least one analog signal used to perform a matrix operation of the one or more matrix operations.

12. The system of claim 11, wherein amplifying or attenuating the at least one analog signal used to perform the matrix operation comprises: programming the analog processor using multiple copies of a matrix involved in the matrix operation.

13. The system of claim 11, wherein amplifying or attenuating the at least one analog signal used to perform the matrix operation comprises: distributing a zero pad among different portions of a matrix involved in the matrix operation; and programming of the analog processor using the matrix with the zero pad distributed among different portions of the matrix.

14. The system of claim 1, wherein performing the matrix operation comprises performing the matrix operation between a matrix of parameters of the machine learning model and/or a matrix of inputs to the machine learning model.

15. The system of claim 1, wherein performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises performing the one or more matrix operations to determine outputs of the machine learning model for a set of inputs.

16. The system of claim 1, wherein performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: performing the one or more matrix operations using the analog processor to determine a gradient of a loss function; and updating parameters of the machine learning model using the gradient of the loss function.

17. The system of claim 1, wherein the training comprises performing a plurality of iterations, wherein performing each of at least some of the plurality of iterations comprises: determining updated parameters of the machine learning model; and setting parameters of the machine learning model to an average of the updated parameters and parameters set at one or more previous iterations of the plurality of iterations.

18. The system of claim 1, wherein the matrix operation is a matrix multiplication.

19. The system of claim 1, wherein the analog processor is a photonic processor, wherein performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises processing light using the photonic processor.

20. The system of claim 1, wherein the circuitry further comprises a digital controller.

21. The system of claim 1, wherein the machine learning model is a neural network.

22. A method comprising: training a machine learning model using a system comprising an analog processor, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using the analog processor.

23. The method of claim 22, wherein performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor.

24. The method of claim 23, wherein performing the matrix operation further comprises: determining a scaling factor for a first portion of a second matrix involved in the matrix operation; scaling the first portion of the second matrix using the scaling factor for the first portion of the second matrix to obtain a scaled first portion of the second matrix; programming the analog processor using the scaled first portion of the second matrix; and performing, by the analog processor programmed using the scaled first portion of the second matrix, the matrix operation to generate the first output.

25. A non-transitory computer-readable storage medium storing instructions that, when executed by circuitry including an analog processor, cause the circuitry to perform: training a machine learning model, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using the analog processor.

26. The non-transitory computer-readable storage medium of claim 25, wherein performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor.

Description:
MACHINE LEARNING MODEL TRAINING USING AN ANALOG PROCESSOR RELATED APPLICATIONS [0001] This application claims priority under 35 U.S.C. 119(e) to U.S. Provisional Pat. App. Serial No. 63/119,472 entitled “ANALOG LINEAR PHOTONIC PROCESSOR THAT INTERFACES WITH FLOATING-POINT TENSORS,” filed on November 30, 2020, and U.S. Provisional Pat. App. Serial No.63/160,592 entitled “ROBUST MACHINE LEARNING TRAINING WITH ANALOG LINEAR PROCESSOR,” filed on March 12, 2021, each of which is incorporated by reference in its entirety. FIELD [0002] This application relates to techniques of using an analog processor for training a machine learning model, and for using an analog processor for performing inference with a machine learning model. In particular, the techniques utilize an analog processor to perform matrix operations involved in the training and inference. BACKGROUND [0003] Matrix operations may be used in training a machine learning model to learn parameters of the machine learning model. For example, a system may perform stochastic gradient descent to learn weights of a neural network. Performing stochastic gradient descent may involve a forward pass in which the system determines outputs of the neural network for a set of inputs. Each input may be a matrix of input values, and the weights of each layer of the neural network may be stored in a respective matrix. The system may perform a forward pass to determine an output of the neural network for an input by performing a series of matrix multiplications to obtain an output. The system may perform matrix operations to update weights of the neural network using outputs of the neural network for the set of inputs. SUMMARY [0004] Described herein are techniques of training a machine learning model and performing inference using an analog processor. Some embodiments mitigate the loss in performance of a machine learning model resulting from a lower precision of an analog processor by using an adaptive block floating-point representation of numbers for the analog processor. Some embodiments mitigate the loss in performance of a machine learning model due to noise that is present when using an analog processor. The techniques involve training the machine learning model such that it is robust to noise. [0005] According to some embodiments, a system is provided. The system comprises: circuitry comprising an analog processor; wherein the circuitry is configured to train a machine learning model, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using the analog processor. According to some embodiments, the circuitry further comprises a digital controller. According to some embodiments, the system comprises a hybrid analog-digital processor that includes the circuitry. According to some embodiments, the machine learning model is a neural network. 
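As the background notes, the forward pass of a neural network reduces largely to matrix multiplications, which is why accelerating those operations accelerates training. The following is an illustrative sketch (not part of the application) of a two-layer forward pass written as plain NumPy matrix multiplications; the layer sizes, names, and the ReLU activation are arbitrary assumptions.

```python
# Illustrative sketch only: a forward pass expressed as matrix multiplications,
# the operations that an analog processor could accelerate.
import numpy as np

def forward_pass(x, w1, w2):
    """Two-layer network: each layer is a matrix multiplication plus a
    nonlinearity; the matrix multiplications dominate the compute cost."""
    h = np.maximum(x @ w1, 0.0)   # first layer matmul + ReLU
    return h @ w2                 # second layer matmul

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 64))      # batch of 8 inputs
w1 = rng.standard_normal((64, 128))   # layer-1 weights
w2 = rng.standard_normal((128, 10))   # layer-2 weights
y = forward_pass(x, w1, w2)           # shape (8, 10)
```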
[0006] According to some embodiments, performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor. [0007] According to some embodiments, performing the matrix operation further comprises: determining a scaling factor for a first portion of a second matrix involved in the matrix operation; scaling the first portion of the second matrix using the scaling factor for the first portion of the second matrix to obtain a scaled first portion of the second matrix programming the analog processor using the scaled first portion of the second matrix; and performing, by the analog processor programmed using the scaled first portion of the second matrix, the matrix operation to generate the first output. [0008] According to some embodiments, performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a second portion of the first matrix; scaling the second portion of the first matrix using the scaling factor for the second portion of the first matrix to obtain a scaled second portion of the first matrix; programming the analog processor using the scaled second portion of the first matrix; performing, by the analog processor programmed using the scaled second portion of the first matrix, the matrix operation to generate a second output; and determining the result of the matrix operation using the second output generated by the analog processor. [0009] According to some embodiments, the first scaling factor is different from the second scaling factor. According to some embodiments, the first portion of the first matrix is a first vector of the first matrix. [0010] According to some embodiments, determining the result of the matrix operation using the first output generated by the analog processor comprises: determining an output scaling factor for the first output generated by the analog processor using the first scaling factor; scaling the first output using the output scaling factor to obtain a scaled first output; and determining the result of the matrix operation using the scaled first output. [0011] According to some embodiments: determining the first scaling factor for the first portion of the first matrix comprises determining a maximum absolute value of the first portion of the first matrix; and scaling the first portion of the first matrix using the first scaling factor comprises scaling values of the first portion of the first matrix using the maximum absolute value of the first portion of the first matrix. 
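As an illustrative sketch of the flow summarized in paragraphs [0006]-[0011]: a matrix portion and an input vector are each scaled by their maximum absolute values, the operation is performed by a stand-in for the analog processor (here, simple fixed-point rounding), and the result is rescaled with an output scaling factor derived from the input scaling factors. The function names, the 8-bit default, and the use of rounding as a proxy for the analog hardware are assumptions.

```python
import numpy as np

def analog_matvec(w_scaled, x_scaled, bits=8):
    """Stand-in for the analog processor: values are assumed to lie in
    [-1, 1] and are rounded to a fixed-point grid before multiplying."""
    step = 1.0 / (2 ** (bits - 1) - 1)
    q = lambda a: np.round(a / step) * step
    return q(w_scaled) @ q(x_scaled)

def scaled_matvec(w_row, x, bits=8):
    """One portion (row) of the first matrix and one input vector, per
    [0006]-[0011]: scale each by its maximum absolute value, multiply,
    then undo the scaling on the output."""
    s_w = np.max(np.abs(w_row))      # scaling factor for the first matrix portion
    s_x = np.max(np.abs(x))          # scaling factor for the second matrix portion
    out = analog_matvec(w_row / s_w, x / s_x, bits)
    return out * (s_w * s_x)         # output scaling factor derived from the inputs
```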
[0012] According to some embodiments, the analog processor is configured to operate using a fixed-point representation of values; and programming the analog processor using the scaling of the first portion of the first matrix comprises converting values of the scaled first portion of the first matrix into the fixed-point representation. [0013] According to some embodiments, the circuitry further comprises a digital controller configured to operate using a floating-point representation of values; and a dynamic range of the floating-point representation is greater than a dynamic range of the fixed point representation. [0014] According to some embodiments, performing the one or more matrix operations to learn parameters of the machine learning model using the analog processor comprises amplifying or attenuating at least one analog signal used to perform a matrix operation of the one or more matrix operations. [0015] According to some embodiments, amplifying or attenuating the at least one analog signal used to perform the matrix operation comprises: programming the analog processor using multiple copies of a matrix involved in the matrix operation. According to some embodiments, amplifying or attenuating the at least one analog signal used to perform the matrix operation comprises: distributing a zero pad among different portions of a matrix involved in the matrix operation; and programming of the analog processor using the matrix with the zero pad distributed among different portions of the matrix. [0016] According to some embodiments, performing the matrix operation comprises performing the matrix operation between a matrix of parameters of the machine learning model and/or a matrix of inputs to the machine learning model. According to some embodiments, performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises performing the one or more matrix operations to determine outputs of the machine learning model for a set of inputs. [0017] According to some embodiments, performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: performing the one or more matrix operations using the analog processor to determine a gradient of a loss function; and updating parameters of the machine learning model using the gradient of the loss function. [0018] According to some embodiments, the training comprises performing a plurality of iterations, wherein performing each of at least some of the plurality of iterations comprises: determining updated parameters of the machine learning model; and setting parameters of the machine learning model to an average of the updated parameters and parameters set at one or more previous iterations of the plurality of iterations. [0019] According to some embodiments, the matrix operation is a matrix multiplication. According to some embodiments, the matrix operation may involve a matrix obtained from a tensor. According to some embodiments, the matrix may be obtained by reshaping the tensor and copying a reshaped tensor into the matrix. According to some embodiments, the matrix may be a portion of the tensor. [0020] According to some embodiments, the analog processor is a photonic processor, wherein performing the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises processing light using the photonic processor. 
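Paragraph [0018] describes setting parameters to an average of the newly updated parameters and parameters set at previous iterations. A minimal sketch of one way such an averaged update could look follows; the learning rate, the plain gradient step, and averaging over a stored history of iterates are assumptions, since the paragraph does not fix these details.

```python
import numpy as np

def averaged_update(params, grad, history, lr=0.01):
    """Sketch of [0018]: compute updated parameters, then set the parameters
    to an average of the update and parameters from previous iterations.
    `history` is a list of parameter snapshots from earlier iterations."""
    updated = params - lr * grad                   # ordinary gradient step
    stack = np.stack(history + [updated], axis=0)
    new_params = stack.mean(axis=0)                # average with prior iterates
    history.append(new_params)                     # remember what was set
    return new_params, history
```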
[0021] According to some embodiments, a method is provided. The method comprises: using a system comprising an analog processor to perform: training a machine learning model, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using the analog processor. [0022] According to some embodiments, performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor. [0023] According to some embodiments, performing the matrix operation further comprises: determining a scaling factor for a first portion of a second matrix involved in the matrix operation; scaling the first portion of the second matrix using the scaling factor for the first portion of the second matrix to obtain a scaled first portion of the second matrix; programming the analog processor using the scaled first portion of the second matrix; and performing, by the analog processor programmed using the scaled first portion of the second matrix, the matrix operation to generate the first output. [0024] According to some embodiments, a non-transitory computer-readable storage medium storing instructions is provided. The instructions, when executed by a system, cause the system to perform: training a machine learning model, the training comprising performing one or more matrix operations to learn parameters of the machine learning model using an analog processor of the system. [0025] According to some embodiments, performing a matrix operation of the one or more matrix operations to learn the parameters of the machine learning model using the analog processor comprises: determining a scaling factor for a first portion of a first matrix involved in the matrix operation; scaling the first portion of the first matrix using the scaling factor for the first portion of the first matrix to obtain a scaled first portion of the first matrix; programming the analog processor using the scaled first portion of the first matrix; performing, by the analog processor programmed using the scaled first portion of the first matrix, the matrix operation to generate a first output; and determining a result of the matrix operation using the first output generated by the analog processor. BRIEF DESCRIPTION OF THE DRAWINGS [0026] Various aspects and embodiments will be described herein with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale. Items appearing in multiple figures are indicated by the same or a similar reference number in all the figures in which they appear. [0027] FIG. 1A shows a block diagram of a training system, according to some embodiments of the technology described herein. [0028] FIG. 
1B shows a diagram illustrating interaction among components of the hybrid analog-digital processor (also referred to as "hybrid processor") of FIG. 1A, according to some embodiments of the technology described herein. [0029] FIG. 1C shows a diagram illustrating an environment in which the training system may be used, according to some embodiments of the technology described herein. [0030] FIG. 2 shows a diagram illustrating effects of overamplification, according to some embodiments of the technology described herein. [0031] FIG. 3A illustrates an example matrix multiplication operation that may be performed by the hybrid processor 100, according to some embodiments of the technology described herein. [0032] FIG. 3B illustrates an example tiling that may be used by the hybrid processor 100 to perform the matrix multiplication operation of FIG. 3A, according to some embodiments of the technology described herein. [0033] FIG. 4 shows a flowchart of an example process 400 of training a machine learning model using an analog processor, according to some embodiments of the technology described herein. [0034] FIG. 5 shows a flowchart of an example process of performing a matrix operation between two matrices, according to some embodiments of the technology described herein. [0035] FIG. 6 shows a flowchart of an example process of performing a matrix operation using an analog processor, according to some embodiments of the technology described herein. [0036] FIG. 7 shows a flowchart of an example process of using tiling to perform a matrix operation, according to some embodiments of the technology described herein. [0037] FIG. 8 shows a diagram illustrating performance of an example matrix operation using a process described herein, according to some embodiments of the technology described herein. [0038] FIG. 9 shows a diagram illustrating an example technique of performing overamplification, according to some embodiments of the technology described herein. [0039] FIG. 10 shows a diagram illustrating amplification by copying a matrix, according to some embodiments of the technology described herein. [0040] FIG. 11A shows a diagram illustrating a technique of maintaining overamplification by distributing zero pads among different tiles of a matrix, according to some embodiments of the technology described herein. [0041] FIG. 11B shows a diagram illustrating a technique of performing overamplification by using a copy of a matrix as a pad, according to some embodiments of the technology described herein. [0042] FIG. 12 shows a flowchart of an example process of performing quantization-aware training (QAT) of a neural network, according to some embodiments of the technology described herein. [0043] FIG. 13 shows a flowchart of an example process of injecting noise into layer(s) of a neural network during training of the neural network, according to some embodiments of the technology described herein. [0044] FIG. 14 shows a diagram illustrating injection of noise into a layer of a neural network, according to some embodiments of the technology described herein. [0045] FIG. 15 shows a flowchart of an example process of updating parameters of a machine learning model during training, according to some embodiments of the technology described herein. [0046] FIG. 16 illustrates an example processor, according to some embodiments of the technology described herein. 
[0047] FIG.17 shows graphs illustrating accuracy versus gain factor of various neural network models trained according to some embodiments of the technology described herein. [0048] FIG. 18 shows a block diagram of an example computer system that may be used to implement some embodiments of the technology described herein. DETAILED DESCRIPTION [0049] Described herein are techniques of training a machine learning model using an analog processor. In particular, the techniques involve using an analog processor to perform matrix operations involved in training a machine learning model. By using an analog processor to perform the matrix operations, the techniques may train a machine learning model more efficiently than training the machine learning model in only the digital domain. [0050] Analog processors can typically perform certain matrix operations such as matrix multiplications faster and with more energy efficiency than processors that perform the matrix operations only in the digital domain. Accordingly, a system may achieve better speed and energy efficiency by performing the matrix operations in the analog domain. By encoding information in the analog domain, a system may use less energy per compute operation than a digital-only counterpart. Such matrix operations are a large portion of the computations involved in training a machine learning model and performing inference using the machine learning model. For example, such matrix operations may be used to train a neural network and/or perform inference with the trained neural network. Accordingly, the inventors have recognized that training a machine learning model and performing inference with a machine learning model may be performed more efficiently using an analog processor. However, one challenge with using an analog processor to perform matrix operations for training a machine learning model and/or performing inference with a machine learning model is that the analog processor may be limited to a lower bit precision than a digital processor. For example, a digital processor may use a floating-point representation with 32 bits (e.g., float32 representation) to represent a number whereas an analog processor may use a fixed-point representation of 8 or fewer bits. Thus, analog processors are typically not suitable for applications where training a machine learning model requires high-precision computing in which more than 8 bits are used to represent numbers. Furthermore, a system that uses an analog processor may be more susceptible to noise in computations due to: (1) a digital-to-analog converter (DAC) that may need to be used to convert digital inputs to analog signals for use by the analog processor; (2) analog components performing operations within the analog processor; and/or (3) an analog- to-digital converter (ADC) that may need to be used to convert analog signals output by the analog processor into a digital domain. [0051] Accordingly, the inventors have developed techniques of using an analog processor to train a machine learning model and to perform inference with a machine learning model that mitigate the loss in performance due to the lower precision of the analog processor. The techniques allow an analog processor to be used for training a machine learning model and performing inference with a machine learning model in applications requiring high precision computing. 
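Before continuing, the precision gap noted in paragraph [0050] can be made concrete with a small sketch: float32 values are rounded onto a symmetric 8-bit fixed-point grid and the worst-case rounding error is measured. The symmetric grid and the uniform test data are assumptions used only for illustration.

```python
import numpy as np

def quantize_fixed_point(x, bits=8):
    """Quantize values assumed to lie in [-1, 1] onto a symmetric
    fixed-point grid with 2**(bits-1) - 1 positive levels."""
    levels = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * levels), -levels, levels) / levels

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, size=10_000).astype(np.float32)   # float32 baseline
err = np.abs(x - quantize_fixed_point(x, bits=8))
print("max 8-bit quantization error:", err.max())             # on the order of 1/255
```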
In particular, the techniques allow use of an analog processor to perform matrix operations (e.g., matrix multiplication) involved in training and inference. This allows a system to realize speed and energy efficiency improvements provided by an analog processor when training a machine learning model and/or performing inference. The techniques utilize a novel adaptive block floating-point (ABFP) number representation for matrices when operating in the analog domain. In an ABFP representation, a matrix is represented as: (1) scaled portions of the matrix obtained using corresponding scaling factors; and (2) the scaling factors. As an illustrative example, a matrix may be divided into multiple vectors (e.g., row vectors or column vectors). In this example, each of the vectors may have an associated scaling factor that is shared by all the values in the vector. The ABFP representation of the matrix may comprise vectors scaled using their scaling factors, and the scaling factors. [0052] Use of the ABFP representation in a matrix operation involves scaling a matrix or portion thereof such that its values are normalized to a range (e.g., [-1, 1]), and then performing matrix operations in the analog domain using the scaled matrix or portion thereof. An output of the matrix operation performed in the analog domain may then be scaled using scaling factors from ABFP representations of one or more matrices involved in the matrix operation to obtain the output of the matrix operation. By scaling matrix portions using the ABFP representation for matrices in a matrix operation, techniques described herein may reduce loss in precision due to differences in precisions of values in a matrix and also reduce quantization error. Accordingly, some embodiments allow use of an analog processor in training and/or inference of a machine learning model in applications requiring high precision computing. [0053] The inventors have further developed techniques of training a machine learning model that mitigate effects of noise on performance of the machine learning model. The techniques incorporate noise into outputs during training such that parameters learned during training are more robust to noise due to use of an analog processor. Some embodiments mitigate the effect of noise using quantization-aware training (QAT) in which an analog processor may be used to perform forward pass operations (e.g., matrix operations). Thus, outputs of the forward pass operations may incorporate noise (e.g., from the analog processor and/or an ADC). The system may then update parameters of a machine learning model based on the outputs. In this manner, QAT may provide parameters learned during training that are robust to noise in the data. Some embodiments mitigate the effect of noise by injecting noise representative of noise introduced as a result of using an analog processor into outputs of a machine learning model. For example, some embodiments inject noise representative of noise introduced as a result of using an analog processor into outputs of layers of a neural network during training. Such embodiments utilize a differential noise model that is obtained using a difference between layer outputs determined using an analog processor and those determined using a processor that operates only in the digital domain. [0054] In some embodiments, a matrix operation may be a matrix multiplication. The matrix multiplication may be a segment of another operation. 
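Returning to the noise-injection approach of paragraph [0053], a rough sketch is shown below: a differential noise model is fit from the difference between analog-path and digital-path layer outputs, and noise drawn from that model is added to layer outputs during training. Modeling the differential noise as zero-mean Gaussian with a single per-layer standard deviation is an assumption; the application does not commit to a specific noise model here.

```python
import numpy as np

def fit_differential_noise(analog_outputs, digital_outputs):
    """Differential noise model per [0053]: characterize the difference between
    layer outputs from the analog path and the digital path. Reducing it to a
    single per-layer standard deviation is an assumption of this sketch."""
    return np.std(analog_outputs - digital_outputs)

def layer_forward_with_noise(x, w, noise_std, rng, training=True):
    """During training, add noise representative of the analog hardware to the
    layer output so the learned weights become robust to it."""
    y = x @ w
    if training and noise_std > 0:
        y = y + rng.normal(0.0, noise_std, size=y.shape)
    return y
```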
In some cases, training of a machine learning model may involve tensor operations with tensors of order greater than 2. Some embodiments may perform such tensor operations by performing matrix operations using matrices obtained from a tensor. For example, a system may obtain matrix slices from a tensor, and perform the matrix operation one matrix slice at a time. Accordingly, matrix operations described herein may be used to perform tensor operations such as tensor multiplications. [0055] Some embodiments described herein address all the above-described issues that the inventors have recognized with conventional systems. However, it should be appreciated that not every embodiment described herein addresses every one of these issues. It should also be appreciated that embodiments of the technology described herein may be used for purposes other than addressing the above-discussed issues. Example embodiments are described herein using a neural network as an example machine learning model. However, some embodiments may be used to train and perform inference with other machine learning models. For example, some embodiments may be used to train and perform inference with a support vector machine (SVM), a logistic regression model, a linear regression model, or other suitable machine learning model to a target device. Some embodiments may be used for any machine learning model in which training and/or inference involve performance of a matrix operation. [0056] FIG. 1A shows a block diagram of a training system 101, according to some embodiments of the technology described herein. The training system 101 may be configured to train a machine learning model 112 to obtain a trained machine learning model 114. [0057] The training system 101 may comprise a computer system. The training system 101 may include components in addition to those shown in FIG. 1A. For example, the training system 101 may include additional processor(s) and storage hardware in addition to the hybrid analog-digital processor 100 and the datastore 110. In some embodiments, the training system 101 may be a computing device. For example, the training system 101 may be a desktop, laptop, smartphone, tablet, or other computing device. In some embodiments, the training system 101 may include multiple computing devices. In some embodiments, the training system 101 may be implemented on a server. [0058] As shown in FIG. 1A, the training system 101 includes a hybrid analog-digital processor 100 and a datastore 110. A hybrid analog-digital processor may also be referred to herein as a “hybrid processor”. The hybrid processor 100 includes a digital controller 102, a digital-to-analog converter (DAC) 104, and an analog processor 106, analog-to-digital converter (ADC) 108. The components 102, 104, 106, 108 of the hybrid processor 100 and optionally other components, may be collectively referred to herein as “circuitry”. In some embodiments, the components 102, 104, 106, 108 may be formed on a common chip. In some embodiments, the components 102, 104, 106, 108 may be on different chips bonded together. In some embodiments, the components 102, 104, 106, 108 may be connected together via electrical bonds (e.g., wire bonds or flip-chip bump bonds). In some embodiments, the components 102, 104, 106, 108 may be implemented with chips in the same technology node. In some embodiments, the components 102, 104, 106, 108 may be implemented with chips in different technology nodes. 
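As noted in paragraph [0054] above, a tensor operation of order greater than 2 can be carried out one matrix slice at a time. A minimal sketch follows; the shapes and names are illustrative only.

```python
import numpy as np

def batched_matmul_by_slices(tensor, weights):
    """Sketch of [0054]: perform a higher-order tensor operation as a sequence
    of matrix operations, one matrix slice at a time.
    tensor  : shape (batch, m, k) -- order-3 tensor of inputs
    weights : shape (k, n)        -- weight matrix
    """
    out = np.empty((tensor.shape[0], tensor.shape[1], weights.shape[1]))
    for i, matrix_slice in enumerate(tensor):   # one matrix slice at a time
        out[i] = matrix_slice @ weights         # each slice is a plain matmul
    return out
```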
[0059] In some embodiments, the training system 101 may include a host central processing unit (CPU). In some embodiments, the training system 101 may include a dynamic random-access memory (DRAM) unit. In some embodiments, the host CPU may be configured to communicate with the hybrid processor 100 using a communication protocol. For example, the host CPU may communicate with the hybrid processor 100 using peripheral component interconnect express (PCI-e), joint test action group (JTAG), universal serial bus (USB), and/or another suitable protocol. In some embodiments, the hybrid processor 100 may include a DRAM controller that allows the hybrid processor 100 direct memory access from the DRAM unit to memory of the hybrid processor 100. For example, the hybrid processor 100 may include a double data rate (DDR) unit or a high-bandwidth memory unit for access to the DRAM unit. In some embodiments, the host CPU may be configured to broker DRAM memory access between the hybrid processor 100 and the DRAM unit. [0060] The digital controller 102 may be configured to control operation of the hybrid processor 100. The digital controller 102 may comprise a digital processor and memory. The memory may be configured to store software instructions that can be executed by the digital processor. The digital controller 102 may be configured to perform various operations by executing software instructions stored in the memory. In some embodiments, the digital controller 102 may be configured to perform operations involved in training the machine learning model 112 and/or performing inference using the trained machine learning model 114. Example operations of the digital controller 102 are described herein with reference to FIG. 1B. [0061] The DAC 104 is a system that converts a digital signal into an analog signal. The DAC 104 may be used by the hybrid processor 100 to convert digital signals into analog signals for use by the analog processor 106. The DAC 104 may be any suitable type of DAC. In some embodiments, the DAC 104 may be a resistive ladder DAC, a switched-capacitor DAC, a switched-resistor DAC, a binary-weighted DAC, a thermometer-coded DAC, a successive-approximation DAC, an oversampling DAC, an interpolating DAC, and/or a hybrid DAC. In some embodiments, the digital controller 102 may be configured to use the DAC 104 to program the analog processor 106. The digital controller 102 may provide a digital signal as input to the DAC 104 to obtain a corresponding analog signal, and set analog components of the analog processor 106 using the analog signal. [0062] The analog processor 106 includes various analog components. The analog components may include an analog mixer that mixes an input analog signal with an analog signal encoded into the analog processor 106. The analog components may include amplitude modulator(s), current steering circuit(s), amplifier(s), attenuator(s), and/or other analog components. In some embodiments, the analog processor 106 may include complementary metal-oxide-semiconductor (CMOS) components, radio frequency (RF) components, microwave components, and/or other types of analog components. In some embodiments, the analog processor 106 may comprise a photonic processor. Example photonic processors are described herein. In some embodiments, the analog processor 106 may include a combination of photonic and analog electronic components. [0063] The analog processor 106 may be configured to perform one or more matrix operations. The matrix operation(s) may include a matrix multiplication. 
The analog components may include components designed to perform a matrix multiplication. In some embodiments, the analog processor 106 may be configured to perform matrix operations for training of the machine learning model 112 (e.g., a neural network). For example, the analog processor 106 may perform matrix operations for performing forward pass and backpropagation operations of a stochastic gradient training technique. In some embodiments, the analog processor 106 may be configured to perform matrix operations for performing inference using the trained machine learning model 114. [0064] The ADC 108 is a system that converts an analog signal into a digital signal. The ADC 108 may be used by the hybrid processor 100 to convert analog signals output by the analog processor 106 into digital signals. The ADC 108 may be any suitable type of ADC. In some embodiments, the ADC 108 may be a parallel comparator ADC, a flash ADC, a successive-approximation ADC, a Wilkinson ADC, an integrating ADC, a sigma-delta ADC, a pipelined ADC, a cyclic ADC, a time-interleaved ADC, or other suitable ADC. [0065] The datastore 110 may be storage hardware for use by the hybrid processor 100 in storing information. In some embodiments, the datastore 110 may include a hard drive (e.g., a solid state hard drive and/or a hard disk drive). In some embodiments, at least a portion of the datastore 110 may be external to the hybrid processor 100. For example, at least the portion of the datastore 110 may be storage hardware of a remote database server from which the hybrid processor 100 may obtain data. The hybrid processor 100 may be configured to access information from the remote storage hardware through a communication network (e.g., the Internet, a local area network (LAN), or other suitable communication network). In some embodiments, the datastore 110 may include cloud-based storage resources. [0066] As shown in FIG. 1A, the datastore 110 stores training data. The training data may include sample inputs and/or sample outputs for use in training a machine learning model (e.g., a neural network). In some embodiments, the sample outputs may be target labels corresponding to the sample inputs. The sample inputs and target labels may be used by the hybrid processor 100 in performing a supervised learning technique. In some embodiments, the training data may include sample inputs without sample outputs. In such embodiments, the sample inputs may be used by the hybrid processor 100 to perform an unsupervised learning technique. [0067] The training system 101 may be configured to train the machine learning model 112 using the hybrid processor 100. The training system 101 may be configured to train the machine learning model 112 using the training data stored in the datastore 110. The training system 101 may be configured to perform training to obtain a trained machine learning model 114 with learned parameters 114A. In some embodiments, the training system 101 may be configured to train the machine learning model 112 using a supervised learning technique. For example, the training system 101 may perform gradient descent (e.g., stochastic gradient descent, batch gradient descent, mini-batch gradient descent, etc.) to learn the parameters 114A. In some embodiments, the training system 101 may be configured to train the machine learning model 112 using an unsupervised learning technique. 
For example, the training system 101 may use a clustering algorithm to train the machine learning model 112. In some embodiments, the training system 101 may be configured to train the machine learning model 112 using a semi- supervised learning technique. For example, the training system 101 may determine a set of classes using clustering, label sample inputs with the determined set of classes, and then use a supervised learning technique to train the machine learning model 112 using the labeled sample inputs. [0068] In some embodiments, the machine learning model 112 may be a neural network. For example, the machine learning model 112 may be a convolutional neural network (CNN), a recurrent neural network (RNN), a transformer neural network, a recommendation system, a graph neural network, or any other suitable neural network. In some embodiments, the machine learning model 112 may be a support vector machine (SVM), a logistic regression model, a linear regression model, or other suitable machine learning model. [0069] The machine learning model 112 includes parameters 112A that are to be learned during training to obtain the parameters 114A of the trained machine learning model 114. For example, the parameters 112A may be weights and/or coefficients of a neural network that are learned during training. In another example, the parameters 112A may be parameters indicating one or more hyperplanes of an SVM. In another example, the parameters 112A may be coefficients of a regression model. The parameters 112A may be iteratively updated during training (e.g., during performance of a gradient descent training algorithm). In some embodiments, the parameters 112A may be randomized values. In some embodiments, the parameters 112A may be learned from previously performing training. For example, the machine learning model 112 with parameters 112A may have been obtained by training another machine learning model. In such embodiments, the training system 101 may be configured to perform training to tune the machine learning model 112 to obtain the machine learning model 114. For example, the parameters 112A, 114A may include weights and/or biases of different layers of neural networks 112, 114. [0070] The hybrid processor 100 may be configured to perform matrix operations for training the machine learning model 112 and/or for using the trained machine learning model 114 to perform inference. In some embodiments, the matrix operations may be operations that are performed as part of a training technique (e.g., a gradient descent training technique). The matrix operations may include matrix operations to determine outputs of the machine learning model 112 for inputs, and matrix operations to determine one or more gradients of a loss function with respect to parameters of the machine learning model 112 (e.g., neural network weights) based on the determined outputs. [0071] The hybrid processor 100 may be configured to use the analog processor 106 to perform one or more matrix operations. Use of the analog processor 106 to perform the matrix operations may accelerate computation and require less power to perform. To perform a matrix operation using the analog processor 106, the digital controller 102 may program the analog processor 106 using matrices involved in a matrix operation. The digital controller 102 may program the analog processor 106 using the DAC 104. 
Programming the analog processor 106 may involve setting certain characteristics of the analog processor 106 according to the matrices involved in the matrix operation. In one example, the analog processor 106 may include multiple electronic amplifiers (e.g., voltage amplifiers, current amplifiers, power amplifiers, transimpedance amplifiers, transconductance amplifiers, operational amplifiers, transistor amplifiers, and/or other amplifiers). In this example, programming the analog processor 106 may involve setting gains of the electronic amplifiers based on the matrices. In another example, the analog processor 106 may include multiple electronic attenuators (e.g., voltage attenuators, current attenuators, power attenuators, and/or other attenuators). In this example, programming the analog processor 106 may involve setting the attenuations of the electronic attenuators based on the matrices. In another example, the analog processor 106 may include multiple electronic phase shifters. In this example, programming the analog processor 106 may involve setting the phase shifts of the electronic phase shifters based on the matrices. In another example, the analog processor 106 may include an array of memory devices (e.g., flash or ReRAM). In this example, programming the analog processor 106 may involve setting conductances and/or resistances of each of the memory cells. The analog processor 106 may perform the matrix operation to obtain an output. The digital controller 102 may obtain a digital version of the output through the ADC 108. [0072] The hybrid processor 100 may be configured to use the analog processor 106 to perform matrix operations by using an ABFP representation for matrices involved in an operation. The hybrid processor 100 may be configured to determine, for each matrix involved in an operation, scaling factor(s) for one or more portions of the matrix ("matrix portion(s)"). In some embodiments, a portion of a matrix may be the entire matrix. In some embodiments, a portion of a matrix may be a submatrix within the matrix. The hybrid processor 100 may be configured to scale a matrix portion using its scaling factor to obtain a scaled matrix or matrix portion. For example, values of the scaled matrix portion may be normalized within a range (e.g., [-1, 1]). The hybrid processor 100 may program the analog processor using the scaled matrix portion. In some embodiments, the hybrid processor 100 may be configured to program the analog processor 106 using the scaled matrix portion by converting the scaled matrix portion into a fixed-point representation used by the analog processor 106. In some embodiments, the fixed-point representation may be asymmetric around zero, with a 1-to-1 correspondence to integer values from -2^(B-1) to 2^(B-1) - 1, where B is the bit precision. In some embodiments, the fixed-point representation may be symmetric around zero, with a 1-to-1 correspondence to integer values from -(2^(B-1) - 1) to 2^(B-1) - 1. The analog processor 106 may be configured to perform the matrix operation using the scaled matrix portion to generate an output. The hybrid processor 100 may be configured to determine an output scaling factor for the output generated by the analog processor 106. In some embodiments, the hybrid processor 100 may be configured to determine the output scaling factor based on the scaling factor determined for the corresponding input. For example, the hybrid processor 100 may determine the output scaling factor to be an inverse of the input scaling factor. 
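A compact sketch of the fixed-point conversion described in paragraph [0072] is shown below. The integer ranges follow the asymmetric and symmetric mappings as stated there; the choice of rounding and clipping, and the function interface, are assumptions.

```python
import numpy as np

def to_fixed_point(x, bits=8, symmetric=True):
    """Map scaled values assumed to lie in [-1, 1] onto the integer codes that
    program the analog processor (sketch of the representation in [0072])."""
    if symmetric:
        # symmetric around zero: -(2^(B-1) - 1) .. 2^(B-1) - 1
        lo, hi = -(2 ** (bits - 1) - 1), 2 ** (bits - 1) - 1
    else:
        # asymmetric around zero: -2^(B-1) .. 2^(B-1) - 1
        lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    scale = hi  # one code per 1/hi of input amplitude (an assumption)
    return np.clip(np.round(x * scale), lo, hi).astype(int)
```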
The hybrid processor 100 may be configured to scale the output using the output scaling factor to obtain a scaled output. The hybrid processor 100 may be configured to determine a result of the matrix operation using the scaled output. [0073] FIG. 1B shows a diagram illustrating interaction among components of the hybrid analog-digital processor 100 of FIG. 1A, according to some embodiments of the technology described herein. FIG. 1B illustrates functional components of each of the components 102, 104, 106, 108 of the hybrid processor 100. [0074] As shown in FIG.1B, the digital controller 102 includes an input generation component 102A, a scaling component 102B, and an accumulation component 102C. [0075] The input generation component 102A may be configured to generate inputs to a matrix operation to be performed by the hybrid processor 100. In some embodiments, the input generation component 102A may be configured to generate inputs to a matrix operation by determining one or more matrices involved in the matrix operation. For example, the input generation component 102A may determine two matrices to be multiplied in a matrix multiplication operation. In some embodiments, the input generation component 102A may be configured to divide matrices involved in a matrix operation into multiple portions such that the result of the matrix operation may be obtained by performing multiple operations using the multiple portions. In such embodiments, the input generation component 102A may be configured to generate input to a matrix operation by extracting a portion of a matrix for an operation. For example, the input generation component 102A may extract a vector (e.g., a row, column, or portion thereof) from a matrix. In another example, the input generation component 102A may extract a portion of an input vector for a matrix operation. To illustrate, the input generation component 102A may obtain a matrix of input values (also referred to as “input vector”) for a layer of a neural network, and a matrix of weights (also referred to as “weight matrix”) for the layer of the neural network. A matrix multiplication may need to be performed between the input matrix and the weight matrix. In this example, the input generation component 102A may: (1) divide the weight matrix into multiple smaller weight matrices; and (2) divide the input vector into multiple vectors corresponding to the multiple weight matrices. The matrix operation between the input vector and the weight matrix may then be performed by: (1) performing the matrix operation between each of the multiple weight matrices and the corresponding vectors; and (2) accumulating the outputs. [0076] In some embodiments, the input generation component 102A may be configured to obtain one or more matrices from a tensor for use in performing matrix operations. For example, the input generation component 102A may divide a tensor of input values and/or a tensor of weight values. The input generation component 102A may be configured to perform reshaping or data copying to obtain the matrices. For example, for a convolution operation between a weight kernel tensor and an input tensor, the input generation component 102A may generate a matrix using the weight kernel tensor, in which column values of the matrix correspond to a kernel of a particular output channel. 
The input generation component 102A may generate a matrix using the input tensor, in which each row of the matrix includes values from the input tensor that will be multiplied and summed with the kernel of a particular output channel stored in columns of the matrix generated using the weight kernel tensor. A matrix operation may then be performed between the matrices obtained from weight kernel tensor and the input tensor. [0077] The scaling component 102B of the digital controller 102 may be configured to scale matrices (e.g., vectors) involved in a matrix operation. The matrices may be provided by the input generation component 102A. For example, the scaling component 102B may scale a matrix or portion thereof provided by the input generation component 102A. In some embodiments, the scaling component 102B may be configured to scale each portion of a matrix. For example, the scaling component 102B may separately scale vectors (e.g., row vectors or column vectors) of the matrix. The scaling component 102B may be configured to scale a portion of a matrix by: (1) determining a scaling factor for the portion of the matrix; and (2) scaling the portion of the matrix using the scaling factor to obtain a scaled portion of the matrix. In some embodiments, the scaling component 102B may be configured to scale a portion of a matrix by dividing values in the portion of the matrix by the scaling factor. In some embodiments, the scaling component 102B may be configured to scale a portion of a matrix by multiplying values in the portion of the matrix by the scaling factor. [0078] The scaling component 102B may be configured to determine a scaling factor for a portion of a matrix using various techniques. In some embodiments, the scaling component 102B may be configured to determine a scaling factor for a portion of a matrix to be a maximum absolute value of the portion of the matrix. The scaling component 102B may then divide each value in the portion of the matrix by the maximum absolute value to obtain scaled values in the range [-1, 1]. In some embodiments, the scaling component 102B may be configured to determine a scaling factor for a portion of a matrix to be a norm of the portion of the matrix. For example, the scaling component 102B may determine a 2-norm of a vector. In some embodiments, the scaling component 102B may be configured to determine a scaling factor as a whole power of 2. For example, the scaling component 102B may determine a logarithmic value of a maximum absolute value of the portion of the matrix to be the scaling factor. In such embodiments, the scaling component 102B may further be configured to round, ceil, or floor a logarithmic value to obtain the scaling factor. In some embodiments, the scaling component 102B may be configured to determine the scaling factor statistically. In such embodiments, the scaling component 102B may pass sample inputs through a machine learning model, collect statistics on the outputs, and determine the scaling factor based on the statistics. For example, the scaling component 102B may determine a maximum output of the machine learning model based on the outputs, and use the maximum output as the scaling factor. In some embodiments, the scaling component 102B may be configured to determine a scaling factor by performing a machine learning training technique (e.g., backpropagation or stochastic gradient descent). The scaling component 102B may be configured to store scaling factors determined for portions of matrices. 
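Paragraph [0078] lists several candidate scaling factors for a matrix portion. The sketch below illustrates three of them (maximum absolute value, 2-norm, and a whole power of two obtained by rounding a logarithm up); the function name and interface are illustrative only.

```python
import numpy as np

def scaling_factor(portion, method="max_abs"):
    """Candidate scaling factors for a matrix portion, per [0078]."""
    portion = np.asarray(portion, dtype=float)
    if method == "max_abs":                       # maximum absolute value
        return np.max(np.abs(portion))
    if method == "two_norm":                      # 2-norm of the portion
        return np.linalg.norm(portion.ravel(), ord=2)
    if method == "power_of_two":                  # whole power of 2 (ceil of log2)
        return 2.0 ** np.ceil(np.log2(np.max(np.abs(portion))))
    raise ValueError(method)

row = np.array([0.3, -1.7, 0.05])
scaled = row / scaling_factor(row, "max_abs")     # values now within [-1, 1]
```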
For example, the scaling component 102B may store scaling factors determined for respective rows of weight matrices of a neural network. [0079] The scaling component 102B may be configured to limit scaled values of a scaled portion of a matrix to be within a desired range. For example, the scaling component 102B may limit scaled values of a scaled portion of a matrix to between [-1, 1]. In some embodiments, the scaling component 102B may be configured to limit scaled values to a desired range by clamping or clipping. For example, the scaling component 102B may apply the following clamping function to the scaled values: clamp(x) = min(max(x, -1), 1) to set the scaled values between [-1, 1]. In some embodiments, the scaling component 102B may be configured to determine scaling factor for a portion of a matrix that is less than the maximum absolute value of the portion of the matrix. In some such embodiments, the scaling component 102B may be configured to saturate scaled values. For example, the scaling component 102B may saturate a scaled value at a maximum of 1 and a minimum of -1. [0080] The scaling component 102B may be configured to determine a scaling factor at different times. In some embodiments, the scaling component 102B may be configured to determine a scaling factor dynamically at runtime when a matrix is being loaded onto the analog processor. For example, the scaling component 102B may determine a scaling factor for an input vector for a neural network at runtime when the input vector is received. In some embodiments, the scaling component 102B may be configured to determine a scaling factor prior to runtime. The scaling component 102B may determine the scaling factor and store it in the datastore 110. For example, weight matrices of a neural network may be static for a period of time after training (e.g., until they are to be retrained or otherwise updated). The scaling component 102B may determine scaling factor(s) to be used for matrix operations involving the matrices, and store the determined scaling factor(s) for use when performing matrix operations involving the weight matrices. In some embodiments, the scaling component 102B may be configured to store scaled matrix portions. For example, the scaling component 102B may store scaled portions of weight matrices of a neural network such that they do not need to be scaled during runtime. [0081] The scaling component 102B may be configured to amplify or attenuate one or more analog signals for a matrix operation. Amplification may also be referred to herein as “overamplification”. Typically, the number of bits required to represent an output of a matrix operation increases as the size of one or more matrices involved in the matrix operation increases. For example, the number of bits required to represent an output of a matrix multiplication operation increases as the size of the matrices being multiplied increases. The precision of the hybrid processor 100 may be limited to a certain number of bits. For example, the ADC 108 of the hybrid processor may have a bit precision limited to a certain number of bits (e.g., 4, 6, 8, 10, 12, 14). As the number of bits required to represent an output of a matrix operation increases more information is lost from the output of the matrix operation because a fewer number of significant bits can be captured by the number of bits. 
The scaling component 102B may be configured to increase a gain of an analog signal such that a larger number of lower significant bits may be captured in an output, at the expense of losing information in more significant bits. This effectively increases the precision of an output of the matrix operation because the lower significant bits may carry more information for training the machine learning model 112 than the higher significant bits. Techniques of amplifying or attenuating analog signals for a matrix operation are described herein. [0082] FIG.2 shows a diagram 200 illustrating effects of overamplification, according to some embodiments of the technology described herein. The diagram 200 illustrates the bits of values that would be captured for different levels of overamplification. In the example of FIG.2, there is a constant precision of 8 bits available to represent a 22 bit output. When no amplification is performed (“Gain 1”), the output captures the 8 most significant bits b 1 -b 8 of the output as indicated by the set of highlighted blocks 202. When the analog signal is amplified by a factor of 2 (“Gain 2”), the output captures the bits b 2 -b 9 of the output as indicated by the set of highlighted blocks 204. When the analog signal is amplified by a factor of 4 (“Gain 4”), the output captures the bits b 3 -b 10 of the output as indicated by the set of highlighted blocks 206. When the analog signal is amplified by a factor of 8 (“Gain 8”), the output captures the bits b 4 - b 11 of the output as indicated by the set of highlighted blocks 208. As can be understood from FIG. 2, increasing the gain allows the output to capture additional lower significant bits at the expense of higher significant bits. [0083] The accumulation component 102C may be configured to determine an output of a matrix operation between two matrices by accumulating outputs of multiple matrix operations performed using the analog processor 106. In some embodiments, the accumulation component 102C may be configured to accumulate outputs by compiling multiple vectors in an output matrix. For example, the accumulation component 102C may store output vectors obtained from the analog processor (e.g., through the ADC 108) in columns or rows of an output matrix. To illustrate, the hybrid processor 100 may use the analog processor to perform a matrix multiplication between a weight matrix and an input matrix to obtain an output matrix. In this example, the accumulation component 102C may store the output vectors in an output matrix. In some embodiments, the accumulation component 102C may be configured to accumulate outputs by summing the output matrix with an accumulation matrix. The final output of a matrix operation may be obtained after all the output matrices have been accumulated by the accumulation component 102C. [0084] In some embodiments, the hybrid processor 100 may be configured to determine an output of a matrix operation using tiling. Tiling may divide a matrix operation into multiple operations between smaller matrices. Tiling may allow reduction in size of the hybrid processor 100 by reducing the size of the analog processor 106. As an illustrative example, the hybrid processor 100 may use tiling to divide a matrix multiplication between two matrices into multiple multiplications between portions of each matrix. The hybrid processor 100 may be configured to perform the multiple operations in multiple passes. 
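A rough digital sketch of this tiled, multi-pass evaluation follows; the 2x2 tile size, the use of NumPy, and all names are illustrative choices (compare the example of FIG. 3B below):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 3))

# Tile A into four 2x2 blocks and B into two row blocks.
A1, A2 = A[:2, :2], A[:2, 2:]
A3, A4 = A[2:, :2], A[2:, 2:]
B1, B2 = B[:2, :], B[2:, :]

# Each output row block is a sum of two smaller products, which the
# accumulation logic would add together after separate passes.
C1 = A1 @ B1 + A2 @ B2
C2 = A3 @ B1 + A4 @ B2
C = np.vstack([C1, C2])

assert np.allclose(C, A @ B)   # tiling reproduces the full multiplication
```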
In such embodiments, the accumulation component 102C may be configured to combine results obtained from operations performed using tiling into an output matrix. [0085] FIG. 3A illustrates an example matrix multiplication operation that may be performed by the hybrid processor 100, according to some embodiments of the technology described herein. For example, the matrix multiplication may be performed as part of training a neural network to determine an output of the neural network for an input. In the example of FIG. 3A, the matrix A may store the weights of a layer, and the matrix B may be an input matrix provided to the layer. The system may perform matrix multiplication between matrix A and matrix B to obtain output matrix C. The output matrix C may be an output of a layer of the neural network. In another example, the system may perform a convolution operation between a kernel matrix and an input matrix to obtain an output matrix. [0086] FIG. 3B illustrates an example tiling that may be used by the hybrid processor 100 to perform the matrix multiplication operation of FIG. 3A, according to some embodiments of the technology described herein. In FIG. 3B, the hybrid processor 100 divides the matrix A into four tiles—A1, A2, A3, and A4. In this example, each tile of A has two rows and two columns (though other numbers of rows and columns are also possible). The hybrid processor 100 divides the matrix B into tile rows B1 and B2, and matrix C is segmented into rows C1 and C2. The row C1 and C2 are given by the following expressions: (1) C1 = A1 ∗ B1 + A2 ∗ B2 (2) C2 = A3 ∗ B1 + A4 ∗ B2 [0087] In equation 1 above, the hybrid processor 100 may perform the multiplication of A1 ∗ B1 separately from the multiplication of A2 ∗ B2. The accumulation component 102C may subsequently accumulate the results to obtain C1. Similarly, in equation 2, the hybrid processor 100 may perform the multiplication of A3 ∗ B1 separately from the multiplication of A4 ∗ B2. The accumulation component 102C may subsequently accumulate the results to obtain C2. [0088] The DAC 104 may be configured to convert digital signals provided by the digital controller 102 into analog signals for use by the analog processor 106. In some embodiments, the digital controller 102 may be configured to use the DAC 104 to program a matrix into the analog processor 106. The digital controller 102 may be configured to input the matrix into the DAC 104 to obtain one or more analog signals for the matrix. The analog processor 106 may be configured to perform a matrix operation using the analog signal(s) for the matrix. In some embodiments, the DAC 104 may be configured to program a matrix using a fixed point representation of numbers used by the analog processor 106. [0089] The analog processor 106 may be configured to perform matrix operations on matrices programmed into the analog processor 106 (e.g., through the DAC 104) by the digital controller 102. As shown in FIG. 1B, in some embodiments, the matrix operations may include matrix operations for training the machine learning model 112 using gradient descent. For example, the matrix operations include forward pass matrix operations 106A to determine layer outputs of the neural network for a set of inputs (e.g., for an iteration of a gradient descent learning technique). The matrix operations further include backpropagation matrix operations 106B to determine one or more gradients. 
The gradient(s) may be used to update parameters (e.g., weights) of the neural network (e.g., in an iteration of a gradient descent learning technique). Examples of forward pass matrix operations 106A and backpropagation matrix operations 106B are described herein. [0090] In some embodiments, the analog processor 106 may be configured to perform a matrix operation in multiple passes using matrix portions (e.g., portions of an input matrix and/or a weight matrix) determined by the digital controller 102. The analog processor 106 may be programmed using scaled matrix portions, and perform the matrix operations. For example, the analog processor 106 may be programmed with scaled portion(s) of an input matrix (e.g., a scaled vector from the input matrix), and scaled portion(s) of a weight matrix (e.g., multiple scaled rows of the weight matrix). The programmed analog processor 106 may perform the matrix operation between the scaled portions of the input matrix and the weight matrix to generate an output. The output may be provided to the ADC 108 to be converted back into a digital floating-point representation (e.g., to be accumulated by accumulation component 102C to generate an output). [0091] In some embodiments, a matrix operation may be repeated multiple times, and the results may be averaged to reduce the amount of noise present within the analog processor. In some embodiments, the matrix operations may be performed between certain bit precisions of the input matrix and the weight matrix. For example, an input matrix can be divided into two input matrices, one for the most significant bits in the fixed-point representation and another for the least significant bits in the fixed-point representation. A weight matrix may also be divided into two weight matrices, the first with the most significant bit portion and the second with the least significant bit portion. Multiplication between the original weight and input matrix may then be performed by performing multiplications between: (1) the most-significant weight matrix and the most-significant input matrix; (2) the most-significant weight matrix and the least-significant input matrix; (3) the least-significant weight matrix and the most-significant input matrix; and (4) the least-significant weight matrix and the least-significant input matrix. The resulting output matrix can be reconstructed by taking into account the output bit significance. [0092] The ADC 108 may be configured to receive an analog output of the analog processor 106, and convert the analog output into a digital signal. In some embodiments, the ADC 108 may include logical units and circuits that are configured to convert values from a fixed-point representation to a digital floating-point representation used by the digital controller 102. For example, the logical units and circuits of the ADC 108 may convert a matrix from a fixed-point representation of the analog processor 106 to a 16 bit floating-point representation ("float16" or "FP16"), a 32 bit floating-point representation ("float32" or "FP32"), a 64 bit floating-point representation ("float64" or "FP64"), a 16 bit brain floating-point format ("bfloat16"), a 32 bit brain floating-point format ("bfloat32"), or another suitable floating-point format. In some embodiments, the logical units and circuits may be configured to convert values from a first fixed-point representation to a second fixed-point representation. The first and second fixed-point representations may have different bit widths.
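The bit-sliced multiplication described above (dividing each matrix into most-significant and least-significant portions and recombining the partial products) can be sketched digitally as follows; the 8-bit width, the even split, and the restriction to non-negative fixed-point codes are illustrative simplifications:

```python
import numpy as np

BITS = 8           # total fixed-point bits per value (illustrative)
HALF = BITS // 2   # bits per slice

def split(matrix):
    """Split non-negative fixed-point integers into most- and least-significant halves."""
    hi = matrix >> HALF
    lo = matrix & ((1 << HALF) - 1)
    return hi, lo

rng = np.random.default_rng(1)
W = rng.integers(0, 2**BITS, size=(3, 4))
X = rng.integers(0, 2**BITS, size=(4, 2))

W_hi, W_lo = split(W)
X_hi, X_lo = split(X)

# Four partial products, weighted by the bit significance of each slice.
result = ((W_hi @ X_hi) << (2 * HALF)) \
       + ((W_hi @ X_lo) << HALF) \
       + ((W_lo @ X_hi) << HALF) \
       + (W_lo @ X_lo)

assert np.array_equal(result, W @ X)
```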
In some embodiments, the logical units and circuits may be configured to convert a value into unums (e.g., posits and/or valids). [0093] FIG.1C shows a diagram illustrating an environment in which the training system may be used, according to some embodiments of the technology described herein. The environment includes the training system 101, a communication network 112, and a device 118. [0094] In the example of FIG. 1C, the training system 101 comprises a server. For example, the training system 101 may train the machine learning model 112 to obtain the trained machine learning model 114, and transmit the trained machine learning model 114 through the communication network 116 to the device 118. The trained machine learning model 114 may be trained using techniques described herein. [0095] In the example of FIG. 1C, the device 118 is a mobile device. In some embodiments, the device 118 may be any suitable computing device. For example, the device 118 may be a laptop, computer, smartwatch, sensor, smart glasses, tablet, or other suitable computing device. The device 118 may be configured to obtain the machine learning model 114 from the training system 101. The device 118 may be configured to store the learned parameters 114A in memory of the device. The device 118 may be configured to use the learned parameters 114A to perform inference. To illustrate, the device 118 may receive a trained neural network with learned parameters (e.g., weights and biases). The device 118 may use the trained neural network to make an inference. For example, the device 118 may use the trained neural network to enhance images captured by a camera of the device 118, perform object detection in images of the device 118, predict a medical diagnosis using medical image data obtained by the device 118, predict a sensed value of a sensor of the device 118, or other inference using the neural network. [0096] Although in the example embodiment of FIG. 1C the training system 101 is shown separate from the device 118, in some embodiments, the training system 101 may be a component of the device 118. In such embodiments, the training system 101 does not need to transmit a trained machine learning model through the communication network 116. The device 118 may be configured to use the machine learning model 114 trained by the training system 101. [0097] In some embodiments, the device 118 may include the hybrid processor 100 described herein in reference to FIGs. 1A-1B. The device 118 may be configured to use the hybrid processor 100 to perform inference. The hybrid processor 100 may be configured to perform matrix operation(s) involved in performing inference using the analog processor 106. Example techniques of using the analog processor 106 to perform inference are described herein. The device 118 may be configured to use the hybrid processor 100 to determine an output of the neural network for an input by performing matrix operations to obtain an output of the neural network. [0098] FIG. 4 shows a flowchart of an example process 400 of training a machine learning model using an analog processor, according to some embodiments of the technology described herein. The process 400 may be performed by the training system 101 described herein with reference to FIGs. 1A-1B. The training system 101 may be configured to perform the process 400 using the hybrid processor 100. For example, the process 400 may be performed to train a neural network. 
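At a high level, the flow of process 400 can be sketched as the following training loop; the single linear layer, the squared-error loss, and the `matmul` argument (a stand-in for the analog-processor-backed matrix multiply of blocks 406 and 408) are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def train(model_params, samples, targets, matmul, learning_rate=0.01, epochs=1):
    """Skeleton of process 400: forward and backpropagation matrix operations
    are routed through `matmul`, which stands in for the analog path."""
    w, b = model_params                       # a single linear layer for illustration
    for _ in range(epochs):
        for x, y in zip(samples, targets):    # block 404: select sample input(s)
            out = matmul(w, x) + b            # block 406: forward pass matrix operation
            grad_out = out - y                # gradient of a squared-error loss w.r.t. the output
            grad_w = np.outer(grad_out, x)    # block 408: backpropagation matrix operation
            w -= learning_rate * grad_w       # block 410: update parameters using the gradient
            b -= learning_rate * grad_out
    return w, b

rng = np.random.default_rng(8)
w0, b0 = rng.standard_normal((3, 5)), np.zeros(3)
xs, ys = rng.standard_normal((10, 5)), rng.standard_normal((10, 3))
train((w0, b0), xs, ys, matmul=lambda a, v: a @ v)   # np.matmul stands in for the analog path
```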
In some embodiments, the process 400 may be performed as part of a supervised learning technique (e.g., gradient descent). [0099] Process 400 begins at block 402, where the system obtains training data including sample inputs for the machine learning model and corresponding outputs. In some embodiments, the corresponding outputs may be target outputs of the machine learning model for their respective inputs. The outputs may be used in process 400 to train parameters of the machine learning model. In some embodiments, the outputs may be pre-determined. For example, the machine learning model may be a neural network for image enhancement. In this example, the inputs may be input images that are to be enhanced by the neural network. The outputs may be target enhanced images corresponding to the input images. In another example, the machine learning model may be a neural network for determining a medical diagnosis of whether a subject has a medical condition. In this example, the inputs may be information about medical subjects, and the outputs may be diagnosis results of the medical conditions made by clinicians for the subjects. In some embodiments, the system may be configured to determine the outputs corresponding to the sample inputs. For example, the system may use a clustering technique to cluster the sample inputs into multiple different clusters, and then label each of the sample inputs based on which of the clusters the sample input belongs to. In this example, the label of each sample input may be the output corresponding to the sample input. [0100] The sample inputs may be matrices storing feature values. For example, the matrices may be images storing pixel values. In some embodiments, the matrices may be vectors storing feature values. For example, each vector may store multiple feature values (e.g., subject information, pixel values, or other feature values). In some embodiments, the outputs may be matrices. For example, the outputs may be output images. In another example, the outputs may be output vectors. In some embodiments, the outputs may be single values. For example, each output may be a classification or an output value on a continuous scale (e.g., a probability value, or sensor output value). [0101] Next, process 400 proceeds to block 404, where the system selects one or more sample inputs of the obtained sample inputs. In some embodiments, the system may be configured to randomly select the sample input(s). In some embodiments, the system may be configured to select a single sample. For example, the system may be using a stochastic gradient descent technique in which the system updates parameters of the model for each sample input. In some embodiments, the system may be configured to select multiple sample inputs. For example, the system may be using a batch or mini-batch gradient descent technique in which the system updates parameters of the model using multiple sample inputs in each iteration. [0102] Next, process 400 proceeds to block 406, where the system performs forward pass matrix operations using the input(s) using an analog processor (e.g., analog processor 106 described herein with reference to FIGs. 1A-1C). In the example of a neural network, the neural network may include multiple layers. For a layer l of the neural network, a forward pass matrix operations may include matrix operations to determine an output y l of the layer (“layer output”) for an input x l to the layer (“layer input”) may be given by equation 3 below. 
(3) y_l = w_l ∗ x_l + b_l. In equation 3 above, w_l is a weight matrix that is multiplied by input matrix x_l. A bias tensor b_l is added to the result of the matrix multiplication. The output y_l is then fed to a nonlinear function to produce an input to a subsequent layer, or an output of the neural network. The system may be configured to perform the matrix operation of equation 3 multiple times for multiple layers of the neural network to obtain an output of the neural network. The system may be configured to perform a forward pass for each of the sample input(s) selected at block 404. The system may be configured to perform the forward pass matrix operations using an analog processor (e.g., analog processor 106). Techniques of performing a matrix operation are described herein with reference to FIG. 5 and FIG. 6. [0103] The matrix operations of block 406 may be performed by an analog processor (e.g., to perform the operations more efficiently). For example, the matrix operation given by equation 3 may be performed in the analog domain using the analog processor. Example techniques of how the system may perform the forward pass matrix operations using an analog processor are described herein with reference to FIGs. 5-8. For example, the system may be configured to use ABFP for the matrices involved in the operation to perform the matrix operations using the analog processor. Equation 4 below illustrates how the matrix operation of equation 3 may be performed using an analog processor. [0104] In equation 4 above, Q_out indicates an output of a matrix operation in the analog domain, Q_weight indicates a weight matrix in the analog domain, and Q_in indicates an input matrix in the analog domain. The scaling factor(s) in each equation may be obtained using scaling factor(s) of ABFP representations of the matrices involved in the matrix operation. [0105] Next, process 400 proceeds to block 408, where the system performs backpropagation matrix operations (e.g., matrix multiplications) using an analog processor to obtain a gradient. Continuing with the example above, a neural network may include multiple layers. The backpropagation matrix operations may include operations to determine one or more gradients using outputs of the neural network obtained by performing the forward pass matrix operations at block 406. The hybrid processor 100 may use the gradient of a loss function with respect to an output of the layer to compute the gradient of the loss function with respect to weights, input, and/or bias. The gradient of the loss function with respect to weight may be determined by performing the matrix operation of equation 5 below, and the gradient of the loss function with respect to input may be determined by performing the matrix operation of equation 6 below. (5) ∂L/∂w_l = (∂L/∂y_l) ∗ x_l^T (6) ∂L/∂x_l = w_l^T ∗ (∂L/∂y_l). In equations 5 and 6 above, ∂L/∂y_l is the gradient of the loss function with respect to a layer output, which may be determined using an output determined for the layer. In some embodiments, the system may be configured to determine a gradient for a sample input selected at block 404. In some embodiments, the system may be configured to determine an average gradient of multiple sample inputs selected at block 404. Techniques of performing a matrix operation are described herein with reference to FIG. 5 and FIG. 6. [0106] The matrix operations of block 408 may be performed by an analog processor (e.g., to perform the operations more efficiently).
For example, the matrix operations given by equations 5 and 6 may be performed in the analog domain using the analog processor. Example techniques of how the system may perform the backpropagation matrix operations using an analog processor are described herein with reference to FIGs. 5-8. For example, the system may be configured to use ABFP for the matrices involved in the operation to perform the matrix operations using the analog processor. Equations 7 and 8 below illustrate how the respective matrix operations of equations 5 and 6 may be performed using an analog processor. In equations 7 and 8 above, Q_out indicates an output of a matrix operation in the analog domain, Q_weight indicates a weight matrix in the analog domain, and Q_in indicates an input matrix in the analog domain. The scaling factor(s) in each equation may be obtained using scaling factor(s) of ABFP representations of the matrices involved in the matrix operation. [0107] Next, process 400 proceeds to block 410, where the system updates parameters of the machine learning model using the determined gradient. In some embodiments, the system may be configured to update the parameters of the machine learning model by adding or subtracting a proportion of the gradient. For example, the system may update weights and/or biases of the neural network using the gradient. In some embodiments, the system may be configured to determine updated parameters of the machine learning model as an average of parameters determined in one or more previous iterations. An example process of determining updated parameters is process 1100, described herein with reference to FIG. 11. [0108] FIG. 5 shows a flowchart of an example process 500 of performing a matrix operation between two matrices, according to some embodiments of the technology described herein. In some embodiments, the matrix operation may be a matrix multiplication. Process 500 may be performed by training system 101 described herein with reference to FIGs. 1A-1C. Process 500 may be performed as part of the acts performed at block 406 and/or block 408 of process 400 described herein with reference to FIG. 4. In some embodiments, process 500 may be performed to perform inference (e.g., using hybrid processor 100). For example, the process 500 may be performed to determine an output of a trained neural network for an input to the neural network. [0109] Process 500 begins at block 502, where the system obtains a first and second matrix. As an illustrative example, the matrices may consist of a weight matrix for a neural network layer and an input vector for the neural network layer (e.g., to perform a forward pass matrix operation at block 406). In another example, the matrices may consist of a gradient matrix of a loss function with respect to an output of a layer, and an input vector for the layer (e.g., to perform a backpropagation matrix operation at block 408). In another example, the matrices may consist of a weight matrix for a layer of a neural network and a gradient matrix of a loss function with respect to output of the layer (e.g., to perform a backpropagation matrix operation at block 408). In some embodiments, the matrices may be portions of other matrices. For example, the system may be configured to obtain tiles of the matrices as described herein in reference to FIGs. 3A-3B. To illustrate, the first matrix may be a tile obtained from a weight matrix of a neural network layer, and the second matrix may be an input matrix corresponding to the tile.
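Before walking through the remaining blocks, a compact sketch of the ABFP-style multiplication that processes 500 and 600 implement may be useful; the quantization grid, the per-row weight scales, the single input-vector scale, and all names are illustrative assumptions, and the fixed-point path is only simulated digitally:

```python
import numpy as np

ADC_BITS = 8   # illustrative fixed-point precision of the analog path

def quantize(x, bits=ADC_BITS):
    """Map values in [-1, 1] onto a signed fixed-point grid, simulating the DAC/analog path."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(x, -1.0, 1.0) * levels) / levels

def abfp_matvec(weight_tile, in_vec):
    """Multiply a weight tile by an input vector using per-row weight scales and a
    single input-vector scale, then undo the scaling on the output."""
    w_scales = np.max(np.abs(weight_tile), axis=1)
    w_scales = np.where(w_scales == 0.0, 1.0, w_scales)
    x_scale = max(np.max(np.abs(in_vec)), 1e-12)

    q_weight = quantize(weight_tile / w_scales[:, None])   # programmed into the processor
    q_in = quantize(in_vec / x_scale)

    q_out = q_weight @ q_in                                 # matrix operation in the "analog" domain
    return q_out * w_scales * x_scale                       # per-row output scale = row scale * vector scale

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 6))
x = rng.standard_normal(6)
approx = abfp_matvec(W, x)
print(np.max(np.abs(approx - W @ x)))   # only a small quantization error remains
```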
[0110] Next, process 500 proceeds to block 504, where the system obtains a vector from the second matrix. In some embodiments, the system may be configured to obtain the vector by obtaining a column of the second matrix. For example, the system may obtain a vector corresponding to a tile of a weight matrix. [0111] Next, process 500 proceeds to block 506, where the system performs the matrix operation between the first matrix and the vector using an analog processor. For example, the system may perform a matrix multiplication between the first matrix and the vector. In this example, the output of the matrix multiplication may be a row of an output matrix or a portion thereof. An example technique by which the system performs the matrix operation using the analog processor is described in process 600 described herein with reference to FIG. 6. [0112] Next, process 500 proceeds to block 508, where the system determines whether the matrix operation between the first and second matrix has been completed. In some embodiments, the system may be configured to determine whether the first and second matrix has been completed by determining whether all vectors of the second matrix have been multiplied by the first matrix. For example, the system may determine whether the first matrix has been multiplied by all columns of the second matrix. If the system determines that the matrix operation is complete, then process 500 ends. If the system determines that the matrix operation is not complete, then process 500 proceeds to block 504, where the system obtains another vector from the second matrix. [0113] FIG. 6 shows a flowchart of an example process 600 of performing a matrix operation using an analog processor, according to some embodiments of the technology described herein. Process 600 may be performed by training system 101 described herein with reference to FIGs. 1A-1C. For example, process 600 may be performed by the hybrid processor 100 of the training system 101. Process 600 may be performed at block 506 of process 500 described herein with reference to FIG. 5. Process 600 may be performed at blocks 406 and 408 of process 400 described herein with reference to FIG. 4. In some embodiments, process 600 may be performed to perform inference (e.g., using hybrid processor 100). For example, the process 600 may be performed to determine an output of a trained neural network for an input to the neural network. [0114] Process 600 begins at block 602, where the system obtains one or more matrices. For example, the matrices may consist of a matrix and a vector as described at block 506 of process 500. To illustrate, a first matrix be a weight matrix or portion thereof for a neural network layer, and a second matrix may be an input vector or portion thereof for the neural network later. In another example, the first matrix may be a weight matrix or portion thereof, and a second matrix may be a column vector or portion thereof from an input matrix. [0115] Next, process 600 proceeds to block 604, where the system determines a scaling factor for one or more portions of each matrix involved in the matrix operation (e.g., each matrix and/or vector). In some embodiments, the system may be configured to determine a single scaling factor for the entire matrix. For example, the system may determine a single scaling factor for an entire weight matrix. In another example, the matrix may be a vector, and the system may determine a scaling factor for the vector. 
In some embodiments, the system may be configured to determine different scaling factors for different portions of the matrix. For example, the system may determine a scaling factor for each row or column of the matrix. Example techniques of determining a scaling factor for a portion of a matrix are described herein in reference to scaling component 102B of FIG. 1B. [0116] Next, process 600 proceeds to block 606, where the system determines, for each matrix, scaled matrix portion(s) using the determined scaling factor(s). In some embodiments, the system may be configured to determine: (1) scaled portion(s) of a matrix (e.g., a weight matrix) using scaling factor(s) determined for the matrix; and (2) a scaled vector using a scaling factor determined for the vector. In one example, if the system determined a scaling factor for an entire matrix, the system may scale the entire matrix using the scaling factor. In another example, if the system determined a scaling factor for each row or column of a matrix, the system may scale each row or column using its respective scaling factor. Example techniques of scaling a portion of a matrix using its scaling factor are described herein in reference to scaling component 102B of FIG. 1B. [0117] Next, process 600 proceeds to block 608, where the system programs an analog processor using the scaled matrix portion(s). In some embodiments, for each matrix, the system may be configured to program scaled portion(s) of the matrix into the analog processor. The system may be configured to program the scaled portion(s) of the matrix into the analog processor using a DAC (e.g., DAC 104 described herein with reference to FIGs. 1A-1C). In some embodiments, the system may be configured to program the scaled portion(s) of the matrix into a fixed-point representation. For example, prior to being programmed into the analog processor, the numbers of a matrix may be stored using a floating-point representation used by digital controller 102. After being programmed into the analog processor, the numbers may be stored in a fixed-point representation used by the analog processor 106. In some embodiments, the dynamic range of the fixed-point representation may be less than that of the floating-point representation. [0118] Next, process 600 proceeds to block 610, where the system performs the matrix operation using the analog processor programmed with the scaled matrix portion(s). The analog processor may be configured to perform the matrix operation (e.g., matrix multiplication) using analog signals representing the scaled matrix portion(s) to generate an output. In some embodiments, the system may be configured to provide the output of the analog processor to an ADC (e.g., ADC 108) to be converted into a digital format (e.g., a floating-point representation). [0119] Next, process 600 proceeds to block 612, where the system determines an output scaling factor. The system may be configured to determine the output scaling factor to perform an inverse of the scaling performed at block 606. In some embodiments, the system may be configured to determine an output scaling factor using input scaling factor(s). For example, the system may determine an output scaling factor as a product of input scaling factor(s). In some embodiments, the system may be configured to determine an output scaling factor for each portion of an output matrix (e.g., each row of an output matrix).
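As a simplified illustration of the fixed-point programming and readback described at blocks 608 and 610, the conversion between scaled floating-point values and signed integer codes might look as follows; the 8-bit width and the symmetric code range are assumptions, not the patent's specification:

```python
import numpy as np

DAC_BITS = 8   # illustrative DAC/ADC bit width

def to_fixed_point(scaled, bits=DAC_BITS):
    """Convert scaled values in [-1, 1] to signed integer codes for the DAC (block 608)."""
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(scaled, -1.0, 1.0) * levels).astype(np.int32)

def from_fixed_point(codes, bits=DAC_BITS):
    """Convert integer codes back to floating point, as the ADC/digital controller would (block 610)."""
    levels = 2 ** (bits - 1) - 1
    return codes.astype(np.float64) / levels

scaled_row = np.array([0.03, -0.75, 1.0, -1.0])
codes = to_fixed_point(scaled_row)       # e.g. [  4,  -95, 127, -127]
recovered = from_fixed_point(codes)      # close to scaled_row, within 1/127
```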
For example, if at block 606 the system had scaled each row using a respective scaling factor, the system may determine an output scaling factor for each row using its respective scaling factor. In this example, the system may determine an output scaling factor for each row by multiplying the input scaling factor by a scaling factor of a vector that the row was multiplied with to obtain the output scaling factor for the row. [0120] Next, process 600 proceeds to block 614, where the system determines a scaled output using the output scaling factor(s) determined at block 612. For example, the scaled output may be a scaled output vector obtained by multiplying each value in an output vector with a respective output scaling factor. In another example, the scaled output may be a scaled output matrix obtained by multiplying each row with a respective output scaling factor. In some embodiments, the system may be configured to accumulate the scaled output to generate an output of a matrix operation. For example, the system may add the scaled output to another matrix in which matrix operation outputs are being accumulated. In another example, the system may sum an output matrix with a bias term. [0121] FIG. 7 shows a flowchart of an example process 700 of using tiling to perform a matrix operation, according to some embodiments of the technology described herein. Process 700 may be performed by the training system 101 described herein with reference to FIGs. 1A-1C. In some embodiments, process 700 may be performed as part of process 600 described herein with reference to FIG. 6, process 500 described herein with reference to FIG. 5, and/or process 400 described herein with reference to FIG. 4. [0122] Process 700 begins at block 702, where the system obtains a first and second matrix that are involved in a matrix operation. In some embodiments, the matrix operation may be a matrix multiplication. For example, the first matrix may be a weight matrix for a neural network layer and the second matrix may be an input matrix for the neural network layer. In another example, the first matrix may be a gradient of a loss function with respect to an output of a neural network layer, and the second matrix may be a weight matrix for the neural network layer. In yet another example, the first matrix may be a gradient of a loss function with respect to an output of a neural network layer, and the second matrix may be an input matrix for the neural network layer. [0123] Next, process 700 proceeds to block 704, where the system divides the first matrix into multiple tiles. For example, the system may divide a weight matrix into multiple tiles. An example technique for dividing a matrix into tiles is described herein with reference to FIGs. 3A-3B. [0124] Next, process 700 proceeds to block 706, where the system obtains a tile of the multiple tiles. After selecting a tile at block 706, process 700 proceeds to block 708, where the system obtains corresponding portions of the second matrix. In some embodiments, the corresponding portion(s) of the second matrix may be one or more vectors of the second matrix. For example, the corresponding portion(s) may be one or more column vectors from the second matrix. The column vector(s) may be those that align with the tile matrix for a matrix multiplication. [0125] Next, process 700 proceeds to block 710, where the system performs one or more matrix operations using the tile and the portion(s) of the second matrix.
In some embodiments, the system may be configured to perform process 600 described herein with reference to FIG.6 to perform the matrix operation. In embodiments in which the portion(s) of the second matrix are vector(s) (e.g., column vector(s)) from the second matrix, the system may perform the matrix multiplication in multiple passes. In each pass, the system may perform a matrix multiplication between the tile and a vector (e.g., by programming an analog processor with a scaled tile and scaled vector to obtain an output of the matrix operation.) In some embodiments, the system may be configured to perform the operation in a single pass. For example, the system may program the tile and the portion(s) of the second matrix into an analog processor and obtain an output of the matrix operation performed by the analog processor. [0126] Next, process 700 proceeds to block 712, where the system determines whether all the tiles of the first matrix have been completed. The system may be configured to determine whether all the tiles have been completed by determining whether the matrix operations (e.g., multiplications) for each tile have been completed. If the system determines that the tiles have not been completed, then process 700 proceeds to block 706, where the system obtains another tile. [0127] If the system determines that the tiles have been completed, then process 700 proceeds to block 714, where the system determines an output of the matrix operation between the weight matrix and an input matrix. In some embodiments, the system may be configured to accumulate results of matrix operation(s) performed for the tiles into an output matrix. The system may be configured to initialize an output matrix. For example, for a multiplication of a 4x4 matrix with a 4x2 matrix, the system may initialize 4x2 matrix. In this example, the system may accumulate an output of each matrix operation in the 4x2 matrix (e.g., by adding the output of the matrix operation with a corresponding portion of the output matrix). [0128] FIG. 8 shows a diagram 800 illustrating performance of an example matrix operation by performing process 600 described herein with reference to FIG. 6, according to some embodiments of the technology described herein. In the example of FIG. 8, the analog processor is a photonic processor. In some embodiments, a different type of analog processor may be used instead of a photonic processor in the diagram 800 illustrated by FIG. 8. [0129] The diagram 800 shows a matrix operation in which the matrix 802 is to be multiplied by a matrix 804. The matrix 802 is divided into multiple tiles labeled A (1,1) , A (1,2) , A (1,3) , A (2,1) , A (2,2) , A (2,3) . The diagram 800 shows a multiplication performed between the tile matrix A (1,1) from matrix 802 and a corresponding column vector B (1,1) from the matrix 804. At block 806, a scaling factor (also referred to as “scale”) is determined for the tile A (1,1) , and at block 808 a scale is determined for the input vector B (1,1) . Although the embodiment of FIG. 8 shows that a single scale is determined for the tile at block 806, in some embodiments the system may determine multiple scales for the tile matrix. For example, the system may determine a scale for each row of the tile. Next, at block 810 the tile matrix is normalized using the scale determined at block 806, and the input vector is normalized using the scale determined at block 808. 
The tile matrix may be normalized by determining a scaled tile matrix using the scale obtained at block 806 as described at block 606 of process 600. Similarly, the input vector may be normalized by determined a scaled input vector using the scale obtained at block 808 as described at block 606 of process 600. [0130] The normalized input vector is programmed into the photonic processor as illustrated at reference 814, and the normalized tiled matrix is programmed into the photonic processor as illustrated at reference 816. The tile matrix and the input vector may be programmed into the photonic processor using a fixed-point representation. The tile matrix and input vector may be programmed into the photonic processor using a DAC. The photonic processor performs a multiplication between the normalized tile matrix and input vector to obtain the output vector 818. The output vector 818 may be obtained by inputting an analog output of the photonic processor into an ADC to obtain the output vector 818 represented using a floating-point representation. Output scaling factors are then used to determine the unnormalized output vector 820 from the output vector 818 (e.g., as described at blocks 612-614 of process 600). The unnormalized output vector 820 may then be accumulated into an output matrix for the matrix operation between matrix 802 and matrix 804. For example, the vector 820 may be stored in a portion of a column of the output matrix. The process illustrated by diagram 800 may be repeated for each tile of matrix 802 and corresponding portion(s) of matrix 804 until the multiplication is completed. [0131] FIG. 9 shows a diagram 900 illustrating an example technique of performing overamplification, according to some embodiments of the technology described herein. Process 900 may be performed by training system 101 described herein with reference to FIGs.1A-1C. Process 900 may be performed as part of process 600 described herein with reference to FIG. 6. For example, process 900 may be performed as part of programming an analog processor at block 608 of process 600. As described herein, overamplification may allow the system to capture lower significant bits of an output of an operation that would otherwise not be captured. For example, an analog processor of the system may use a fixed-bit representation of numbers that is limited to a constant number of bits. In this example, the overamplification may allow the analog processor to capture additional lower significant bits in the fixed-bit representation. [0132] Process 900 begins at block 902, where the system obtains a matrix. The system may be configured to obtain a matrix. For example, the system may obtain a matrix as described at blocks 602-606 of process 600 described herein with reference to FIG. 6. The matrix may be a scaled matrix or portion thereof (e.g., a tile or vector). In some embodiments, the system may be configured to obtain a matrix without any scaling applied to the matrix. [0133] Next, process 900 proceeds to block 904, where the system applies amplification to the matrix to obtain an amplified matrix. In some embodiments, the system may be configured to apply amplification to a matrix by multiplying the matrix by a gain factor prior to programming the analog processor. For example, the system may multiply the matrix by a gain factor of 2, 4, 8, 16, 32, 64, 128, or another exponent of 2. 
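The effect of such a gain on which bits of the output survive can be sketched digitally; the 22-bit output width and 8-bit capture width mirror the example of FIG. 2, and clipping the over-amplified high bits is a simplification of what the analog path would do:

```python
TOTAL_BITS = 22   # bits needed to represent the full matrix-operation output
ADC_BITS = 8      # bits the ADC can capture

def captured_bits(value, gain_exponent):
    """Return the ADC_BITS-wide slice of `value` captured when the signal is
    amplified by 2**gain_exponent (gain 1, 2, 4, 8, ...). With no gain the slice
    starts at the most significant bit; each doubling of the gain shifts the
    slice one position toward the least significant bits."""
    amplified = value << gain_exponent
    amplified &= (1 << TOTAL_BITS) - 1        # bits amplified above the top are lost
    return amplified >> (TOTAL_BITS - ADC_BITS)  # the ADC keeps the top ADC_BITS that remain

x = 0b0000001011010110011101                  # an example 22-bit output
for k in range(4):                            # gains 1, 2, 4, 8
    print(f"gain {2**k}: {captured_bits(x, k):08b}")
```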
To illustrate, the system may be limited to b bits for representation of a number output by the analog processor (e.g., through an ADC). A gain factor of 1 results in obtaining b bits of the output starting from the most significant bit, a gain factor of 2 results in obtaining b bits of the output starting from the 2 nd most significant bit, and a gain factor of 3 results in obtaining b bits of the output starting from the 3 rd most significant bit. In this manner, the system may increase lower significant bits captured in an output at the expense of higher significant bits. In some embodiments, a distribution of outputs of a machine learning model (e.g., layer outputs and inference outputs of a neural network) may not reach one or more of the most significant bits. Thus, in such embodiments, capturing lower significant bit(s) at the expense of high significant bit(s) during training of a machine learning model and/or inference may improve the performance of the machine learning model. Accordingly, overamplification may be used to capture additional lower significant bit(s). [0134] In some embodiments, the system may be configured to apply amplification by: (1) obtaining a copy of the matrix; and (2) appending the copy of the matrix to the matrix. FIG.10 shows a diagram 1000 illustrating amplification by copying a matrix, according to some embodiments of the technology described herein. In the example of FIG. 10, the matrix tile 1002A of the matrix 1002 is the matrix that is to be loaded into an analog processor (e.g., a photonic processor) to perform a matrix operation. As shown in FIG. 10, the system copies the tile 1002A column-wise to obtain an amplified matrix. The amplified matrix 1004 is programmed into the analog processor. In the example of FIG. 10, the tile 1002A is to be multiplied by the vector tile 1006. The system makes a copy of the vector tile 1006 row-wise to obtain an amplified vector tile. [0135] In some embodiments, the system may be configured to apply amplification by distributing a zero pad among different portions of a matrix. The size of an analog processor may be large relative to a size of the matrix. The matrix may thus be padded to fill the input of the analog processor. FIG. 11A shows a diagram illustrating a technique of maintaining overamplification by distributing zero pads among different tiles of a matrix, according to some embodiments of the technology described herein. As shown in FIG. 11A, the matrix 1100 is divided into tiles 1100A, 1100B, 1100C, 1100D, 1100E, 1100F. The system distributes zeroes of a zero pad 1102 among the tiles 1100A, 1100B, 1100C, 1100D, 1100E, 1100F. The system may be configured to distribute the zero pad 1102 among the tiles 1100A, 1100B, 1100C, 1100D, 1100E, 1100F instead of appending the zero pad to the end of matrix 1100 to obtain an amplified matrix. [0136] FIG. 11B shows a diagram illustrating a technique of performing overamplification by using a copy of a matrix as a pad, according to some embodiments of the technology described herein. In the example of FIG. 11B, instead of using a zero pad, the system uses a copy of the matrix 1110 as the copy pad 1112 to obtain an amplification of the matrix. The system may be configured to determine the amplification factor based on how many copies the system copies. [0137] In some embodiments, the system may be configured to use any combination of one or more of the overamplification techniques described herein. 
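A sketch of the copy-based amplification of FIG. 10 is shown below; in hardware the benefit is that the enlarged analog signal pushes additional lower significant bits into the ADC window, whereas this purely digital sketch only demonstrates the arithmetic (all names are illustrative):

```python
import numpy as np

def amplify_by_copies(tile, vec, copies=2):
    """Overamplify by programming `copies` side-by-side copies of the tile and
    stacked copies of the vector, so the analog output is `copies` times larger."""
    tile_amp = np.hstack([tile] * copies)      # copy the tile column-wise (FIG. 10)
    vec_amp = np.concatenate([vec] * copies)   # copy the vector row-wise
    return tile_amp, vec_amp

rng = np.random.default_rng(4)
tile = rng.standard_normal((2, 3))
vec = rng.standard_normal(3)

tile_amp, vec_amp = amplify_by_copies(tile, vec, copies=2)
amplified_out = tile_amp @ vec_amp   # what the programmed processor would produce
restored = amplified_out / 2         # undo the gain in the digital domain

assert np.allclose(restored, tile @ vec)
```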
For example, the system may apply a gain factor in addition to copying a matrix. In another example, the system may apply a gain factor in addition to distributing a zero pad among matrix tiles. In another example, the system may copy a matrix in addition to distributing a zero pad among matrix tiles. In some embodiments, the system may be configured to perform overamplification by repeating an operation multiple times. In such embodiments, the system may be configured to accumulate results of the multiple operations and average the results. In some embodiments, the system may be configured to average the results using a digital accumulator. In some embodiments, the system may be configured to average the results using an analog accumulator (e.g., a capacitor). [0138] Returning again to FIG. 9, after obtaining the amplified matrix at block 904, process 900 proceeds to block 906, where the system programs the analog processor using the amplified matrix. After programming the analog processor using the amplified matrix, process 900 proceeds to block 908, where the system performs the matrix operation using the analog processor programmed using the amplified matrix. The system may be configured to obtain an analog output, and provide the analog output to an ADC to obtain a digital representation of the output. [0139] FIG. 12 shows a flowchart of an example process 1200 of performing quantization aware training (QAT) of a neural network, according to some embodiments of the technology described herein. Process 1200 may be performed by training system 101 described herein with reference to FIGs. 1A-1C. [0140] Process 1200 begins at block 1202, where the system obtains a neural network and training data. In some embodiments, the system may be configured to obtain a previously trained neural network. For example, the parameters of the neural network may have been previously learned by previously performing training using a set of training data. In some embodiments, the system may be configured to obtain an untrained neural network in which the parameters have not been learned. For example, the parameters of the neural network may be initialized to random values. The system may be configured to obtain training data including sample inputs and corresponding outputs. The outputs may be target outputs for the sample inputs that are to be used to perform a supervised learning technique (e.g., gradient descent). [0141] Next, process 1200 proceeds to block 1204, where the system performs forward pass operations using an analog processor (e.g., analog processor 106 of hybrid processor 100 in the training system 101). In some embodiments, the forward pass operations may include matrix operations (e.g., matrix multiplications) to determine layer outputs of the neural network. The outputs generated by the analog processor may be provided to an ADC (e.g., ADC 108) to obtain outputs of the forward pass operations. By using the analog processor to perform the forward pass operations, the system obtains layer outputs that incorporate effects of quantization resulting from use of the analog processor. For example, quantization may be caused because the analog processor 106 uses a fewer number of bits than the digital controller 102 of the hybrid processor 100 and/or due to noise of the ADC 108. Using the analog processor to perform the forward pass operations incorporates effects of quantization into the outputs that are subsequently used to set parameters of the neural network. 
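A digital stand-in for the quantization-aware forward pass of block 1204 can be sketched by fake-quantizing each layer output to the ADC's bit width; the layer structure, the ReLU nonlinearity, and the quantizer below are illustrative assumptions rather than the patent's analog computation:

```python
import numpy as np

ADC_BITS = 8

def fake_quantize(x, bits=ADC_BITS):
    """Simulate the precision loss of the analog path by snapping values to a
    fixed-point grid spanning the observed range of `x`."""
    scale = max(np.max(np.abs(x)), 1e-12)
    levels = 2 ** (bits - 1) - 1
    return np.round(x / scale * levels) / levels * scale

def forward_with_quantization(x, weights, biases):
    """One quantization-aware forward pass: each layer's matrix product is
    quantized before the nonlinearity, so training sees the degraded outputs."""
    a = x
    for w, b in zip(weights, biases):
        a = fake_quantize(a @ w + b)   # layer output as the analog path would produce it
        a = np.maximum(a, 0.0)         # ReLU nonlinearity (illustrative choice)
    return a

rng = np.random.default_rng(5)
weights = [rng.standard_normal((8, 16)), rng.standard_normal((16, 4))]
biases = [np.zeros(16), np.zeros(4)]
out = forward_with_quantization(rng.standard_normal((2, 8)), weights, biases)
```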
[0142] In some embodiments, the system may be configured to add noise to outputs of the forward pass operations. For example, the system may add noise in order to train parameters of the neural network to be more robust against noise. Adding noise may further regularize parameters of the neural network by preventing the parameters from overfitting to a training data set. In some embodiments, the system may be configured to add noise to outputs of matrix operations (e.g., matrix multiplications). In some embodiments, the system may be configured to add noise by: (1) obtaining a noise sample from a distribution modeling noise; and (2) adding the noise sample to an output. For example, the system may obtain a noise sample from a Gaussian distribution with mean 0 and a standard deviation commensurate to noise expected from an analog processor. An expected standard deviation may be determined from previously collected outputs of the analog processor (e.g., by determining a standard deviation of previous outputs). In another example, the system may obtain a noise sample from a noise distribution model of analog processor generated using previously collected data. For example, the system may obtain a noise model of a distribution of a difference between target output values of matrix operations and values generated by the analog processor. In another example, the system may obtain a noise model by deriving the noise model from an information-theoretic quantity involving target output values of matrix operations and output values generated by an analog processor. [0143] A distribution of noise values may be obtained by aggregating noise values in various manners. The noise values may be aggregated from previously collected data. In some embodiments, noise values may be aggregated from all output element values. For example, the distribution may include a single distribution for an output tensor. In some embodiments, a distribution may include aggregation of noise values for respective channels of an output tensor. For example, the distribution may include a single distribution for a single channel of the output tensor. Such embodiments may be suitable for a convolutional neural network in which one channel may have a very different range of values relative to another channel. In some embodiments, the distribution may include aggregations of noise values for respective output elements of an output tensor. For example, the distribution may include a single distribution for each element of the output tensor. [0144] In some embodiments, a noise distribution model may be stored in a digital processor as an empirical probability distribution. For example, the noise distribution model may be a histogram of noise samples obtained from an empirical distribution of values obtained using an analog processor. In this example, the system may obtain a noise sample according to the histogram of noise samples. In some embodiments, the noise distribution model may be stored as parameters of a prior distribution using a Bayesian approach. The system may obtain a noise sample from the posterior distribution. The Bayesian approach may provide faster noise sampling and require fewer memory storage resources than an empirical probability distribution. For example, the prior distribution may be sampled using an accelerated sampling algorithm. In some embodiments, the system may obtain a noise sample by comparing an output value of a digital processor against an output value of an analog processor. 
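A minimal sketch of this kind of noise injection is shown below; the Gaussian model, the standard-deviation value, and the per-channel variant are illustrative assumptions (in practice the statistics would come from previously collected analog-processor outputs, as described above):

```python
import numpy as np

rng = np.random.default_rng(6)

# Standard deviation of the noise model; here a made-up value, in practice it
# would be estimated from previously collected analog-processor outputs.
NOISE_STD = 0.01

def add_output_noise(layer_output, std=NOISE_STD):
    """Add a zero-mean Gaussian noise sample to a matrix-operation output."""
    return layer_output + rng.normal(loc=0.0, scale=std, size=layer_output.shape)

def add_channelwise_noise(output_tensor, channel_std):
    """Per-channel variant: one noise scale per output channel (last axis)."""
    return output_tensor + rng.normal(0.0, 1.0, size=output_tensor.shape) * channel_std

noisy = add_output_noise(np.ones((4, 3)))
noisy_by_channel = add_channelwise_noise(np.ones((4, 3)), np.array([0.01, 0.02, 0.05]))
```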
[0145] Next, process 1200 proceeds to block 1206, where the system performs backpropagation operations in the digital domain. Example backpropagation operations are described herein. The system may be configured to use digital representations (e.g., floating- point representations) of the outputs generated by the analog processor at block 1204 through an ADC. The system may be configured to use the outputs to determine a gradient of a loss function, and then use the gradient to update parameters (e.g., weights and/or biases) of the neural network. The system determines updates to the parameters using outputs obtained using the analog processor, and thus incorporates quantization effects into training. The system may thus make the neural network more robust to effects (e.g., noise) caused by quantization. [0146] Next, process 1200 proceeds to block 1208, where the system determines whether training is complete. In some embodiments, the system may be configured to determine whether training is complete based on whether the system has performed a threshold number of iterations. In some embodiments, the system may be configured to determine whether training is complete based on whether a loss function meets a threshold value of the loss function. In some embodiments, the system may be configured to determine whether training is complete based on whether a certain number of input samples have been processed. If the system determines that training is not complete, then process 1200 proceeds to block 1204. For example, the system may obtain one or more input samples and corresponding output(s), and perform the steps at block 1204 and 1206. If the system determines that training is complete, then process 1200 ends. [0147] After process 1200 ends, the trained neural network may be used to perform inference. In some embodiments, the inference may be performed using an analog processor. As the neural network was trained to incorporate quantization effects of the analog processor, the inference may be more robust to effects of quantization when using the analog processor for inference. [0148] FIG. 13 shows a flowchart of an example process 1300 of injecting noise into layer(s) of a neural network during training of the neural network, according to some embodiments of the technology described herein. Process 1300 may be performed by training system 101 described herein with reference to FIGs. 1A-1C. In some embodiments, process 1300 may be performed as part of process 400 described herein with reference to FIG. 4. For example, process 1300 may be performed as part of performing the forward pass matrix operations at block 406 of process 400. [0149] Prior to beginning process 1300, the system performing process 1300 may obtain a neural network. The neural network may have parameters (e.g., weights). In some embodiments, the neural network may be a previously trained neural network. The parameters may have been learned from a previously performed training. For example, the neural network may have been previously trained by the system by performing process 1300. In such embodiments, the process 1300 may be performed as a technique of differential noise finetuning (DNF). In another example, the neural network may have been previously trained using another training technique. The system may perform process 1300 to further train the previously trained neural network. 
For example, the system may perform process 1300 to further train the neural network to be robust to quantization error that would be present when using hybrid processor 100 to perform inference using the neural network. In some embodiments, the neural network may be an untrained neural network. For example, the parameters of the neural network may be initialized to random values that need to be learned by performing process 1300.

[0150] Process 1300 begins at block 1302, where the system performing process 1300 obtains training data comprising multiple sample inputs. In some embodiments, the system may be configured to obtain the sample inputs by: (1) obtaining sets of input data; and (2) generating the sample inputs using the sets of input data. In some embodiments, a sample input may be a set of input features generated by the system. The system may be configured to preprocess input data to generate the set of input features. As an illustrative example, the input data may be an image. The system may be configured to generate a sample input for the image by: (1) obtaining pixel values of the image; and (2) storing the pixel values in a data structure to obtain the sample input. For example, the data structure may be a matrix, vector, tensor, or other type of data structure. In some embodiments, the system may be configured to preprocess input data by normalizing the input data. In some embodiments, the system may be configured to preprocess input data by encoding categorical parameters (e.g., one-hot encoding the categorical parameters).

[0151] In some embodiments, the system may be configured to obtain outputs for the sample inputs. The outputs may be used as target outputs corresponding to the sample inputs during training (e.g., to perform a supervised learning technique). Continuing with the example of input data consisting of an input image, the system may obtain an output image corresponding to the input image. The output image may represent a target enhancement of the input image that is to be generated by the neural network. In some embodiments, the system may be configured to obtain labels comprising target classifications for respective sets of input data. For example, the input data may be diagnostic scans of patients and the labels may be disease diagnoses for the patients (e.g., determined by clinicians using other techniques).

[0152] After obtaining the training data at block 1302, process 1300 proceeds to block 1304, where the system uses a sample input of the training data to determine layer output(s) of one or more layers of the neural network. In some embodiments, the system may be configured to determine a layer output of a layer of the neural network using an input to the layer and parameters (e.g., weights and/or biases) associated with the layer. For example, the system may determine an output of a layer using the output from a previous layer and weights associated with the layer. In another example, the system may determine a layer output by convolving an input matrix with a convolution kernel to obtain the layer output. In some embodiments, the system may be configured to perform matrix operations to determine an output of a neural network layer. Example matrix operations for determining an output of a neural network layer are described herein.

[0153] In some embodiments, the system may be configured to determine a layer output matrix from an input matrix and a parameter matrix using tiling.
Tiling may divide a matrix operation into multiple operations between smaller matrices. The system may be configured to use tiling to perform the multiplication operation in multiple passes. In each pass, the system may perform an operation over a tile of a matrix. In some embodiments, the system may perform tiling to simulate computation that would be performed on a target device. For example, a target device may use tiling due to resource constraints. As an example, the processor of the target device may not be sufficiently large to perform a multiplication between large matrices (e.g., with thousands of rows and/or columns) in one pass. Tiling may allow the target device to perform matrix operations using a smaller processor. [0154] Although the example of FIG. 13 is described using a sample input, in some embodiments, the system may be configured to determine the layer output(s) using multiple sample inputs. For example, the system may use a mini-batch of sample inputs. The system may be configured to perform the steps at blocks 1304-1312 using the multiple sample inputs. [0155] Next, process 1300 proceeds to block 1306, where the system obtains one or more noise samples from a quantization noise model for a target device. In some embodiments, the target device may be a device including an analog processor. For example, the target device may be a device including hybrid processor 100. The target device may be configured to use the hybrid processor 100 to perform inference using a neural network trained by performing process 1300. In some embodiments, the system may be configured to obtain a noise sample from a quantization noise model by randomly sampling the quantization noise model. [0156] In some embodiments, the quantization noise model may be a distribution of error values (e.g., empirically determined error values) and the system may randomly sample error values according to the distribution (e.g., based on probabilities of different error values). The quantization noise model for the target device may be obtained by determining a difference between target neural network layer outputs, and neural network layer outputs determined by the target device. The target neural network layer outputs may be those determined using a floating-point representation (e.g., float32) of numbers. For example, the target neural network layer outputs may be determined by a digital processor, without using an analog processor. The behavior of the analog processor may be simulated by the digital processor. The neural network layer outputs determined by the target device may be determined by the target device using an analog processor to perform matrix operations for determining the layer outputs. For example, the layer outputs may be determined by performing process 500 described herein with reference to FIG. 5 and/or process 600 described herein with reference to FIG. 6. The layer outputs determined by the target device may use a fixed-point representation of numbers during operations (e.g., for matrix operations performed by an analog processor). [0157] In some embodiments, the quantization noise model for the target device may include noise models for respective layers of the neural network. The system may be configured to obtain a noise sample for a layer by: (1) accessing a noise model for the layer; and (2) obtaining a noise sample from the noise model for the layer. 
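As a minimal, hypothetical sketch of the quantization noise model described above (assuming Python/NumPy and a simulated fixed-point pass standing in for the target device; this is not the system's actual code), the differences between float32 layer outputs and quantized layer outputs can be collected per layer and then sampled during training:

import numpy as np

def simulate_device_output(layer_output, bits=8):
    # Stand-in for the target device: quantize to a fixed-point representation.
    scale = np.abs(layer_output).max() or 1.0
    levels = 2 ** (bits - 1) - 1
    return np.round(layer_output / scale * levels) / levels * scale

def build_layer_noise_model(float32_outputs, bits=8):
    # Collect differences between target (float32) layer outputs and the outputs
    # the target device would produce; the flattened differences form an
    # empirical error distribution for the layer.
    errors = [np.ravel(out - simulate_device_output(out, bits)) for out in float32_outputs]
    return np.concatenate(errors)

def sample_noise(noise_model, shape, rng=None):
    # Randomly sample error values according to the empirical distribution.
    rng = rng or np.random.default_rng()
    return rng.choice(noise_model, size=shape)

# Example usage with previously collected float32 outputs for one layer.
rng = np.random.default_rng(0)
collected = [rng.normal(size=(64, 128)) for _ in range(10)]
layer_noise_model = build_layer_noise_model(collected)
noise_sample = sample_noise(layer_noise_model, shape=(64, 128), rng=rng)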
In some embodiments, the quantization noise model for the target device may be a single noise model for all the layers of the neural network.

[0158] A noise sample for a layer output may include multiple values. For example, the noise sample may include a noise value for each output value. To illustrate, the noise sample may include a noise value for each output value in an output matrix of a neural network layer. In some embodiments, the noise sample may be a matrix having the same dimensions as the output matrix. For example, for a 100 x 100 output matrix, the noise sample may be a 100 x 100 matrix of noise values.

[0159] After obtaining the noise sample(s) at block 1306, process 1300 proceeds to block 1308, where the system injects the noise sample(s) into one or more layer outputs. The system may be configured to inject a noise sample for a layer (e.g., obtained from a quantization noise model for the layer) into the corresponding layer output of the layer. In some embodiments, the system may be configured to additively inject a noise sample into a layer output. For example, a layer output matrix may be summed with a noise sample matrix to obtain a layer output injected with the noise sample. In some embodiments, the system may be configured to multiplicatively inject a noise sample into a layer output. The system may be configured to perform element-wise multiplication between a layer output matrix and a noise sample matrix to obtain a layer output injected with the noise sample. In some embodiments, the system may be configured to inject a noise sample into a layer output on a per-matrix basis. For example, the system may add a noise matrix to an output matrix, or perform element-wise multiplication between the noise matrix and the output matrix. In some embodiments, the system may be configured to inject a noise sample into a layer output using tiling. The noise sample may include one or more noise matrices for tiles of an output matrix. The system may be configured to inject each of the noise matrices into a respective tile of the output matrix. In this manner, the system may simulate tiling that may be performed by a target device that is to employ the trained neural network. In some embodiments, the system may add the noise sample to a layer output by performing perturbations of an information-theoretic quantity involving a layer output matrix. For example, the system may perform the perturbation using entropy or mutual information.

[0160] After injecting the noise sample(s) into layer output(s) at block 1308, process 1300 proceeds to block 1310, where the system determines an output of the neural network for the sample input using the layer output(s) injected with the noise sample(s). In some embodiments, the system may be configured to determine the output of the neural network by using the layer output(s) injected with the noise sample(s) to compute outputs of subsequent layers. For example, a layer output injected with a noise sample may be used to determine the layer output of a subsequent layer. The output may thus reflect a simulated effect of quantization error on the neural network. In some embodiments, the output may reflect a simulated effect of clipping error on the neural network.

[0161] Next, process 1300 proceeds to block 1312, where the system updates parameters of the neural network using the output obtained at block 1310.
In some embodiments, the system may be configured to determine an update to the parameters of the neural network by determining a difference between the output and an expected output (e.g., a label from the training data). For example, the system may determine a gradient of a loss function with respect to the parameters using the difference. In some embodiments, the system may be configured to update parameters by performing backpropagation operations to determine a gradient, and updating the parameters based on the gradient. Example backpropagation operations are described herein. [0162] Next, process 1300 proceeds to block 1314, where the system determines whether the training has converged. In some embodiments, the system may be configured to determine whether the training has converged based on a loss function or gradient thereof. For example, the system may determine that the training has converged when the gradient of the loss function is less than a threshold value. In another example, the system may determine that the training has converged when the loss function is less than a threshold value. In some embodiments, the system may be configured to determine whether the training has converged by determining whether the system has performed a threshold number of iterations. For example, the system may determine that the training has converged when the system has performed a maximum number of iterations of blocks 1304 to 1312. [0163] If at block 1314, the system determines that the training has not converged, then process 1300 proceeds to block 1320, where the system selects sample input(s) from the training data. In some embodiments, the system may be configured to select the sample input randomly. After selecting the next sample input, process 1300 proceeds to block 1304 where the system determines layer output(s) of layer(s) of the neural network. [0164] In some embodiments, the system may be configured to inject noise for some sample inputs of the training data and not inject noise for some sample inputs of the training data. For example, each sample input may be a mini-batch and the system may perform noise injection for some mini-batches and not perform noise injection for other mini-batches. In this example, the system may mask some of the mini-batches from noise injection. In some embodiments, the training data may include a first plurality of sample inputs and a second plurality of inputs that is a duplicate of the first plurality of sample inputs. The system may be configured to perform noise injection (e.g., as performed at block 1308) for the first plurality of sample inputs and not the second plurality of inputs. [0165] If at block 1314, the system determines that the training has converged, then process 1300 proceeds to block 1316, where the system obtains a trained neural network. The system may be configured to store parameters of the trained neural network. In some embodiments, the system may be configured to provide the trained neural network to a target device (e.g., target device 104). The system may be configured to provide the trained neural network to the target device by transmitting the trained parameters to the target device. The target device may be configured to use the trained neural network for inference using input data received by the target device. In some embodiments, the target device may use an analog processor to perform inference. [0166] FIG. 
14 shows a diagram 1400 illustrating injection of noise into a layer of a neural network, according to some embodiments of the technology described herein. The injection of noise into a layer of a neural network illustrated by diagram 1400 may be performed as part of performing process 1300 described herein with reference to FIG. 13. The diagram 1400 depicts layer outputs 1402 generated by a target device, and target layer outputs 1404. The device layer outputs 1402 may be generated using an analog processor (e.g., analog processor 106). For example, the layer outputs 1402 may be generated by performing process 500 described herein with reference to FIG. 5 and process 600 described herein with reference to FIG. 6. The target layer outputs 1404 may be generated by performing matrix operations exclusively in a digital domain using a floating-point representation of numbers (e.g., float32). For a layer 1405, the system determines differences 1410 between device outputs 1406 of the layer and target outputs 1408 of the layer. The differences 1410 may provide a noise model of the device, as depicted by the bar graph 1412 of diagram 1400. To inject noise into the layer 1405 during training, a sampler 1414 may obtain a noise sample from the noise model and inject it into an output of the layer 1405, as indicated by reference 1418. The noise sample may be injected as described at block 1308 of process 1300. The layer output injected with the noise sample may then be used as input to a subsequent layer of the neural network.

[0167] FIG. 15 shows a flowchart of an example process 1500 of updating parameters of a machine learning model during training, according to some embodiments of the technology described herein. For example, process 1500 may be performed to update weights and/or biases of a neural network during training. Process 1500 may be performed by training system 101 described herein with reference to FIGs. 1A-1C. In some embodiments, process 1500 may be performed at block 410 of process 400 described herein with reference to FIG. 4 to update parameters of a machine learning model, at block 1206 of process 1200 described herein with reference to FIG. 12, and/or at block 1312 of process 1300 described herein with reference to FIG. 13. Process 1500 may be performed in an iteration of a training technique. For example, process 1500 may be performed as part of an iteration of a gradient descent learning technique.

[0168] In some embodiments, the process 1500 may be performed for certain iterations of a training process, but not all iterations. In such embodiments, the process 1500 may be performed after a threshold number of iterations have been performed. For example, the process 1500 may be performed for iterations after 5, 10, 15, 20, 25, 50, 75, 100, 150, 200, 500, 750, or 1000 previous iterations have been performed. In some embodiments, the process 1500 may be performed in epochs occurring after a threshold number of epochs have been performed. In some embodiments, the process 1500 may be performed multiple times within an epoch. In some embodiments, the learning rate and other training parameters may be adjusted at different iterations or epochs.

[0169] Process 1500 begins at block 1502, where the system determines updated parameters of a machine learning model.
In some embodiments, the system may be configured to determine updated parameters by: (1) performing forward pass operations to determine an output of the machine learning model; (2) determining a difference between the output and a target output (e.g., a label); and (3) performing backpropagation operations to determine updated parameters of the machine learning model. Examples of forward pass operations and backpropagation operations are described herein. For example, the system may determine updated parameters by determining a gradient of a loss function, and determining updated parameters based on the gradient of the loss function (e.g., by adjusting previous parameter values by a proportion of the gradient).

[0170] Next, process 1500 proceeds to block 1504, where the system determines an average of the updated parameters determined at block 1502 and parameters of the machine learning model determined at one or more previous iterations. For example, the system may determine an average of the updated parameters and those determined at previous iteration(s) of a gradient descent learning technique. In some embodiments, the system may be configured to maintain a running average of the parameters of the machine learning model over the iteration(s). The system may update the running average with the updated parameters of the machine learning model determined at block 1502.

[0171] Next, process 1500 proceeds to block 1506, where the system sets parameters of the machine learning model to the average of the updated parameters and the parameters set at previous iteration(s) of training.

[0172] FIG. 16 illustrates an example processor 160, according to some embodiments of the technology described herein. The processor 160 may be hybrid processor 100 described herein with reference to FIGs. 1A-1C. The example processor 160 of FIG. 16 is a hybrid analog-digital processor implemented using photonic circuits. As shown in FIG. 16, the processor 160 includes a digital controller 1600, digital-to-analog converter (DAC) modules 1606, 1608, an ADC module 1610, and a photonic accelerator 1650. The photonic accelerator 1650 may be used as the analog processor 106 in the hybrid processor 100 of FIGs. 1A-1C. Digital controller 1600 operates in the digital domain and photonic accelerator 1650 operates in the analog photonic domain. Digital controller 1600 includes a digital processor 1602 and memory 1604. Photonic accelerator 1650 includes an optical encoder module 1652, an optical computation module 1654, and an optical receiver module 1656. DAC modules 1606, 1608 convert digital data to analog signals. ADC module 1610 converts analog signals to digital values. Thus, the DAC/ADC modules provide an interface between the digital domain and the analog domain used by the processor 160. For example, DAC module 1606 may produce N analog signals (one for each entry in an input vector), DAC module 1608 may produce N×N analog signals (e.g., one for each entry of a matrix storing neural network parameters), and ADC module 1610 may receive N analog signals (e.g., one for each entry of an output vector).

[0173] The processor 160 may be configured to generate or receive (e.g., from an external device) an input vector as a set of input bit strings and output an output vector as a set of output bit strings. For example, if the input vector is an N-dimensional vector, the input vector may be represented by N bit strings, each bit string representing a respective component of the vector.
An input bit string may be received as an electrical signal and an output bit string may be transmitted as an electrical signal (e.g., to an external device). In some embodiments, the digital processor 1602 does not necessarily output an output bit string after every process iteration. Instead, the digital processor 1602 may use one or more output bit strings to determine a new input bit string to feed through the components of the processor 160. In some embodiments, the output bit string itself may be used as the input bit string for a subsequent process iteration. In some embodiments, multiple output bit strings may be combined in various ways to determine a subsequent input bit string. For example, one or more output bit strings may be summed together as part of the determination of the subsequent input bit string.

[0174] DAC module 1606 may be configured to convert the input bit strings into analog signals. The optical encoder module 1652 may be configured to convert the analog signals into optically encoded information to be processed by the optical computation module 1654. The information may be encoded in the amplitude, phase, and/or frequency of an optical pulse. Accordingly, optical encoder module 1652 may include optical amplitude modulators, optical phase modulators, and/or optical frequency modulators. In some embodiments, the optical signal represents the value and sign of the associated bit string as an amplitude and a phase of an optical pulse. In some embodiments, the phase may be limited to a binary choice of either a zero phase shift or a π phase shift, representing a positive and a negative value, respectively. Some embodiments are not limited to real input vector values. Complex vector components may be represented by, for example, using more than two phase values when encoding the optical signal.

[0175] The optical encoder module 1652 may be configured to output N separate optical pulses that are transmitted to the optical computation module 1654. Each output of the optical encoder module 1652 may be coupled one-to-one to an input of the optical computation module 1654. In some embodiments, the optical encoder module 1652 may be disposed on the same substrate as the optical computation module 1654 (e.g., the optical encoder module 1652 and the optical computation module 1654 are on the same chip). The optical signals may be transmitted from the optical encoder module 1652 to the optical computation module 1654 in waveguides, such as silicon photonic waveguides. In some embodiments, the optical encoder module 1652 may be on a separate substrate from the optical computation module 1654. The optical signals may be transmitted from the optical encoder module 1652 to the optical computation module 1654 with optical fibers.

[0176] The optical computation module 1654 may be configured to perform multiplication of an input vector ‘X’ by a matrix ‘A’. In some embodiments, the optical computation module 1654 includes multiple optical multipliers each configured to perform a scalar multiplication between an entry of the input vector and an entry of matrix ‘A’ in the optical domain. Optionally, optical computation module 1654 may further include optical adders for adding the results of the scalar multiplications to one another in the optical domain. In some embodiments, the additions may be performed electrically. For example, optical receiver module 1656 may produce a voltage resulting from the integration (over time) of a photocurrent received from a photodetector.
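The following Python/NumPy sketch is a purely illustrative, digital analogue of the encoding and multiplication just described (it is not the photonic hardware): each signed value is represented as an amplitude with a 0 or π phase, scalar multiplications are performed on the encoded values, and summing and taking the real part recovers the signed dot product.

import numpy as np

def encode(values):
    # Represent each signed value as an optical-style field: amplitude |v| with a
    # phase of 0 for positive values and π for negative values.
    amplitude = np.abs(values)
    phase = np.where(np.asarray(values) < 0, np.pi, 0.0)
    return amplitude * np.exp(1j * phase)

def encoded_dot(matrix_row, input_vector):
    # Scalar multiplications of the encoded entries followed by summation; the
    # real part of the sum is the signed arithmetic result.
    return float(np.real(np.sum(encode(matrix_row) * encode(input_vector))))

row = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 0.25, -0.5])
assert np.isclose(encoded_dot(row, x), row @ x)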
[0177] The optical computation module 1654 may be configured to output N optical pulses that are transmitted to the optical receiver module 1656. Each output of the optical computation module 1654 may be coupled one-to-one to an input of the optical receiver module 1656. In some embodiments, the optical computation module 1654 may be on the same substrate as the optical receiver module 1656 (e.g., the optical computation module 1654 and the optical receiver module 1656 are on the same chip). The optical signals may be transmitted from the optical computation module 1654 to the optical receiver module 1656 in silicon photonic waveguides. In some embodiments, the optical computation module 1654 may be disposed on a separate substrate from the optical receiver module 1656. The optical signals may be transmitted from the optical computation module 1654 to the optical receiver module 1656 using optical fibers.

[0178] The optical receiver module 1656 may be configured to receive the N optical pulses from the optical computation module 1654. Each of the optical pulses may be converted to an electrical analog signal. In some embodiments, the intensity and phase of each of the optical pulses may be detected by optical detectors within the optical receiver module. The electrical signals representing those measured values may then be converted into the digital domain using ADC module 1610, and provided back to the digital processor 1602.

[0179] The digital processor 1602 may be configured to control the optical encoder module 1652, the optical computation module 1654, and the optical receiver module 1656. The memory 1604 may be configured to store input and output bit strings and measurement results from the optical receiver module 1656. The memory 1604 also stores executable instructions that, when executed by the digital processor 1602, control the optical encoder module 1652, optical computation module 1654, and optical receiver module 1656. The memory 1604 may also include executable instructions that cause the digital processor 1602 to determine a new input vector to send to the optical encoder module 1652 based on a collection of one or more output vectors determined by the measurement performed by the optical receiver module 1656. In this way, the digital processor 1602 may be configured to control an iterative process by which an input vector is multiplied by multiple matrices by adjusting the settings of the optical computation module 1654 and feeding detection information from the optical receiver module 1656 back to the optical encoder module 1652. Thus, the output vector transmitted by the processor 160 to an external device may be the result of multiple matrix multiplications, not simply a single matrix multiplication.

[0180] FIG. 17 shows graphs 1702, 1704, 1706 illustrating accuracy versus gain factor of various neural network models trained according to some embodiments of the technology described herein. Each of the graphs 1702, 1704, 1706 corresponds to a respective tile width (e.g., used by analog processor 106 to perform matrix operations). In the example of FIG. 17, a fixed-point representation of 8 bits was used by the analog processor to represent the weights, inputs, and outputs. The graph 1702 shows accuracy of neural network models trained using a tile width of 8, the graph 1704 shows accuracy of neural network models trained using a tile width of 32, and the graph 1706 shows accuracy of neural network models trained using a tile width of 128.
The graph 1702 shows that all the models achieve accuracy that is 99% of the accuracy obtained when training the neural networks exclusively in the digital domain using a float32 representation (“float32 accuracy”). The graph 1704 shows that the 3D-UNet, RNN-T, BERT-Large, and DLRM neural network models all achieve 99% of the float32 accuracy for multiple different gain factors. The ResNet-50 and SSD-ResNet-34 neural networks achieve close to 99% of the float32 accuracy at a gain factor of 2. The graph 1706 shows that the 3D-UNet, RNN-T, BERT-Large, and DLRM neural network models all achieve 99% of the float32 accuracy for multiple different gain factors. The ResNet-50 and SSD-ResNet-34 neural networks achieve close to 99% of the float32 accuracy at a gain factor of 2³.

[0181] Table 1 below shows results of finetuning the ResNet50 and SSD-ResNet34 neural network models using the QAT of process 1200 and the DNF of process 1300. The performance metric for ResNet50 is top-1 accuracy and the performance metric for SSD-ResNet34 is mean average precision (mAP) score. The bolded values in the table are those that achieve a performance metric of greater than 99% of the performance when training the neural network using a float32 representation exclusively in the digital domain. The performance metrics are shown when using the following two fixed-bit representations: (1) 6 bits for weights, 6 bits for inputs, and 8 bits for outputs; and (2) 8 bits for the weights, inputs, and outputs. As can be appreciated from Table 1, the QAT and DNF techniques improve the performance of the neural networks. For ResNet50, the QAT and DNF techniques allow the neural network models to achieve greater than 99% of float32 performance. For SSD-ResNet34, the DNF technique allows the neural network models to achieve greater than 99% of float32 performance.

Table 1

Example Implementation for Matrix Operation Using Analog Processor

[0182] Below is an example series of steps that may be performed (e.g., by training system 101) for performing the matrix operation R = A ∗ B + C involving matrices A, B, and C to obtain the matrix R. Denote Q(M) to be the analog representation of the matrix M. The following steps may, for example, be performed to perform a matrix operation as part of training or performing inference with a machine learning model (e.g., a neural network).

1) Break matrix A into M × N tiles. Call the (m, n)-th tile A^(m,n), where m = 1, ..., M and n = 1, ..., N.
2) Break matrix B into N block rows. Call the n-th block row B^(n), where n = 1, ..., N. Each block row contains K column vectors; call these vectors b_k^(n), where k = 1, ..., K.
3) Initialize a temporary floating-point matrix D with zeros. The matrix D will be computed vector-by-vector by accumulating, for each m and k, the partial products A^(m,n) b_k^(n) over n.
4) For m = 1, ..., M:
   i. For n = 1, ..., N:
      1. Normalize A^(m,n) by taking out floating-point scale(s), i.e., A^(m,n) = s_A^(m,n) A'^(m,n). (Each scale can be shared per row of the tile or by the entire tile.)
      2. Program Q(A'^(m,n)) into the matrix modulators of the analog processor.
      3. For k = 1, ..., K:
         a. Normalize b_k^(n) by taking out a floating-point scale, i.e., b_k^(n) = s_b,k^(n) b'_k^(n).
         b. Program Q(b'_k^(n)) into the input vector modulators of the analog processor.
         c. Compute and read out the quantized partial result Q(A'^(m,n)) Q(b'_k^(n)) using the analog processor.
         d. Obtain the scaled FP32 partial result by multiplying the result of step 4(i)(3)(c) with s_A^(m,n) and s_b,k^(n).
      4. Accumulate the scaled FP32 partial results into D. (The accumulation is done in floating point.)
5) Output R = D + C. (The addition is done in floating point.)
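Purely as an illustrative sketch of the above steps (assuming Python/NumPy and a simulated fixed-point multiplication standing in for the analog readout; this is not the system's actual implementation), the tiling, per-tile and per-vector scaling, quantized partial products, and floating-point accumulation might look as follows:

import numpy as np

def q(x, bits=8):
    # Simulated analog (fixed-point) representation of already-normalized values.
    levels = 2 ** (bits - 1) - 1
    return np.round(np.clip(x, -1.0, 1.0) * levels) / levels

def analog_matmul(A, B, C, tile=4, bits=8):
    # Compute R = A * B + C using tiles of A, per-tile and per-vector scales,
    # quantized ("analog") partial products, and floating-point accumulation.
    rows, inner = A.shape
    _, cols = B.shape
    D = np.zeros((rows, cols))
    for r0 in range(0, rows, tile):                        # tile rows of A (index m)
        for c0 in range(0, inner, tile):                   # tile columns of A (index n)
            A_tile = A[r0:r0 + tile, c0:c0 + tile]
            s_a = np.abs(A_tile).max() or 1.0              # scale shared by the entire tile
            A_q = q(A_tile / s_a, bits)                    # programmed into the matrix modulators
            B_block = B[c0:c0 + tile, :]                   # matching block row of B
            for k in range(cols):                          # column vectors of the block row (index k)
                b = B_block[:, k]
                s_b = np.abs(b).max() or 1.0
                b_q = q(b / s_b, bits)                     # programmed into the input vector modulators
                partial = A_q @ b_q                        # quantized partial result (analog readout)
                D[r0:r0 + tile, k] += s_a * s_b * partial  # scaled FP32 accumulation
    return D + C                                           # final addition in floating point

# Example usage: the result closely approximates A @ B + C.
rng = np.random.default_rng(0)
A, B, C = rng.normal(size=(8, 8)), rng.normal(size=(8, 6)), rng.normal(size=(8, 6))
print(np.max(np.abs(analog_matmul(A, B, C) - (A @ B + C))))   # small quantization error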
[0183] FIG. 18 shows a block diagram of an example computer system 1800 that may be used to implement some embodiments of the technology described herein. The computer system 1800 may include one or more computer hardware processors 1802 and non-transitory computer-readable storage media (e.g., memory 1804 and one or more non-volatile storage devices 1806). The processor(s) 1802 may control writing data to and reading data from (1) the memory 1804; and (2) the non-volatile storage device(s) 1806. To perform any of the functionality described herein, the processor(s) 1802 may execute one or more processor-executable instructions stored in one or more non-transitory computer-readable storage media (e.g., the memory 1804), which may serve as non-transitory computer-readable storage media storing processor-executable instructions for execution by the processor(s) 1802.

[0184] The terms “program” or “software” are used herein in a generic sense to refer to any type of computer code or set of processor-executable instructions that can be employed to program a computer or other processor (physical or virtual) to implement various aspects of embodiments as discussed above. Additionally, according to one aspect, one or more computer programs that when executed perform methods of the disclosure provided herein need not reside on a single computer or processor, but may be distributed in a modular fashion among different computers or processors to implement various aspects of the disclosure provided herein.

[0185] Processor-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform tasks or implement abstract data types. Typically, the functionality of the program modules may be combined or distributed.

[0186] Various inventive concepts may be embodied as one or more processes, of which examples have been provided. The acts performed as part of each process may be ordered in any suitable way. Thus, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.

[0187] As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified.
Thus, for example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently, “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.

[0188] The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B,” when used in conjunction with open-ended language such as “comprising,” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.

[0189] Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term). The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing,” “involving,” and variations thereof is meant to encompass the items listed thereafter and additional items.

[0190] Having described several embodiments of the techniques described herein in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the disclosure. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The techniques are limited only as defined by the following claims and the equivalents thereto.