


Title:
DEEP NEURAL NETWORK WITH MULTIPLE LAYERS FORMED OF MULTI‑TERMINAL LOGIC GATES
Document Type and Number:
WIPO Patent Application WO/2023/219898
Kind Code:
A1
Abstract:
A deep neural network circuit with multiple layers formed of multi-terminal logic gates is provided. In one aspect, the neural network circuit includes a plurality of logic gates arranged into a plurality of layers and a plurality of logical connectors arranged between each pair of adjacent layers. Each of the logical connectors connects the output of a first logic gate to the input of a second logic gate and each of the logical connectors has one of a plurality of different logical connector states. The neural network circuit is configured to be trained to implement a function by finding a set of the logical connector states for the logical connectors such that the neural network circuit implements the function.

Inventors:
TRAVERSA FABIO LORENZO (US)
Application Number:
PCT/US2023/021187
Publication Date:
November 16, 2023
Filing Date:
May 05, 2023
Assignee:
MEMCOMPUTING INC (US)
International Classes:
G06N3/063; G06G7/122; G06N3/0464; G06N5/00; H03K19/20
Foreign References:
US20090138419A12009-05-28
US20140180989A12014-06-26
US4660166A1987-04-21
US20180144239A12018-05-24
Attorney, Agent or Firm:
FULLER, Michael L. (US)
Claims:
WHAT IS CLAIMED IS: 1. A neural network circuit, comprising: a plurality of logic gates arranged into a plurality of layers, each of the logic gates having a plurality of inputs and an output; and a plurality of logical connectors arranged between each pair of adjacent layers, each of the logical connectors determining a relationship between the output of a first logic gate to one of the plurality of inputs of a second logic gate, and each of the logical connectors having one of a plurality of different logical connector states, wherein the neural network circuit is configured to be trained to implement a function by finding a set of the logical connector states for the logical connectors such that the neural network circuit implements the function. 2. The neural network circuit of Claim 1, wherein the logical connector states include a first state in which the output of the first logic gate is connected to the input of the second logic gate via a NOT gate and a second state in which the output of the first logic gate is connected to the input of the second logic gate via a short circuit. 3. The neural network circuit of Claim 2, wherein the logical connector states further include a third state in which the output of the first logic gate is connected to the input of the second logic gate via an open circuit. 4. The neural network circuit of Claim 1, wherein the logic gates and the logical connectors are implemented in complementary metal-oxide semiconductor (CMOS) technology. 5. The neural network circuit of Claim 1, wherein the neural network circuit is formed on a single chip. 6. The neural network circuit of Claim 1, wherein each of the logical connectors comprises: an input; a short circuit connected to the input; an inverter arranged in parallel with the short circuit and connected to the input; an output; and at least one switch configured to connect one of the short circuit and the inverter to the output. 7. The neural network circuit of Claim 6, wherein each of the logical connectors further comprises: an open circuit connected to the input, wherein the at least one switch configured to connect one of the short circuit, the inverter, and the open circuit to the output. 8. The neural network circuit of Claim 6, wherein: the at least one switch comprises a first switch and a second switch connected in series, the first switch is configured to electrically connect to one of the short circuit and the inverter, and the second switch is configured to operate in either an open circuit or short circuit state. 9. The neural network circuit of Claim 1, wherein: each of the logical connectors comprises at least one of a short circuit or an inverter, and each of the logical connectors connects an output of a logic gate of a previous layer to an input of a logic gate of a current layer. 10. The neural network circuit of Claim 1, wherein each of the logic gates comprises a multi-terminal NOR gate. 11. The neural network circuit of Claim 1, further comprising a training circuit configured to produce the set of the logical connector states. 12. The neural network circuit of Claim 1, wherein each of the logical connectors has a fixed one of the logical connector states.

13. The neural network circuit of Claim 12, wherein each of the logical connectors comprises a single one of: a short circuit, an inverter, and an open circuit corresponding to the fixed one of the logical connector states. 14. A method of computing a function using a neural network circuit, comprising: providing a neural network circuit including: a plurality of logic gates arranged into a plurality of layers, each of the logic gates having a plurality of inputs and an output; and a plurality of logical connectors comprising sets of logical connectors arranged between each pair of adjacent layers, each of the logical connectors having one of a plurality of different logical connector states, wherein the plurality of logical connectors are programmed to implement a function; and computing the function for an input signal using the neural network. 15. The method of Claim 14, further comprising: finding a set of the logical connector states for the plurality of logical connectors such that the neural network circuit implements the function. 16. The method of Claim 15, further comprising: generating a set of integer linear programming (ILP) problems based on the function; and solving the set of ILP problems to produce the set of the logical connector states. 17. The method of Claim 16, further comprising: determining a set of inequalities that describe the states of the logical connectors; and linking outputs from a previous layer to inputs of a subsequent layer through the set of inequalities, wherein the generating of the ILP problems is based on the linking of the outputs from the previous layer to the inputs of the subsequent layer through the set of inequalities.

18. The method of Claim 14, wherein the logical connector states include a first state in which the output of the first logic gate is connected to the input of the second logic gate via a NOT gate and a second state in which the output of the first logic gate is connected to the input of the second logic gate via a short circuit. 19. A single chip, comprising: a plurality of logic gates arranged into a plurality of layers of a neural network circuit, each of the logic gates having a plurality of inputs and an output; a plurality of logical connectors arranged between logic gates of each pair of adjacent layers, each of the logical connectors determining a relationship between the output of a first logic gate to one of the plurality of inputs of a second logic gate, and each of the logical connectors having one of a plurality of different logical connector states; a plurality of input terminals; and at least one output terminal, wherein the chip is configured to compute a function between the input terminals and the output terminal. 20. The chip of Claim 19, wherein the chip is configured to compute the function for a given input provided to the input terminals in ten clock cycles or fewer.

Description:
DEEP NEURAL NETWORK WITH MULTIPLE LAYERS FORMED OF MULTI-TERMINAL LOGIC GATES CROSS REFERENCE TO ANY PRIORITY APPLICATIONS [0001] The present application claims the benefit of priority of U.S. Provisional Patent Application No. 63/364,405, filed May 9, 2022 and titled “DEEP NEURAL NETWORK WITH MULTIPLE LAYERS FORMED OF MULTI-TERMINAL LOGIC GATES,” the disclosure of which is hereby incorporated in its entirety and for all purposes. BACKGROUND Technical Field [0002] The present disclosure relates generally to neural networks. More particularly, the present disclosure is related to deep neural networks which are implemented using multiple layers formed of multi-terminal logic gates. Description of the Related Technology [0003] Neural networks can be implemented on various types of hardware such as central processing units (CPUs) and field programmable gate arrays (FPGAs) as well as specialty hardware designed for neural networks, such as distributed architectures like graphics processing units (GPUs) or tensor processing units (TPUs). SUMMARY OF CERTAIN INVENTIVE ASPECTS [0004] The innovations described in the claims each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of the claims, some prominent features of this disclosure will now be briefly described. [0005] One inventive aspect is a neural network circuit, comprising: a plurality of logic gates arranged into a plurality of layers, each of the logic gates having a plurality of inputs and an output; and a plurality of logical connectors arranged between each pair of adjacent layers, each of the logical connectors determining a relationship between the output of a first logic gate to one of the plurality of inputs of a second logic gate, and each of the logical connectors having one of a plurality of different logical connector states, wherein the neural network circuit is configured to be trained to implement a function by finding a set of the logical connector states for the logical connectors such that the neural network circuit implements the function. [0006] In some embodiments, the logical connector states include a first state in which the output of the first logic gate is connected to the input of the second logic gate via a NOT gate and a second state in which the output of the first logic gate is connected to the input of the second logic gate via a short circuit. [0007] In some embodiments, the logical connector states further include a third state in which the output of the first logic gate is connected to the input of the second logic gate via an open circuit. [0008] In some embodiments, the logic gates and the logical connectors are implemented in complementary metal-oxide semiconductor (CMOS) technology. [0009] In some embodiments, the neural network circuit is formed on a single chip. [0010] In some embodiments, each of the logical connectors comprises: an input; a short circuit connected to the input; an inverter arranged in parallel with the short circuit and connected to the input; an output; and at least one switch configured to connect one of the short circuit and the inverter to the output. [0011] In some embodiments, each of the logical connectors further comprises: an open circuit connected to the input, wherein the at least one switch configured to connect one of the short circuit, the inverter, and the open circuit to the output. 
[0012] In some embodiments, the at least one switch comprises a first switch and a second switch connected in series, the first switch is configured to electrically connect to one of the short circuit and the inverter, and the second switch is configured to operate in either an open circuit or short circuit state. [0013] In some embodiments, each of the logical connectors comprises at least one of a short circuit or an inverter, and each of the logical connectors connects an output of a logic gate of a previous layer to an input of a logic gate of a current layer. [0014] In some embodiments, each of the logic gates comprises a multi-terminal NOR gate. [0015] In some embodiments, the neural network further comprises a training circuit configured to produce the set of the logical connector states. [0016] In some embodiments, each of the logical connectors has a fixed one of the logical connector states. [0017] In some embodiments, each of the logical connectors comprises a single one of: a short circuit, an inverter, and an open circuit corresponding to the fixed one of the logical connector states. [0018] Another aspect is a method of computing a function using a neural network circuit, comprising: providing a neural network circuit including: a plurality of logic gates arranged into a plurality of layers, each of the logic gates having a plurality of inputs and an output; and a plurality of logical connectors comprising sets of logical connectors arranged between each pair of adjacent layers, each of the logical connectors having one of a plurality of different logical connector states, wherein the plurality of logical connectors are programmed to implement a function; and computing the function for an input signal using the neural network. [0019] In some embodiments, the method further comprises: finding a set of the logical connector states for the plurality of logical connectors such that the neural network circuit implements the function. [0020] In some embodiments, the method further comprises: generating a set of integer linear programming (ILP) problems based on the function; and solving the set of ILP problems to produce the set of the logical connector states. [0021] In some embodiments, the method further comprises: determining a set of inequalities that describe the states of the logical connectors; and linking outputs from a previous layer to inputs of a subsequent layer through the set of inequalities, wherein the generating of the ILP problems is based on the linking of the outputs from the previous layer to the inputs of the subsequent layer through the set of inequalities. [0022] In some embodiments, the logical connector states include a first state in which the output of the first logic gate is connected to the input of the second logic gate via a NOT gate and a second state in which the output of the first logic gate is connected to the input of the second logic gate via a short circuit. 
[0023] Yet another aspect is a single chip, comprising: a plurality of logic gates arranged into a plurality of layers of a neural network circuit, each of the logic gates having a plurality of inputs and an output; a plurality of logical connectors arranged between logic gates of each pair of adjacent layers, each of the logical connectors determining a relationship between the output of a first logic gate to one of the plurality of inputs of a second logic gate, and each of the logical connectors having one of a plurality of different logical connector states; a plurality of input terminals; and at least one output terminal, wherein the chip is configured to compute a function between the input terminals and the output terminal. [0024] In some embodiments, the chip is configured to compute the function for a given input provided to the input terminals in ten clock cycles or fewer. [0025] For purposes of summarizing the disclosure, certain aspects, advantages and novel features of the innovations have been described herein. It is to be understood that not necessarily all such advantages may be achieved in accordance with any particular embodiment. Thus, the innovations may be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein. BRIEF DESCRIPTION OF THE DRAWINGS [0026] FIG.1 is a circuit diagram illustrating a digital neural network in accordance with aspects of this disclosure. [0027] FIG. 2 illustrates an embodiment of a multi-terminal logic gate and a plurality of logical connectors connected thereto in accordance with aspects of this disclosure. [0028] FIG. 3 illustrates another embodiment of a multi-terminal logic gate and a plurality of connected logical connectors connected thereto in accordance with aspects of this disclosure. [0029] FIG. 4 illustrates an embodiment of a multi-terminal NOR gate in accordance with aspects of this disclosure. [0030] FIG. 5 illustrates an embodiment of a multi-terminal NOR gate having a threshold in accordance with aspects of this disclosure. [0031] FIG. 6 is a graph of v/V DD evaluated with th and th+1 switches closed in accordance with aspects of this disclosure. [0032] FIG.7 illustrates an example neural network in accordance with aspects of this disclosure. [0033] FIGs. 8A and 8B illustrate embodiments of the logical connectors in accordance with aspects of this disclosure. DETAILED DESCRIPTION [0034] The following description of certain embodiments presents various descriptions of specific embodiments. However, the innovations described herein can be embodied in a multitude of different ways, for example, as defined and covered by the claims. In this description, reference is made to the drawings where like reference numerals can indicate identical or functionally similar elements. It will be understood that elements illustrated in the figures are not necessarily drawn to scale. Moreover, it will be understood that certain embodiments can include more elements than illustrated in a drawing and/or a subset of the elements illustrated in a drawing. Further, some embodiments can incorporate any suitable combination of features from two or more drawings. The headings provided herein are for convenience only and do not necessarily affect the scope or meaning of the claims. 
[0035] While current processor architectures are adequate for relatively small data sets, they may be incapable of accommodating the growing volume of information desired in real time for certain computationally intensive applications. For example, vehicles can benefit greatly from more computing capacity to decipher incoming sensor data and rapidly make significant real-time decisions. Aspects of this disclosure relate to cost-effective and scalable computing systems and methods that can improve computing capacity for various applications, including to address the anticipated shortfall in vehicle data processing capabilities. [0036] Current edge processors based on central processing units (CPUs), graphics processing units (GPUs), and field programmable gate arrays (FPGAs) cannot execute algorithms on certain large sensor data streams (such as those in vehicles) efficiently enough to meet certain computing goals. Data movement represents one of the most significant limitations in certain computing architectures. This limitation is called the von Neumann bottleneck, and it has two main implications: it limits computational throughput and involves considerable energy to move the data between memory and processing. [0037] The growing adoption of machine learning and artificial intelligence (AI) technologies, such as neural networks, seeks to improve processing capabilities for larger data sets and enable automation. However, in many cases, these techniques are still limited by their underlying von Neumann architectures. Specifically, there is the challenge of realizing ultra-low-power electronic architectures for real-time evaluation/inference of neural networks. This is still an unresolved issue for neural networks running on traditional hardware since the nonlinear (artificial neuron activation functions can be sigmoid, rectified linear unit (ReLU), etc.) and deep (several layers to be evaluated in sequence) nature of neural networks may involve an unavoidable sequence of calculations and therefore include several unavoidable clock cycles even with the most advanced distributed architectures like GPUs or tensor processing units (TPUs). This translates into an evaluation/inference time that is intolerably high for many applications (e.g., autonomous vehicles). Despite efforts to propose mitigations to this problem, the fundamental issue may not be solved by digital architectures (GPUs, CPUs, TPUs) because of physical and computational limitations of modern neural network designs. [0038] Aspects of this disclosure relate to improved neural network designs, which may be referred to as a MemComputing Neural Network (MEMC-NN) or more generally as digital neural networks. Digital neural networks disclosed herein can be implemented by circuits. In certain embodiments, the disclosed neural networks can be easily integrated on a chip using only logic gates, switches, and/or selectors. Aspects of this disclosure can partially or completely address some or all of the limitations described above, and thereby provide real-time neural network evaluation/inference while using negligible power. The neural network solutions provided herein can be integrated at virtually all levels in systems, from servers and the cloud to smaller devices like smart watches and glasses, or internet of things (IoT) and edge computing devices such as those described herein. 
Advantageously, aspects of this disclosure have the potential to deliver unprecedented AI processing capabilities that adhere to size, weight, and power; environment; and cost objectives. The digital neural network systems and methods disclosed herein can be applied to any other suitable applications and/or meet any other suitable objectives. Embodiments of the Digital Neural Network Design [0039] FIG. 1 is a circuit diagram illustrating a digital neural network 100 in accordance with aspects of this disclosure. The digital neural network 100 can be implemented as a circuit formed on a single chip. The digital neural network 100 is a deep neural network comprising a plurality of inputs 102, a plurality of layers 104, a plurality of outputs 106, and an optional training circuit 111. Each of the layers 104 comprises a plurality of multi-terminal logic gates 108 (also referred to as “logic gates”). Each multi-terminal logic gate 108 can be logically associated with one or more of the multi-terminal logic gates 108 in the previous layer 104. For example, the logical association between two multi-terminal logic gates 108 can apply a logical operation to the output of the multi-terminal logic gate 108 from the previous layer 104 before providing the result of the logical operation to the multi-terminal logic gate 108 of the current layer 104. These logical associations can be implemented by logical connectors 110 (which may also be referred to as “logical operators”, “binary logic operators”, or “logical circuits”). [0040] Each of the logical connectors 110 is configured to define a relationship between a pair of multi-terminal logic gates 108. The logical connectors 110 may not always physically connect logic gates. For example, the logical connectors 110 can implement an open circuit between two multi-terminal logic gates 108. As shown in FIGs. 2 and 3 and discussed in more detail below, the relationship between two multi-terminal logic gates 108 can include a logical NOT (e.g., implemented via an inverter 118), an open circuit 120, and/or a short circuit 122, although other relationships are also possible. [0041] In certain embodiments, the logical connectors 110 can receive a single input and produce a single output. In certain embodiments, the topology of the connectivity implemented by the logical connectors 110 defines the type of network layer of the digital neural network 100, e.g., fully connected, convolutional, pooling, etc. [0042] The optional training circuit 111 can be used to train the digital neural network 100. Depending on the embodiment, the optional training circuit 111 may be included as part of the same chip as the digital neural network 100 or may be implemented on a separate chip. The digital neural network 100 can be trained to implement a function as disclosed herein. After the digital neural network 100 is trained, it can compute the function for a given input in a single clock cycle, a few clock cycles, or a single-digit number of clock cycles. For example, the digital neural network 100 can be configured to compute the function in fewer than two, three, four, five, six, seven, eight, nine, or ten clock cycles, depending on the implementation. 
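The layered structure of gates and connectors described above can be summarized in a small software model. The following sketch is illustrative only: it assumes, for concreteness, multi-terminal OR gates, fully connected layers, and three connector states (short circuit, inverter, and open circuit, with an open connector treated as contributing a logic 0), and the class and function names are hypothetical rather than part of the disclosure.

```python
from enum import Enum
from typing import List

class ConnectorState(Enum):
    SHORT = "short"    # pass the previous gate's output through unchanged
    INVERT = "invert"  # apply a logical NOT (inverter) to the previous output
    OPEN = "open"      # no connection; assumed here to read as a logic 0

def apply_connector(state: ConnectorState, bit: int) -> int:
    """Apply one logical connector to a single binary signal."""
    if state is ConnectorState.SHORT:
        return bit
    if state is ConnectorState.INVERT:
        return 1 - bit
    return 0  # OPEN

def or_gate(inputs: List[int]) -> int:
    """Multi-terminal OR gate used as the example activation."""
    return 1 if any(inputs) else 0

def evaluate(layers: List[List[List[ConnectorState]]], x: List[int]) -> List[int]:
    """Propagate a binary input vector through the network.
    layers[l][g] lists the connector states feeding gate g of layer l,
    one state per output of the previous layer (or per network input)."""
    signal = x
    for layer in layers:
        signal = [or_gate([apply_connector(s, b) for s, b in zip(gate, signal)])
                  for gate in layer]
    return signal
```

In this model the connector states play the role that weights play in a traditional neural network: evaluation is a fixed pass of combinational logic, and training amounts to choosing the states.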
[0043] FIG. 2 illustrates an embodiment of a multi-terminal logic gate 108 and a plurality of logical connectors 110 connected thereto in accordance with aspects of this disclosure. In the illustrated embodiment, each of the logical connectors 110 includes an input 112, an output 114, a switch 116, and a plurality of alternate parallel paths between the input 112 and the output 114. In the top logical connector 110 illustrated in FIG. 2, the parallel paths include an inverter 118 (e.g., a NOT gate), an open circuit 120, and a short circuit 122. The switch 116 is configured to define the state (also referred to as a “logical state”) of the logical connector 110 by defining a logical relationship between the input 112 and the output 114 of the logical connector 110 via one of the inverter 118, the open circuit 120, and the short circuit 122. Although FIG. 2 illustrates an open circuit 120 as a separate path that the switch 116 can select, in certain implementations, there may not be a physical path with an open circuit 120. For example, when selecting the open circuit, the switch 116 may be disconnected from each of the inverter 118 and the short circuit 122. Thus, the logical operation applied by a given logical connector 110 may depend on the path (e.g., the inverter 118, the open circuit 120, or the short circuit 122) used to connect the input 112 and the output 114 of the logical connector 110. Some logical connectors 110 can include two alternative parallel paths, such as an inverter and a short circuit. The bottom two logical connectors 110 of FIG. 2 illustrate such logical connectors 110. Depending on the embodiment, the digital neural network 100 may include logical connectors 110 having substantially the same structure, or the digital neural network 100 may include a plurality of logical connectors 110 having different structures such as the logical connectors 110 shown in FIG. 2. Thus, in various embodiments the logical connectors 110 used for a particular application may be homogeneous or heterogeneous. [0044] In addition, in the example of FIG. 2, the multi-terminal logic gate 108 is embodied as a multi-terminal OR gate. However, aspects of this disclosure are not limited thereto and the multi-terminal logic gate 108 can be embodied using other types of multi-terminal logic gates depending on the implementation. As described in more detail herein, the states of the logical connectors 110 may be analogous to the weights of a traditional neural network. In some other embodiments, each of the logical connectors 110 can be implemented using only a single path (e.g., one of the inverter 118, the open circuit 120, and the short circuit 122) without the switch 116, with the single path representing the trained state of the logical connector 110 as described herein. [0045] FIG. 3 illustrates another embodiment of a multi-terminal logic gate 108 and a plurality of logical connectors 110 connected thereto in accordance with aspects of this disclosure. The embodiment of FIG. 3 is similar to that of FIG. 2 except that the logical connectors 110 do not include an open circuit 120 path. While FIGs. 2 and 3 provide example embodiments of the logical connectors 110, aspects of this disclosure are not limited thereto and the logical connectors 110 can include a greater number of logical states implementing one or more different logical operations. Universality of a Digital Neural Network [0046] As used herein, a neural network may function as a universal machine if the neural network can compute any computable function in the Turing sense. A computable function can be defined as a function that can be computed by a Turing machine. 
The digital neural networks 100 embodied in FIGs. 1-3 can be used to create a universal machine. In fact, OR gates together with NOT gates (e.g., inverters) can form a complete basis in the sense that any Boolean function can be written as a collection of OR and NOT gates. [0047] While the digital neural networks 100 described herein are not limited to the configurations illustrated in FIGs. 1-3, the digital neural networks 100 of FIGs. 1-3 are able to use any basis of Boolean functions, or a mixture of them, to create layers, which allows for substantial flexibility. The universality of the digital neural networks 100 described herein can also be viewed from an operative point of view. Given N input terminals and M output terminals, there is a minimum number of layers (dependent on the topology of the connectivity) that allows the computation of any possible function y = f(x) for any input x of length N and binary output y of length M. The states of the logical connectors 110 can be used to implement the function f. In other words, for each function f there is at least one set of states of the logical connectors 110 that configures the digital neural network 100 to exactly evaluate the function f. Training a Digital Neural Network [0048] As used herein, training a digital neural network 100 generally refers to finding the configuration(s) of logical states for the logical connectors 110 that maps a function f into the digital neural network 100. This training may be performed without either an analytical knowledge of the function f or the outcomes y for all possible inputs x. Typically, a set of data outcomes y (labels) for given inputs x (training set) is available for training. The training circuit 111 can perform training to train the digital neural network 100. [0049] For traditional neural networks, gradient descent-based techniques can be used to find the configuration of the weights that allows for a good representation of the function f. However, such techniques may not be applicable to training the digital neural networks 100 described herein since there is an integer number of available states for the logical connectors 110 and therefore it may not be possible to define a gradient for the logical connectors 110. This is one reason that it has been difficult to implement fully digital neural networks. [0050] Various typical training methods for neural networks can encounter technical challenges for training digital circuits. In such typical training methods, continuous parameters are used for training. However, digital circuits can function based on binary values rather than continuous parameters. Methods that work on continuous parameter values are generally not well suited for digital circuits that use binary values. [0051] Training digital neural networks 100 disclosed herein can involve determining associations (e.g., as defined by the digital neural network 100) between logic gates 108 in different layers of a neural network 100. For example, this can involve determining whether to connect logic gates 108 in different layers via a short circuit 122 or via an inverter 118. As another example, training can involve determining whether to connect logic gates 108 in different layers via a short circuit 122 or via an inverter 118, or to not connect the two logic gates 108 (e.g., by connecting an open circuit 120 to an input of one of the logic gates 108). 
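To make both the completeness of the OR/NOT basis and the notion of training concrete, the toy example below exhaustively searches the connector states of a tiny two-layer OR-gate network until it reproduces XOR. This brute-force search is purely illustrative and is not the training approach of the disclosure, which, as described next, casts training as an integer linear programming problem; the helper names and the convention that an open connector reads as logic 0 are assumptions made here.

```python
from itertools import product

STATES = ("short", "invert", "open")

def apply(state, bit):
    # short: pass through; invert: logical NOT; open: assumed to read as 0
    return bit if state == "short" else (1 - bit if state == "invert" else 0)

def or_gate(bits):
    return 1 if any(bits) else 0

def net(a, b, c):
    """Tiny 2-2-1 network of OR gates; c is a tuple of six connector states."""
    g1 = or_gate([apply(c[0], a), apply(c[1], b)])
    g2 = or_gate([apply(c[2], a), apply(c[3], b)])
    return or_gate([apply(c[4], g1), apply(c[5], g2)])

target = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}  # XOR truth table

solutions = [c for c in product(STATES, repeat=6)
             if all(net(a, b, c) == y for (a, b), y in target.items())]
print(len(solutions), "configurations implement XOR; one example:", solutions[0])
```

One configuration it finds is g1 = OR(NOT a, b) and g2 = OR(a, NOT b), with an output gate that ORs the inverted gate outputs, which equals NOT(g1 AND g2) = XOR(a, b).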
[0052] One technique for training the neural networks 100 disclosed herein involves casting the training problem as an integer linear programming (ILP) problem. For example, for each input x, a set of linear inequalities, where the binary variables representing the states of the logical connectors 110 are the unknowns, can be defined to represent the propagation of the input x through the neural network 100. Therefore, the training of a digital neural network 100 can be translated into solving an ILP problem. [0053] However, solving ILPs may not be a simple task. ILPs belong to the class of combinatorial problems also known as non-deterministic polynomial (NP) problems, infamous for their hardness. This disclosure is related to the Virtual Memcomputing Machine (VMM), where non-transitory computer readable storage stores instructions that, when executed by one or more processors, emulate a novel computing architecture and solve large ILP problems very efficiently. The VMM can be used to solve the ILP problem related to training digital neural networks 100 and provide a configuration for the logical connector 110 states that represents the function f. The VMM can be a software emulation of Self-Organizing Algebraic Gates (SOAGs), which can be used as a training solution for digital neural networks 100. [0054] Any suitable principles and advantages disclosed in International Patent Application No. PCT/US2022/053781 filed December 22, 2022 and/or International Patent Application No. PCT/US2016/041909 filed July 12, 2016 and published as International Publication No. WO 2017/011463 can be used to solve problems (e.g., ILP problems) related to training any of the neural networks disclosed herein; the disclosures of each of these patent applications are hereby incorporated by reference in their entireties and for all purposes. For instance, the training circuit 111 can be implemented in accordance with any suitable principles and advantages disclosed in these international patent applications. Any other suitable training methods can be applied to the digital neural networks disclosed herein. [0055] Digital neural networks disclosed herein can be trained multiple times to compute different functions. In certain applications, digital neural networks disclosed herein can be trained a single time. This can be useful for certain applications, such as Internet of Things devices. Design of NOR Gates as Activation Functions [0056] In certain embodiments, NOR gates can be used to implement the multi-terminal logic gates 108 of a digital neural network 100. A multiterminal NOR gate can be defined with a threshold th having a general relation as follows:

o = 1 if \sum_{j=1}^{n} i_j \le th, and o = 0 otherwise. (Equation 1)

[0057] In Equation 1, o is the output, i_j is the j-th input of the NOR gate, and th is the threshold. In certain implementations, a NOR gate can be used to implement a multi-terminal logic gate 108 instead of an OR gate because NOR gates can be easily implemented in complementary metal-oxide semiconductor (CMOS) technology. However, aspects of this disclosure are not limited thereto. To obtain an OR gate, a NOT gate can be added after a NOR gate. However, when used in certain embodiments of the digital neural networks 100 described herein, OR and NOR gates may be completely interchangeable because the input terminals of the multi-terminal logic gates 108 can be coupled to logical connectors 110 which can include switches 116 that can select the inverter 118 or the short circuit 122 (e.g., as shown in FIG. 3). 
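As a minimal illustration of the relation in Equation 1, the behavior of a thresholded multi-terminal NOR gate can be written as a one-line function; the function name and the use of a Python list for the inputs are choices made here for illustration.

```python
from typing import List

def nor_th(inputs: List[int], th: int = 0) -> int:
    """Multi-terminal NOR gate with threshold th (Equation 1):
    the output is 1 when at most th inputs are 1, and 0 otherwise.
    th = 0 reduces to a standard multi-terminal NOR gate."""
    return 1 if sum(inputs) <= th else 0

# Example: a standard 3-input NOR (th = 0) and a thresholded variant (th = 1).
assert nor_th([0, 0, 0]) == 1
assert nor_th([0, 1, 0]) == 0
assert nor_th([0, 1, 0], th=1) == 1
assert nor_th([1, 1, 0], th=1) == 0
```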
[0058] FIG. 4 illustrates an embodiment of a multi-terminal NOR gate 200 in accordance with aspects of this disclosure. FIG. 5 illustrates an embodiment of a multi-terminal NOR gate 250 having a threshold in accordance with aspects of this disclosure. Each of the multi-terminal NOR gates 200 and 250 can be implemented in CMOS technology. [0059] With reference to FIG. 4, the multi-terminal NOR gate 200 includes a plurality of inputs i1, i2, …, in, an output o, a first power supply terminal VDD, a second power supply terminal GND, and a plurality of transistors 202. The transistors 202 are arranged to implement a NOR function. [0060] In FIG. 5, the multi-terminal NOR gate 250 includes a plurality of input switches i1, i2, …, in, an output o, a first power supply terminal VDD, two second power supply terminals GND, a pair of transistors 252, a plurality of first resistors R each in series with a respective input switch i1, i2, …, in, and a threshold resistor Rth. When the threshold resistor Rth is set to R/2 or any larger value, the functionality of the multi-terminal NOR gate 250 of FIG. 5 may be substantially the same as that of the multi-terminal NOR gate 200 of FIG. 4. The working principle of the NOR gate 250 can be as follows. The resistor Rth can be sized such that if a number of switches strictly larger than th is closed, then the output voltage o is set to 0, and otherwise to VDD. Accordingly, the NOR gate 250 implements Equation 1, where the states of the switches are the inputs i_j and the output o is the voltage at the node o. [0061] The multi-terminal NOR gate 250 of FIG. 5 can be implemented using a combination of digital and analog circuit elements, even when implemented fully in CMOS technology. This can provide certain advantages over fully digital implementations in certain applications, for example, as discussed herein. [0062] With continuing reference to the multi-terminal NOR gate 250 of FIG. 5, the inputs are configured to open or close the input switches i1, i2, …, in. The input switches i1, i2, …, in are configured to be opened or closed by applying a voltage to a control terminal of a respective transistor (e.g., the gates of the CMOS components) used to implement the input switches i1, i2, …, in. Therefore, the input switches i1, i2, …, in can be controlled directly by the outputs of other NOR gates 250 or other generic CMOS-based logic gates. The implementation of FIG. 5 may be one of the most compact implementations for a NOR gate with a threshold. For example, the FIG. 5 implementation may use a minimum of 3(n+1) transistors, with n being the number of inputs. A fully digital implementation may involve a more complex digital circuit that would essentially perform the sum of the inputs and compare the sum against a threshold. [0063] Depending on the implementation, the magnitude of the resistance of the resistor Rth, or the ratio of Rth to R, may be a significant design consideration to implement the desired functionality of the NOR gate 250. For example, standard CMOS transistors may have a cut-off gate voltage at VDD/2. In this case, the ratio may be configured such that the NOR gate 250 functions properly. In one example, the node voltage v takes one value when th switches are closed and a different value when th + 1 switches are closed; FIG. 6 is a graph of v/VDD evaluated with th and th+1 switches closed in accordance with aspects of this disclosure. Here, th may be a parameter that characterizes the multi-terminal NOR gate 250. 
The parameter th may be a threshold number of inputs that can be asserted in a logic 1 state for the output of the multi-terminal NOR gate 250 to be at logic 1. For example, th may characterize the multi-terminal NOR gate 250 as defined in Equation 1, where if up to th inputs are 1 (e.g., up to th switches are closed) then the output is 1. Otherwise, if th+1 or more inputs are 1 (e.g., th+1 or more switches are closed), then the output is 0. Accordingly, the multi-terminal NOR gate 250 with the parameter th may be a generalization of a multi-terminal NOR gate. A standard multi-terminal NOR gate can be achieved by setting th = 0. [0064] However, when implemented in CMOS technology, there may be variability in the resistances and the NOR gate 250 may not follow a perfect step function. As shown in FIG. 6, if the threshold is small, e.g., th ≤ 2, the gap is about 10%, which is enough to handle the variabilities mentioned. This relationship may not depend on the number of inputs of the NOR gate 250. However, if the threshold is higher, then the variability may be an inherent aspect of the NOR gates 250, which can be addressed in a different way in the training of the digital neural network 100. ILP formulation of NOR gates [0065] According to aspects of this disclosure, a digital neural network 100 that includes logic gates 108 and/or threshold logic gates 250 can be trained using an ILP formulation. Starting from Equation 1 for the output o above, the ILP formulation of a multi-terminal NOR gate 250 with threshold can be written as a pair of inequalities of the form of Equation 2. [0066] In Equation 2, i_j are the n inputs, o is the output, and th is the threshold. These two inequalities can be used to completely describe the multi-terminal NOR gates 250 with threshold in the ILP format. To address possible variability coming from the CMOS implementation of such multi-terminal NOR gates 250, the NOR relation can be implemented with an extra gap of δ+ above and δ- below the threshold th. In terms of ILP formulations, this changes Equation 2 into Equation 3. [0067] Equation 3 creates a forbidden gap of δ+ above and δ- below the threshold th. This gap, properly sized, can be used to compensate for the variability introduced by the CMOS implementation. 
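The inequalities of Equations 2 and 3 themselves are not reproduced in this copy of the text. Purely as an illustration, one standard big-M style encoding that matches the behavior described for a thresholded NOR gate (binary inputs i_j, binary output o, threshold th) would be

\sum_{j=1}^{n} i_j \le th + n\,(1 - o), \qquad \sum_{j=1}^{n} i_j \ge (th + 1)\,(1 - o),

so that o = 1 forces the sum of the inputs to stay at or below th, and o = 0 forces it to reach at least th + 1. Adding the gap of Equation 3 could then take a form such as

\sum_{j=1}^{n} i_j \le th - \delta^{-} + n\,(1 - o), \qquad \sum_{j=1}^{n} i_j \ge (th + \delta^{+})\,(1 - o).

These are assumptions made here for illustration; the exact inequalities used in the application may differ.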
ILP Formulation of Training Problem [0068] FIG. 7 illustrates an example digital neural network 300 in accordance with aspects of this disclosure. The digital neural network 300 of FIG. 7 is provided as an example with a topology that is simple and small in order to describe and capture aspects of the ILP formulation of the training problem, which can be applied to a digital neural network 100 of any size and topology. As shown in FIG. 7, the neural network 300 includes a plurality of multi-terminal logic gates 308 arranged into a plurality of layers 304. The neural network 300 also includes a plurality of inputs i1,1, i1,2, …, i1,12 and an output o3,1. The neural network 300 further includes a plurality of logical connectors 310 with states that define the relationships or connections between adjacent layers 304. Also shown are internal outputs o1,1, o1,2, …, o2,3, internal inputs i2,1, i2,2, …, i3,3 to the multi-terminal logic gates 308, and internal inputs i'2,1, i'2,2, …, i'3,3 to the logical connectors 310 that connect each of the layers 304 to its adjacent layer(s) 304. The neural network 300 can further include an optional training circuit 111. [0069] FIGs. 8A and 8B illustrate embodiments of the logical connectors in accordance with aspects of this disclosure. In particular, the logical connector 311 of FIG. 8A is substantially similar to the logical connectors 110 illustrated in FIG. 2. The logical connector 311 includes a switch 116, an inverter 118, an open circuit 120, and a short circuit 122. [0070] The logical connector 313 of FIG. 8B implements the same functionality as the logical connector 311 of FIG. 8A with an alternative design. In particular, the logical connector 313 includes an inverter 118, a short circuit 122, a first switch 314, and a second switch 316. The first switch 314 is configured to select (e.g., connect to) one of the inverter 118 and the short circuit 122, while the second switch 316 is configured to operate in either an open circuit or short circuit state. Accordingly, the combination of the first switch 314 and the second switch 316 can implement three different states for the logical connector 313 (e.g., an open circuit, a closed circuit, or an inversion). [0071] One significant aspect of training the neural network 300 is identifying the variables of the ILP problem. The training involves finding the set of states of the logical connectors 310 at the input terminals of each of the multi-terminal logic gates 308 such that, for a set of inputs (I), the neural network 300 will return the corresponding outputs (O). Therefore, for each I ∈ (I), the propagation of the input I through the neural network 300 can be defined by a set of inequalities that link the inputs to the outputs through the states of the logical connectors 310. The following discussion applies to an implementation in which the multi-terminal logic gates 308 are embodied as NOR gates. However, the principles and advantages of this discussion can be modified to apply equally to OR gate embodiments or any other suitable implementation of the multi-terminal logic gates 308. [0072] For example, the binary variables x_l,j can describe the partial state of the j-th logical connector 313 of the layer l. If x_l,j = 1, then the logical connector 313 is in a first state (e.g., the first switch 314 is connected to the inverter 118). If x_l,j = 0, then the logical connector 313 is in a second state (e.g., the first switch 314 is connected to the short circuit 122). For the binary variables y_l,j describing the complementary partial state of the j-th logical connector 313 of the layer l, if y_l,j = 1 then the logical connector has a third state (e.g., the second switch 316 implements an open circuit). If y_l,j = 0, then the logical connector 313 has a state that depends on x_l,j (e.g., that depends on the state of the first switch 314). Therefore, the states of the logical connector 313 at the input terminals of the multi-terminal logic gates 308 can satisfy a set of inequalities, referred to as Equation 4. [0073] In Equation 4, i_l,j is the input that is applied to the input terminal of the multi-terminal logic gate 308 and i'_l,j is the input that is applied to the logical connector 313. The set of inequalities in Equation 4 may fully describe the state of the logical connectors 313 in the neural network 300. 
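The inequalities of Equation 4 are likewise not reproduced in this copy of the text. Purely as an illustration of the kind of constraints involved, one possible linearization consistent with the connector behavior described above, with binary variables x_l,j and y_l,j and with an open connector assumed to present a logic 0 to the gate input, is

i_{l,j} \le 1 - y_{l,j}, \quad i_{l,j} \le i'_{l,j} + x_{l,j}, \quad i_{l,j} \le 2 - i'_{l,j} - x_{l,j},
i_{l,j} \ge i'_{l,j} - x_{l,j} - y_{l,j}, \quad i_{l,j} \ge x_{l,j} - i'_{l,j} - y_{l,j}.

Under these constraints, y_l,j = 1 forces i_l,j = 0 (open circuit), while y_l,j = 0 makes i_l,j equal to i'_l,j when x_l,j = 0 (short circuit) and to its complement when x_l,j = 1 (inverter). The actual Equation 4 of the application may be formulated differently.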
[0074] The training process through an ILP can be described based on the set of inequalities in Equation 4. For example, for I ∈ (I), the inputs i1,1, …, i1,12 are set equal to the components of I. The outputs of the first layer o1,1, …, o1,4 are subsequently evaluated. The inputs i1,1, …, i1,12 and outputs o1,1, …, o1,4 of the first layer 304 do not involve any logical connectors 313, and thus, the inputs i1,1, …, i1,12 and outputs o1,1, …, o1,4 are parameters that will enter into the next equations. For the second layer 304, the first multi-terminal gate 308 will be described in detail, since the remaining multi-terminal gates 308 operate similarly. The inputs i'2,1, …, i'2,4 can be set as o1,1, …, o1,4 since the inputs i'2,1, …, i'2,4 are directly connected to the outputs o1,1, …, o1,4. The inputs i'2,1, …, i'2,4 can be linked to the inputs i2,1, …, i2,4 using the inequalities (4) through the states of the logical connectors 313 x2,1, …, x2,4 and y2,1, …, y2,4. The output o2,1 of the multi-terminal gate 308 can then be linked to the inputs i2,1, …, i2,4 using Equation 3. [0075] A similar process can be used for the last multi-terminal gate 308, where the outputs of the second layer o2,1, …, o2,3 are linked to the inputs of the logical connectors 313 i'3,1, …, i'3,3, which are in turn linked to the inputs of the multi-terminal gate 308 i3,1, …, i3,3 through the inequalities (4), and finally the inputs of the multi-terminal gate 308 can be linked to the output o3,1 through the inequalities (4). The output o3,1 is set to the output O ∈ (O) that corresponds to the input I ∈ (I). [0076] Therefore, for each pair (I, O), the training process can involve generating the set of ILP inequalities that propagates the inputs through the neural network 300 and at the same time backpropagates the outputs through the neural network 300. For example, because the layers 304 of the neural network 300 are linked by the inequalities (4), both the outputs from a previous layer 304 and the inputs to a subsequent layer 304 will affect the states of the logical connectors 313 connecting a current layer 304 to the previous and subsequent layers 304. Therefore, by solving the ILP for all pairs (I, O) simultaneously, the training process will return the configurations of the logical connectors 313 that reproduce the function f. For large neural networks 300, instead of generating an extremely large ILP problem, a minibatch method can be employed to solve subsets of randomly extracted pairs and iterate the process over several epochs until the training process reaches a threshold accuracy. 
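The minibatch strategy just described can be summarized as a short training loop. The sketch below is only illustrative: build_ilp, solve_ilp, and evaluate_accuracy are hypothetical placeholders standing in for the generation of the inequalities (such as Equations 2-4) and for an ILP solver such as the Virtual MemComputing Machine; they are not a disclosed API.

```python
import random

def train_minibatch(training_pairs, build_ilp, solve_ilp, evaluate_accuracy,
                    batch_size=64, max_epochs=50, target_accuracy=0.98):
    """Illustrative minibatch ILP-based training loop.

    training_pairs    -- list of (input vector I, expected output O) pairs
    build_ilp         -- callable that generates the ILP for a batch of pairs
    solve_ilp         -- callable that returns a set of logical connector states
    evaluate_accuracy -- callable that scores a connector configuration on pairs
    """
    connector_states = None
    for _ in range(max_epochs):
        batch = random.sample(training_pairs, min(batch_size, len(training_pairs)))
        connector_states = solve_ilp(build_ilp(batch))
        if evaluate_accuracy(connector_states, training_pairs) >= target_accuracy:
            break
    return connector_states
```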
Applications [0077] The adoption of neural networks for data mining, image recognition, and signal recognition, among other applications, is driving growth in the neural network market, as there is a need to detect complex nonlinear relationships between variables and patterns. Indeed, there is a great need for more efficient, lower-energy digital neural network architectures to support current and future computing endeavors. [0078] There are great commercial applications across a variety of industries (e.g., fintech, IT, life sciences, manufacturing, government and defense, and transportation and logistics). The adoption of cloud-based training and edge deployment of digital neural network solutions is expected to grow, mainly due to their benefits, such as easy maintenance of generated data, cost-effectiveness, scalability, and effective management. The digital neural networks described herein have significant potential to disrupt this market and deliver strategic competitive advantages to their early adopters. Evaluation of a Digital Neural Network [0079] Using the training to find a configuration of the logical connectors 110 that maps the function f into the digital neural network 100, the digital neural network 100 can be programmed such that the states of the logical connectors 110 (e.g., the states of the switches 116) are set following the training outcome. In implementations in which the digital neural network 100 can be retrained, updates of the training of a digital neural network 100 can be implemented by simply reprogramming the states of the switches 116. This can allow for, for example, offline training when more labelled data becomes available, followed by a simple reconfiguration of the switches 116 to implement the updates. Once the switches 116 are set, the digital neural network 100 is ready to be evaluated for any possible input x. By design, the evaluation of the digital neural network 100 can be performed in a single clock cycle. Moreover, using CMOS technology to design the multi-terminal logic gates 108, the power and energy for the evaluation is extremely small, orders of magnitude smaller than for current neural networks. In fact, since there is no data movement from the memory (just bringing the input and returning the output) to the processing unit (e.g., the digital neural network 100), the CMOS technology allows for an extremely low-power multi-terminal logic gate 108 implementation. [0080] Theoretical aspects and practical performance of a digital neural network 100 can be analyzed against established benchmarks. Size in terms of gates, transistors, and interconnect complexity for specific applications (e.g., classification, predictive analytics, image recognition, etc.) can be quantified. To this end, for a given topology and input and output sizes, a minimum number of layers for achieving and/or guaranteeing universality can be determined. Efficiency of the training in terms of accuracy measured on established benchmarks can be assessed. In order to efficiently train the digital neural network 100, a Virtual Memcomputing Machine can be used. In some embodiments, training can be performed with the full training set as well as using mini-batches to reduce the size of the ILPs to be solved, since this may speed up the training under certain conditions. Performance in terms of energy/power and speed can be evaluated. This can be evaluated by considering a realization in CMOS technology. Trained Digital Neural Networks [0081] As discussed herein, a digital neural network 100 can be trained by solving a first set of ILP problems to determine a first set of states of the logical connectors 110 that maps a first function f1 into the digital neural network 100. The digital neural network 100 can also be retrained to map a second function f2 into the digital neural network 100 by determining a second set of states of the logical connectors 110 by solving a second set of ILP problems corresponding to the second function f2. Thus, the digital neural network 100 can be retrained to implement substantially any function f by generating the corresponding set of ILP problems. [0082] However, for certain applications it may not be necessary to retrain a digital neural network 100. For example, when a digital neural network 100 is designed to implement a single function f, it may not be necessary to ever retrain the digital neural network 100. Accordingly, it is not necessary to include each of the parallel paths (e.g., the inverter 118, the open circuit 120, and the short circuit 122 of FIG. 2) or the switch 116 in each of the logical connectors 110. For example, if the trained state of a given logical connector 110 is the inverter 118 path, then the logical connector 110 can include only the inverter 118 without any of the other components. 
By implementing the logical connectors 110 with only the component corresponding to the trained state of the logical connector 110, the digital neural network 100 can be implemented with significantly fewer components. [0083] In certain implementations, the digital neural network 100 may be embodied on a single chip, for example, when incorporated into certain devices (e.g., in autonomous vehicles). Since the digital neural network 100 can be implemented entirely in CMOS technology, the digital neural network 100 can be more easily implemented on a chip than other neural networks that rely on von Neumann architectures. Advantageously, this enables digital neural networks 100 to be more readily adopted for various applications compared to traditional neural networks. [0084] The digital neural networks disclosed herein can provide fast computations. A digital neural network can be embodied on a single chip. In certain instances, an input can be loaded, the digital neural network can compute a function in a single clock cycle, and the output can then be read out. This is a significant improvement in speed relative to certain existing neural network function computations. Digital neural networks disclosed herein can also use little energy compared to certain existing neural network computations. Conclusion [0085] The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate embodiments and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. Having thus described embodiments of the present disclosure, a person of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims. [0086] In the foregoing specification, the disclosure has been described with reference to specific embodiments. However, as one skilled in the art will appreciate, various embodiments disclosed herein can be modified or otherwise implemented in various other ways without departing from the spirit and scope of the disclosure. Accordingly, this description is to be considered as illustrative and is for the purpose of teaching those skilled in the art the manner of making and using various embodiments. It is to be understood that the forms of disclosure herein shown and described are to be taken as representative embodiments. Equivalent elements, materials, processes or steps may be substituted for those representatively illustrated and described herein. Moreover, certain features of the disclosure may be utilized independently of the use of other features, all as would be apparent to one skilled in the art after having the benefit of this description of the disclosure. Expressions such as “including,” “comprising,” “incorporating,” “consisting of,” “have,” and “is” used to describe and claim the present disclosure are intended to be construed in a non-exclusive manner, namely allowing for items, components or elements not explicitly described also to be present. Reference to the singular is also to be construed to relate to the plural. [0087] Further, various embodiments disclosed herein are to be taken in the illustrative and explanatory sense, and should in no way be construed as limiting of the present disclosure. 
All joinder references (e.g., attached, affixed, coupled, connected, and the like) are only used to aid the reader’s understanding of the present disclosure, and may not create limitations, particularly as to the position, orientation, or use of the systems and/or methods disclosed herein. Therefore, joinder references, if any, are to be construed broadly. Moreover, such joinder references do not necessarily imply that two elements are directly connected to each other. Additionally, all numerical terms, such as, but not limited to, “first”, “second”, “third”, “primary”, “secondary”, “main” or any other ordinary and/or numerical terms, should also be taken only as identifiers, to assist the reader's understanding of the various elements, embodiments, variations and/or modifications of the present disclosure, and may not create any limitations, particularly as to the order, or preference, of any element, embodiment, variation and/or modification relative to, or over, another element, embodiment, variation and/or modification. [0088] It will also be appreciated that one or more of the elements depicted in the drawings/figures can also be implemented in a more separated or integrated manner, or even removed or rendered as inoperable in certain cases, as is useful in accordance with a particular application.