

Title:
OPTICAL SYSTEMS AND METHODS USING BROADBAND DIFFRACTIVE NEURAL NETWORKS
Document Type and Number:
WIPO Patent Application WO/2021/050550
Kind Code:
A1
Abstract:
A broadband diffractive optical neural network simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task learned using deep learning. The optical neural network design was verified by designing, fabricating and testing seven different multi-layer, diffractive optical systems that transform the optical wavefront generated by a broadband THz pulse to realize (1) a series of tunable, single passband as well as dual passband spectral filters, and (2) spatially-controlled wavelength de-multiplexing. Merging the native or engineered dispersion of various material systems with a deep learning-based design, broadband diffractive optical neural networks help engineer light-matter interaction in 3D, diverging from intuitive and analytical design methods to create task-specific optical components that can all-optically perform deterministic tasks or statistical inference for optical machine learning. The optical neural network may be implemented as a reflective optical neural network.

Inventors:
OZCAN AYDOGAN (US)
LUO YI (US)
MENGU DENIZ (US)
RIVENSON YAIR (US)
Application Number:
PCT/US2020/049950
Publication Date:
March 18, 2021
Filing Date:
September 09, 2020
Assignee:
UNIV CALIFORNIA (US)
International Classes:
G06N3/067
Domestic Patent References:
WO2019200289A1 2019-10-17
Foreign References:
US6815683B2 2004-11-09
US9459142B1 2016-10-04
US20120262610A1 2012-10-18
US20180262291A1 2018-09-13
US20180045953A1 2018-02-15
US20140067742A1 2014-03-06
Other References:
LUO YI, MENGU DENIZ, YARDIMCI NEZIH T., RIVENSON YAIR, VELI MUHAMMED, JARRAHI MONA, OZCAN AYDOGAN: "Design of task-specific optical systems using broadband diffractive neural networks", ARXIV.ORG, vol. 8, no. 1, 14 September 2019 (2019-09-14), pages 1 - 36, XP081477293, DOI: 10.1038/s41377-019-0223-1
MENGU, D. ET AL.: "Analysis of Diffractive Optical Neural Networks and Their Integration With Electronic Neural Networks", IEEE JOURNAL OF SELECTED TOPICS IN QUANTUM ELECTRONICS, vol. 26, no. 1, 6 June 2019 (2019-06-06), Retrieved from the Internet DOI: 10.1109/JSTQE.2019.2921376
Attorney, Agent or Firm:
DAVIDSON, Michael S. (US)
Claims:
What is claimed is:

1. An optical neural network device for processing a broadband input optical signal from a broadband light source comprising: a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the broadband input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers; and one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers.

2. The optical neural network device of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers filter a particular wavelength or a set of wavelengths within a wavelength range of the broadband light source.

3. The optical neural network device of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers filter a plurality of particular wavelengths or sets of wavelengths within a wavelength range of the broadband light source.

4. The optical neural network device of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers generate wavelength de-multiplexed optical signals for the one or more optical sensors.

5. The optical neural network device of claim 1, wherein the broadband input optical signal comprises a broadband terahertz (THz) source.

6. The optical neural network device of claim 1, wherein the broadband input optical signal comprises a broadband visible source.

7. The optical neural network device of claim 1, wherein the broadband input optical signal comprises a broadband infrared source.

8. The optical neural network device of claim 1, wherein the plurality of physical features of the plurality of optically transmissive and/or reflective substrate layers comprise regions of varied thicknesses.

9. The optical neural network device of claim 1, wherein the plurality of physical features of the plurality of optically transmissive and/or reflective substrate layers comprise regions having different optical properties.

10. The optical neural network device of claim 1, wherein the plurality of physical features of the plurality of optically transmissive and/or reflective substrate layers comprise metamaterials and/or metasurfaces.

11. The optical neural network of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers are positioned within and/or surrounded by vacuum, air, a gas, a liquid or a solid material.

12. The optical neural network of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers comprise a nonlinear optical material.

13. The optical neural network of claim 1, wherein the plurality of optically transmissive and/or reflective substrate layers comprises one or more physical substrate layers that comprise reconfigurable physical features that can change on demand as a function of time.

14. An optical neural network device for processing an input optical signal having multiple distinct channels comprising: a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) comprising one or more multiplexed channel(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers; and one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers.

15. A method of forming a multi-layer optical neural network for processing broadband light comprising: training a software-based neural network to perform one or more specific optical functions for a multi-layer transmissive and/or reflective network having a plurality of optically diffractive physical features located in different locations in each of the layers of the transmissive and/or reflective network, wherein the training comprises feeding a plurality of different wavelengths of a broadband optical signal to the software-based neural network and computing at least one optical output of optical transmission and/or reflection through the multi-layer transmissive and/or reflective network using a wave propagation model and iteratively adjusting complex-valued transmission/reflection coefficients for each layer of the multi-layer transmissive and/or reflective network until optimized transmission/reflection coefficients are obtained or a certain time or epochs have elapsed; and manufacturing or having manufactured a physical embodiment of the multi-layer transmissive and/or reflective network comprising a plurality of substrate layers having physical features that match the optimized transmission/reflection coefficients obtained by the trained neural network.

16. The method of claim 15, wherein the optimized transmission/reflection coefficients are obtained by error back-propagation.

17. The method of claim 15, wherein the optimized transmission/reflection coefficients are obtained by deep learning.

18. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network is manufactured by additive manufacturing.

19. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network is manufactured by lithography.

20. The method of claim 15, wherein the plurality of optically transmissive and/or reflective substrate layers forming the network are positioned within and/or surrounded by vacuum, air, a gas, a liquid or a solid material.

21. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network comprises one or more physical substrate layers that comprise a nonlinear optical material.

22. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network comprises one or more physical substrate layers that comprise reconfigurable physical features that can change on demand as a function of time.

23. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network filters a particular wavelength or set of wavelengths within a wavelength range of the broadband light source.

24. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network filters a plurality of particular wavelengths or sets of wavelengths within a wavelength range of the broadband light source.

25. The method of claim 15, wherein the physical embodiment of the multi-layer transmissive and/or reflective network generates wavelength de-multiplexed optical signals for one or more optical sensors.

26. The method of claim 15, wherein the broadband input optical signal comprises a broadband terahertz (THz) source.

27. The method of claim 15, wherein the broadband input optical signal comprises a broadband visible source.

28. The method of claim 15, wherein the broadband input optical signal comprises a broadband infrared source.

29. A method of filtering or de-multiplexing a broadband input optical signal comprising: providing an optical neural network device comprising: a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the broadband input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers; and one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers; and inputting the broadband input optical signal to the optical neural network.

30. A method of processing an input optical signal having multiple distinct channels comprising: providing an optical neural network device comprising: a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) comprising one or more multiplexed channel(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers; and one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers; and inputting the input optical signal having multiple distinct channels into the optical neural network.

Description:
OPTICAL SYSTEMS AND METHODS USING BROADBAND DIFFRACTIVE NEURAL NETWORKS

Related Application

[0001] This Application claims priority to U.S. Provisional Patent Application No. 62/900,353 filed on September 13, 2019, which is hereby incorporated by reference in its entirety. Priority is claimed pursuant to 35 U.S.C. § 119 and any other applicable statute.

Technical Field

[0002] The technical field generally relates to an optical deep learning physical architecture or platform that can perform, at the speed of light, various complex functions. In particular, the technical field relates to a broadband diffractive optical neural network design that simultaneously processes a continuum of wavelengths generated by a temporally-incoherent broadband source to all-optically perform a specific task or operation learned using deep learning. The specific tasks of spectral filtering and spatially-controlled wavelength de-multiplexing are illustrated.

Background

[0003] Deep learning has been redefining the state-of-the-art results in various fields, such as image recognition, natural language processing and semantic segmentation. The optics community has also benefited from deep learning methods in various applications such as microscopic imaging and holography, among many others. Aside from optical imaging, deep learning and related optimization tools have recently been utilized for solving inverse problems in optics related to e.g., nanophotonic designs and nanoplasmonics. These successful demonstrations and many others have also inspired a resurgence in the design of optical neural networks and other optical computing techniques motivated by their advantages in terms of parallelization, scalability, power efficiency and computation speed. A recent addition to this family of optical computing methods is Diffractive Deep Neural Networks (D²NN), which are based on light-matter interaction engineered by successive diffractive layers, designed in a computer by deep learning methods such as error backpropagation and stochastic gradient descent. Once the training phase is finalized, the diffractive optical network that is composed of transmissive and/or reflective layers is physically fabricated using e.g., 3D printing or lithography. Each diffractive layer consists of elements (termed neurons) that modulate the phase and/or amplitude of the incident beam at their corresponding location in space, connecting one diffractive layer to successive ones through spherical waves based on the Huygens-Fresnel principle. Using spatially and temporally coherent illumination, these neurons at different layers collectively compute the spatial light distribution at the desired output plane based on a given task that is learned. Diffractive optical neural networks designed using this framework and fabricated by 3D printing were experimentally demonstrated to achieve all-optical inference and data generalization for object classification, a fundamental application in machine learning. Overall, multi-layer diffractive neural networks have been shown to improve their blind testing accuracy, diffraction efficiency and signal contrast with additional diffractive layers, exhibiting a depth advantage even when using linear optical materials. In all of these previous studies on diffractive optical networks, the input light was both spatially and temporally coherent, i.e., a monochromatic plane wave was used at the input.

[0004] In general, diffractive optical networks with multiple layers enable generalization and perform all-optical blind inference on new input data (never seen by the network before), beyond the deterministic capabilities of the previous diffractive surfaces that were designed using different optimization methods to provide wavefront transformations without any data generalization capability. These previous demonstrations cover a variety of applications over different regions of the electromagnetic spectrum, providing unique capabilities compared to conventional optical components that are designed by analytical methods. While these earlier studies revealed the potential of single-layer designs using diffractive surfaces under temporally-coherent radiation, the extension of these methods to broadband designs operating with a continuum of wavelengths has been a challenging task. Operating at a few discrete wavelengths, different design strategies have been reported using single-layer phase elements based on composite materials and thick layouts covering multiple 2π modulation cycles. In a recent work, a low numerical aperture (NA ~ 0.01) broadband diffractive cylindrical lens design has also been demonstrated. In addition to diffractive elements, metasurfaces also present engineered optical responses devised through densely packed subwavelength resonator arrays which control their dispersion behavior. Recent advances in metasurfaces have enabled several broadband, achromatic lens designs for e.g., imaging applications. On the other hand, the design space of broadband optical components that process a continuum of wavelengths relying on these elegant techniques has been restricted to single-layer architectures, mostly with an intuitive analytical formulation of the desired surface function.

Summary

[0005] In one embodiment, a broadband diffractive optical network framework (i.e., an optical neural network) is disclosed that unifies deep learning methods with the angular spectrum formulation of broadband light propagation and the material dispersion properties in order to design task-specific optical components through 3D-engineering of light-matter interaction. Designed initially using software executed by a computing device, a physical broadband diffractive optical network is then fabricated in accordance with the computationally derived design. After its fabrication, the network can process a continuum of input wavelengths (i.e., from a broadband light source) all in parallel, and perform a learned task at its output plane (e.g., filtering or de-multiplexing), resulting from the diffraction of broadband light through multiple layers. The success of broadband diffractive optical networks is demonstrated experimentally by designing, fabricating and testing different types of optical components using a broadband THz pulse as the input source.
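To make the angular spectrum formulation concrete, the following is a minimal Python sketch of scalar free-space propagation of a complex optical field. It is an illustration only, not code from the patent; the function name, grid sampling and boundary treatment are assumptions. Broadband operation follows by applying the same routine independently at each wavelength of the source.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, distance):
    """Propagate a square 2D complex field over free space by `distance`
    using the angular spectrum method (scalar diffraction)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                    # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx, indexing="ij")
    k = 2 * np.pi / wavelength
    kz_sq = k**2 - (2 * np.pi * fxx)**2 - (2 * np.pi * fyy)**2
    # Propagating components acquire a phase delay; evanescent ones decay.
    kz = np.where(kz_sq >= 0, 1.0, 1.0j) * np.sqrt(np.abs(kz_sq))
    transfer = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# A broadband input is handled wavelength by wavelength, e.g.:
# outputs = [angular_spectrum_propagate(u0, wl, dx, z) for wl in wavelengths]
```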

[0006] In exemplary embodiments, first, a series of single passband as well as dual passband spectral filters are demonstrated, where each design used three diffractive layers that were fabricated by 3D printing and experimentally tested. A classical tradeoff between the Q-factor and the power efficiency was observed, and it was demonstrated that the diffractive neural network framework can control and balance these design parameters on demand, i.e., based on the selection of the diffractive network training loss function. Combining the spectral filtering operation with spatial multiplexing, spatially-controlled wavelength de-multiplexing was demonstrated using three (3) diffractive layers that are trained to de-multiplex a broadband input source onto four (4) output apertures located at the output plane of the diffractive network, where each aperture has a unique target passband. The experimental results obtained with these different diffractive optical networks that were 3D printed provided a very good fit to the trained diffractive models.
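The Q-factor versus power-efficiency tradeoff noted above is set by the training loss. Equation (8) itself is not reproduced in this excerpt, so the following Python fragment is only an illustrative stand-in (all names and the exact functional form are assumptions): a weight β = 0 rewards raw passband power, favoring power efficiency, while a larger β relative to α pushes the output spectrum toward a target profile with a prescribed Q-factor.

```python
import torch

def spectral_filter_loss(spectrum, passband_mask, target_profile,
                         alpha=1.0, beta=0.0):
    """Illustrative loss balancing power efficiency against spectral shape.
    Not the patent's Equation (8). beta = 0 maximizes passband power only;
    beta > 0 additionally penalizes deviation from a target profile
    (e.g., a Gaussian whose width encodes the desired Q-factor)."""
    power_term = -(spectrum * passband_mask).sum()       # maximize power in band
    shape_term = ((spectrum - target_profile) ** 2).mean()
    return alpha * power_term + beta * shape_term
```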

[0007] The broadband diffractive optical neural networks provide a powerful framework for merging the dispersion properties of various material systems with deep learning methods in order to engineer light-matter interaction in 3D and help to create task-specific optical components that can perform deterministic tasks as well as statistical inference and data generalization. The presented device and methods can be further empowered by various metamaterial designs, as part of the layers of a diffractive optical network, and bring additional degrees of freedom by engineering and encoding the dispersion of the fabrication materials to further improve the performance of broadband diffractive networks.

[0008] In one embodiment, an optical neural network device is disclosed for processing a broadband input optical signal from a broadband light source. The optical neural network includes a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the broadband input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers. The optical neural network includes one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers.

[0009] In another embodiment, an optical neural network device for processing an input optical signal having multiple distinct channels includes a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) comprising one or more multiplexed channel(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers. The optical neural network further includes one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers.

[0010] In another embodiment, a method of forming a multi-layer optical neural network for processing broadband light includes training a software-based neural network to perform one or more specific optical functions for a multi-layer transmissive and/or reflective network having a plurality of optically diffractive physical features located in different locations in each of the layers of the transmissive and/or reflective network, wherein the training comprises feeding a plurality of different wavelengths of a broadband optical signal to the software-based neural network and computing at least one optical output of optical transmission and/or reflection through the multi-layer transmissive and/or reflective network using a wave propagation model and iteratively adjusting complex-valued transmission/reflection coefficients for each layer of the multi-layer transmissive and/or reflective network until optimized transmission/reflection coefficients are obtained or a certain time or epochs have elapsed; and manufacturing or having manufactured a physical embodiment of the multi-layer transmissive and/or reflective network comprising a plurality of substrate layers having physical features that match the optimized transmission/reflection coefficients obtained by the trained neural network.
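As a compact illustration of this training procedure, the PyTorch sketch below optimizes the thickness maps of three diffractive layers against a single-passband spectral target by error backpropagation. It is a simplified sketch, not the patent's implementation: it assumes a scalar thin-element model with a non-dispersive refractive index, a plane-wave input, and illustrative values for the grid, layer spacing, passband and learning rate.

```python
import torch

def propagate(field, wavelength, dx, distance):
    """Angular spectrum propagation (torch version of the earlier sketch)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    fxx, fyy = torch.meshgrid(fx, fx, indexing="ij")
    kz_sq = (2 * torch.pi / wavelength)**2 - (2 * torch.pi * fxx)**2 \
            - (2 * torch.pi * fyy)**2
    kz = torch.sqrt(kz_sq.clamp(min=0)) + 1j * torch.sqrt((-kz_sq).clamp(min=0))
    return torch.fft.ifft2(torch.fft.fft2(field) * torch.exp(1j * kz * distance))

N, dx, spacing, n_mat = 128, 0.5e-3, 3e-2, 1.7           # illustrative values
layers = [torch.nn.Parameter(1e-3 * torch.rand(N, N)) for _ in range(3)]
opt = torch.optim.Adam(layers, lr=1e-5)

wavelengths = torch.linspace(6e-4, 1.2e-3, 16)           # illustrative THz band (m)
target = torch.exp(-((wavelengths - 8.6e-4) / 5e-5)**2)  # single passband target

for step in range(500):
    opt.zero_grad()
    spectrum = []
    for wl in wavelengths:
        field = torch.ones(N, N, dtype=torch.cdouble)    # plane-wave input
        for t in layers:
            phase = 2 * torch.pi * (n_mat - 1) * t.clamp(min=0) / wl
            field = field * torch.exp(1j * phase)        # thin-element modulation
            field = propagate(field, wl, dx, spacing)
        # Power collected through a small aperture at the output plane.
        spectrum.append(field.abs().pow(2)[N//2-2:N//2+2, N//2-2:N//2+2].sum())
    spectrum = torch.stack(spectrum)
    loss = ((spectrum / spectrum.max() - target)**2).mean()
    loss.backward()
    opt.step()
```

After convergence, the optimized thickness maps (clamped to a fabricable range) would be exported for 3D printing or lithography, corresponding to the manufacturing step described above.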

[0011] In another embodiment, a method of filtering or de-multiplexing a broadband input optical signal includes providing an optical neural network device that is formed from a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the broadband input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers. The optical neural network device further includes one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers. The broadband input optical signal is input to the optical neural network.

[0012] In another embodiment, a method of processing an input optical signal having multiple distinct channels includes providing an optical neural network device that is formed from a plurality of optically transmissive and/or reflective substrate layers arranged in an optical path, each of the plurality of optically transmissive and/or reflective substrate layers comprising a plurality of physical features formed on or within the plurality of optically transmissive or reflective substrate layers and having different complex-valued transmission and/or reflection coefficients as a function of the lateral coordinates across each substrate layer, wherein the plurality of optically transmissive and/or reflective substrate layers and the plurality of physical features thereon collectively define a trained mapping function between the input optical signal to the plurality of optically transmissive and/or reflective substrate layers and one or more output optical signal(s) comprising one or more multiplexed channel(s) created by optical diffraction through and/or optical reflection from the plurality of optically transmissive and/or reflective substrate layers. The optical neural network device further includes one or more optical sensors configured to capture the one or more output optical signal(s) resulting from the plurality of optically transmissive and/or reflective substrate layers. The input optical signal having multiple distinct channels is input into the optical neural network.

Brief Description of the Drawings

[0013] FIG. 1 schematically illustrates one embodiment of an optical neural network that is used in transmission mode according to one embodiment. A broadband source of light directs light through the optical neural network. In this mode, light passes through the individual substrate layers that form the optical neural network. The light that passes through the optical neural network forms one or more output signals that is/are detected by one or more optical sensors.

[0014] FIG. 2 schematically illustrates another embodiment of an optical neural network that is used in reflection mode according to one embodiment. A broadband source of light directs light onto an optical neural network that is setup in reflection mode. In this mode, light reflects off the individual substrate layers that form the optical neural network. The reflected light from the optical neural network forms one or more output signals that is/are detected by one or more optical sensors.

[0015] FIG. 3 illustrates a single substrate layer of an optical neural network. The substrate layer may be made from a material that is optically transmissive (for transmission mode such as illustrated in FIG. 1) or an optically reflective material (for reflective mode as illustrated in FIG. 2). The substrate layer, which may be formed as a substrate or plate in some embodiments, has surface features formed across the substrate layer. The surface features form a patterned surface (e.g., an array) having different complex-valued transmission (or reflection) coefficients as a function of lateral coordinates across each substrate layer. These surface features act as artificial “neurons” that connect to other “neurons” of other substrate layers of the optical neural network through optical diffraction (or reflection) and alter the phase and/or amplitude of the light wave.

[0016] FIG. 4 schematically illustrates a cross-sectional view of a single substrate layer of an optical neural network according to one embodiment. In this embodiment, the surface features are formed by adjusting the thickness of the substrate layer that forms the optical neural network. These different thicknesses may define peaks and valleys in the substrate layer that act as the artificial “neurons.”

[0017] FIG. 5 schematically illustrates a cross-sectional view of a single substrate layer of an optical neural network according to another embodiment. In this embodiment, the different surface features are formed by altering the material composition or material properties of the single substrate layer at different lateral locations across the substrate layer. This may be accomplished by doping the substrate layer with a dopant or incorporating other optical materials into the substrate layer. Metamaterials or plasmonic structures may also be incorporated into the substrate layer.

[0018] FIG. 6 schematically illustrates a cross-sectional view of a single substrate layer of an optical neural network according to another embodiment. In this embodiment, the substrate layer is reconfigurable in that the optical properties of the various artificial neurons may be changed, for example, by application of a stimulus (e.g., electrical current or field). An example includes spatial light modulators (SLMs), which can change their optical properties. In this embodiment, the neuronal structure is not fixed and can be dynamically changed or tuned as appropriate (i.e., changed on demand). This embodiment, for example, can provide a learning optical neural network or a changeable optical neural network that can be altered on-the-fly (e.g., over time) to improve performance, compensate for aberrations, or even change to another task.

[0019] FIG. 7 illustrates a flowchart of the operations according to one embodiment to create and use an optical neural network.

[0020] FIG. 8 illustrates an embodiment of a holder that is used to secure the substrate layers used in an optical neural network.

[0021] FIG. 9A schematically illustrates an embodiment of an optical neural network that is used to process input broadband light into an output light signal that is filtered.

[0022] FIG. 9B schematically illustrates an embodiment of an optical neural network that is used to process input broadband light into an output light signal that is demultiplexed.

[0023] FIG. 9C schematically illustrates an embodiment of an optical neural network that is used to process an input light signal that is filtered or de-multiplexed (i.e., multiple input channels) into an output that is a single common multiplexed channel.

[0024] FIGS. 10A-10D: Schematic of spectral filter design using broadband diffractive neural networks and the experimental set-up. FIG. 10A schematically illustrates a diffractive neural network-based design of a spectral filter. FIG. 10B illustrates a photograph of the physically fabricated diffractive filter design shown in FIG. 10A. The diffractive layers are 3D printed over a surface that is larger than their active (i.e., beam modulating) area to avoid bending of the layers. These extra regions do not modulate the light and are covered by aluminum, preventing stray light in the system. The active area of the first diffractive layer is 1 cm × 1 cm, while the other layers have active areas of 5 cm × 5 cm. FIG. 10C illustrates the physical layout of the spectral filters with three (3) diffractive layers and an output aperture (2 mm × 2 mm). FIG. 10D schematically illustrates the experimental optical setup used herein. Solid lines indicate the optical path of the femtosecond pulses generated by a Ti:Sapphire laser at 780 nm wavelength, which was used as the pump for the THz emitter and detector. Dashed lines depict the optical path of the THz pulse (peak frequency ~200 GHz, observable bandwidth ~5 THz) that is modulated and spectrally filtered by the designed diffractive neural networks.

[0025] FIGS. 11A-11E: Single passband spectral filter designs using broadband optical neural networks and their experimental validation. FIGS. 11A-11D show optimized/learned thickness profiles of three (3) diffractive layers along with the corresponding simulated (solid) and experimentally measured (dashed) spectral responses: (FIG. 11A) 300 GHz, (FIG. 11B) 350 GHz, (FIG. 11C) 400 GHz and (FIG. 11D) 420 GHz filters. These four (4) spectral filters were designed to favor power efficiency over Q-factor by setting β = 0 in Equation (8). FIG. 11E is the same as (FIG. 11B) except that the targeted spectral profile is a Gaussian with a Q-factor of 10, which was enforced during the training phase of the network by setting β/α = 0.1 in Equation (8). All of these 5 diffractive neural networks were 3D-printed after their design and were experimentally tested using the set-up in FIGS. 10A-10D. The small residual peaks at around ~0.55 THz observed in the experimental results are due to the water absorption lines in air, which were not taken into account in the numerical forward model.

[0026] FIGS. 12A-12C: Dual passband spectral filter design using an optical neural network and its experimental validation. FIG. 12A shows the simulated (solid) and the experimentally observed (dashed) spectra of the diffractive network design. The center frequencies of the two target bands are 250 GHz and 450 GHz. FIG. 12B shows the projection of the spatial intensity distributions created by the 3-layer design on the xz plane (at y=0) for 250 GHz, 350 GHz and 450 GHz. FIG. 12C illustrates the learned thickness profiles of the three (3) diffractive layers of the network design. This broadband diffractive neural network design was 3D-printed and experimentally tested using the set-up in FIGS. 10A-10D.

[0027] FIGS. 13A-13C: Broadband diffractive neural network design for spatially-controlled wavelength de-multiplexing and its experimental validation. FIG. 13A shows the physical layout of the 3-layer diffractive optical network model that channels four (4) spectral passbands around 300 GHz, 350 GHz, 400 GHz and 450 GHz onto four (4) different corresponding apertures at the output plane of the network. FIG. 13B shows the thickness profiles of the three (3) diffractive layers that are learned, forming the diffractive network model. This broadband diffractive neural network design was 3D-printed and experimentally tested using the set-up in FIGS. 10A-10D. FIG. 13C shows the simulated (solid) and the experimentally measured (dashed) power spectra at the corresponding four (4) detector locations at the output plane.

[0028] FIGS. 14A-14D: Tunability of broadband optical neural networks. FIG. 14A shows the experimental (dashed line) and the numerically computed (solid line) output spectra for different axial distances between the last (3rd) diffractive layer and the output aperture based on the single passband spectral filter model shown in FIG. 11B. Δz denotes the axial displacement of the output aperture with respect to its designed location. Negative values of Δz represent locations of the output aperture closer to the diffractive layers and vice versa for the positive values. FIG. 14B shows numerical and experimentally measured changes in the center frequency (circles), the Q-factor (dashed line and squares) and the relative intensity (solid line and triangles) with respect to Δz. FIGS. 14C and 14D are the same as FIGS. 14A, 14B, respectively, except corresponding to a new design that is initialized using the diffractive spectral filter model of FIG. 11B, which was further trained with multiple loss terms associated with the corresponding passbands at different propagation distances from the last diffractive layer. Similar to transfer learning techniques used in deep learning, this procedure generates a new design that is specifically engineered for a tunable frequency response with an enhanced and relatively flat Q-factor across the targeted displacement range, Δz. The small residual peaks at around ~0.55 THz observed in the experimental results are due to the water absorption lines in air, which were not taken into account in the numerical forward model.

[0029] FIGS. 15A-15B show the dispersion curves of the polymer material (VeroBlackPlus RGD875) used for 3D-printing of the diffractive optical neural networks. The complex-valued refractive index of a 1 mm-thick 3D printed layer made of VeroBlackPlus RGD875 was analyzed by the THz-TDS system described in the Methods section herein. The refractive index (FIG. 15A) and the extinction coefficient (FIG. 15B) of the VeroBlackPlus RGD875 material were calculated from the real and imaginary parts of the complex refractive index, respectively.

[0030] FIG. 16 shows the broadband diffractive optical network-based spectral filter design using different sizes of output apertures (Apt.). Using three (3) diffractive layers with a layer-to-layer separation of 3 cm, and an output aperture that is 5 cm away from the 3rd diffractive layer, different single passband spectral filters were designed around the center frequency of 350 GHz by changing the size of the output aperture (1, 2, 5 and 10 mm). The training phase involved a target Q-factor of 10 and β/α = 1. The degrees of freedom in the diffractive network permit one to maintain the Q-factor for various aperture sizes. The relatively lower power efficiency in the case of the 1 mm aperture is due to the fact that the aperture size in that case is comparable to the center wavelength (~0.85 mm).

[0031] FIGS. 17A-17C illustrate the thickness distributions of the diffractive layers (substrate layers) of the two single passband filter designs, before and after transfer learning. FIG. 17A shows the thickness profile of the diffractive layers before transfer learning; also shown in FIG. 11B. FIG. 17B shows the thickness profile of the diffractive layers after transfer learning. FIG. 17C illustrates the differences in the thickness distributions of the diffractive layers of the two designs, before and after transfer learning. With the additional constraints and training, subtle changes to the thickness distributions of the diffractive layers were applied to improve the tunability of the diffractive bandpass filter. The positive (negative) values in (FIG. 17C) denote a thickness reduction (increase) after the transfer learning step, which generated a new design that is specifically engineered for a tunable frequency response with an enhanced and more uniform Q-factor across the targeted axial displacement range (see FIGS. 14A-14D).

Detailed Description of the Illustrated Embodiments

[0032] FIG. 1 schematically illustrates one embodiment of an optical neural network device 10 that is used in transmission mode according to one embodiment. A broadband light source 12 directs broadband light or a broadband optical signal 13 into the optical neural network device 10 that contains a plurality of substrate layers 16 (also sometimes referred to herein as diffractive layers). In some embodiments, the broadband light source 12 may be a pulsed light source 12. In other embodiments, the broadband light source 12 may be a continuous light source 12. In the experiments described herein, the broadband light source is a THz emitter driven by a laser that produces a THz pulse. As explained herein, in some embodiments, an input aperture 14 may be interposed between the broadband light source 12 and the first substrate layer 16 (e.g., diffractive layer) of the optical neural network device 10. The aperture 14 may also be formed in the light source 12 or in a waveguide or the like coupled to the light source 12 (e.g., the end of an optical fiber forms an effective aperture 14).

[0033] The optical neural network 10 contains a plurality of substrate layers 16 that are physical layers which may be formed as a physical substrate or matrix of optically transmissive material (for transmission mode) or optically reflective material (for reflective mode, one or more materials in the optical neural network 10 form a reflective surface). Exemplary materials that may be used for the substrate layers 16 include polymers and plastics (e.g., those used in additive manufacturing techniques such as 3D printing) as well as semiconductor-based materials (e.g., silicon and oxides thereof, gallium arsenide and oxides thereof), crystalline materials or amorphous materials such as glass and combinations of the same. While FIG. 1 illustrates light from the broadband light source 12 going directly into the optical neural network 10, the light from the broadband light source 12 may pass through and/or reflect off an object, medium, or the like prior to entering the optical neural network 10. The broadband light source 12 may generate a broadband optical signal 13 that is formed from a plurality of different wavelengths. The broadband optical signal 13 that is input to the optical neural network 10 may span a variety of wavelength or frequency ranges. This includes, for example, a broadband optical signal 13 that is within the visible electromagnetic spectrum (e.g., around 380 nm to around 750 nm wavelengths) from a visible source of light 12. The broadband optical signal 13 may be in the THz frequency range as is disclosed herein. The broadband optical signal 13 may also be in the infrared region of the electromagnetic spectrum (e.g., generally above 750 nm wavelengths to millimeter, centimeter, or larger wavelengths).

[0034] With reference to FIGS. 4-6, each substrate layer 16 of the optical neural network has a plurality of physical features 18 formed on the surface of the substrate layer 16 or within the substrate layer 16 itself that collectively define a pattern of physical locations along the length and width of each substrate layer 16 that have varied complex-valued transmission coefficients (or varied complex-valued reflection coefficients for the embodiment of FIG. 2). The physical features 18 formed on or in the layers 16 thus create a pattern of physical locations within the layers 16 that have different complex-valued transmission coefficients as a function of lateral coordinates (e.g., length and width and in some embodiments depth) across each substrate layer 16. In some embodiments, each separate physical feature 18 may define a discrete physical location on the substrate layer 16 while in other embodiments, multiple physical features 18 may combine or collectively define a physical region with a particular complex-valued transmission coefficient. The plurality of optically transmissive layers 16 arranged along the optical path 11 (FIG. 1) collectively define a trained mapping function between a broadband optical signal 13 input to the plurality of layers 16 and one or more output optical signal(s) 22 created by optical diffraction through/off the plurality of substrate layers 16.

[0035] The pattern of physical locations formed by the physical features 18 may define, in some embodiments, an array located across the surface of the substrate layer 16. With reference to FIG. 3, the substrate layer 16 in one embodiment is a two-dimensional, generally planar substrate having a length (L), width (W), and thickness (t) that all may vary depending on the particular application. In other embodiments, the substrate layer 16 may be non-planar such as, for example, curved. In addition, while FIG. 3 illustrates a rectangular or square-shaped substrate layer 16, different geometries are contemplated. With reference to FIG. 1 and FIG. 3, the physical features 18 and the physical regions formed thereby act as artificial “neurons” 24, as seen in FIG. 3, that connect to other “neurons” 24 of other substrate layers 16 of the optical neural network 10 (as seen, for example, in FIGS. 1 and 2) through optical diffraction (or reflection in the case of the embodiment of FIG. 2) and alter the phase and/or amplitude of the light wave. The particular number and density of the physical features 18 or artificial neurons 24 that are formed in each substrate layer 16 may vary depending on the type of application. In some embodiments, the total number of artificial neurons 24 may only need to be in the hundreds or thousands while in other embodiments, hundreds of thousands or millions of neurons 24 or more may be used. Likewise, the number of layers 16 that are used in a particular optical neural network 10 may vary although it typically ranges from at least two substrate layers 16 to less than ten substrate layers 16.

[0036] As seen in FIG. 1, the one or more output optical signal(s) 22 is/are captured by one or more optical sensors 26. The optical sensor 26 may include, for example, an image sensor (e.g., CMOS image sensor or image chip such as CCD), photodetectors (e.g., photodiode such as avalanche photodiode detector (APD)), photomultiplier (PMT) device, and the like. With reference to FIGS. 1 and 2, there are multiple optical sensors 26a, 26b, 26c, 26d. These may be discrete optical sensors or they may even be certain pixels on a larger array such as a CMOS image sensor that act as individual sensors. The one or more optical sensors 26 may, in some embodiments, be coupled to a computing device 29 as seen in FIG. 1 (e.g., a computer or the like such as a personal computer, laptop, server, mobile computing device) that is used to acquire, store, process, manipulate, analyze, and/or transfer the one or more output optical signal(s) 22. In other embodiments, the optical sensor 26 may be integrated within a device such as a camera that is configured to acquire, store, process, manipulate, analyze, and/or transfer the one or more output optical signal(s) 22. As described herein, in some embodiments, each sensor 26 may be associated with an output aperture 34. An opaque layer having one or more apertures 34 formed therein may be interposed between the last of the substrate layers 16 and the sensor(s) 26.

[0037] FIG. 2 schematically illustrates one embodiment of an optical neural network 10 that is used in reflection mode according to one embodiment. Similar components and features shared with the embodiment of FIG. 1 are labeled similarly. In this embodiment, the object is illuminated with broadband light from the broadband light source 12 as described previously. This broadband optical signal 13 forms an input optical signal that is input to the optical neural network 10. In this embodiment, the optical neural network 10 operates in reflection mode whereby light is reflected by a plurality of substrate layers 16. As seen in the embodiment of FIG. 2, the optical path 11 is a folded optical path as a result of the reflections off the plurality of substrate layers 16. The number of substrate layers 16 may vary depending on the particular function or task that is to be performed as noted above. Each substrate layer 16 of the optical neural network 10 has a plurality of physical features 18 formed on the surface of the substrate layer 16 or within the substrate layer 16 itself that collectively define a pattern of physical locations along the length and width of each substrate layer 16 that have varied complex-valued reflection coefficients. Like the FIG. 1 embodiment, the one or more output optical signal(s) 22 is captured by one or more optical sensors 26. The one or more optical sensors 26 may be coupled to a computing device 29 as noted or integrated into a device such as a camera.

[0038] FIG. 4 illustrates one embodiment of how different physical features 18 are formed in the substrate layer 16. In this embodiment, a substrate layer 16 has different thicknesses (t) of material at different lateral locations along the substrate layer 16. In one embodiment, the different thicknesses (t) modulate the phase of the light passing through the substrate layer 16. This type of physical feature 18 may be used, for instance, in the transmission mode embodiment of FIG. 1. The different thicknesses of material in the substrate layer 16 form a plurality of discrete “peaks” and “valleys” that control the complex-valued transmission coefficient of the neurons 24 formed in the substrate layer 16. The different thicknesses of the substrate layer 16 may be formed using additive manufacturing techniques (e.g., 3D printing) or lithographic methods utilized in semiconductor processing. For example, the design of the substrate layers 16 may be stored in a stereolithographic file format (e.g., .stl file format) which is then used to 3D print the substrate layers 16. Other manufacturing techniques include well-known wet and dry etching processes that can form very small lithographic features on a substrate layer 16. Lithographic methods may be used to form very small and dense physical features 18 on the substrate layer 16, which may be used with shorter wavelengths of light. As seen in FIG. 4, in this embodiment, the physical features 18 are fixed in a permanent state (i.e., the surface profile is established and remains the same once complete).
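To make the thickness-to-modulation relationship concrete, the sketch below computes the complex transmission coefficient of a single neuron under a thin-element approximation, using the wavelength-dependent refractive index n and extinction coefficient κ of the printing material (cf. the dispersion curves of FIGS. 15A-15B). This is an illustrative model only; the function name, the air-referenced phase convention, and the example material values are assumptions rather than values taken from the patent.

```python
import numpy as np

def neuron_transmission(thickness, wavelength, n, kappa):
    """Complex transmission coefficient of a neuron of given thickness
    (thin-element approximation, referenced against free-space propagation
    over the same distance). n and kappa are the material's refractive
    index and extinction coefficient at this wavelength."""
    phase = 2 * np.pi * (n - 1.0) * thickness / wavelength           # delay vs. air
    amplitude = np.exp(-2 * np.pi * kappa * thickness / wavelength)  # absorption
    return amplitude * np.exp(1j * phase)

# Example: a 0.5 mm-thick neuron at 350 GHz (wavelength ~0.857 mm),
# with illustrative material values n = 1.7, kappa = 0.05.
t_coeff = neuron_transmission(0.5e-3, 0.857e-3, 1.7, 0.05)
```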

[0039] FIG. 5 illustrates another embodiment in which the physical features 18 are created or formed within the substrate layer 16. In this embodiment, the substrate layer 16 may have a substantially uniform thickness, but different regions of the substrate layer 16 may have different optical properties. For example, the complex-valued refractive (or reflective) index of the substrate layers 16 may be altered by doping the substrate layers 16 with a dopant (e.g., ions or the like) to form the regions of neurons 24 in the substrate layers 16 with controlled transmission properties. In still other embodiments, optical nonlinearity can be incorporated into the deep optical network design using various optical non-linear materials (crystals, polymers, semiconductor materials, doped glasses, organic materials, graphene, quantum dots, carbon nanotubes, and the like) that are incorporated into the substrate layer 16. A masking layer or coating that partially transmits or partially blocks light in different lateral locations on the substrate layer 16 may also be used to form the neurons 24 on the substrate layers 16.

[0040] Alternatively, the complex-valued transmission function of a neuron 24 can also be engineered by using metamaterial or plasmonic structures. Combinations of all these techniques may also be used. In other embodiments, non-passive components may be incorporated into the substrate layers 16 such as spatial light modulators (SLMs). SLMs are devices that impose spatially varying modulation of the phase, amplitude, or polarization of light. SLMs may include optically addressed SLMs and electrically addressed SLMs. Electric SLMs include liquid crystal-based technologies that are switched by using thin-film transistors (for transmission applications) or silicon backplanes (for reflective applications). Another example of an electric SLM includes magneto-optic devices that use pixelated crystals of aluminum garnet switched by an array of magnetic coils using the magneto-optical effect. Additional electronic SLMs include devices that use nanofabricated deformable or moveable mirrors that are electrostatically controlled to selectively deflect light.

[0041] FIG. 6 schematically illustrates a cross-sectional view of a single substrate layer 16 of an optical neural network 10 according to another embodiment. In this embodiment, the substrate layer 16 is reconfigurable in that the optical properties of the various physical features 18 that form the artificial neurons 24 may be changed, for example, by application of a stimulus (e.g., electrical current or field). An example includes the spatial light modulators (SLMs) discussed above, which can change their optical properties. In other embodiments, the layers may use the DC electro-optic effect to introduce optical nonlinearity into the substrate layers 16 of an optical neural network 10 and require a DC electric-field for each substrate layer 16 of the optical neural network 10. This electric-field (or electric current) can be externally applied to each substrate layer 16 of the optical neural network 10. Alternatively, one can also use poled materials with very strong built-in electric fields as part of the material (e.g., poled crystals or glasses). In this embodiment, the neuronal structure is not fixed and can be dynamically changed or tuned as appropriate (i.e., changed on demand). This embodiment, for example, can provide a learning optical neural network 10 or a changeable optical neural network 10 that can be altered on-the-fly to improve performance, compensate for aberrations, or even change to another task.

[0042] The optical neural network 10 described herein may perform a number of functions or operations on the input broadband optical signal 13. For example, in one embodiment, the optical neural network 10 filters a particular wavelength or a set of wavelengths within a wavelength range of the broadband optical signal 13 that is input to the optical neural network 10 (e.g., within the range of the broadband light source 12). In another embodiment, the optical neural network 10 filters a plurality of particular wavelengths or sets of wavelengths within a wavelength range of the input broadband optical signal 13. The optical neural network 10 may also be used to generate wavelength de-multiplexed output optical signal(s) 22 for the one or more optical sensors 26. The optical neural network 10 may also generate a single common multiplexed output optical signal 22 or channel for the one or more optical sensors 26 (in this case one optical sensor 26) from individually filtered or de-multiplexed channels that are used as the broadband optical signal 13 that is input to the optical neural network 10. In this last embodiment, the optical neural network 10 works in “reverse,” taking multiple input “channels” as part of the broadband optical signal 13 and outputting a single common multiplexed output optical signal 22 or channel.

[0043] FIG. 7 illustrates a flowchart of the operations or processes according to one embodiment to create and use an optical neural network 10. As seen in operation 200 of FIG. 7, a specific task/function is first identified that the optical neural network 10 will perform. This may include, for example, filtering a broadband optical signal 13 as explained herein. The filter may, for example, pass one or more bands of a particular wavelength or wavelength range. The specific task/function may also include wavelength de-multiplexing of the input broadband optical signal 13. Once the task or function has been established, a computing device 100 having one or more processors 102 executes software 104 thereon to digitally train a model or mathematical representation of the multi-layer diffractive and/or reflective substrate layers 16 formed in the optical neural network 10 to the desired task or function, and to then generate a design for a physical embodiment of the optical neural network 10. This operation is illustrated as operation 210 in FIG. 7.

[0044] Next, using the design for the physical embodiment of the optical neural network 10, the actual substrate layers 16 used in the physical optical neural network 10 are then manufactured in accordance with the design. The design, in some embodiments, may be embodied in a software format (e.g., SolidWorks, AutoCAD, Inventor, or another computer-aided design (CAD) program or lithographic software program) and may then be manufactured into a physical embodiment that includes the plurality of substrate layers 16. The physical substrate layers 16, once manufactured, may be mounted or disposed in a holder 30 such as that illustrated in FIG. 8. The holder 30 may include a number of slots 32 formed therein to hold the individual substrate layers 16 in the required sequence and with the required spacing between adjacent layers (if needed). Once the physical embodiment of the optical neural network 10 has been made, the optical neural network 10 is then used to perform the specific task or function, as illustrated in operation 230 of FIG. 7.

[0045] Note that in some embodiments, an aperture 34 such as that illustrated in FIGS. 10A, 10B, and 10C may be interposed between the last of the physical substrate layers 16 and the optical sensor 26. While a THz optical sensor 26 is disclosed herein, it should be appreciated that a variety of different optical sensors 26 could be employed. These include a CMOS image sensor or imaging chip such as a CCD, photodetectors (e.g., a photodiode such as an avalanche photodiode detector (APD)), a photomultiplier tube (PMT) device, a focal plane array, and the like. There may be a single optical sensor 26 or an array of such sensors 26. Likewise, the broadband light source or input optical signal may comprise other frequencies of the electromagnetic spectrum. This includes, for example, light from a broadband visible source or a broadband infrared source.

[0046] As noted above, the particular spacing of the substrate layers 16 that make up the optical neural network 10 may be maintained using the holder 30 of FIG. 8. The holder 30 may contact one or more peripheral surfaces of the substrate layers 16. In some embodiments, the holder 30 may contain a number of slots 32 that allow the user to adjust the spacing (S) between adjacent substrate layers 16. In some embodiments, the substrate layers 16 may be permanently secured to the holder 30, while in other embodiments, the substrate layers 16 may be removable from the holder 30. The plurality of substrate layers 16 may be positioned within and/or surrounded by vacuum, air, a gas, a liquid, or a solid material.

[0047] Experimental

[0048] Design of broadband diffractive optical networks

[0049] Designing broadband, task-specific, and compact components with a small footprint that can perform arbitrary optical transformations is highly sought after in all parts of the electromagnetic spectrum for various applications, including, e.g., telecommunications, biomedical imaging, and chemical identification, among others. This general broadband inverse optical design problem was approached from the perspective of diffractive optical neural network training, and its success was demonstrated with various optical tasks. Unlike the training process of the previously reported monochromatic diffractive neural networks, here the optical forward model used to design the optical neural network 10 is based on the angular spectrum formulation of broadband light propagation within the diffractive network, precisely taking into account the dispersion of the fabrication material to determine the light distribution at the output plane of the network. Based on a network training loss function, a desired optical task can be learned through error backpropagation within the diffractive layers 16 of the optical network, converging to an optimized spectral and/or spatial distribution of light at the output plane.

[0050] In its general form, the broadband diffractive network design assumes an input spectral frequency band between f_min and f_max. Uniformly covering this range, M discrete frequencies are selected to be used in the training phase. In each update step of the training, an input beam carrying a random subset of B frequencies out of these M discrete frequencies is propagated through the diffractive layers (the layers that ultimately form the substrate layers 16), and a loss function, tailored according to the desired task, is calculated at the output plane; without loss of generality, B/M was selected in the designs to be less than 0.5%. At the final step of each iteration, the resultant error is backpropagated to update the physical parameters of the diffractive layers (e.g., during the computational training operation 210 in FIG. 7) controlling the optical modulation within the optical network. The training cycle 210 continues until either a predetermined design criterion at the network output plane is satisfied or the maximum number of epochs (where each epoch involves M/B successive iterations) is reached. In the broadband diffractive network designs 10, the physical parameter to be optimized was selected as the thickness of each neuron 24 (i.e., the thickness of the physical features 18) within the substrate layers 16, enabling the control of the phase modulation profile of each diffractive layer 16 in the network. In addition, the material dispersion, including the real and imaginary parts of the refractive index of the network material as a function of the wavelength, was also taken into account to correctly represent the forward model of the broadband light propagation within the optical neural network 10.

As a result of this, for each wavelength within the input light spectrum, there is a unique complex (i.e., phase and amplitude) modulation, corresponding to the transmission coefficient of each neuron 24, determined by its physical thickness, which is a trainable parameter for all the layers 16 of the diffractive optical network.
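To make the training cycle described above concrete, the following is a minimal sketch of one such design loop in Python (the language used in the actual implementation, per the Methods section). The function names forward_model and task_loss, the layer grid size, and the zero-gradient placeholder are illustrative assumptions, not the patent's implementation; in the actual designs the gradients come from error backpropagation with the Adam optimizer.

    import numpy as np

    M, B = 7500, 20                          # discrete frequencies and batch size
    freqs = np.linspace(0.25e12, 1.0e12, M)  # 0.25 - 1 THz, uniformly sampled
    h_p = np.zeros((3, 200, 200))            # latent thickness variables (assumed grid)

    def forward_model(h_p, f_batch):
        # hypothetical stand-in: propagate each frequency through the layers
        # and return the integrated intensity at the output aperture
        return np.random.rand(f_batch.size)

    def task_loss(I_out, f_batch):
        # hypothetical stand-in for the task-specific loss at the output plane
        return float(np.mean((1.0 - I_out) ** 2))

    for epoch in range(200):                 # training ran for 200 epochs
        for _ in range(M // B):              # each epoch involves M/B iterations
            f_batch = np.random.choice(freqs, size=B, replace=False)
            loss = task_loss(forward_model(h_p, f_batch), f_batch)
            grad = np.zeros_like(h_p)        # placeholder; real designs backpropagate
            h_p -= 1e-3 * grad               # Adam with a 1e-3 learning rate in practice

Note how small each batch is relative to M: B/M = 20/7500, roughly 0.27%, consistent with the stated B/M < 0.5%.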

[0051] Upon completion of the digital training phase in a computer 29, which typically takes ~5 hours, the designed diffractive layers 16 were then physically fabricated using a 3D-printer, and the resulting optical neural networks 10 were experimentally tested using the THz Time-Domain Spectroscopy (TDS) system illustrated in FIGS. 9A-9D, which has a noise-equivalent power bandwidth of 0.1 - 5 THz.

[0052] Single passband spectral filter design and testing

[0053] The diffractive single passband spectral filter designs are comprised of three (3) diffractive layers 16, with a layer-to-layer separation of 3 cm, and an output aperture 34, positioned 5 cm away from the last diffractive layer 16, serving as a spatial filter, as shown in FIGS. 9A-9D. For the spectral filter designs, the parameters M, f_min and f_max were taken as 7500, 0.25 THz and 1 THz, respectively. Using this broadband diffractive network framework employing three (3) successive layers 16, four (4) different optical neural networks 10 that operated as spectral bandpass filters were designed with center frequencies of 300 GHz, 350 GHz, 400 GHz and 420 GHz, as shown in FIGS. 11A-11D, respectively.

For each design, the target spectral profile was set to have a flat-top bandpass over a narrow band (±2.5 GHz) around the corresponding center frequency. During the training of these designs, a loss function was used that solely focused on increasing the power efficiency of the target band, without a specific penalty on the Q-factor of the filter. As a result of this design choice during the training phase, the numerical models converged to bandpass filters centered around each target frequency, as shown in FIGS. 11A-11D. These trained diffractive models reveal the peak frequencies (and the Q-factors) of the corresponding designs to be 300.1 GHz (6.21), 350.4 GHz (5.34), 399.7 GHz (4.98) and 420.0 GHz (4.56), respectively. After each of these trained models was fabricated using a 3D-printer to form the physical optical neural network 10, the four different diffractive networks were experimentally tested, revealing a very good match between the numerical testing results and the physical diffractive network 10 results: based on the dashed lines depicted in FIGS. 11A-11D, the experimental counterparts of the peak frequencies (and the Q-factors) of the corresponding designs were calculated as 300.4 GHz (4.88), 351.8 GHz (7.61), 393.8 GHz (4.77) and 418.6 GHz (4.22).

[0054] Furthermore, the power efficiencies of these four different bandpass filter designs, calculated at the corresponding peak wavelength, were determined as 23.13%, 20.93%, 21.76% and 18.53%, respectively. To shed more light on these efficiency values of the diffractive THz systems and estimate the specific contribution due to the material absorption, the expected power efficiency at 350 GHz was analyzed by modeling each diffractive layer 16 as a uniform slab. Based on the extinction coefficient of the 3D-printing polymer at 350 GHz (FIGS. 15A, 15B), three (3) successive flat layers, each with 1 mm thickness, provide 27.52% power efficiency when the material absorption is assumed to be the only source of loss. This comparison reveals that the main source of power loss in the spectral filter models is in fact the material absorption, which can be circumvented by selecting fabrication materials with lower absorption than the 3D-printing material actually used in the experiments (i.e., an acrylic compound, VeroBlackPlus RGD875).
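The stated 27.52% three-slab transmission can be used to back out the implied extinction coefficient of the printing polymer at 350 GHz. The short check below assumes the standard single-pass absorption relation T = exp(-4πκh/λ) per slab and, as in the comparison above, that absorption is the only source of loss; the resulting κ value is inferred here for illustration and is not quoted from the patent.

    import numpy as np

    lam = 3e8 / 350e9          # wavelength at 350 GHz, ~0.857 mm
    h = 1e-3                   # thickness of each flat slab, 1 mm
    T_total = 0.2752           # stated power transmission of three slabs
    # T_total = exp(-3 * 4*pi*kappa*h / lam)  =>  solve for kappa
    kappa = -np.log(T_total) / (3 * 4 * np.pi * h / lam)
    print(f"implied extinction coefficient: {kappa:.3f}")   # ~0.029

An estimate of this kind makes clear why material absorption dominates the efficiency budget of these filters.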

[0055] To further exemplify the different degrees of freedom in the diffractive network-based design framework, FIG. 11E illustrates another optical neural network 10 that was designed as a bandpass filter centered at 350 GHz, the same as in FIG. 11B; however, different from FIG. 11B, this particular case represents a design criterion where the desired spectral filter profile was set as a Gaussian with a Q-factor of 10. Furthermore, the training loss function was designed to favor a high Q-factor rather than better power efficiency, by penalizing Q-factor deviations from the target value more severely than poor power efficiency. To provide a fair comparison between FIGS. 11B and 11E, all the other design parameters, e.g., the number of diffractive layers 16, the size of the output aperture 34 and the relative distances, were kept identical. Based on this new design (FIG. 11E), the numerical (experimental) values of the peak frequency and the Q-factor of the final model were calculated as 348.2 GHz (352.9 GHz) and 10.68 (12.7), once again providing a very good match between the numerical testing and experimental results (solid and dashed lines), following the 3D-printing of the optical neural network 10. Compared to the results reported in FIG. 11B, this improvement in the Q-factor comes at the expense of a power efficiency drop down to 12.76%, which is expected by design, i.e., the choice of the training loss function.

[0056] Another important difference between the designs depicted in FIGS. 11B and 11E lies in the structures of their diffractive layers 16. A comparison of the 3rd layers shown in FIGS. 11B and 11E reveals that, while the former design demonstrates a pattern at its 3rd layer that is intuitively similar to a diffractive lens, the thickness profile of the latter design (FIG. 11E) does not evoke any physically intuitive explanation of its immediate function within the diffractive network 10; the same conclusion is also evident if one examines the 1st diffractive layers reported in FIG. 11E as well as FIGS. 12A-12C and 13A-13C.

Convergence to physically non-intuitive designs such as those illustrated herein, in the absence of a tailored initial condition or prior design, shows the power of the diffractive computational framework in the context of broadband, task-specific optical system design.

[0057] Dual passband spectral filter design and testing

[0058] Having presented the design and experimental validation of five (5) different bandpass filters using broadband optical neural networks 10, the same design framework was used for a more challenging task: a dual passband spectral filter that directs two separate frequency bands onto the same output aperture 34 while rejecting the remaining spectral content of the broadband input light. The physical layout of the diffractive network 10 design is the same as before, composed of three (3) diffractive layers 16 and an output aperture 34 plane. The goal of this diffractive optical network 10 is to produce a power spectrum at the same aperture 34 that is the superposition of two flat-top passband filters, around the center frequencies of 250 GHz and 450 GHz (see FIGS. 11A-11C). Following the deep learning-based design and 3D fabrication of the optical neural network 10, the experimental measurement results (dashed line in FIG. 11A) provide a very good agreement with the numerical results (solid line in FIG. 11A); the numerical diffractive model has peak frequencies at 249.4 GHz and 446.4 GHz, which closely agree with the experimentally observed peak frequencies, i.e., 253.6 GHz and 443.8 GHz for the two target bands.

[0059] Despite the fact that no restrictions or loss terms related to the Q-factor were imposed during the training phase, the power efficiencies at the two center/peak frequencies, 250 GHz and 450 GHz, were calculated as 11.91% and 10.51%, respectively. These numbers indicate a power efficiency drop compared to the single passband diffractive designs reported earlier (FIGS. 11A-11E); however, it should be noted that the total power transmitted from the input plane to the output aperture 34 (which has the same size as before) is maintained at approximately 20% in both the single passband and the dual passband filter designs.

[0060] A projection of the intensity distributions produced by the 3-layer design onto the xz plane (at y = 0) is also illustrated in FIG. 11B, which exemplifies the operating principles of the diffractive network regarding the rejection of the spectral components residing between the two targeted passbands. For example, one of the undesired frequency components, at 350 GHz, is focused to a location between the 3rd layer 16 and the output aperture 34 with a higher numerical aperture (NA) compared to the waves in the target bands. As a result, this frequency quickly diverges as it propagates toward the output plane; hence its contribution to the transmitted power beyond the aperture is significantly decreased, as desired.

[0061] From the spectrum reported in FIG. 11A, it can also be noticed that there is a difference between the Q-factors of the two passbands. The main factor causing this variation in the Q-factor is the increasing material loss at higher frequencies (FIGS. 15A and 15B), which is a limitation due to the 3D-printing material that was used. If one selects the power efficiency as the main design priority in a broadband diffractive network, the optimization of a larger Q-factor filter function is relatively more cumbersome at higher frequencies due to the higher material absorption experienced in the physically fabricated, 3D-printed optical neural network 10. As a general rule, maintaining both the power efficiencies and the Q-factors over K bands in a multi-band filter design requires optimizing the relative contributions of the training loss function sub-terms associated with each design criterion; this balance among the sub-constituents of the loss function should be carefully engineered during the training phase of a broadband diffractive network depending on the specific task of interest.

[0062] Spatially-controlled wavelength de-multiplexing

[0063] Next, the simultaneous control of the spatial and spectral content of the diffracted light at the output plane of a broadband diffractive optical network was investigated, and its utility was demonstrated for spatially-controlled wavelength de-multiplexing by training three (3) diffractive layers 16 (FIG. 13B) that channel the broadband input light onto four (4) separate output apertures 34 on the same plane, corresponding to four (4) passbands around 300 GHz, 350 GHz, 400 GHz and 450 GHz (FIG. 13A). The numerically designed spectral profiles based on the diffractive optical network model (solid) and its experimental validation (dashed), following the 3D-printing of the trained model, are reported in FIG. 13C for each sub-band, once again providing a very good match between the numerical model and the experimental results. Based on FIG. 13C, the numerically estimated and experimentally measured peak frequency locations are (297.5 GHz, 348.0 GHz, 398.5 GHz, 450.0 GHz) and (303.5 GHz, 350.1 GHz, 405.1 GHz, 454.8 GHz), respectively. The corresponding Q-factors calculated based on the simulations (11.90, 10.88, 9.84, 8.04) are also in accordance with their experimental counterparts (11.0, 12.7, 9.19, 8.68), despite various sources of experimental error. Similar to the earlier observations in the dual passband filter results, higher bands exhibit a relatively lower Q-factor, which is related to the increased material losses at higher frequencies (FIGS. 15A-15B).

[0064] In addition to the material absorption losses, there are two other factors that need to be considered for wavelength multiplexing or de-multiplexing related applications using optical neural networks 10. First, the lateral resolution of the fabrication method that is selected to manufacture a broadband optical neural network 10 might be a limiting factor at higher frequencies; for example, the lateral resolution of the 3D-printer dictates a feature size of ~λ/2 at 300 GHz that restricts the diffraction cone of the propagating waves at higher frequencies. Second, the limited axial resolution of a 3D fabrication method might impose a limitation on the thickness levels of the neurons 24 of a diffractive layer design; for example, using the 3D-printer, the associated modulation functions of individual neurons 24 are quantized with a step size of 0.0625 mm, which provides 4 bits (within a range of 1 mm) in terms of dynamic range, sufficient over a wide range of frequencies. With increasing frequencies, however, the same axial step size will limit the resolution of the phase modulation steps available per diffractive layer 16, partially hindering the associated performance and the generalization capability of the diffractive optical network 10. Nevertheless, with dispersion engineering methods (using, e.g., metamaterials) and/or higher resolution 3D-fabrication technologies, including, e.g., optical lithography or two-photon polymerization-based 3D-printing, multi-layer wavelength multiplexing/de-multiplexing systems operating at various parts of the electromagnetic spectrum can be designed and tested using broadband optical neural networks 10.
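The impact of the 0.0625 mm axial quantization on phase resolution can be estimated directly. The sketch below assumes a representative refractive index of n = 1.7 for the printed polymer (an illustrative value only; the actual n(λ) is given by the dispersion data in FIGS. 15A-15B) and computes the phase increment contributed by one thickness step at several frequencies:

    import numpy as np

    dh = 0.0625e-3                      # one axial quantization step (m)
    n = 1.7                             # assumed polymer refractive index
    for f in (0.3e12, 0.5e12, 1.0e12):
        lam = 3e8 / f
        dphi = 2 * np.pi * (n - 1) * dh / lam   # phase per thickness step
        print(f"{f/1e12:.1f} THz: {dphi:.3f} rad/step, "
              f"~{int(2*np.pi/dphi)} steps per 2*pi")

Under this assumption, one step corresponds to roughly 0.27 rad at 0.3 THz, i.e., more than 20 usable phase levels per 2π, whereas at 1 THz only about six levels remain, illustrating the coarser phase control at higher frequencies noted above.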

[0065] There are several factors which might have contributed to the relatively minor discrepancies observed between the numerical simulations and the experimental results reported. First, any mechanical misalignment (lateral and/or axial) between the diffractive layers 16 due to, e.g., the 3D-printer’s resolution can cause some deviation from the expected output. In addition, the THz pulse incident on the input plane is assumed to be spatially uniform, propagating parallel to the optical axis 11; an imperfect beam profile or imperfect alignment with respect to the optical axis 11 might therefore introduce additional experimental errors in the results. Moreover, the wavelength-dependent properties of the THz detector 36, such as the acceptance angle and the coupling efficiency, are not modeled as part of the forward model, which might also introduce error. Finally, potential inaccuracies in the characterization of the dispersion of the 3D-printing materials used in the experiments could also contribute some error in the measurements compared to the trained model numerical results.

[0066] For all the designs presented herein, the width of each output aperture 34 was selected as 2 mm, which is approximately 2.35 times the largest wavelength (corresponding to f_min = 0.25 THz) in the input pulses. The reason behind this specific design choice is to mitigate some of the unknown effects of the Si lens attached in front of the THz detector, since the theoretical wave optics model of this lens is not available. Consequently, the last layer before the output aperture 34 in the single passband spectral filter designs intuitively resembles a diffractive lens (see FIGS. 11A-11D). However, unlike a standard diffractive lens, the optical neural network 10 that is composed of multiple layers can provide a targeted Q-factor even for a large range of output apertures, as illustrated in FIG. 16.

[0067] It is interesting to note that the diffractive single passband filter designs reported in FIGS. 11A-11E can be tuned by changing the distance between the diffractive neural network (the substrate layers 16) and the output plane or detector(s) 26, establishing a simple passband tunability method for a given fabricated diffractive network 10. FIGS. 14A-14D report the simulations and experimental results at five (5) different axial distances using the 350 GHz diffractive network design, where Δz denotes the axial displacement around the ideal, designed location of the output plane/detectors 26. As the aperture 34 gets closer to the final diffractive layer 16, the passband experiences a red shift (the center frequency decreases), and any change in the opposite direction causes a blue shift (the center frequency increases). However, deviations from the ideal position of the output aperture also decrease the resulting Q-factor (see FIG. 14B); this is expected since these distances with different Δz values were not considered as part of the optical system design during the network training phase. Interestingly, a given diffractive spectral filter model can be used as the initial condition of a new diffractive network design and be further trained with multiple loss terms around the corresponding frequency bands at different propagation distances from the last diffractive layer 16, to yield a better engineered tunable frequency response, improved over the original diffractive design. To demonstrate the efficacy of this approach, FIGS. 14C and 14D report the output power spectra of this new model (designed around 350 GHz) and the associated Q-factors, respectively. As desired, the resulting Q-factors are now enhanced and more uniform across the targeted Δz range due to the additional training with a band tunability constraint, which can be regarded as the counterpart of the transfer learning technique (frequently used in machine learning) within the context of optical system design using diffractive neural network models. FIGS. 17A-17C also report the differences in the thickness distributions of the diffractive layers 16 of these two designs, i.e., before and after the transfer learning, corresponding to FIGS. 14A, 14B and FIGS. 14C, 14D, respectively.

[0068] As disclosed herein, the optical neural network platform can be generalized to broadband sources and can process optical waves over a continuous, wide range of frequencies. The design framework is, however, not limited to THz wavelengths and can be applied to other parts of the electromagnetic spectrum, including the visible band (and infrared band), and therefore it represents vital progress toward expanding the application space of diffractive optical neural networks for scenarios where broadband operation is more attractive and essential.

[0069] Methods

[0070] Terahertz time-domain spectroscopy system

[0071] A Ti:Sapphire laser (Coherent MIRA-HP) is used in mode-locked operation to generate femtosecond optical pulses at a 780 nm wavelength. Each optical pulse is split into two beams. One part of the beam illuminates the THz emitter (illumination source 12), a high-power plasmonic photoconductive nano-antenna array. The THz pulse generated by the THz emitter is collimated and guided to a THz detector 26 through an off-axis parabolic mirror; the detector is another plasmonic nano-antenna array that offers high-sensitivity and broadband operation. The other part of the optical beam passes through an optical delay line and illuminates the THz detector 26. The generated signal, as a function of the delay line position and the incident THz/optical fields, is amplified with a current pre-amplifier (Femto DHPCA-100) and detected with a lock-in amplifier (Zurich Instruments MFLI). For each measurement, traces are collected for 5 seconds, and 10 pulses are averaged to obtain the time-domain signal. Overall, the system offers signal-to-noise ratio levels over 90 dB and observable bandwidths up to 5 THz. Each time-domain signal is acquired within a time window of 400 ps.

[0072] Each model of the optical neural network 10, after its 3D-printing, was positioned between the emitter 12 and the detector 26, coaxial with the THz beam, as shown in FIG. 9D. Given the limited input beam size, the first layer of each diffractive network 10 was designed with a 1 cm × 1 cm input aperture 14 (as shown in, e.g., FIG. 10B). After their training, all the optical neural networks 10 were fabricated using a commercial 3D-printer (Objet30 Pro, Stratasys Ltd.). In front of the THz detector 26, a 2 mm × 2 mm output aperture 34 was printed and aluminum-coated (FIG. 10C).

[0073] Forward propagation model

[0074] The broadband diffractive optical neural network 10 framework performs optical computation through diffractive layers 16 connected by free space propagation in air. The diffractive layers 16 are modeled as thin modulation elements, where each pixel on the lth layer at a spatial location (x_i, y_i, z_i) provides a wavelength (λ) dependent modulation, t:

t^l(x_i, y_i, z_i, λ) = a^l(x_i, y_i, z_i, λ) exp(j φ^l(x_i, y_i, z_i, λ))    (1)

[0075] where a and φ denote the amplitude and phase, respectively.

[0076] Between the diffractive layers 16, free space light propagation is calculated following the Rayleigh-Sommerfeld equation. The ith pixel on the lth layer 16 at location (x_i, y_i, z_i) can be viewed as the source of a secondary wave w_i^l(x, y, z, λ), which is given by:

w_i^l(x, y, z, λ) = ((z - z_i)/r^2) (1/(2πr) + 1/(jλ)) exp(j 2πr/λ)    (2)

where r = ((x - x_i)^2 + (y - y_i)^2 + (z - z_i)^2)^(1/2).

[0077] Treating the incident field as the 0th layer, the optical field u^l modulated by the lth layer at location (x_i, y_i, z_i) is given by:

u^l(x_i, y_i, z_i, λ) = t^l(x_i, y_i, z_i, λ) · Σ_{k∈I} u^{l-1}(x_k, y_k, z_k, λ) w_k^{l-1}(x_i, y_i, z_i, λ)    (3)

[0078] where I denotes all pixels on the previous layer.
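For illustration, Equations (2) and (3) can be implemented directly as a pairwise summation over layer pixels. The following NumPy sketch is an illustrative toy (a dense pixel-to-pixel summation rather than the oversampled simulation grid described below) and is not the patent's implementation:

    import numpy as np

    def propagate_and_modulate(u_prev, xyz_prev, xyz_next, t_next, lam):
        """Field after the next layer, per Eqs. (2)-(3): sum the secondary
        waves w_k from every pixel k of the previous layer, then apply the
        next layer's complex transmission t."""
        d = xyz_next[:, None, :] - xyz_prev[None, :, :]   # pairwise offsets
        r = np.sqrt((d ** 2).sum(axis=-1))
        w = (d[..., 2] / r**2) * (1/(2*np.pi*r) + 1/(1j*lam)) \
            * np.exp(1j * 2*np.pi * r / lam)
        return t_next * (w @ u_prev)

For a broadband design, this propagation is simply repeated for every discrete wavelength in the training batch, with both t and λ changing per wavelength.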

[0079] Digital implementation

[0080] Without loss of generality, an equalized spectrum was used during the training phase, i.e., for each distinct λ value, a plane wave with unit intensity and a uniform phase profile was assumed. The assumed frequency range at the input plane was taken as 0.25 - 1 THz for all the designs, and this range was uniformly partitioned into M = 7500 discrete frequencies. A square input aperture 14 with a width of 1 cm was chosen to match the beam width of the incident THz pulse. Restricted by the fabrication method, a pixel size of 0.5 mm was used as the smallest printable feature size. To accurately model the wave propagation over a wide range of frequencies based on the Rayleigh-Sommerfeld diffraction integral, the simulation window was oversampled by 4 times with respect to the smallest feature size, i.e., the space was sampled with 0.125 mm steps. Accordingly, each feature of the diffractive layers 16 of a given optical neural network 10 design was represented on a 4 x 4 grid, with all 16 elements sharing the same physical thickness value. The printed thickness value, h, is the superposition of two parts, h_m and h_base, as depicted in Equation (4b). h_m denotes the part where the wave modulation takes place and is confined between h_min = 0 and h_max = 1 mm. The second term, h_base = 0.5 mm, is a constant, non-trainable thickness value that ensures robust 3D-printing, helping with the stiffness of the diffractive layers. To achieve the constraint applied to h_m, the thickness of each diffractive feature was defined over an associated latent (trainable) variable, h_p, using the following analytical form:

h_m = q((sin(h_p) + 1) × h_max/2)    (4a)
h = h_m + h_base    (4b)

[0081] where q(·) denotes a 16-level uniform quantization (0.0625 mm for each level, with h_max = 1 mm).
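A compact sketch of Equations (4a) and (4b) in Python follows; the rounding-based quantizer is an illustrative choice for q(·):

    import numpy as np

    H_MAX, H_BASE, LEVELS = 1.0e-3, 0.5e-3, 16   # meters; 0.0625 mm per level

    def printed_thickness(h_p):
        """Map the unconstrained latent variable h_p to a printable thickness."""
        h_m = (np.sin(h_p) + 1.0) * H_MAX / 2.0      # confined to [0, 1 mm]
        step = H_MAX / LEVELS
        h_m = np.round(h_m / step) * step            # uniform quantization q(.)
        return h_m + H_BASE                          # Eq. (4b)

The sin-based mapping keeps h_m inside its physical bounds for any real-valued h_p, which lets the optimizer update h_p freely without explicit constraints.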

[0082] The amplitude and phase components of the ith neuron on layer l, i.e., a^l(x_i, y_i, z_i, λ) and φ^l(x_i, y_i, z_i, λ) in Equation (1), can be defined as a function of the thickness of each neuron, h, and the incident wavelength as follows:

a^l(x_i, y_i, z_i, λ) = exp(-2π κ(λ) h / λ)    (5)

φ^l(x_i, y_i, z_i, λ) = (n(λ) - n_air) · 2π h / λ    (6)

[0083] The wavelength-dependent parameters, the refractive index n(λ) and the extinction coefficient κ(λ), are defined as the real and imaginary parts of the complex refractive index, ñ(λ) = n(λ) + jκ(λ), characterized by the dispersion analysis performed over a broad range of frequencies (FIGS. 15A, 15B).
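Given n(λ) and κ(λ) from the dispersion measurement, the complex transmission coefficient of a neuron follows from Equations (5) and (6) as reconstructed above; a one-function sketch:

    import numpy as np

    def neuron_transmission(h, lam, n, kappa):
        """Complex t = a * exp(j*phi) for a feature of thickness h at wavelength lam."""
        a = np.exp(-2 * np.pi * kappa * h / lam)     # amplitude loss, Eq. (5)
        phi = (n - 1.0) * 2 * np.pi * h / lam        # phase delay vs. air, Eq. (6)
        return a * np.exp(1j * phi)

Note that a single thickness h fixes a different (a, φ) pair at every wavelength, which is precisely what allows one passive layer to act on a continuum of wavelengths at once.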

[0084] Loss function and training related details

[0085] After light propagation through the substrate layers 16 of the optical neural network 10, a 2 mm wide output aperture 34 was used at the output plane, right before the integrated lens of the detector 26, which is made of Si and has the shape of a hemisphere with a radius of 0.5 cm. In the simulations, the detector lens was modeled as an achromatic flat Si slab with a refractive index of 3.4 and a thickness of 0.5 cm. After propagating through this Si slab, the light intensity residing within a designated detector active area was integrated and denoted by I_out. The power efficiency was defined by:

η = I_out / I_in    (7)

[0086] where I_in denotes the power of the incident light within the input aperture 14 of the optical neural network 10. For each diffractive network model, the power efficiency was reported for the peak wavelength of a given passband.

[0087] The loss term, L, used for the single passband filter designs was devised to achieve a balance between the power efficiency and the Q-factor, and is defined as:

L = αL_P + βL_Q    (8)

[0088] where L_P denotes the power loss term and L_Q denotes the Q-factor loss term; α and β are the relative weighting factors for these two loss terms.

[0089] These loss terms were calculated as functions of B, ω_0 and Δω_R (the number of frequencies used in a training batch, the center frequency of the target passband and the associated bandwidth around the center frequency, respectively), using the rect(ω) function, defined as:

rect(ω) = 1 for |ω| ≤ 1/2, and 0 otherwise.

[0090] Assuming a power spectrum profile with a Gaussian distribution N(ω_0, σ^2) with a Full-Width-Half-Maximum (FWHM) bandwidth of Δω, the standard deviation and the associated Δω_R were defined as:

σ = Δω / (2 (2 ln 2)^(1/2))    (11a)

Δω_R = 6σ    (11b)

[0091] And the Q-factor was defined as:

Q = ω_0 / Δω    (12)
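Equations (11a)-(12) together specify the Gaussian target profile used for a band with a prescribed Q-factor; below is a small helper illustrating the chain, with the f0 and Q values of the FIG. 11E design used only as example inputs:

    import numpy as np

    def gaussian_band_target(freqs, f0, Q):
        """Target power spectrum with FWHM dw = f0/Q (Eq. 12)."""
        dw = f0 / Q                                   # FWHM bandwidth
        sigma = dw / (2 * np.sqrt(2 * np.log(2)))     # Eq. (11a)
        dw_R = 6 * sigma                              # training band extent, Eq. (11b)
        return np.exp(-(freqs - f0) ** 2 / (2 * sigma ** 2)), dw_R

    freqs = np.linspace(0.25e12, 1.0e12, 7500)
    target, dw_R = gaussian_band_target(freqs, 350e9, 10)   # cf. the FIG. 11E design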

[0092] For the single passband diffractive spectral filter designs reported in FIGS. 11A-11D and the dual passband spectral filter reported in FIGS. 11A-11C, Δω_R for each band was taken as 5 GHz. For these five (5) diffractive designs, β in Equation (8) was set to 0, to force the network model to maximize the power efficiency without any restriction or penalty on the Q-factor. For the diffractive spectral filter design illustrated in FIG. 11E, on the other hand, the β/α ratio (balancing the power efficiency and the Q-factor) was set to 0.1 in Equation (8).

[0093] In the design phase of the spatially-controlled wavelength de-multiplexing system (FIGS. 13A-13C), following the strategy used in the filter design depicted in FIG. 11E, the target spectral profile around each center frequency was taken as a Gaussian with a Q-factor of 10. For simplicity, the β_i/α_i ratio in Equation (8) was set to 0.1 for each band and detector location, where the indices refer to the four different apertures at the detector/output plane 26. Although not implemented in this work, the β/α ratios among different bands/channels can also be separately tuned to better compensate for the material losses as a function of wavelength. In general, to design an optical component that maintains the photon efficiency and Q-factor over K different bands based on the broadband diffractive optical network framework, a set of 2K coefficients, i.e., (α_1, α_2, ..., α_K, β_1, β_2, ..., β_K), must be tuned according to the material dispersion properties for all the subcomponents of the loss function.
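In code, the 2K-coefficient balance over K bands amounts to a weighted sum of per-band loss terms; a hypothetical sketch (the per-band loss values L_P and L_Q are stand-ins, since their defining equations are not reproduced here):

    def multiband_loss(L_P, L_Q, alphas, betas):
        """Total loss: sum over k of alpha_k * L_P[k] + beta_k * L_Q[k], cf. Eq. (8)."""
        return sum(a * lp + b * lq
                   for a, b, lp, lq in zip(alphas, betas, L_P, L_Q))

Setting betas[k]/alphas[k] = 0.1 for every band reproduces the simple choice used for the de-multiplexer design above.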

[0094] In the training phase, M = 7500 frequencies were randomly sampled in batches of B = 20, which is mainly limited by the GPU memory of the computer 100. The trainable variables, h_p in Equation (4a), were updated following the standard error backpropagation method using the Adam optimizer with a learning rate of 1 × 10^-3. The initial conditions of all the trainable parameters were set to 0. For the diffractive network models with more than one detector 26 location, loss values were individually calculated for each detector 26 in a random order, and the design parameters were updated accordingly. In other words, for a d-detector optical system, loss calculations and parameter updates were performed d times, once with respect to each detector 26, in a random order.
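The per-detector update schedule described above can be sketched as follows; train_step_for_detector is a hypothetical stand-in for one loss evaluation and one Adam update against a single detector:

    import random

    def training_iteration(detectors, train_step_for_detector):
        # one iteration of a d-detector design: d updates, random detector order
        for det in random.sample(detectors, len(detectors)):
            train_step_for_detector(det)

Randomizing the order avoids systematically favoring any particular output aperture within an iteration.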

[0095] The models were simulated using Python (v3.7.3) and TensorFlow (v1.13.0, Google Inc.). All the models were trained using 200 epochs (the network saw all 7500 frequencies at the end of each epoch) with a GeForce GTX 1080 Ti graphics processing unit (GPU, Nvidia Inc.) and an Intel® Core™ i9-7900X central processing unit (CPU, Intel Inc.) with 64 GB of RAM, running the Windows 10 operating system (Microsoft). Training of a typical optical neural network 10 model takes ~5 hours to complete with 200 epochs. The thickness profile of each diffractive layer was then converted into the .stl file format using Matlab.

[0096] While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. For example, while the output of the optical neural network 10 described herein filters or de-multiplexes an input optical signal, the reverse is also possible. That is to say, the plurality of substrate layers of the multi-layer transmissive and/or reflective network may generate a single common multiplexed channel as an output from individually filtered or de-multiplexed input channels of an input optical signal (e.g., FIG. 9C). The invention, therefore, should not be limited, except to the following claims, and their equivalents.