

Title:
MULTI-APERTURE MONOCULAR OPTICAL IMAGING SYSTEM FOR MULTI-VIEW IMAGE ACQUISITION WITH DEPTH PERCEPTION AND METHOD
Document Type and Number:
WIPO Patent Application WO/2023/003853
Kind Code:
A1
Abstract:
Multi-aperture monocular endoscopic objective configured to acquire multiple perspective views across simultaneously present multiple fields-of-view to enable expanded imaging capabilities (such as image acquisition in a time-multiplexed fashion and/or with multiple spatially-multiplexed sensors, with an image representing an object in a wide field-of-view portion being captured through a center aperture to preserve peripheral awareness). Endoscope with such objective and use of same.

Inventors:
HUA HONG (US)
KWAN ELLIOT (US)
Application Number:
PCT/US2022/037559
Publication Date:
January 26, 2023
Filing Date:
July 19, 2022
Assignee:
UNIV ARIZONA (US)
International Classes:
A61B1/04; G02B5/04; G02B30/23; G02B30/60
Foreign References:
US20150168702A12015-06-18
US20180341100A12018-11-29
US20170285324A12017-10-05
US20170041534A12017-02-09
US7286295B12007-10-23
Other References:
KWAN ELLIOTT, QIN YI, HUA HONG: "High resolution, programmable aperture light field laparoscope for quantitative depth mapping", OSA CONTINUUM, vol. 3, no. 2, 15 February 2020 (2020-02-15), pages 1 - 10, XP093027891, DOI: 10.1364/OSAC.382558
LAI TEH, POTTER B. G., SIMMONS-POTTER KELLY: "Electroluminescence image analysis of a photovoltaic module under accelerated lifecycle testing", APPLIED OPTICS, vol. 59, no. 22, 1 August 2020 (2020-08-01), US , pages G225 - G233, XP055892534, ISSN: 1559-128X, DOI: 10.1364/AO.391957
KWAN ELLIOTT, HUA HONG: "Prism-based tri-aperture laparoscopic objective for multi-view acquisition", OPTICS EXPRESS, vol. 30, no. 2, 17 January 2022 (2022-01-17), pages 1 - 16, XP093027901, DOI: 10.1364/OE.448164
Attorney, Agent or Firm:
SIDORIN, Yakov (US)
Claims:
CLAIMS

1. A monocular optical objective having an optical axis and comprising a combination of: a first lens, a light selector system comprising an opaque screen having multiple optically-transmissive apertures formed therein, and a light deflector system comprising a plurality of optically-transmissive components each of which respectively corresponds to only one of said multiple apertures, said monocular optical objective configured to form multiple images of an object in an image plane thereof, wherein different images represent respectively-corresponding portions of the object subtended by respectively-corresponding different fields-of-view (FOVs), and wherein each of said different FOVs is defined at least in part by only one of said multiple apertures.

2. A monocular optical objective according to claim 1, wherein

(2A) the light deflector system includes at least one of an optical prism and a diffraction grating; and/or

(2B) each of the optically-transmissive components of the light deflector system is necessarily optically coupled with only one of said multiple apertures in light propagating through the first lens.

3. A monocular optical objective according to claim 2, wherein, when the light deflector system includes an optical prism, the light deflector system includes a substantially plane-parallel plate centered on the optical axis and a pair of optical prisms configured to satisfy at least one of the following conditions:

(3A) bases of each of the optical prisms are substantially parallel to the optical axis, and

(3B) the optical prisms of said pair of optical prisms are positioned and oriented substantially symmetrically to one another with respect to the optical axis.

4. A monocular optical objective according to claim 1, wherein at least one of the following conditions is satisfied:

(4A) the optical objective comprises a second lens disposed to receive light transmitted through the first lens; and

(4B) at least one of the light selector system and the light deflector system is between the first and second lenses.

5. A monocular optical objective according to claim 1, wherein one or more of the following conditions is satisfied:

(5A) the light selector system includes at least one optical polarizer juxtaposed with at least one aperture of said multiple apertures;

(5B) the light selector system includes at least one optical spectral filter juxtaposed with at least one aperture of said multiple apertures;

(5C) the light selector system includes at least one light blocker element juxtaposed with at least one of the multiple apertures to prevent light from propagating between a first component of the optical objective and a second component of the optical objective through said at least one of the multiple apertures when the at least one light blocker element is in a light-blocking state; and

(5D) when at least one of the light selector system and the light deflector system is between the first and second lenses, both the light selector system and the light deflector system are between the first and second lenses.

6. A monocular optical objective according to claim 5, wherein, when the at least one light blocker element is present in the objective, the at least one light blocker element includes at least one of a mechanical shutter, an electro-optical element, a liquid crystal cell, and a micro-electro-mechanical system (MEMS).

7. A monocular optical objective according to claim 1, wherein the light selector system includes a first aperture located on the optical axis and a pair of second optical apertures located substantially symmetrically to one another with respect to the optical axis.

8. A monocular optical objective according to claim 7, wherein, when the optical prisms of said pair are positioned and oriented substantially symmetrically to one another with respect to the optical axis, each of said optical prisms deviates light, that has been acquired from the corresponding one of the pair of second optical apertures, in a direction away from the optical axis.

9. An optical imaging system comprising a monocular optical objective according to claim 1, further comprising an optical detection system including at least one optical detector disposed at or behind an image plane of the objective.

10. An optical imaging system according to claim 9, and further comprising an auxiliary lens located behind an image plane of the objective and configured to be repositionable across an axis of the objective.

11. An optical imaging system according to claim 9, further comprising a relay lens configured to form an image of light distribution at the image plane of the objective.

12. An optical system according to claim 9, wherein the optical detection system includes a first optical detector and a second optical detector and is configured such that light delivered to the optical detection system through only a first aperture of the multiple apertures is acquired only by the first optical detector, and light delivered through any other aperture of the multiple apertures is acquired only by the second optical detector.

13. An optical laparoscope comprising at least one of (13A) a monocular optical objective according to claim 1, and (13B) an optical system according to claim 9, and further comprising an illumination system configured to channel light along a lightguide formed beyond an outer perimeter of a constituent lens of the optical objective.

14. An optical laparoscope according to claim 13, wherein the lightguide includes an optical fiber.

15. A method comprising: using at least a monocular optical objective according to claim 1; acquiring first light from a first field of view (FOV) that is defined at least in part by the first optically-transmissive aperture of said multiple apertures and acquiring second light from a second FOV that is defined at least in part by the second optically-transmissive aperture of said multiple apertures; transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture, transferring said first light only through a first optically-transmissive component of the light deflector system and transferring said second light only through a second optically-transmissive component of the light deflector system while, when an aperture, from the first and second apertures, is not located at the objective axis, changing a direction of propagation of a corresponding light, from the first and second lights, by a respectively-corresponding transferring, and with the use of a lens of the objective forming, at an image plane of the objective, a first image of the object in said first light and/or a second image of the object in said second light.

16. A method according to claim 15, wherein said transferring the first light through only the first optically-transmissive component includes transmitting the first light through an optical prism.

17. A method according to claim 15, wherein at least one of the following conditions is satisfied:

(17A) said transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing respective spectral content of the first light and/or the second light;

(17B) said transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing a respective polarization state of the first light and/or the second light;

(17C) blocking the first light and/or the second light from traversing a respective aperture from the first and second apertures;

(17D) blocking the first light from propagating towards the lens after the first light has traversed the first aperture and/or blocking the second light from propagating towards the lens after the second light has traversed the second aperture.

18. A method comprising: acquiring first light from a first field of view (FOV) and second light from a second FOV with a monocular optical objective that has an objective axis and that includes

(i) an opaque screen containing an array of optically-transmissive apertures including first and second apertures, and

(ii) an array of optical elements each of which uniquely corresponds to a respective aperture from the array of optically-transmissive apertures wherein the first FOV is defined at least in part by the first optically-transmissive aperture and the second FOV is defined at least in part by the second optically-transmissive aperture; transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture, transferring said first light only through a first optical element of the array of the optical elements and/or transferring said second light only through a second optical element of the array of the optical elements while, when an aperture, from the first and second apertures, is not located at the objective axis, changing a direction of propagation of a corresponding light, from the first and second lights, by a respectively-corresponding transferring, and with the use of a lens of the objective forming, at an image plane of the objective, a first image of the object in said first light and/or a second image of the object in said second light.

19. A method according to claim 18, wherein the array of optical elements includes at least one of a diffraction grating and an optical prism.

20. A method according to claim 18, wherein at least one of the following conditions is satisfied:

(20A) said transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing respective spectral content of the first light and/or the second light;

(20B) said transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing a respective polarization state of the first light and/or the second light;

(20C) blocking the first light and/or the second light from traversing a respective aperture from the first and second apertures;

(20D) blocking the first light from propagating towards the lens after the first light has traversed the first aperture and/or blocking the second light from propagating towards the lens after the second light has traversed the second aperture.

21. A method according to claim 18, further comprising one of the following:

(21A) simultaneously registering, with an optical detection system that is positioned behind an image plane of the objective, third and fourth images of the object that are substantially not overlapping with one another, wherein the third image is optically-conjugate with the first image and represents a portion of the object subtended only by the first FOV while the fourth image is optically-conjugate with the second image and represents a portion of the object subtended only by the second FOV; and

(21B) registering sequentially in time, with the optical detector system that is positioned behind the imaging plane of the objective, said third and fourth images.

22. A method according to claim 21, wherein said simultaneously registering includes registering the third image with a first optical detector of the optical detection system and registering the fourth image with a second optical detector of the optical detection system, and wherein said registering sequentially in time includes registering the third and fourth images with the same optical detector of the optical detection system.

23. A method according to claim 21, further comprising: transmitting said first light through an auxiliary lens positioned in a first location between the objective and the optical detection system; and transmitting said second light through the auxiliary lens positioned in a second location between the objective and the optical detection system.

24. A method according to claim 18, further comprising: forming an auxiliary image of the object in a surface located between the imaging plane of the objective and the optical detection system, said auxiliary image being optically-conjugate both to at least one of the first and second images and at least one of the third and fourth images.

25. A method according to claim 18, further comprising: sequentially transmitting at least one of said first light and said second light through a first lens of the objective, said array of apertures, said array of optical components, and a second lens of the objective.

26. A method according to claim 18, further comprising: illuminating an object space with excitation light delivered along the objective axis, wherein said acquiring includes acquiring said excitation light that has been reflected at the object space.

27. A method according to claim 18, further comprising: illuminating an object space through an illuminating aperture having a substantially annular shape, wherein said illuminating aperture has an inner diameter exceeding an outer diameter of the objective.

Description:
MULTI-APERTURE MONOCULAR OPTICAL IMAGING SYSTEM FOR MULTI-VIEW IMAGE ACQUISITION WITH DEPTH PERCEPTION AND METHOD

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This patent application claims priority from and benefit of the US Provisional patent application No. 63/224,096 filed on July 21, 2021, the disclosure of which is incorporated herein by reference.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] This invention was made with government support under Grant No. R01 EB018921 awarded by the National Institutes of Health. The government has certain rights in the invention.

RELATED ART

[0003] The conventionally recognized and widely utilized optical design of a rigid laparoscope employs an optical objective and rod-lens relays. This configuration has remained unchanged since the beginnings of minimally invasive surgery, and has provided surgeons with adequate two-dimensional (2D) image quality over the operative field. In terms of the optomechanical aspect of this conventional design, standard lenses and spacers are inserted into simple lens tubes, which enables efficient manufacturability.

[0004] However, the utilization of conventional 2D laparoscopes is subject to two major optical limitations: (1) the absence of binocular vision imposes a restriction on depth perception, and (2) the field of view (FOV) and spatial resolution are inversely proportional to one another. The ever-present lack of depth information requires extensive training for physicians to become efficient with a 2D operative view. Meanwhile, as a result of the second limitation, in order to maintain sufficient image resolution, the conventional 2D laparoscope has to define the FOV to cover just the limited surgical area and not more.

A person of skill readily appreciates that, with such practical limitations, complications or problems with tissue under the investigation that occur outside the surgical area would not be seen unless the laparoscope is physically moved. To improve upon these surgeries, these two limitations and their corresponding optical design solutions have been explored separately in related art.

[0005] To recover depth perception, for example, various methodologies have been proposed, including dual-sensor stereo, single-sensor stereo, single-sensor 3D imaging via structured light, and uniaxial 3D imaging. Commercially, stereoscopic endoscopes with dual-channel object-relay optics and dual imaging sensors, like the DaVinci and Endoeye Flex 3D, have been popularized through successful demonstration of 3D vision and depth perception in the live surgical setting. Academically, other types of stereo endoscopes have also shown potential. The ones that can facilitate stereo vision in a uniaxial, single-camera system are potentially attractive because they preserve the limited design volume constrained by the laparoscope housing. The 3D-MARVEL endoscope, for example, employs a monocular system with a dual aperture comprising complementary multi-passband spectral filters.
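The depth recovery underlying all of these stereo approaches reduces to triangulating the disparity between the two perspective views. As a minimal illustration of that principle only (standard rectified pinhole-stereo geometry; the function name and numerical values below are hypothetical and not taken from this application or the cited systems):

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Rectified pinhole-stereo triangulation: Z = f * B / d.

    focal_px     -- focal length expressed in pixels
    baseline_mm  -- separation between the two viewpoints (e.g., dual apertures)
    disparity_px -- horizontal shift of a feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Illustrative numbers: f = 800 px, a 4 mm aperture separation, 16 px disparity
# place the feature at 200 mm; halving the disparity doubles the depth estimate.
z_near = depth_from_disparity(800, 4.0, 32)
z_far = depth_from_disparity(800, 4.0, 16)
```

The inverse relation between disparity and depth is why the small baselines available inside a laparoscope housing make accurate depth recovery challenging.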

[0006] An embodiment of related art (see E. Kwan et al., "High resolution, programmable aperture light field laparoscope for quantitative depth mapping," OSA Contin. 3, 194, 2020, the disclosure of which is incorporated herein by reference), shown in Fig. 1A, captures each view by sampling the entrance pupil (EP) with a programmable aperture (PA) placed at the stop of the optical system of the laparoscope. To substitute the use of multiple apertures, another considered way to acquire multiple views included placing a multifaceted prism or a microprism array (MPA), as shown in Fig. 1B, in front of the endoscope (see, for example, S.-P. Yang et al., "Compact stereo endoscopic camera using microprism arrays," Opt. Lett. 41, 1285, 2016). The MPA, in operation of the system, refracts the light rays from the two stereo views towards the camera lens so that each stereo image can be captured on one half of the image sensor. Although all these types of laparoscopes can acquire depth information, they are still limited by the tradeoff between the FOV and the spatial resolution.
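The ray deviation that each facet of such a prism or MPA imparts can be estimated with the standard thin-prism approximation, δ ≈ (n − 1)α, where α is the apex angle and n the refractive index. The sketch below is illustrative only; the apex angle and index values are hypothetical and not taken from the cited work:

```python
def thin_prism_deviation_deg(apex_angle_deg, n):
    """Thin-prism (small-angle) approximation: delta ~= (n - 1) * alpha."""
    return (n - 1.0) * apex_angle_deg

# A facet with a 10-degree apex angle in a glass of index ~1.52 steers
# rays by roughly 5.2 degrees, directing each stereo view onto its own
# half of the image sensor.
delta = thin_prism_deviation_deg(10.0, 1.52)
```

The approximation holds for small apex angles near normal incidence; steeper facets require the full prism deviation formula.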

[0007] The use of a multi-resolution foveated laparoscope (MRFL) discussed by J. I. Katz et al. (in "Improved multi-resolution foveated laparoscope with real-time digital transverse chromatic correction," Appl. Opt. 59, G79, 2020) demonstrated substantial elimination of the FOV-versus-spatial-resolution tradeoff. As shown in Fig. 1C, the MRFL system accomplishes this by simultaneously capturing a zoomed and a wide FOV via beam splitting, to relay portions of light from the intermediate image into two separate imaging probes. Surgeons can then use the zoomed view for surgical operation while the wide view provides peripheral awareness for preventing patient injury from accidental collisions of surgical instruments outside the surgical area. While the MRFL has demonstrated successful 2D wide-FOV minimally invasive surgery in animal trials, the question of depth perception recovery has not been addressed.

[0008] Overall, a skilled person is well aware that a monocular optical system for a laparoscope configured both to possess a 2D wide FOV (WFOV) and to perform imaging characterized by depth perception has not yet been demonstrated. There remains a need for such a system, which would pave the way towards restoring the binocular and large, foveated-FOV characteristics of human vision within the minimally invasive surgical setting while, at the same time, using a simple monocular implementation.

SUMMARY

[0009] Embodiments of the invention provide a monocular optical objective that has an optical axis. Such objective includes a first lens, a light selector system (or, a light selection system), and a light deflector system (or, a light deflection system). A light selector system contains an opaque screen having multiple optically-transmissive apertures through the screen, and a light deflector system includes a plurality of optically-transmissive components each of which respectively operably corresponds to only one of these multiple apertures. The monocular optical objective is structured to form (optionally simultaneously) multiple different images of an object in the image plane of the objective. Here, different images represent respectively-corresponding portions of the object subtended by respectively-corresponding different fields-of-view (FOVs), and each of such different FOVs is defined at least in part by only one of the multiple apertures. In at least one embodiment, the light deflector system includes at least one of an optical prism and a diffraction grating; and/or each of the optically-transmissive components of the light deflector system is necessarily optically coupled with only one of the multiple apertures in light propagating through the first lens. Optionally, in such specific embodiment and when the light deflector system includes an optical prism, the light deflector system may also include a substantially plane-parallel plate centered on the optical axis and a pair of optical prisms configured to satisfy at least one of the following conditions: (i) bases of each of the optical prisms are substantially parallel to the optical axis, and (ii) the optical prisms of the pair of optical prisms are positioned and oriented substantially symmetrically to one another with respect to the optical axis. 
Additionally or in the alternative - and substantially in every implementation of the objective - at least one of the following conditions may be satisfied: (a) the optical objective may include a second lens disposed to receive light transmitted through the first lens; and (b) at least one of the light selector system and the light deflector system may be disposed between the first and second lenses of the objective. Alternatively or in addition, and substantially in every implementation, the objective may be configured to satisfy one or more of the following conditions: - the light selector system includes at least one optical polarizer juxtaposed with at least one aperture of the multiple apertures; - the light selector system includes at least one optical spectral filter juxtaposed with at least one aperture of said multiple apertures; - the light selector system includes at least one light blocker element juxtaposed with at least one of the multiple apertures to prevent light from propagating between a first component of the optical objective and a second component of the optical objective through at least one of the multiple apertures when the at least one light blocker element is in a light-blocking state; and - when at least one of the light selector system and the light deflector system is between the first and second lenses, both the light selector system and the light deflector system are between the first and second lenses of the objective. (In the latter case, when the at least one light blocker element is present in the objective, at least one light blocker element may be structured to include at least one of a mechanical shutter, an electro-optical element, a liquid crystal cell, and a micro-electro-mechanical system.) 
Moreover, alternatively or in addition, and substantially in every embodiment of the objective the light selector system may include a first aperture located on the optical axis and a pair of second optical apertures located substantially symmetrically to one another with respect to the optical axis; and/or when the optical prisms of the pair of prisms are positioned and oriented substantially symmetrically to one another with respect to the optical axis, each of these optical prisms may be configured to deviate light (that has been acquired from the corresponding one of the pair of second optical apertures) in a direction away from the optical axis.

[0010] Embodiments of the invention also provide an optical imaging system that includes a monocular optical objective according to any embodiment identified above, and additionally include an optical detection system with at least one optical detector disposed at or behind the image plane of the objective. (Optionally, there may be present an auxiliary lens located behind the image plane of the objective and configured to be repositionable across an axis of the objective, and/or a relay lens configured to form an image of light distribution formed at the image plane of the objective at a secondary image plane of the optical imaging system.) Alternatively or in addition - and substantially in every implementation - the optical imaging system may include a first optical detector and a second optical detector and be configured such that light delivered to the optical detection system through only a first aperture of the multiple apertures is acquired only by the first optical detector, while light delivered through any other aperture of the multiple apertures is acquired only by the second optical detector.

[0011] Embodiments additionally provide an optical laparoscope that contains at least one of a monocular optical objective (according to any of the embodiment identified above) and an optical imaging system (according to any embodiment identified above) and additionally includes an illumination system configured to channel light along a lightguide formed beyond an outer perimeter of a constituent lens of the optical objective. In at least one specific case, the lightguide includes an optical fiber.

[0012] Furthermore, embodiments of the invention provide a method including at least the following steps: (a) using at least a monocular optical objective structure according to any of the embodiments thereof identified above; (b) acquiring first light from a first field of view (FOV) that is defined at least in part by the first optically-transmissive aperture of the multiple apertures and acquiring second light from a second FOV that is defined at least in part by the second optically-transmissive aperture of the multiple apertures; (c) transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture; (d) transferring the first light only through a first optically-transmissive component of the light deflector system and transferring the second light only through a second optically-transmissive component of the light deflector system while (when an aperture, from the first and second apertures, is not located at the objective axis) changing a direction of propagation of a corresponding light, from the first and second lights, by a respectively-corresponding transferring, and (e) with the use of a lens of the objective, forming, at an image plane of the objective, a first image of the object in the first light and/or a second image of the object in the second light. 
In at least one implementation, the method may be optionally configured to satisfy one of the following conditions: (1) the step of transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture may include changing respective spectral content of the first light and/or the second light; (2) the step of transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture may include changing a respective polarization state of the first light and/or the second light; (3) the method additionally includes a step of blocking the first light and/or the second light from traversing a respective aperture from the first and second apertures; (4) the method additionally includes a step of blocking the first light from propagating towards the lens after the first light has traversed the first aperture and/or blocking the second light from propagating towards the lens after the second light has traversed the second aperture.

[0013] Furthermore, embodiments of the invention provide a method that includes an operation / process / step of (a) acquiring first light from a first field of view (FOV) and second light from a second FOV with a monocular optical objective that has an objective axis and that includes (ai) an opaque screen containing an array of optically-transmissive apertures including first and second apertures, and (aii) an array of optical elements each of which uniquely corresponds to a respective aperture from the array of optically-transmissive apertures (here, the first FOV is defined at least in part by the first optically-transmissive aperture and the second FOV is defined at least in part by the second optically-transmissive aperture). Such method additionally includes a step of transmitting the first light only through the first aperture and/or transmitting the second light only through the second aperture; and the step of transferring the first light only through a first optical element of the array of the optical elements and/or transferring the second light only through a second optical element of the array of the optical elements while (when an aperture, from the first and second apertures, is not located at the objective axis) changing a direction of propagation of a corresponding light, from the first and second lights, by a respectively-corresponding transferring. The method further includes the step of forming (with the use of a lens of the objective) at an image plane of the objective, a first image of the object in the first light and/or a second image of the object in the second light. In at least one specific implementation, the method is performed while the array of optical elements includes at least one of a diffraction grating and an optical prism. 
Alternatively or in addition, and substantially in every implementation, the method may be configured such that at least one of the following conditions is satisfied: (a) the step of transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing respective spectral content of the first light and/or the second light; (b) the step of transmitting the first light only through the first aperture and/or said transmitting the second light only through the second aperture includes changing a respective polarization state of the first light and/or the second light; (c) the method additionally includes a step of blocking the first light and/or the second light from traversing a respective aperture from the first and second apertures and/or a step of blocking the first light from propagating towards the lens after the first light has traversed the first aperture and/or blocking the second light from propagating towards the lens after the second light has traversed the second aperture. Alternatively or in addition, and substantially in every implementation, the method may include one of the following steps: (1) simultaneously registering, with an optical detection system that is positioned behind an image plane of the objective, third and fourth images of the object that are substantially not overlapping with one another (here, the third image is optically-conjugate with the first image and represents a portion of the object subtended only by the first FOV while the fourth image is optically-conjugate with the second image and represents a portion of the object subtended only by the second FOV); and (2) registering sequentially in time, with the optical detector system that is positioned behind the imaging plane of the objective, the third and fourth images of the object. 
At least in one specific case, the step of simultaneously registering may include registering the third image only with a first optical detector of the optical detection system while registering the fourth image only with a different second optical detector of the optical detection system, while the step of registering sequentially in time may include registering the third and fourth images with the same optical detector of the optical detection system. In at least one implementation, the method may further include transmitting the first light through an auxiliary lens positioned in a first location between the objective and the optical detection system; and transmitting the second light through the auxiliary lens positioned in a second location between the objective and the optical detection system. In at least one embodiment, the method may additionally include a step of forming an auxiliary image of the object at a surface located between the imaging plane of the objective and the optical detection system (here, such auxiliary image is optically-conjugate both to at least one of the first and second images and to at least one of the third and fourth images). Additionally or in the alternative - and substantially in every implementation - the method may further include a step of sequentially transmitting at least one of the first light and the second light through a first lens of the objective, the array of apertures, the array of optical components, and a second lens of the objective; and/or the step of illuminating an object space with excitation light delivered along the objective axis (here, such acquiring may include acquiring the excitation light that has been reflected at the object space).
Substantially in any embodiment, the method may additionally or in the alternative include illuminating an object space through an illuminating aperture that has a substantially annular shape (with such illuminating aperture having an inner diameter exceeding an outer diameter of the objective).

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] The idea and scope of the invention will be more fully understood by referring to the following Detailed Description of Specific Embodiments in conjunction with the Drawings, of which:

[0015] Figs. 1A, 1B, 1C provide optical layouts of endoscopes employed in related art: light field (Fig. 1A), microprism array-based stereo (Fig. 1B), and multi-resolution foveated (Fig. 1C) endoscopes.

[0016] Fig. 2 displays a schematic of an embodiment of a multi-aperture monocular endoscope configured for multi-view image acquisition.

[0017] Fig. 3 shows a portion of the embodiment of Fig. 2, representing a multi-view, multi-aperture monocular endoscopic objective.

[0018] Fig. 4 presents a 2D schematic front view of an embodiment of a multi-aperture, multi-view selector sub-system.

[0019] Fig. 5 is a 2D schematic front view of an embodiment of the prismatic multi-view deflector.

[0020] Figs. 6A and 6B illustrate the distribution of multi-view images, acquired with an embodiment of the system of the invention, in an intermediate image surface formed by an embodiment of the objective of the system. Fig. 6A: an image of the object space formed with light captured within the wide FOV of the optical imaging system. Fig. 6B: multiple side images formed with light bundles acquired within multiple side FOVs of the system and aggregately providing depth perception information to the image data.

[0021] Figs. 7A, 7B, and 7C provide examples of related embodiments of a prismatic multi-view deflector sub-system.

[0022] Figs. 8A, 8B, 8C, 8D illustrate a specific (tri-aperture monocular) embodiment of an optical objective of the invention, TAML, containing an array of three apertures paired with an array of three prismatic elements in one-to-one correspondence. Fig. 8A: illustration of object imaging within the wide FOV of the multiple FOVs of the objective. Fig. 8B: modulation transfer function (MTF) characterizing such imaging. Fig. 8C: illustration of object imaging within two side FOVs forming a stereo image pair. Fig. 8D: MTF characterizing such side-view image acquisition, showing operationally good quality (170 lp/mm cutoff frequency).

[0023] Figs. 9A through 9F: A related design of TAML for prototyping. WFOV (FIG. 9A), SFOV (FIG. 9B), MTFs (FIGs. 9C, 9D), tolerance analyses (FIGs. 9E, 9F).

[0024] Figs. 10A, 10B: An embodiment of a TAML objective lens: 2D (FIG. 10A) and 3D (FIG. 10B) image simulations showing FOV differences, layout, distortion, and disparity.

[0025] Figs. 11A, 11B: TAML stereo image overlap simulation (FIG. 11A) and vignetting solution (FIG. 11B).

[0026] Figs. 12A, 12B illustrate a housing for an embodiment of a TAML objective lens.

[0027] Figs. 13A, 13B illustrate an assembled prototype of an embodiment of the TAML objective lens.

[0028] Fig. 14 contains several views (marked (a), (b), (c), (d), (e), and (f)) that present the TAML prototype data: WFOV (view (a)), SFOV simultaneous capture with vignetting (view (b)), SFOV simultaneous capture without vignetting (view (c)), overlapping WFOV and SFOV (view (d)), first SFOV (view (e)), second SFOV (view (f)).

[0029] Figs. 15A, 15B illustrate a stereo anaglyph (FIG. 15A) and initial relative disparity map (FIG. 15B) of a tilted checkerboard before distortion correction and camera calibration.

[0030] Generally, like elements or components in different Drawings may be referenced by like numerals or labels and/or the sizes and relative scales of elements in Drawings may be set to be different from actual ones to appropriately facilitate simplicity, clarity, and understanding of the Drawings. For the same reason, not all elements present in one Drawing may necessarily be shown in another.

DETAILED DESCRIPTION

[0031] The disclosure of each and every reference, publication, patent or other documents mentioned in this disclosure is incorporated herein by reference.

[0032] The problem of the inability of optical imaging systems of related art to simultaneously provide operationally-sufficient three-dimensional depth information in the absence of a binocular structural arrangement and to avoid the operational shortcomings caused by the inverse dependence between the field-of-view (FOV) and spatial resolution is solved by devising an optical objective that not only possesses multiple FOVs but also provides simultaneous multi-view imaging of an object. Such an optical objective incorporates a combination of a plurality of optical apertures (each defining a corresponding FOV and view perspective different from those of the others) and a plurality of light-deflectors configured such that there is a one-to-one operational correspondence between the apertures and light-deflectors. Further complemented with optical lenses, the objective provides for acquisition of multiple images that not only represent the object information collected in multiple, spatially distinct FOVs but also are either spatially-resolved from and not overlapping with one another, or time-resolved from and not acquired contemporaneously with one another, or both. The use of such an objective in a laparoscopic system, for example, substantially simplifies the work of a physician, who no longer has to spend time to properly perceive and assess a two-dimensional (2D) operative view not possessing and/or representing the spatial depth of visual information, and can now afford to maintain practically-sufficient imaging resolution while at the same time working with different FOVs of the very same objective.

[0033] Practical implementations of the idea of the invention are manifested in a multi-aperture monocular endoscopic objective (interchangeably referred to as MAMEO) for multi-view image acquisition configured to provide depth perception information. In disadvantageous comparison, the dual camera design found in commercial stereo laparoscopes effectively wastes optical design volume (which is already limited by the endoscope housing diameter) due to the presence of edge apertures required in each of the camera sub-systems and the center "dead space" between them. Contrary to the conventional dual camera design, for example, embodiments of MAMEO necessarily remain monocular - that is, maintain the monocular lens form-factor with a sufficiently large clear aperture - to avoid these shortcomings. Multiple stereo view pairs are captured with a multi-aperture array (MA) of an embodiment. To acquire the multiple views simultaneously in real time, a prism component is additionally utilized between the lenses of the objective. Such a form factor was shown to allow for an on-axis view that is conventionally associated with and reserved for WFOV imaging.

[0034] Experimental proof of the workability of the concept of the invention is discussed below with the example of a simplified tri-aperture embodiment that efficiently and simultaneously implements both desired imaging modalities. It is appreciated, however, that the presented example is not limiting and that embodiments providing a different number of FOVs and/or views of an object are within the scope of the invention.

[0035] To this end, Fig. 2 schematically illustrates a layout of the embodiment 200 of a multi-aperture monocular endoscope, which includes a multi-view objective 204 (incorporating multiple objective lenses between which an optical prism-containing system 206 is disposed to intersect an axis of the objective), a relay lens 208, and an optical detection portion or sub-system 212 containing an optional scanning lens 216, a beam splitter 220, and two pairs 224A, 224B of focusing lenses and optical detectors (interchangeably referred to as optical sensors). Details of the multi-FOV multi-view objective 204 are presented in Fig. 3.

[0036] In reference to Figs. 2 and 3, the embodiment 200 is configured to image the illuminated object field (the portion of the object space, OS, which is shown to be illuminated with the use of a judiciously-configured optical fiber system 228 constituting a portion of the optical objective 204 but, in a related embodiment, may be illuminated externally with the use of an independent illumination system) to an optically conjugated intermediate image #1 (IM1), which is the image plane of the objective.

[0037] At the plane of the aperture stop formed by a group of multiple lenses of the objective 204 there is disposed a view selector optical system 228 (interchangeably referred to herein as a light selector system or simply as a view selector) that in one embodiment contains an array of optical apertures or multiple optical apertures (indicated in the example of Fig. 3 as A0, A1, A2, A3, A4). As a skilled person will readily appreciate, different portions of the view selector 228 (in this case, different apertures) subtend the object field or space at different viewing angles as defined by these different apertures of the aperture array.

That is, as is shown on the left side of the diagram of Fig. 2, different apertures of the array forming the view selector perceive the object space at different viewing angles: different portions of the view selector 228 are defined by the different apertures, as indicated by the optical axes for center view #1 and side views #1 and #2 of the diagram of Fig. 2. The design of the lens(es) of the objective 204 is not discussed here, but the design is optimized to support the wide FOV, the optical information from which is captured by the central view (that is, through the axially-positioned aperture A0 of the view selector 228), and the stereo FOV, the optical information from which is captured in multiple images representing the side views of the object (that is, through those apertures of the view selector of the objective that are not located co-axially with the axis 310 of the objective 204). For illustration, only two of the multiple side-view capturing FOVs are schematically illustrated (those respectively corresponding to the apertures A1 and A2), but additional side views can be captured at different stereo baselines within a full 180° range of orientations, as determined by the particular embodiment of the view selector 228.

[0038] The view deflector system 206 (interchangeably referred to herein as a light deflector system) in this example is shown to be located adjacently to the view selector system 228, and is configured to refract (or, in a related embodiment, reflect) the light acquired within the bounds of a given FOV of a given side view of the object space so that the corresponding side view images are laterally separated at intermediate image surface #1 (IM1) with no or minimal overlapping. Meanwhile, the view deflector system 206, containing an array of optical prism elements D0, D1, ... (respectively, one-to-one corresponding to the apertures of the multi-view selector 228), is configured so as to preferably limit the amount of such ray refraction to the range where the side view images of the object (the two of which are marked M1, M2 in Fig. 3) do not extend beyond the extent and/or boundary(ies) (marked as IIw) of the wide-FOV image formed through the aperture A0, in order to ensure all the views can be imaged by the same relay lens (when the discussed embodiment of the objective is used in a laparoscopic system).

[0039] The relay lens group 208 of the embodiment 200 of the laparoscopic system is appropriately structured to accept and support light received from both the central, wide field of view FOVw corresponding to and defined by the central aperture A0 and the side FOVs corresponding to the apertures A1, A2. Such a relay lens, in operation, forms images of the central and side views of the object at intermediate image surface #2 (IM2, which is optically conjugated with the intermediate image surface #1). To avoid severe light loss from vignetting, it is preferred that the lens system of the objective 204 and that of the relay 208 be designed to be nearly telecentric both in the space of the intermediate image #1 and in the space of the intermediate image #2. Notably, after relaying light to the intermediate image surface #2, the mutual spatial arrangement of the multiple views is still preserved.

[0040] An embodiment 200 of the laparoscopic system of Fig. 2 may be optionally complemented with the auxiliary scanning lens 216, configured to collimate light from the intermediate image #2 so that unnecessary aberrations are avoided when such light further traverses the beam splitter 220 of the optical detection system 212.

[0041] The specific type of the beam splitter 220 that is used generally depends on the principle of operation of the view selector 228, as discussed in more detail below. For instance, if a spectral bandpass type filter is additionally used in the view selector 228, the beam splitter 220 may be equipped with corresponding spectral coatings to separate the images representing an object in different FOVs to be acquired by the different optical sensors of the pairs 224A, 224B, as schematically shown. (In one example, one of the shown two optical sensors may be used to record the wide FOV while the other may be used to record the side views simultaneously for real time 3D imaging.)

[0042] Fig. 3 illustrates an enlarged schematic layout of an embodiment of the MAMEO configured for multi-view acquisition.

[0043] To reiterate, as seen in Fig. 3 in detail, the multi-view selector sub-system of the objective 204 is shown to be made up of multiple aperture stops, and is configured to define and capture light within the wide field of view FOVw defined by the on-axis aperture stop A0 (where the diameter of A0 determines the F/# or numerical aperture of that particular view of the object space). Similarly, the pairs of stereo views are captured by light rays passing through pairs of aperture stops such as A1 and A2 (and/or A3 and A4, etc., when present). Notably, the distance between corresponding pairs of aperture stops determines the effective stereo baseline of the view pair and the optical axis angles, e.g., θOA1 and θOA2, of the resulting side views of the object space.
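The dependence of the stereo baseline and side-view axis angles on the positions of the paired off-axis aperture stops can be sketched numerically under a simple thin-lens approximation. This is a minimal illustration only; the decenter and front-group focal-length values below are hypothetical assumptions, not taken from the disclosed design:

```python
import math

def stereo_geometry(aperture_decenter_mm, front_group_efl_mm):
    """Thin-lens sketch: paired aperture stops decentered by +/-y about the
    objective axis give a stereo baseline of 2*y, and each side view's
    optical axis is tilted by roughly atan(y / f) relative to the axis."""
    baseline = 2.0 * aperture_decenter_mm
    axis_angle_deg = math.degrees(math.atan2(aperture_decenter_mm, front_group_efl_mm))
    return baseline, axis_angle_deg

# Illustrative numbers only (hypothetical 2 mm decenter, 20 mm front group):
b, theta = stereo_geometry(aperture_decenter_mm=2.0, front_group_efl_mm=20.0)
```

Under this sketch, widening the aperture-pair separation increases both the baseline (hence depth sensitivity) and the tilt of the side-view axes, which is the tradeoff the view selector geometry controls.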

[0044] The multi-view selector sub-system of the embodiment of the objective 204 can be optionally complemented with various means for blocking and/or encoding the light traversing the view selector sub-system 228. (Neither of these situations is illustrated in Figs. 2, 3 for the simplicity of illustration.) The light-blocking means may include a mechanical shutter, a liquid-crystal device (LCD), and/or a digital mirror device (DMD) - to name just a few - to facilitate localized control of light transmission and/or reflection through a sub-region of the selector by switching the corresponding region on or off. The light-encoding means may include a custom polarization device or polarizer and/or a color (that is, spectral) filter configured to facilitate localized control of light transmission and/or reflection through a corresponding spatial sub-region by encoding different polarization states and/or spectra across a particular aperture of the view selector. If a blocking means is used, either the central aperture A0 or at least one of the additional apertures Ai+ of the view selector 228 may be blocked in a time-sequential fashion to remove the overlap between the wide FOV image and the side view images formed in image space. (Here, the notation Ai+ refers to all the side views such as A1, A2, A3, etc., other than the center view A0.)

[0045] A skilled person will now appreciate that a blocking-type view selector may understandably be employed to carry out a time-sequential mode of capturing the central view of the object in a wide FOV for peripheral awareness and stereo side views for depth imaging. In such a case, following the formation of the intermediate image #2 by the relay lens (group) 208 of Fig. 2, two different image capture modalities can be implemented (discussed below in reference to the non-limiting example when side views of the object space are acquired with the use of apertures A1 and A2, as illustrated in Fig. 2):

[0046] In the first image capture modality, an image of the object space acquired within the wide FOV through the center aperture A0 and the side view images acquired in different FOVs through the apertures A1, A2 may be captured by the same sensor of the optical detection system 212 (in this case, one of the two optical sensors/detectors shown in Fig. 2, such that one of the corresponding focusing lenses as well as the beam splitter of the optical imaging system of Fig. 2 can be practically omitted). This leads to a simpler system design with lower hardware cost.

[0047] Alternatively, the wide FOV image through the center aperture A0 and the side view images in different FOVs through apertures A1, A2 may be captured with different optical sensors of the optical detection system 212. The main practical advantage of this methodology is the ability to design different focusing lenses and choose sensors of different resolutions so that different spatial resolutions may be obtained for the wide-FOV center view and the stereo side views for 3D imaging.

[0048] If the light-encoding means is used (in addition or alternatively to the blocking means), the image of the object representing the center view through the aperture A0 may be coded in a fashion opposite to (different from) the fashion of encoding of the side views acquired through the apertures Ai+. As a result, the use of a light-encoding-type version of the view selector of the objective facilitates the simultaneous capture of images in the wide-FOV center view for peripheral awareness and images in stereo side views for depth imaging. This type of the view selector may, however, trade off half of each view's irradiance due to the presence of the light-encoding filter. In such a case, following the formation of the intermediate image #2 by the relay lens group 208, the wide FOV image acquired through the center aperture A0 and the side view images acquired through apertures A1, A2 need, of course, to be captured by different optical sensors with the use of the beam splitter 220, as illustrated in the specific example of Fig. 2. (A corresponding polarizing or dichroic beam splitter matching the custom view selector may be used in this case.)

[0049] When either type of multi-view selector sub-system 228 (whether light-blocking or light-encoding) is used, the corresponding multi-view deflector sub-system 206 is made up of individual optical prism and/or diffractive grating deflector elements Di forming an array (here, each Di respectively, one-to-one, corresponds to an aperture Ai of the view selector), as has been already alluded to above. The axially-positioned element D0 is effectively configured as a thin plane-parallel optical plate and generally does not change a direction of propagation of the light rays arriving from the A0 aperture (in the example of Figs. 2, 3), so the resulting image of the object acquired within the wide FOV is substantially centered on axis 310. Generally, D0 could be removed from the array, leaving an air space/gap, but it instead is present in at least some embodiments to provide structural support in the situation when the multi-view deflector sub-system 206 is manufactured as one (optionally, monolithic) piece. Deflector elements D1 and D2 are generally dimensioned to spatially deviate light rays traversing these elements by respective deviation angles, but such deviations are arranged in opposite directions so that the side views M1 and M2 of the object space are accordingly translated apart from one another and do not overlap or have minimal spatial overlap in an image plane. (D3 and D4, etc., when present, also deviate the corresponding optical rays by the same or different angular amounts, depending on the specifics of the design, and in opposite directions so that their respective side view images do not overlap the other side view images.) To achieve good image performance, in the example of Figs. 2 and 3 the objective 204 contains multiple lens groups, shown here as lens group 1 and lens group 2, with the corresponding focal lengths fLG1 and fLG2, respectively, that are placed in front of and behind the assembly of the multi-view selector 228 and the multi-view deflector 206 to provide sufficient degrees of freedom during optimization of the lens design of the objective. Notably, such an arrangement of lens groups of the objective can be adjusted during the optimization. The various distances separating adjacent components of Fig. 3 are constrained by the method of optomechanical mounting and then precisely determined by the optical optimization. The ray bundles from each viewing angle are ideally constrained to separate regions of the multiple lens groups of the objective 204 so that the local regions of the constituent lenses can be optimized to the respective viewing angle.
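The opposite-sign deviations produced by elements such as D1 and D2 can be illustrated with the standard thin-prism approximation. This is a hedged sketch: the apex angle, refractive index, and rear-group focal length below are assumed values for illustration, not the disclosed prescription:

```python
import math

def thin_prism_deviation_deg(apex_angle_deg, n):
    """Thin-prism approximation: deviation delta ~= (n - 1) * alpha."""
    return (n - 1.0) * apex_angle_deg

def image_shift_mm(deviation_deg, rear_group_efl_mm):
    """Lateral shift of a side-view image at the intermediate image plane
    for a deflector placed near the aperture stop: dy ~= f * tan(delta)."""
    return rear_group_efl_mm * math.tan(math.radians(deviation_deg))

# Assumed values: 5 deg apex angle, n = 1.522 glass, 15 mm rear group.
delta = thin_prism_deviation_deg(5.0, 1.522)
# D1 and D2 apply +delta and -delta respectively, separating the two
# side-view images symmetrically about the axis by twice the shift.
dy = image_shift_mm(delta, 15.0)
separation = 2.0 * dy
```

In a real design the deviation must stay small enough that the translated side images remain within the wide-FOV image boundary, as described above.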

[0050] In the following description, the perspective of the following figures is reoriented to view diagrams perpendicularly to the optical axis of the center view of the system (which coincides with looking along the axis 310). Fig. 4 shows a 2D schematic of the multi-aperture, multi-view selector 228. Each view is denoted by [M,N] matrix notation, and A0, A1, and A2 correspond to the aperture stops shown in Fig. 3. The regions in between the apertures are substantially opaque to block light arriving from undesired viewing angles: as a skilled artisan would readily agree, removing unnecessary views of the object space facilitates reduction of the amount of optical constraint on the lens design.

[0051] Fig. 5 shows a 2D schematic of the prismatic version of the multi-view deflector sub-system 206. Generally, at least one of the orientation and the prism angle of different constituent prism deflector elements is different, to achieve the translation of the side view images in different directions and/or have these images possess different irradiance values.

[0052] Figs. 6A, 6B show a 2D layout of the intermediate image 1 plane (IM1) formed by the embodiment of the objective in front of the optical relay 208. Fig. 6A provides the image formed with light acquired within the wide FOV, FOVw, which is bounded and captured by the projected area of one of the optical sensors/detectors of the optical detection system 212. The projected area of the other sensor is preferably made approximately equal in size and directly overlaps the area of the first sensor to capture the side views of the object space. The right side of the layout (Fig. 6B) illustrates how the side view images, highlighted as black squares, are arranged in the intermediate image plane by the multi-view deflector system 206. For simplicity, the arrangement follows the same organization as the multi-aperture selector 228. The central view is blank (i.e., substantially devoid of the image) because it is blocked by the multi-aperture selector and beam splitter 220. The arrangement of the side view images can be changed according to the design requirements of the multi-view deflector 206, but the translation of the images cannot exceed the boundary of a sensor of the optical detection system 212. For 3D depth imaging purposes, only one pair of stereo views is required, so the simplest embodiment of the MAMEO is, understandably, a tri-aperture design (that is, one in which the selector system 228 of the objective 204 contains three apertures, for example, A0, A1, A2).
If light field imaging is desired to enable the additional operational benefits such as, for example, digital refocusing, multi-perspective viewing, and digital obstruction removal, then the dimensionality [M,N] determines the angular resolution of the system. Notably, an operational tradeoff remains because each side view image takes up a certain portion of the sensor area, so the number of side views is inversely proportional to their FOV.
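The sensor-area tradeoff just described can be made concrete with a simple tiling count. The sketch below is illustrative only; the sensor and tile dimensions are hypothetical:

```python
def side_view_grid(sensor_w_px, sensor_h_px, tile_w_px, tile_h_px):
    """Each side view occupies a fixed tile of the shared sensor, so the
    [M, N] angular dimensionality is bounded by how many tiles fit;
    enlarging the per-view FOV (bigger tiles) proportionally reduces
    the number of side views that can be captured simultaneously."""
    return sensor_w_px // tile_w_px, sensor_h_px // tile_h_px

# Hypothetical 1920 x 1080 px sensor with 640 x 360 px side-view tiles:
m, n = side_view_grid(1920, 1080, 640, 360)
```

With these assumed numbers a 3 x 3 arrangement of views fits; halving each tile's linear FOV coverage would double the view count per axis, which is the inverse proportionality noted above.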

[0053] The design of the multi-view deflector sub-system 206 can be changed depending on the specifics of a particular application. To this end, Figs. 7A, 7B, 7C illustrate three alternative but related embodiments of the view deflector that utilize optical prismatic elements. In the embodiment of Fig. 7A, the back faces of the individual constituent deflector elements are all aligned vertically while the front faces are judiciously angled to provide most of the light-ray deviation. This configuration significantly improves manufacturability since only one side requires the angled faces.

[0054] In the middle version of Fig. 7B, the individual deflector elements are flipped about the horizontal plane (as compared with their orientation in Fig. 7A), thereby resulting in a configuration that deflects light internally to the objective and spatially arranges the side view images produced by the objective as shown in Fig. 7A. This orientation of the constituent deflector elements forces the light rays to pass through different portions of the objective's front and back lens groups, so lens optimization and image performance can differ.

[0055] The version shown in Fig. 7C deflects light, incident from the left, in the same fashion as the configuration of Fig. 7B, but employs microprism arrays instead of each of the single-prism constituent deflector elements of Fig. 7B to achieve a thinner, compact design. This configuration also helps remove distortions typically introduced by a light ray bundle passing through a thick prism.

[0056] Instead of using prisms or a micro-prism array, the multi-view deflector system of the optical objective 204 can alternatively be implemented with the use of diffractive gratings, where each of the constituent deflector elements Di (i > 0) is dimensioned as a diffractive grating that diffracts the incident light by an angle matching the angle required for the formation of the corresponding view. Besides diffractive gratings, other ray deflection techniques may also be utilized.
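For a grating-based deflector, the required groove period follows from the standard grating equation. The sketch below (the 3.4 deg deflection angle is an assumed value) also shows why such a deflector is chromatically dispersive: the period that yields a given angle differs per wavelength, unlike the refractive prism case:

```python
import math

def grating_period_um(deflection_deg, wavelength_nm, order=1):
    """Grating equation d * sin(theta) = m * lambda, solved for the period d
    that steers normally incident light of the given wavelength by theta."""
    return order * (wavelength_nm / 1000.0) / math.sin(math.radians(deflection_deg))

# Assumed 3.4 deg deflection evaluated at three visible wavelengths (nm):
periods = {wl: grating_period_um(3.4, wl) for wl in (625, 506, 456)}
# A single fixed-period grating deflects each wavelength by a different
# angle, so a practical design must correct or tolerate this dispersion.
```

This is why the text notes that "other ray deflection techniques may also be utilized": each technique trades off thickness, dispersion, and efficiency differently.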

Example of Embodiments

[0057] To demonstrate manufacturability and workability of an embodiment of the invention, a specific tri-aperture configuration of the MAMEO, the tri-aperture monocular laparoscopic (TAML) objective, has been implemented and is discussed below. In this configuration, stereoscopic views of the object space are simultaneously acquired via two spatially-separated optical apertures (aperture stops), and the wide FOV is acquired via the central third aperture stop, co-axial with the axis of the objective. A custom prism was designed to operate as the view deflector 206. First order specifications of the design are listed in Table 1. As the skilled person having the advantage of this disclosure will now readily appreciate, the challenge of the TAML design is to balance the optical performance between the multiple stereo (side) FOVs (aggregately referred to as SFOV) and the wide FOV (WFOV, or FOVw) of the overall system. The stereo baseline of the TAML is set to be 4 mm, which is comparable to that of commercial 3D endoscopes. According to the mathematical theory presented by E. Kwan et al. (OSA Contin. 3, 194, 2020), such a baseline will provide ~2 mm depth resolution at a working distance of 120 mm, and a higher resolution can be achieved at a shorter working distance. Per aperture stop, the F/# is 5.8, but the entire optical system is effectively F/1.35 because it supports, in a monocular form factor, larger ray angles that come from the stereo aperture stops. The object resolution specification is weighted lower for the WFOV than for the stereo field of view (SFOV) because the WFOV is mainly used for peripheral awareness. Meanwhile, the constraints applicable to conventional rigid laparoscopes are also met. The optical design is constrained for image space telecentricity so that relay lenses can be easily inserted after the objective lens. To account for housing and fiber illumination of 1 mm thickness each, the maximum lens diameter is 8 mm.

[0058]

Table 1. First order objective lens design specifications for a TAML prototype

Working distance:                         120 mm
Stereo baseline:                          4 mm
Stereoscopic full FOV:                    26 deg.
Wide full FOV:                            39 deg.
Wavelengths:                              625, 506, 456 nm
Effective focal length:                   7 mm
Object resolution for stereoscopic view:  6.25 lp/mm
Object resolution for wide view:          2.1 lp/mm
Entrance pupil diameter per aperture:     1.2 mm
Mechanical housing diameter:              12 mm
Telecentricity:                           Image space telecentric
Maximum diameter of lenses:               8 mm
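Several of the first-order specifications are mutually consistent and can be cross-checked with elementary relations. This is a sketch only: the few-micrometer disparity figure follows from a standard stereo triangulation model, not from the exact derivation in the cited reference:

```python
EFL_MM = 7.0        # effective focal length (Table 1)
PUPIL_MM = 1.2      # entrance pupil diameter per aperture (Table 1)
BASELINE_MM = 4.0   # stereo baseline (Table 1)
Z_MM = 120.0        # working distance (Table 1)

# Per-aperture F/# and the effective F/# of the monocular envelope that
# must also pass the decentered stereo bundles (baseline plus one pupil):
f_number = EFL_MM / PUPIL_MM                            # ~ 5.8
effective_f_number = EFL_MM / (BASELINE_MM + PUPIL_MM)  # ~ 1.35

# Standard triangulation estimate dz ~= z**2 * dd / (f * b): achieving
# ~2 mm depth resolution at 120 mm implies resolving image-plane
# disparity changes of about f * b * dz / z**2, i.e. a few micrometers.
disparity_um = (EFL_MM * BASELINE_MM * 2.0 / Z_MM**2) * 1000.0
```

Both quoted F-numbers in paragraph [0057] are reproduced by these relations, which supports reading the effective aperture as the baseline plus one per-aperture pupil diameter.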

[0059] An embodiment of the objective 204 was implemented with the use of custom lenses and prism elements. Fig. 8A illustrates the embodiment 800 containing the first lens group (including lenses 804, 808, 812) and the second lens group (containing lenses 816, 820, 824, and 828), as specified in Table 2 below. Fig. 8A shows the light rays acquired within the WFOV transmitting through the central aperture stop A0 of the view selector sub-system 832. The individual prism deflector element D0 of the multi-view deflector sub-system 836 after the stop is dimensioned as a simple plane parallel plate (a cuboid optical prism), so the light ray angles passing through it are not impacted. The WFOV is modeled as that of a rotationally symmetric system; thus the center of the image acquired within the bounds of the WFOV (the WFOV image) is located on the optical axis of the embodiment of the objective. As the skilled person will immediately recognize, the modulation transfer function illustrated in Fig. 8B shows good WFOV image performance overall at intermediate image plane IM1, although astigmatism somewhat impacts a few of the fields at the higher frequencies. Since the WFOV is for peripheral awareness and high resolution is not the priority, lower contrast for those fields is acceptable. Fig. 8C illustrates the light rays from the SFOV transmitting through the top and bottom aperture stops of the view selector sub-system of the embodiment 800. The individual prism deflector elements corresponding to each of the constituent aperture stops deviate all the incoming rays by the same angle. These deflector elements are designed to have tilted faces in front and back, by analogy with the configuration shown in Fig. 3, to balance out the total amount of refraction required between the two surfaces. Notably, because this system is only bilaterally symmetric, the entire SFOV is modeled for optimization.
Insertion of the deflector sub-system in between the lens group 1 and lens group 2 results in translating the entire SFOV images to the upper or lower side of the optical axis without surpassing the WFOV boundary, thus allowing for simultaneous stereo image pair acquisition on a single sensor.
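The two deflector behaviors described above (a plane-parallel plate leaving ray angles unchanged, and tilted front/back faces sharing the refraction) can be checked with a minimal Snell's-law trace. This is an illustrative sketch, not the design code: the index 1.522 and the tilt angles 3.4003° and -9.4464° are taken from Table 2, while the function name and the sample ray angles are hypothetical.

```python
import math

def trace_through_prism(theta_in_deg, alpha_front_deg, alpha_back_deg, n):
    """Trace a meridional ray (angle measured from the optical axis, in
    degrees) through a prism of index n whose front and back faces are
    tilted by alpha_front and alpha_back from the normal orientation."""
    th = math.radians(theta_in_deg)
    a1 = math.radians(alpha_front_deg)
    a2 = math.radians(alpha_back_deg)
    # Refraction into the glass at the front face (Snell's law)
    th_glass = a1 + math.asin(math.sin(th - a1) / n)
    # Refraction back into air at the back face
    th_out = a2 + math.asin(n * math.sin(th_glass - a2))
    return math.degrees(th_out)

n = 1.522  # deflector glass index from Table 2

# D0: plane-parallel plate (both faces untilted) -> ray angle is unchanged
print(trace_through_prism(5.0, 0.0, 0.0, n))  # → 5.0 (to rounding)

# D1: tilted front and back faces (alpha values from Table 2, surfaces 8
# and 9) deviate the ray; the two surfaces share the total refraction
print(trace_through_prism(5.0, 3.4003, -9.4464, n))
```

A plate with both faces tilted by the same angle also leaves the exit angle equal to the entrance angle, which is why only the difference in face tilts produces net deviation.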

[0060] Table 2. Example of a lens prescription for the embodiment of Figs. 8A, 8C

#   Element                 Curvature radius (mm)   Thickness (mm)   Index   Abbe #   Y decenter (mm)                  Alpha tilt (°)
0   Object                  Plano                   120
1   Lens 804                -27.38368               2                1.517   64.2
2                           11.75065                0.397
3   Lens 808                47.64222                7                1.785   25.7
4                           -24.51140               0.952
5   Lens 812                13.94395                2                1.517   64.2
6                           -37.19550               0
7   Multi-view selector     Plano                   1.216                             A0: 0; A1: 2.5138; A2: -2.5138
8   Multi-view deflector    Plano                   2                1.522   59.5     D0: 0; D1: 2.5138; D2: -2.5138   D0: 0; D1: 3.4003; D2: -3.4003
9                           Plano                   0.388                             D0: 0; D1: 2.5138; D2: -2.5138   D0: 0; D1: -9.4464; D2: 9.4464
10  Lens 816                -20.70934               2.622            1.517   64.2
11                          -11.26596               0
12  Lens 820                12.02222                2.494            1.670   47.1
13                          -10.86683               2                1.923   20.9
14                          19.77593                3.024
15  Lens 824                15.20580                2.153            1.664   33.0
16                          -18.58007               0
17  Lens 828                6.14234                 3.307            1.785   25.7
18                          4.70189                 0.561
19  Intermediate image 1    Plano

[0061] Fig. 8C further illustrates that, for the lenses closest to the aperture stop, the stereo ray bundles occupy a local portion of the lens which is mostly unused by the ray bundles of the WFOV system. This indicates that these lenses have more flexibility to impact the two imaging modalities separately. With lens groups present both in front of and behind the multi-view deflector sub-system, the overall objective 204 is provided with degrees of design freedom to achieve a balanced image performance between the two imaging modalities. Finally, the MTF presented in Fig. 8D shows operationally good overall SFOV image performance.

[0062] Figs. 9A, 9B, 9C, 9D, 9E, and 9F illustrate the configuration and optical performance of a related embodiment 900 of the TAML. Figs. 9A, 9B present the optical train of the objective and the propagation of light through it within the wide FOV and the stereo pair of side FOVs, respectively (by analogy with Figs. 8A, 8B). The lens group 1 (the front lens group, facing the object space) of the embodiment includes lenses 904, 908, 912, while the lens group 2 (following the combination of the multi-view selector array 916 of three apertures A0, A1, and A2 and the deflector sub-system 920) contains four lenses 924, 928, 932, 936. As shown, all lenses are stock (off-the-shelf) lenses except for the meniscus achromat 924 located right after (that is, following) the prismatic deflector sub-system 920. It was found that this achromat and the field lens should be kept in the meniscus shape to maintain good image performance. Since meniscus lenses are uncommon among stock lenses, the achromat was custom made and the field lens was formed using two singlets of the same glass. Fig. 9B shows schematically the ray bundles from both side views to demonstrate simultaneous stereo image capture.

[0063] To manufacture the multi-view deflector 920, three individual prisms were combined into one obtuse-angle prism element with the top flattened out (by analogy with the structure of Fig. 7B). Therefore, most of the optical ray deviation upon propagation of light through the deflector 920 is performed by the front surface of the element 920, while the back surface remains substantially flat and is optically shared among the light bundles propagating through all three aperture stops A0, A1, and A2. In comparison with the deflector sub-system of Figs. 8A, 8C, the deflector 920 is configured to separate/shift the corresponding stereo sub-images in opposite directions from the optical axis rather than in the same direction.

[0064] The MTF of Fig. 9C corresponds to the imaging through the wide FOV (Fig. 9A), while the MTF of Fig. 9D corresponds to the imaging through the side FOVs (Fig. 9B). Comparing the MTFs of Figs. 8B, 8D with those of Figs. 9C, 9D, respectively, shows that the replacement of custom lenses with stock lenses degrades the image performance, but not enough to be considered operationally detrimental. In Figs. 9E, 9F, the tolerance analyses of the imaging within the wide FOV and the side FOVs, respectively, at 110 lp/mm in image space confirm that the optical design 900 is expected to maintain sufficiently robust performance after being assembled. This frequency corresponds to > 0.1 contrast at 6.12 lp/mm in object space, which approximately meets the specification for the SFOV and exceeds it for the WFOV.

[0065] Table 3. Lens prescription for the embodiment of Figs. 9A, 9B

#   Element                 Curvature radius (mm)   Thickness (mm)                    Index   Abbe #   Y decenter (mm)                  Alpha tilt (°)
0   Object                  Plano                   120
1   Lens 904                -18.86                  1.5                               1.517   64.2
2                           18.86                   0.474
3   Lens 908                Plano                   5                                 1.785   25.7
4                           -31.39                  0.215
5   Lens 912                27.43                   2.74                              1.517   64.2
6                           -27.43                  1.016
7   Multi-view selector     Plano                   A0: 0.75; A1: 1.011; A2: 1.011                     A0: 0; A1: 2.5103; A2: -2.5103
8   Multi-view deflector    Plano                   D0: 2.261; D1: 2.001; D2: 2.001   1.517   64.2     D0: 0; D1: 2.5103; D2: -2.5103   D0: 0; D1: -11.7967; D2: 11.7967
9                           Plano                   2.276                                              D0: 0; D1: 2.5103; D2: -2.5103
10  Lens 924                11.37                   1.93                              1.517   64.2
11                          Plano                   1.869
12  Lens 928                -8.6                    1                                 1.847   23.8
13                          8.13381                 3.8                               1.806   40.9
14                          -9.73765                0
15  Lens 932                12.92                   2                                 1.517   64.2
16                          Plano                   1.519
17  Lens 936                7.85                    2.7                               1.785   25.7
18                          Plano                   2.25                              1.785   25.7
19                          9.42                    1.238
20  Sensor cover glass      Plano                   0.47                              1.517   64.2
21                          Plano                   0.35
22  Intermediate image 1    Plano

[0066] Fig. 10A illustrates simulated images of portions of an object seen from the center and top stereo aperture stops (A0, A1) of the embodiment 900 equipped with stock lenses. Here, the particular object is represented by a planar grid 1010 disposed substantially perpendicularly to the optical axis at the designed working distance. The simulation, which utilized the computation of point-spread functions vs. wavelengths of light across the FOV (convolving these with the object field, as well as distorting the object field according to chief rays traced through the system), is seen to produce realistic images. The circular boundary 1020 along with the inscribed contents represents the image acquired within the WFOV. In the central portion of this image (bound by the closed line 1024), the side image acquired through the SFOV of the top aperture A1 is overlaid to visualize the difference between the two FOVs and their corresponding distortions. The dashed rectangle 1028 indicates the area where the top and bottom stereo image pair for the designed SFOV overlap that acquired within the WFOV. The SFOV is effectively doubled in the vertical direction without exceeding the WFOV, so as to acquire the image pair simultaneously. To simulate a range of depth information, the planar grid in the SFOV was tilted dramatically, by 67.5° counterclockwise, about the axis going into the plane of the Figure. The same image simulation process was performed on this tilted object field for the top and bottom stereo apertures, as shown in Fig. 10B. The spatially matching white reference lines in each view help to visualize parallax and distortion, as indicated by the difference in spacing of the grid in the vertical direction.
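The convolution step of the image simulation described above can be sketched in a highly simplified form. The actual simulation used field- and wavelength-dependent point-spread functions plus chief-ray distortion; the sketch below assumes a single stationary Gaussian PSF and a synthetic grid object (all function names, array sizes, and parameter values are hypothetical, purely for illustration).

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Small normalized Gaussian kernel standing in for a ray-traced PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def simulate_image(obj, psf):
    """Convolve the object field with the PSF (direct 2D convolution,
    edge-padded so the output matches the object size)."""
    pad = psf.shape[0] // 2
    padded = np.pad(obj, pad, mode="edge")
    out = np.zeros_like(obj, dtype=float)
    for dy in range(psf.shape[0]):
        for dx in range(psf.shape[1]):
            out += psf[dy, dx] * padded[dy:dy + obj.shape[0],
                                        dx:dx + obj.shape[1]]
    return out

# Object: a coarse planar grid like the one in Fig. 10A (hypothetical sizes)
obj = np.zeros((64, 64))
obj[::8, :] = 1.0
obj[:, ::8] = 1.0

img = simulate_image(obj, gaussian_psf(9, 1.5))
print(img.shape)  # → (64, 64)
```

Because the kernel is normalized, the simulated image stays within the intensity range of the object; a field-dependent simulation would instead apply a different PSF per field position and wavelength before summing.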

[0067] Fig. 10B does not show any overlap of the two stereo view images on the image sensor (optical detector) because the planar grid-like object 1010 is limiting the SFOV. However, it is understood that if the SFOV were sufficiently large, the two stereo images acquired through A1 and A2 would begin to overlap. The amount of overlap is simulated by assessing how much one stereo image crosses onto the other half of the optical detector, as shown in Fig. 11A, when the object field is sufficiently large. By symmetry, the other stereo view image would overlap just as much. The amount of overlap past the midline is significant and would corrupt a major portion of each stereo view. Because the multi-view deflector sub-system 920 redirects light to form side images at the opposite sides of the optical axis, a vignetting aperture 940 may (optionally) be inserted right after the prism 920 in the embodiment of Fig. 9B to significantly reduce the overlap between the two side images of the object space (with results presented in Fig. 11B). The optional use of the vignetting aperture ensures that at least the majority of the optical information of each of the stereo images acquired within the designed SFOV is preserved. However, as a skilled person will appreciate, this vignetting solution would not work when a deflector structured according to the principle of the deflector of Fig. 8C is used, because it would result in vignetting at the edges of the optical sensor instead of in the central region, where the actual overlap of the side images occurs. Thus, every different multi-view deflector design may generally require its own unique vignetting solution to address image overlap.

[0068] Additionally, an embodiment of a basic lens housing was designed and 3D-printed for assembling the TAML objective prototype; see Figs. 12A, 12B and 13A, 13B. The lens mounting features did not introduce any additional vignetting. The second stock lens prescription was not available with a 9 mm diameter, hence the diameter of the housing in the front of the objective had to be increased. The housing contained railings to align the aperture stops, the prismatic multi-view deflector, and the optical sensor. Rectangular aperture blockers were inserted into the housing to block either the WFOV or the SFOV apertures, resulting in time-sequential acquisition between the two imaging modalities. There was an additional slot after the prism to insert a vignetting aperture to reduce the image overlap between the stereo image pair at the center of the sensor. For prototype evaluation, a real sensor was placed at the location of the intermediate image 1 (the surface IM1) to avoid the additional costs of implementing suitable relay lenses. The three apertures A0, A1, and A2 (for the WFOV and SFOVs) can be seen clearly in the front view of Fig. 13A, and the rectangular blockers are shown in the side view, Fig. 13B.

[0069] Fig. 14 (providing a group of images (a) through (f)) illustrates imaging data acquired with the use of a working TAML prototype. Here, the object is a 2D picture of a tree oriented substantially perpendicularly to the optical axis of the objective. Fig. 14 shows the WFOV while the SFOV apertures are blocked (image (a)); the SFOV captured by both stereo apertures simultaneously with the vignetting aperture present (image (b)) and without the vignetting aperture present (image (c)) while the WFOV aperture is blocked; all three apertures unblocked (image (d)); and one stereo aperture (image (e)) and the other (image (f)) captured alone. These data provide a plethora of information. In particular, the WFOV (image (a)) is twice the SFOV in the vertical direction when the stereo images are captured simultaneously (image (b)). The contrast along the horizontal line splitting the optical sensor in half increases with the use of the vignetting aperture (image (b)). Without the use of blocking or encoding of the light passing through the objective, the central image acquired through the WFOV and the stereo views acquired through the SFOVs overlap, as in image (d). The amount of overlap from each stereo image is seen in images (e) and (f). The image quality also appears sufficiently high, as predicted during the lens design phase.

[0070] Processing the stereo images generally requires camera and distortion calibration for absolute disparity and depth mapping. For two independent cameras combined to provide a binocular-type solution of related art, calibrations of each of the constituent cameras have been thoroughly performed. However, the embodiment of the invention (TAML) effectively creates two virtual cameras with the same optical sensor orientation but different distortion models. The conventional calibration of embodiments of related art assumes that the different constituent cameras are characterized by rotationally symmetric distortion, so the distortion can be modeled with a radial polynomial. In contradistinction, the TAML captures each stereo image with an off-axis aperture (A1, A2), so the distortion model is bilaterally symmetric and includes additional distortion from the thickness of the prismatic deflector sub-system of the objective of the TAML. Here, as a preliminary check, a relative disparity map can still be computed since the 3D information is present. Fig. 15A shows a stereo anaglyph with vertical lines indicating the direction of the stereo baseline and corresponding image features from each stereo image. The distance between the corresponding features can be calculated to determine a relative disparity map, as shown in Fig. 15B. The side bar indicates that elements marked with a brighter shade are closer and darker elements are farther from the camera. This implies that the top right corner of the checkerboard is tilted towards the camera to some degree.
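A relative disparity map of the kind described above can be sketched with simple 1-D block matching along the stereo baseline (vertical here, matching the top/bottom stereo apertures). This is an uncalibrated, illustrative sketch under an assumed synthetic image pair, not the calibrated pipeline with distortion correction that the text describes; all names and parameters are hypothetical.

```python
import numpy as np

def relative_disparity(left, right, patch=7, max_shift=10):
    """Crude relative-disparity map: for each pixel, find the vertical
    shift of a small patch that minimizes the sum of absolute
    differences between the two stereo views."""
    h, w = left.shape
    half = patch // 2
    disp = np.zeros((h, w))
    for y in range(half, h - half - max_shift):
        for x in range(half, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            errs = [np.abs(ref - right[y + s - half:y + s + half + 1,
                                       x - half:x + half + 1]).sum()
                    for s in range(max_shift)]
            disp[y, x] = int(np.argmin(errs))
    return disp

# Synthetic pair: the second view is the first shifted down by 3 pixels
rng = np.random.default_rng(0)
left = rng.random((40, 40))
right = np.roll(left, 3, axis=0)

d = relative_disparity(left, right)
# Interior pixels recover the 3-pixel shift
print(np.median(d[10:25, 10:30]))  # → 3.0
```

A real TAML pipeline would first undistort each virtual-camera view with its bilaterally symmetric distortion model before matching, since the raw disparities otherwise mix depth parallax with the prism-induced distortion difference.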

[0071] References throughout this specification to "one embodiment," "an embodiment," "a related embodiment," or similar language mean that a particular feature, structure, or characteristic described in connection with the referred-to "embodiment" is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment," "in an embodiment," and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment. It is to be understood that no portion of this disclosure, taken on its own and in possible connection with a figure, is intended to provide a complete description of all features of the invention.

[0072] Within this specification, embodiments have been described in a way that enables a clear and concise specification to be written, but it is intended, and will be appreciated, that embodiments may be variously combined or separated without departing from the scope of the invention. In particular, it will be appreciated that all features described herein are applicable to all aspects of the invention.

[0073] For the purposes of this disclosure and the appended claims, the use of the terms "substantially", "approximately", "about" and similar terms in reference to a descriptor of a value, element, property or characteristic at hand is intended to emphasize that the value, element, property, or characteristic referred to, while not necessarily being exactly as stated, would nevertheless be considered, for practical purposes, as stated by a person of skill in the art. These terms, as applied to a specified characteristic or quality descriptor, mean "mostly", "mainly", "considerably", "by and large", "essentially", "to a great or significant extent", "largely but not necessarily wholly the same", such as to reasonably denote language of approximation and describe the specified characteristic or descriptor so that its scope would be understood by a person of ordinary skill in the art. In one specific case, the terms "approximately", "substantially", and "about", when used in reference to a numerical value, represent a range of plus or minus 20% with respect to the specified value, more preferably plus or minus 10%, even more preferably plus or minus 5%, most preferably plus or minus 2% with respect to the specified value. As a non-limiting example, two values being "substantially equal" to one another implies that the difference between the two values may be within the range of +/- 20% of the value itself, preferably within the +/- 10% range of the value itself, more preferably within the range of +/- 5% of the value itself, and even more preferably within the range of +/- 2% or less of the value itself.

[0074] The use of these terms in describing a chosen characteristic or concept neither implies nor provides any basis for indefiniteness and for adding a numerical limitation to the specified characteristic or descriptor. As understood by a skilled artisan, the practical deviation of the exact value or characteristic of such value, element, or property from that stated falls within, and may vary within, a numerical range defined by an experimental measurement error that is typical when using a measurement method accepted in the art for such purposes.

[0075] The term “and/or”, as used in connection with a recitation involving an element A and an element B, covers embodiments having element A alone, element B alone, or elements A and B taken together.

[0076] While the invention is described through the above-described exemplary embodiments, it will be understood by those of ordinary skill in the art that modifications to, and variations of, the illustrated embodiments may be made without departing from the inventive concepts disclosed herein. Disclosed aspects, or portions of these aspects, may be combined in ways not listed above. Accordingly, the invention should not be viewed as being limited to the disclosed embodiment(s).