Title:
SYSTEMS AND METHODS FOR CHARACTERIZATION OF AN ENDOSCOPE AND AUTOMATIC CALIBRATION OF AN ENDOSCOPIC CAMERA SYSTEM
Document Type and Number:
WIPO Patent Application WO/2021/071988
Kind Code:
A1
Abstract:
Systems and methods for determining the calibration of an endoscopic camera consisting of a camera-head equipped with exchangeable, rotatable optics (the rigid endoscope). Calibration is accomplished with no user intervention in the Operating Room by first characterizing the rigid endoscope through a set of parameters Φ and then using this lens descriptor Φ as input to real-time software that processes the images acquired by an arbitrary camera-head equipped with the rigid endoscope, to provide the calibration of the complete endoscopic camera arrangement at every frame time instant, irrespective of the relative rotation of the lens scope with respect to the camera-head or of the zoom settings. Also disclosed is an image software method that detects if the lens descriptor is not compatible with the rigid endoscope in use, to prevent errors and warn the user about faulty situations.

Inventors:
DE ALMEIDA BARRETO JOÃO PEDRO (PT)
DOS SANTOS RAPOSO CAROLINA (PT)
ALMEIDA ANTUNES MICHEL GONCALVES (PT)
TEIXEIRA RUI JORGE MELO (PT)
Application Number:
PCT/US2020/054636
Publication Date:
April 15, 2021
Filing Date:
October 07, 2020
Assignee:
S&N ORION PRIME S A (PT)
International Classes:
A61B5/04
Domestic Patent References:
WO2018232322A12018-12-20
Foreign References:
US20100168562A12010-07-01
Other References:
MELO R; BARRETO J P; FALCAO G: "A new solution for camera calibration and real-time image distortion correction in medical endoscopy-initial technical evaluation", 2011, XP011489985, Retrieved from the Internet [retrieved on 20201220]
See also references of EP 4041049A4
Attorney, Agent or Firm:
HAINER, JR., Norman F. et al. (US)
Claims:
SYSTEMS AND METHODS FOR CHARACTERIZATION OF AN ENDOSCOPE AND AUTOMATIC CALIBRATION OF AN ENDOSCOPIC CAMERA SYSTEM

CLAIMS

What is claimed is:

1. A method for calibrating an endoscopic camera comprising a rigid endoscope and a camera, the camera comprising a camera-head, the rigid endoscope comprising a lens scope, wherein the rigid endoscope or the lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle δ around a mechanical axis that intersects the image plane in point Q, wherein C, P and a principal point O undergo a 2D rotation of the same angle δ around Q, the method comprising:
acquiring one or more first calibration images of a calibration object with the endoscopic camera at a reference angular position;
determining a first estimate of a focal length f, a distortion ξ, a rotation center Q, and a principal point O0 of the endoscopic camera with respect to the camera for each first calibration image;
detecting a boundary with center Ci and notch Pi on the calibration images according to an image processing method;
performing an iterative process comprising iterative performance of, one or more times:
acquiring one or more further calibration images of a calibration object with the endoscopic camera at a further angular position of the lens scope with respect to the camera-head;
determining an estimate of an angular displacement between the reference angular position and the further angular position;
determining a further estimate of the focal length f, the distortion ξ, the rotation center Q, and the principal point O0 of the endoscopic camera with respect to the camera for each further calibration image of the iteration; and
detecting a boundary with center Ci and notch Pi on the further calibration images of the iteration according to the image processing method; and
refining f, ξ, and O0 according to the further estimates of the focal length f, the distortion ξ, the rotation center Q, and the principal point O0 and according to the estimated angular displacements.

2. The method of claim 1, further comprising: determining a first estimate of a 3D pose of the calibration object, the 3D pose comprising a rotation and a translation, for each first calibration image; wherein the iterative process further comprises: determining a further estimate of a 3D pose of the calibration object, the 3D pose comprising a rotation and a translation, for each further calibration image of the iteration; wherein refining f, x, and Oo is further according to the further estimates of the 3D pose of the calibration object.

3. The method of claim 1, wherein the calibration object comprises one or more of: a 2D plane with a checkerboard pattern; a 2D plane with a known pattern; or a known 3D object.

4. The method of claim 1, wherein refining the calibration parameters comprises one or more of:

(a) an iterative non-linear minimization of a reprojection error;

(b) an iterative non-linear minimization of a photogeometric error; or

(c) a cost function minimization that is not (a) and is not (b).

5. The method of claim 1, wherein the first estimate of the calibration parameters comprises a distortion model comprising one or more of: a Brown’s polynomial model; a rational model; a fish-eye model; or a division model with one or more parameters.

6. A method for updating calibration parameters of an endoscopic camera, the endoscopic camera comprising a rigid endoscope and a camera, the camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope or lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle δ around a mechanical axis that intersects the image plane in point Q, wherein C, P and the principal point O undergo a 2D rotation of the same angle δ around Q, and wherein the calibration parameters focal length f, distortion ξ, rotation center Q and principal point O0, as well as a boundary with center C0 and a notch P0, for a reference angular position i=0 of the lens scope with respect to the camera-head are known, the method comprising:
acquiring a new frame j with the endoscopic camera;
detecting a boundary center Cj and a notch Pj in the new frame j;
estimating an angular displacement δ of the endoscopic lens with respect to the camera-head according to notch P0, notch Pj and Q; and
estimating an updated principal point Oj of the endoscopic camera by performing a 2D rotation of the principal point O0 around Q by an angle δ.

7. The method of claim 6, wherein the focal length f, distortion ξ, rotation center Q and principal point O0, as well as a boundary with center C0 and a notch P0, at the reference angular position i=0 are obtained by calibrating the endoscopic camera with the lens scope at the reference position or by retrieval from the CCU.

8. The method of claim 6, wherein the rotation center Q is determined using one or more of: three or more boundary centers Cj; three or more notches Pj; or at least one boundary center Cj and at least one notch Pj.

9. The method of claim 6, further comprising: filtering the estimation of the rotation center Q and the angular displacement δ according to one or more of a recursive filter or a temporal filter.

10. A method for characterizing a rigid endoscope with a Field Stop Mask (FSM) that induces an image boundary with center C and a notch P by obtaining a descriptor Φ comprising a normalized focal length f̄, a distortion ξ, a normalized principal point Ō and a normalized rotation center Q̄, the method comprising:
estimating calibration parameters (focal length f, distortion ξ, principal point O0 and rotation center Q) of a characterization camera at a reference position, the characterization camera comprising the rigid endoscope and a camera;
detecting a boundary with center C0 and a notch P0 at the reference position; and
determining the normalized focal length f̄, the normalized principal point Ō, and the normalized rotation center Q̄ according to center C0, notch P0, focal length f, principal point O0 and rotation center Q.

11. The method of claim 10, wherein the normalized focal length f̄ is computed as f̄ = f/r, with r being the distance between center C0 = [Cx, Cy, 1]^T and notch P0, and the normalized principal point Ō and rotation center Q̄ are obtained by computing Ō = A O0 and Q̄ = A Q, respectively, with

A = [  cos β/r    sin β/r   −(Cx cos β + Cy sin β)/r
      −sin β/r    cos β/r    (Cx sin β − Cy cos β)/r
          0           0                  1           ]

and β being the angle between line C0P0 and the down direction.

12. A method for calibrating an endoscopic camera, the endoscopic camera comprising a rigid endoscope and a camera, the camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ comprising a normalized focal length f̄, a distortion ξ, a normalized principal point Ō and a normalized rotation center Q̄ and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, the method comprising:
acquiring an image frame i with the endoscopic camera;
detecting a boundary center Ci and a notch Pi in the image frame i; and
estimating a focal length f, a rotation center Q, and a principal point O according to center Ci, notch Pi, the normalized focal length f̄, the normalized principal point Ō, and the normalized rotation center Q̄.

13. The method of claim 12, further comprising determining a radius r of the image frame.

14. The method of claim 13, wherein the focal length f, the principal point O and the rotation center Q are computed by f = r f̄, O = B Ō and Q = B Q̄, respectively, with

B = [  r cos α    r sin α    Cx
      −r sin α    r cos α    Cy
          0           0       1  ]

and α being the angle between line CiPi and the down direction.

15. The method of claim 14, wherein the endoscopic lens descriptor Φ is obtained by one or more of: using a camera; measuring the endoscopic lens using one or more of a caliper, a micrometer, a protractor, a gauge, or a robotic measurement apparatus; or using a computer-aided design (CAD) model of the endoscopic lens.

16. The method of claim 12, wherein the endoscopic lens descriptor Φ is obtained by one or more of: loading information into the CCU from a database; reading a QR code; obtaining information from a USB flash drive; receiving manual input of information from a user; reading engravings in the FSM; obtaining information from an RFID tag; or obtaining information over an internet connection.

17. The method of claim 12, wherein frame i comprises two or more frames, wherein the rotation center Q is determined using one or more of: three or more boundary centers Ci; three or more notches Pi; or at least one boundary center Ci and at least one notch Pi.

18. A method for detecting an anomaly in an endoscopic camera, the endoscopic camera comprising a rigid endoscope and a camera, the camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ comprising a normalized rotation center Q̄ and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, the method comprising:
acquiring at least two image frames by the endoscopic camera, wherein each of the at least two image frames is captured with the rigid endoscope in a different position with respect to the camera-head relative to every other one of the at least two image frames;
detecting boundary centers Ci and notches Pi for each of the at least two image frames;
estimating a first rotation center Q using one or more of the detected boundary centers Ci or the detected notches Pi;
estimating a second rotation center Q according to the normalized rotation center Q̄, the boundary centers Ci, and the notches Pi;
comparing the first estimated rotation center Q to the second estimated rotation center Q; and
determining that an anomaly exists according to the comparison.

19. The method of claim 18, wherein the endoscopic lens descriptor F is obtained by one or more of: loading information into the CCU from a database; reading a QR code; obtaining information from a USB flash drive; receiving manual input of information from a user; reading engravings in the FSM; obtaining information from an RFID tag; or obtaining information over an internet connection.

20. The method of claim 18, wherein the comparing the first estimated rotation center Q to the second estimated rotation center Q and determining that an anomaly exists according to the comparison are performed by one or more of an algebraic function, a classification scheme, a statistical model, a machine learning algorithm, thresholding, or data mining.

21. The method of claim 18, further comprising comparing a boundary detected at calibration time to a boundary detected during operation for identifying a cause of the anomaly.

22. The method of claim 21, further comprising providing an alert message to the user identifying the cause of the anomaly.

23. A method for detecting an image boundary with center C and a notch P in a frame acquired by using a rigid endoscope that has a Field Stop Mask (FSM) that induces the image boundary with center C and the notch P, the method comprising:
using an initial estimation of the boundary with center C and notch P to render a ring image by interpolating and concatenating image signals extracted from the acquired frame at concentric circles centered in C, wherein the notch P is mapped to the center of the ring image;
detecting two or more edge points in the ring image;
repeating the following until the detected two or more edge points are collinear:
mapping the detected two or more edge points into a space of the acquired frame and fitting a fitted circle with center C to the mapped edge points;
rendering a new ring image by making use of the fitted circle; and
re-detecting two or more edge points in the new ring image; and
detecting the notch P in the final ring image using correlation with a known template.

24. The method of claim 23, wherein the FSM contains more than one notch, each notch unique in size, shape, or size and shape from each other notch.

25. The method of claim 23, wherein the initial estimation of the boundary with center C and notch P is obtained according to one or more of deep learning, machine learning, image processing, a statistical-based approach, or a random approach.

26. The method of claim 23, wherein one or more of the detected center C or the detected notch P is used for the estimation of the angular displacement of the rigid endoscope with respect to the camera-head.

27. A method for updating calibration parameters of an endoscopic camera, the endoscopic camera comprising a rigid endoscope and a camera, the camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope or lens scope can rotate with respect to the camera-head by an angle δ around a mechanical axis that intersects the image plane in point Q, wherein the principal point O undergoes a 2D rotation of the same angle δ around Q, wherein the endoscopic camera is equipped with a mechanism for measuring angle δ, and wherein the calibration parameters focal length f, distortion ξ, rotation center Q and principal point O0 for a reference angular position i=0 of the lens scope with respect to the camera-head are known, the method comprising:
estimating an angular displacement δ of the endoscopic lens with respect to the camera-head using the mechanism; and
estimating an updated principal point Oi of the endoscopic camera by performing a 2D rotation of the principal point O0 around Q by an angle δ.

28. The method of claim 27, wherein the mechanism for measuring angle δ is one or more of: a rotary encoder attached to the camera-head; or an optical tracking system for determining the position of an optical marker attached to the rigid endoscope.

Description:
SYSTEMS AND METHODS FOR CHARACTERIZATION OF AN ENDOSCOPE AND AUTOMATIC CALIBRATION OF AN ENDOSCOPIC CAMERA SYSTEM

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to and the benefit of U.S. Provisional Patent Application No. 62/911,950, filed October 7, 2019 (“’950 application”) and U.S. Provisional Patent Application No. 62/911,986, filed October 7, 2019 (“’986 application”). The ’950 application and ’986 application are hereby incorporated herein by reference in their entireties for all purposes.

FIELD

[0002] The disclosure generally relates to the fields of computer vision and photogrammetry. In particular, but not by way of limitation, the presently disclosed embodiments are used in the context of clinical procedures of surgery and diagnosis for the purpose of calibrating endoscopic camera systems with exchangeable, rotatable optics (the rigid endoscope), identifying whether a particular endoscope is in use, or verifying that it is correctly assembled to the camera-head. These endoscopy systems are used in several medical domains, such as orthopedics (arthroscopy) or abdominal surgery (laparoscopy), and the camera calibration enables applications in Computer-Aided Surgery (CAS) and enhanced visualization.

BACKGROUND

[0003] Video-guided procedures, such as arthroscopy and laparoscopy, make use of a video camera equipped with a rigid endoscope to provide the surgeon with the possibility of visualizing the interior of the anatomical cavity of interest. The rigid endoscope, which, depending on the medical specialty, can be an arthroscope, laparoscope, neuroscope, etc., is combined with a camera comprising a camera-head and a Camera Control Unit (CCU), to form an endoscopic camera. These cameras are different from conventional ones mainly because of two characteristics. The first one is that the rigid endoscope, also referred to as lens scope or optics, is usually exchangeable for the sake of easy sterilization, with the endoscope being attached to the camera-head by the surgeon in the Operating Room (OR) before starting the medical procedure. This attachment is accomplished with a connector that allows the endoscopic lens to rotate with respect to the camera-head around its symmetry axis (the mechanical axis in FIG. 2), allowing the surgeon to change the direction of viewing without having to move the endoscopic camera in translation. The second distinctive characteristic of an endoscopic camera is that it usually contains a Field Stop Mask (FSM) somewhere along the image forwarding system that causes the acquired image to have meaningful content in a circular region which is surrounded by a black frame. The FSM usually contains a mark in the circular boundary whose purpose is to allow the surgeon to infer the down direction. This mark in the periphery of the circular image will be henceforth referred to as the notch (FIG. 2).

[0004] An important enabling step for computer-aided arthroscopy or laparoscopy is camera calibration, such that 2D image information can be related with the 3D scene for the purpose of enhanced visualization, improved perception, measurement and/or navigation. The calibration of the camera system, which in this case comprises a camera-head equipped with a lens scope, consists of determining the parameters of a projection model that maps projection rays in 3D into points in pixel coordinates in the image, and vice-versa (FIG. 1). In the context of medical endoscopy, the applications of calibration are vast, ranging from distortion correction and rendering of virtual views for enhanced visualization, to surgical navigation, where the camera is used to measure 3D points and distances and the relevant information is typically overlaid with the patient’s anatomy.

[0005] In endoscopic cameras with rotatable optics, the motion between the rigid endoscope and the camera sensor causes changes in the calibration parameters of the camera system, which means that the projection model is not constant along time as it is in conventional cameras. Since it is impractical to perform independent calibration for every possible position of the optics with respect to the camera-head, the calibration parameters must be updated according to a camera model that accounts for this relative motion. Solutions for determining this relative rotation and updating the calibration accordingly have been proposed in the literature, with examples including the use of a rotary encoder attached to the camera head [1], or the employment of an optical tracking system for determining the position of an optical marker attached to the scope cylinder [2]. These approaches present a serious drawback, namely the need for additional equipment and instrumentation that is costly, occupies space in the OR, and disrupts the established surgical workflow.

[0006] U.S. Patent No. 9,438,897 discloses a method to solve some of the aforementioned issues for accomplishing endoscopic camera calibration without requiring any additional instrumentation. The rotation of the lens scope is estimated at each frame time instant using image processing and the result is used as input into a model that, given the camera calibration at a particular angular position of the lens scope with respect to the camera-head (the reference position), outputs the calibration at the current angular position. However, this method presents the following drawbacks: (i) calibration at the reference position requires the acquisition of one or more frames of a known checkerboard pattern (the calibration grid) in the OR, which, besides requiring user intervention, is typically a time-consuming process that must be performed with a sterile grid, and thus is undesirable and should be avoided; (ii) the disclosed method does not allow changes in the optical zoom during operation, as it is not capable of updating the calibration parameters to different zoom levels; and (iii) it requires the lens scope to only rotate, and never translate, with respect to the camera-head, with the point where the mechanical axis intersects the image having to be explicitly determined.

[0007] The presently disclosed embodiments relate to a method that avoids the need to calibrate the endoscopic camera at a particular angular position in the OR after assembling the rigid scope in the camera-head. This patent discloses models, methods and apparatuses for characterizing a rigid endoscope in a manner that enables determining the calibration of any endoscopic camera system comprising the endoscope, independently of the camera-head that is being used, the amount of zoom introduced by that camera-head, and the relative rotation or translation between the scope and the camera-head at a particular frame time instant. This allows the surgeon to change the endoscope and/or the camera-head during the surgical procedure and adjust the zoom as desired for better visualization of certain image contents, without causing any disruption to the workflow of the procedure.

[0008] The present disclosure shows how to calibrate the rigid endoscope alone to obtain a set of parameters - the lens descriptor - that fully characterizes the optics. The lens calibration is performed in advance (e.g., at the moment of manufacture) and the descriptor is then loaded into the Camera Control Unit (CCU) to be used as input to real-time software that automatically provides the calibration of the complete endoscopic camera arrangement, comprising both camera-head and lens, at every frame instant, irrespective of the relative rotation between the two components. This is accomplished in a manner seamless to the user.

[0009] Since this descriptor characterizes a lens or a batch of lenses, it can also be used for the purposes of identification and quality control. Thus, building on this functionality, a method is also disclosed for detecting inconsistencies between the lens descriptor loaded in the CCU and the actual rigid endoscope assembled in the camera-head. This method is useful for warning the user if the lens being used is not the correct one and/or if it is damaged or has not been properly assembled in the camera-head.

[0010] The present disclosure can be used in particular, but not by way of limitation, in conjunction with the methods disclosed in US Patent No. 9,438,897 to correct image radial distortion and enhance visual perception, or with the methods disclosed in US 20180071032 A1 to provide guidance and navigation during arthroscopy, for the purpose of accomplishing camera calibration at every frame time instant, which is a requirement for those methods to work.

SUMMARY

[0011] Systems, methods and apparatuses for determining the calibration of an endoscopic camera consisting of a camera-head equipped with exchangeable, rotatable optics (the rigid endoscope), such that 2D image points can be related with 3D projection rays in applications of computer-aided surgery, and where the rigid endoscope is characterized in advance (e.g., in the factory at manufacture time) to accomplish camera calibration without requiring any user intervention in the Operating Room (OR).

[0012] A method for characterizing a rigid endoscope through a set of parameters Φ that are then used as input to real-time software that processes the images acquired by an arbitrary camera-head equipped with the rigid endoscope to provide the calibration of the complete endoscopic camera arrangement at every frame time instant, irrespective of the relative rotation between the lens scope and the camera-head or zoom settings.

[0013] An image software method detects if the lens descriptor Φ is not compatible with the rigid endoscope in use, which is useful to prevent errors and warn the user about faulty situations such as usage of the incorrect lens, defects in the lens or camera-head or improper assembly of the lens in the camera-head.

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] For a more complete understanding of the present disclosure, reference is made to the following detailed description of exemplary embodiments considered in conjunction with the accompanying drawings.

[0015] FIG. 1 is an embodiment of a camera projection model depicting the mapping of a projection ray, that is defined by 3D point X in the scene and the projection center of the camera, onto a 2D point x in pixel coordinates in the image plane. The model can be seen as a composition of a projection P according to the pin-hole model that maps the 3D point X onto 2D, a function G that accounts for the non-linear effect of radial distortion introduced by the optics that is quantified by ξ, and the intrinsic parameters of the camera K(f, O) that depend on the focal length f and principal point O where the optical axis is projected onto the image.

[0016] FIG. 2 shows exemplary embodiments of the image formed by an endoscopic camera with exchangeable, rotatable optics (the rigid endoscope). The lens is cylindrically shaped and rotates around its longitudinal, symmetry axis, henceforth referred to as the mechanical axis, that intersects the image plane in the rotation center Q. The rigid endoscope has a Field Stop Mask (FSM) somewhere along the image forwarding system that is projected onto the image plane as a black frame around a circle Ω with center C (the circular boundary) that contains the meaningful visual contents (refer to FIG. 3 for an illustration). The image boundary typically has a mark in an image point P, henceforth referred to as the notch, that allows the surgeon to infer the rotation between scope and camera-head. If the scope rotates around the mechanical axis with respect to the camera-head by an angle δ, then the circle center C, notch P and principal point O also rotate by that same amount δ around Q.

[0017] FIG. 3A shows the sequence of steps for detecting the circular boundary C and notch P at each frame time instant. Starting with an initial estimate for the boundary Ω and notch P, the algorithm comprises rendering a ring image having P mapped in the center (step 1), detecting edge points on that ring image (step 2), and iterating through the following steps until the stopping criterion is met: mapping the edge points back to the image space for the estimation of a boundary circle proposal (step 3), and rendering a new ring image for detecting more accurate edge points (step 4). This cycle stops when the edge points in the ring image are collinear. The final step consists in detecting the notch P in the ring image using a correlation-based strategy with a known notch template. The detected boundary Ω and notch P will be used as initialization in the following frame time instant.

[0018] FIG. 3B illustrates the advantage of detecting multiple edge points for each ring image column. In case of strong light dispersion (dashed ellipse), the edge points (red points) corresponding to the first local maxima along each column of the ring image do not always correspond to the correct image boundary, and the circle boundary estimation fails (yellow circle). This is solved by detecting multiple local maxima for each column of the ring image, and using these edge points as input to a robust circle estimation scheme (green circle).

[0019] FIG. 3C illustrates the strategy used for solving the issue of generating ring images in which the lens notch region is not contiguous. (Step 1) A fixed angular position (arrow) for the beginning of the ring image can cause notch P to be partially cut. (Step 2) The issue of step 1 can be solved by centering the ring image using an initial estimate of the lens notch P.

[0020] FIG. 3D shows the strategy for obtaining a lens specific notch template. At calibration time, and after generating the ring image (step 1), a specific region around the lens notch location P is extracted (dashed rectangle of step 2). Then, an adaptive binarization strategy is employed for generating a notch template composed of a bright triangular shape with a dark rectangular background (step 3).

[0021] FIG. 4A illustrates the estimation of the rotation center Q from two frames in which the notches Pi, Pj have been detected. Q is estimated by simply intersecting the bisectors of segments CiCj and PiPj, where Ci, Cj are the centers of the boundaries detected in both frames.

[0022] FIG. 4B illustrates the estimation of the rotation center Q from three frames by making use solely of the centers Ci, Cj, Ck of the boundaries detected in these frames. Q is the intersection of the bisectors of segments CiCj and CjCk.

[0023] FIG. 5 illustrates an embodiment of a Field Stop Mask (FSM) containing multiple marks with different shapes so that they can be uniquely identified. The objective is to have redundancy to improve the accuracy of rotation estimation and be resilient to situations where the circular boundary is partially projected beyond the frame limits and a single notch might not be visible at all times, precluding or hampering estimation.

[0024] FIG. 6 illustrates an embodiment of a Field Stop Mask (FSM) having an elliptic shape such that its lack of circular symmetry allows the location of the notch P to be inferred at all times.

[0025] FIG. 7 shows the sequence of steps for obtaining the camera calibration at the reference position i = 0. With the lens scope at the reference angular position, the user starts by acquiring K ≥ 1 calibration images. Intrinsic camera calibration is then performed, as well as the detection of the circular boundary and notch. This data is stored in memory, giving the user the possibility of changing the position of the lens scope and repeating the above steps N times. A final optimization scheme that minimizes the re-projection error for all calibration images simultaneously is then performed.

[0026] FIG. 8 illustrates the optimization scheme employed when the calibration images are acquired at N > 1 angular positions i of the scope, with i = 0 being the reference position and i = 1, ..., N−1 being the additional positions. This optimization step enforces the rotation model of the scope, which is a plane rotation of angle δi around the rotation center Q that transforms the principal point O0 and notch P0 into points Oi and Pi, respectively. The optimization scheme serves to estimate the calibration parameters at the reference position, while minimizing the re-projection error in all acquired calibration images and enforcing this model for the scope rotation for all sampled angular positions simultaneously. An initialization for the rotation center Q may be obtained from the method schematized in FIG. 7.

[0027] FIG. 9 shows the sequence of steps of the online update of the calibration parameters. Every time a new frame j is acquired, the circular boundary and notch are detected and the angular displacement is computed using an estimate of the rotation center Q which is obtained either from the offline calibration result (Mode 1) or by estimation from successive frames (Mode 2). As a final step, the calibration parameters are updated.

[0028] FIG. 10 illustrates the process of decoupling the parameters of a lens included in a camera system, denoted as the characterization camera, from the contribution of the camera-head, yielding a lens descriptor Φ = (f̄, ξ, Ō, Q̄), as well as the estimation of the calibration parameters of a new camera system (the application camera), equipped with the same lens, by making use of its descriptor Φ. From a frame acquired with the characterization camera, an auxiliary reference frame attached to the boundary, obtained by transforming the reference frame of the image by a similarity transformation A, is considered. By representing the calibration parameters in boundary coordinates, the lens descriptor Φ is obtained. In order to obtain the calibration of the application camera, which is equipped with the same endoscopic lens, a similarity transformation B is employed for converting the entries of the lens descriptor Φ, which are represented in boundary coordinates, into image coordinates.

[0029] FIG. 11 is an embodiment of the components of an endoscope, illustrating how the rotation center and the center of the FSM are related. This allows a better understanding of the physical meaning of the lens descriptor disclosed in this patent.

[0030] FIG. 12 shows the sequence of steps of the anomaly detection procedure. For each acquired frame j, detection of the boundary and notch is performed both for obtaining the updated calibration from the lens descriptor and for estimating the rotation center using the previously detected boundary and notch. This yields two different estimates for the rotation center, which are compared and allow the detection of an anomaly.

[0031] FIG. 13 is a diagrammatic view of an example computing system that includes a general purpose computing system environment.

DETAILED DESCRIPTION

[0032] It should be understood that, although an illustrative implementation of one or more embodiments is provided below, the various specific embodiments may be implemented using any number of techniques known by persons of ordinary skill in the art. The disclosure should in no way be limited to the illustrative embodiments, drawings, and/or techniques illustrated below, including the exemplary designs and implementations illustrated and described herein. Furthermore, the disclosure may be modified within the scope of the appended claims along with their full scope of equivalents.

[0033] In this patent, 2D and 3D vectors are written in bold lower and upper case letters, respectively. Functions are represented by lower case italic letters, and angles by lower case Greek letters. Points and other geometric entities in the plane are represented in homogeneous coordinates, as is commonly done in projective geometry, with 2D linear transformations in the plane being represented by 3x3 matrices and equality being up to scale. In addition, throughout the text different sections are referenced by the numbers of their paragraphs using the symbol §.

[0034] 1. Camera model for endoscopic systems with exchangeable, rotatable optics (the rigid endoscope)

[0035] Camera calibration is the process of determining the camera model that projects 3D points X in the camera reference frame into 2D image points x in pixel coordinates. Alternatively, the camera model can be interpreted as the function that back-projects image points x into light rays going through the 3D point X in the scene. This process is illustrated in FIG. 1. Camera calibration is a key component in many applications, ranging from visual odometry to 3D reconstruction, and also including the removal of image artifacts for enhanced visual perception such as radial distortion.

[0036] Conventional, commonly used cameras are described by the so-called pin-hole model, which can be augmented with a radial distortion model that accounts for non-linear effects introduced by small optics and/or fish-eye lenses. In this case, points X in the scene are projected onto points x in the image according to the formula x = K G(P X), where x and X are represented in homogeneous coordinates with the equality being up to scale, P = [I 0(3x1)] is a 3x4 projection matrix with I denoting the 3x3 identity matrix, K is the so-called matrix of intrinsic parameters with dimension 3x3, and G denotes a distortion function with parameters ξ. Henceforth, and without loss of generality, it will be assumed that the camera is skewless with unitary aspect ratio, yielding a model that approximates well the majority of modern cameras, where the deviation of the skew from zero and of the aspect ratio from one is negligible. With this assumption, K depends solely on the focal length f and the image coordinates of the principal point O = [Ox, Oy, 1]^T such that

K(f, O) = [ f   0   Ox
            0   f   Oy
            0   0   1  ]

[0037] The distortion function G represents a mapping in 2D and can be any of the many distortion functions or models available in the literature that include, but are not limited to, the polynomial model (also known as Brown’s model), the division model, the rational model, the fish-eye lens model, etc., in either its first order or higher order (multi-parameter) versions, with ξ respectively being a scalar or a vector.
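For illustration, the complete projection of FIG. 1 can be written in a few lines. The following Python sketch assumes a first-order polynomial (Brown) distortion model; the function name and numeric values are illustrative only, not part of the disclosure:

    import numpy as np

    # Minimal sketch of the model of FIG. 1: pinhole projection P = [I 0],
    # first-order polynomial (Brown) radial distortion G quantified by xi,
    # then intrinsics K(f, O). Illustrative, not the disclosed implementation.
    def project_point(X, f, O, xi):
        u, v = X[0] / X[2], X[1] / X[2]       # P = [I 0]: perspective division
        r2 = u * u + v * v
        scale = 1.0 + xi * r2                 # G: r_d = r_u * (1 + xi * r_u^2)
        Ox, Oy = O
        return np.array([f * u * scale + Ox, f * v * scale + Oy])   # K(f, O)

    # Example: a point one meter ahead of the camera, slightly off-axis.
    x = project_point(np.array([0.05, -0.02, 1.0]), f=500.0, O=(320.0, 240.0), xi=-0.2)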

[0038] An endoscopic camera, which results from combining a rigid endoscope with a camera, has exchangeable optics for the purpose of easy sterilization, with the endoscope having at its proximal end an ocular lens (or eye-piece) that is assembled to the camera using a connector that typically allows the surgeon to rotate the scope with respect to the camera-head. As illustrated in FIG. 2, this rotation is performed around a longitudinal axis of the endoscope (the mechanical axis) that intersects the image plane in point Q. The Field Stop Mask (FSM) in the lens scope projects onto the image plane as a black frame around a region with visual content that has a circular boundary Ω with center C and a notch P.

[0039] The mechanical axis is roughly coincident with the symmetry axis of the eye-piece, which does not necessarily have to be aligned with the symmetry axis of the cylindrical scope and/or pass through the center of the circular region defined by the FSM. These alignments are aimed for but never perfectly achieved because of mechanical tolerances in building and manufacturing the endoscope. Thus, the rotation center Q, the center of the circular boundary C and the principal point O are in general distinct points in the image, which complicates camera modeling but, as disclosed ahead, can be used as a signature to identify a particular endoscope or batch of similar endoscopes.

[0040] Consider that the endoscopic camera is calibrated for a certain position of the scope, such that K(f, O) is the matrix of intrinsic parameters and ξ is the distortion parameter quantifying radial distortion according to a chosen model G (FIG. 1). If the scope undergoes a rotation by an angle δ with respect to the camera-head, the distortion ξ and focal length f remain unchanged, but the principal point O rotates by the same amount δ around Q (FIG. 2). This causes the matrix of intrinsic parameters to become K(f, R(δ, Q)O), where R(δ, Q) is a 3x3 matrix representing a 2D rotation in the image around point Q = [Qx, Qy, 1]^T by an angle δ.

[0041] Similarly to causing a rotation of the principal point O, such that it becomes O' = R(δ, Q)O, the rotation of the scope with respect to the camera-head causes circle Ω with center C and notch P to become circle Ω' with center C' = R(δ, Q)C and notch P' = R(δ, Q)P (FIG. 2).

[0042] 2. Calibration of an endoscopic camera with exchangeable, rotatable optics

[0043] Summarizing, in order to obtain the correct calibration parameters of an endoscopic camera at all times, the focal length f, distortion ξ, and principal point O must be known for a particular rotation angle between camera-head and lens scope (the reference angular position), and the location of the principal point must be updated during operation according to O' = R(δ, Q)O, which requires knowing the rotation center Q and the angular displacement δ between the current and reference angular positions at every frame time instant.
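A minimal Python sketch of this update rule, with homogeneous coordinates as in the text (the numeric values are illustrative assumptions):

    import numpy as np

    def rotate_about(p, Q, delta):
        # 2D rotation of homogeneous point p by angle delta around center Q,
        # i.e., the update O' = R(delta, Q) O described above.
        c, s = np.cos(delta), np.sin(delta)
        Qx, Qy = Q[0], Q[1]
        R = np.array([[c, -s, Qx - c * Qx + s * Qy],
                      [s,  c, Qy - s * Qx - c * Qy],
                      [0.0, 0.0, 1.0]])
        return R @ p

    O0 = np.array([322.0, 238.0, 1.0])          # principal point at reference
    Q  = np.array([330.0, 245.0, 1.0])          # rotation center
    O1 = rotate_about(O0, Q, np.deg2rad(30.0))  # principal point after rotation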

[0044] The calibration of the endoscopic camera at the reference angular position, which can be easily recognized by the position P of the notch, can be performed “off-line” before starting the clinical procedure by following the steps of FIG. 7. The determination of f, ξ and O requires the use of an intrinsic camera calibration method (Module A in FIG. 7) that receives as input one or more frames acquired at reference position P (or P0), as described in §§ [0045]-[0049]. If the objective is to also determine the rotation center Q, then input frames must be acquired in additional angular positions Pi with i = 1, ..., N, where these frames can be used to improve the accuracy in determining f, ξ and O at the reference angular position P0, as disclosed in §§ [0065]-[0069].

[0045] The update of the camera model is carried out “on-line” during the clinical procedure at every frame time instant by following the steps of FIG. 9, which are further disclosed in §§ [0070]-[0072]. The angular displacement δ can be determined by a multitude of methods, either using additional instrumentation, such as optical encoders [1] or optical tracking [2], or exclusively relying on image processing. The disclosed embodiments will consider, without loss of generality, that the relative rotation between the lens scope and the camera-head is determined using an image processing method that is also disclosed. This method detects and estimates the position of the boundary contour Ωi with center Ci and notch Pi in every frame i (Module B in FIG. 9, further disclosed in §§ [0050]-[0057]), and then infers the corresponding angular displacement δ with respect to the reference position with, or without, prior knowledge of the rotation center Q (Module C in FIG. 9, further disclosed in §§ [0058]-[0064]).

[0046] 2.1 Camera calibration at a particular angular position including intrinsics K(f, O) and distortion ξ (Module A in FIG. 7)

[0047] The literature is vast in methods for calibrating a pinhole camera with radial distortion, which can be divided into two large groups: explicit methods and auto-calibration methods. The former use images of a known calibration object, which can be a general 3D object, a set of spheres, a planar checkerboard pattern, etc., while the latter rely on correspondences across successive frames of unknown, natural scenes. The two approaches can require more or less user supervision, ranging from manual to fully automatic depending on the particular method and underlying algorithms.

[0048] The disclosed embodiments will consider, without loss of generality, that the camera calibration at a particular angular position of the lens scope with respect to the camera-head will be conducted using an explicit method that makes use of a known calibration object, such as a planar checkerboard pattern or any other planar pattern that enables establishing point correspondences between image and calibration object. This approach is advantageous with respect to most competing methods because of its good performance in terms of robustness and accuracy, the ease of fabrication of the calibration object (planar grid), and the possibility of accomplishing full calibration from a single image of the rig acquired from an arbitrary position. However, other explicit or auto-calibration methods can be employed to estimate the focal length f, distortion ξ, and principal point O of the endoscopic camera for a particular relative rotation between camera-head and endoscope (the reference angular position).

[0049] The explicit calibration using a planar checkerboard pattern typically comprises the following steps: acquisition of a frame of the calibration object from an arbitrary position or 3D pose (rotation R and translation t of the object with respect to the camera); employment of image processing algorithms for establishing point correspondences x, X between image and calibration object; and execution of a suitable calibration algorithm that uses the point correspondences for the estimation of the focal length f, the principal point O and distortion parameters ξ, as well as the pose R, t of the object with respect to the camera.

[0050] The approach can be applied to multiple calibration frames Ik, k = 0, ..., K−1, instead of a single one, for the purpose of improving robustness and accuracy. In this case the calibration is independently carried out for each frame, and a final optimization step that minimizes the re-projection error is used to enforce the same intrinsic parameters K(f, O) and distortion ξ across the multiple frames, while considering a different pose Rk, tk for each frame.
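As one concrete, non-limiting realization of Module A, the explicit checkerboard procedure above maps directly onto a standard library such as OpenCV; the grid geometry and file names below are illustrative assumptions:

    import cv2
    import numpy as np

    pattern = (9, 6)                                   # inner corners of the grid
    X = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    X[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares

    obj_pts, img_pts = [], []
    for path in ["calib_00.png", "calib_01.png"]:      # K frames at one position
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:                                      # 2D-3D correspondences x, X
            obj_pts.append(X)
            img_pts.append(corners)

    # Joint estimate of K(f, O), distortion, and one pose R_k, t_k per frame.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)

Note that OpenCV fixes its own distortion parameterization; any of the models of § [0037] can instead be enforced in a custom optimization.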

[0051] 2.2 Detection of the circular boundary and notch (Module B in FIG. 7 and FIG. 9)

[0052] The circular boundary and the notch of the FSM can be detected as schematized in FIG. 3A. The method starts by considering an initialization for the boundary Ω and notch P, which are used as input to a warping function that renders the so-called ring image. The initialization for the boundary Ω and notch P can be obtained from a multitude of methods which include, but are not limited to, deep/machine learning, image processing, statistical-based and random approaches. Exemplifying, and regarding the boundary Ω, it can be initialized by considering a circle centered in the image center and with a radius equal to half the minimum between the width and the height of the image, by radially searching for the transition between the image’s black frame and the region containing meaningful information, or by using a deep learning framework for detecting circles, generic conics, or any other desired shape. Concerning the notch P, it can be initialized in a random location on the boundary or by using learning schemes and/or image processing for detecting the known shape of the notch. Referring to steps 1 and 2 of FIG. 3A, the ring image is obtained by considering an inner circle Ωi and an outer circle Ωo centered at the center C of Ω that have radii ri < r and ro > r, respectively, where r is the radius of Ω. A uniform spacing between ri and ro defines a set of concentric circles Ωl. For each Ωl, the image signal is interpolated and concatenated. The hypothesized notch P is mapped in the center of the ring image.
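A compact Python sketch of this warp (the sampling margins and resolutions are illustrative choices, not prescribed by the method):

    import numpy as np
    from scipy.ndimage import map_coordinates

    def render_ring_image(img, C, r, P, n_radii=32, n_angles=720):
        Cx, Cy = C
        # Start the angular sweep half a turn before the notch direction so
        # that P lands in the central column (keeps the notch contiguous).
        phi0 = np.arctan2(P[1] - Cy, P[0] - Cx) - np.pi
        phis = phi0 + np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
        radii = np.linspace(0.8 * r, 1.2 * r, n_radii)  # circles between ri and ro
        rr, pp = np.meshgrid(radii, phis, indexing="ij")
        xs, ys = Cx + rr * np.cos(pp), Cy + rr * np.sin(pp)
        # Interpolate the image signal on each concentric circle and stack.
        return map_coordinates(img, [ys, xs], order=1, mode="nearest")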

[0053] Referring to step 2 of FIG. 3A, the edge points on the ring image, which theoretically correspond to points that belong to the boundary, are detected by searching for sharp brightness changes along the direction from the periphery towards the boundary center. This is achieved by analyzing the magnitude of the 1-D spatial derivative response (gradient magnitude) along each column of the ring image. A possible solution for selecting these edge points would be to pick the first local maximum of the gradient magnitude for each column. However, and as depicted in FIG. 3B, there are situations in which this approach fails (e.g., situations of strong light dispersion near the boundary). In order to overcome this, for each column of the ring image, a set of M edge points corresponding to local maxima of the gradient magnitude are selected.

[0054] Then, the detected edge points are mapped back to the Cartesian image space so that the circle boundary can be estimated. This is performed using a circle fitting approach inside a robust framework. Given a set of noisy data points, which can be contaminated by outliers, the objective of circle fitting is to find a circle that minimizes or maximizes a particular error or cost function that quantifies how well a given circle fits the data points. The most widely used techniques either minimize the geometric or the algebraic (approximate) distances from the circle to the data points. In order to handle outlier data points, a robust framework such as RANSAC is usually employed. The steps of ring image rendering, detection of edge points and robust circle estimation are performed iteratively until the detected edge points are collinear, in a robust manner. If this occurs, the algorithm proceeds to the detection of the notch P by performing correlation with a known template of the notch. The output of this algorithm is the notch location P and the circle Ω with center C and radius r.
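The robust estimation step can be sketched as follows, here with an algebraic (Kasa) fit inside a RANSAC loop; the thresholds and iteration counts are illustrative:

    import numpy as np

    def fit_circle(pts):
        # Algebraic fit: solve x^2 + y^2 + a*x + b*y + c = 0 in least squares.
        M = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
        rhs = -(pts[:, 0] ** 2 + pts[:, 1] ** 2)
        a, b, c = np.linalg.lstsq(M, rhs, rcond=None)[0]
        cx, cy = -a / 2.0, -b / 2.0
        return (cx, cy), np.sqrt(cx ** 2 + cy ** 2 - c)

    def ransac_circle(pts, n_iter=200, tol=2.0):
        rng = np.random.default_rng(0)
        best = None
        for _ in range(n_iter):
            (cx, cy), r = fit_circle(pts[rng.choice(len(pts), 3, replace=False)])
            d = np.abs(np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r)
            inliers = pts[d < tol]                  # consensus set
            if best is None or len(inliers) > len(best):
                best = inliers
        return fit_circle(best)                     # refit on all inliers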

[0055] As depicted in FIG. 3C, by centering the ring image using an initial estimation of the notch location P, it is guaranteed that the image part corresponding to the notch is contiguous, enabling its detection at all times. Moreover, the collinearity of the edge points is chosen as the stopping criterion because the edge points belong to a straight line if and only if the estimated boundary is perfectly concentric with the real boundary.

[0056] As shown in FIG. 3D, a lens specific notch template is extracted at calibration time, which is usually composed of a bright triangle and a dark rectangular background.
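The correlation step of § [0054] admits a direct implementation with normalized cross-correlation; the image names below are placeholders:

    import cv2

    ring = cv2.imread("ring_image.png", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("notch_template.png", cv2.IMREAD_GRAYSCALE)

    scores = cv2.matchTemplate(ring, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    # The column of the best match gives the notch position along the boundary.
    notch_col = max_loc[0] + template.shape[1] // 2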

[0057] The disclosed method for boundary and notch detection can have other applications such as the detection of engravings in the FSM for reading relevant information including, but not limited to, particular characteristics of the lens.

[0058] In addition, although the implementation of this method assumes that the boundary can be accurately represented by a circle, generic conic fitting can be used in the method without major modifications.

[0059] 2.3 Image-based measurement of the relative rotation between endoscope and camera-head (Module C in FIG. 9)

[0060] As previously mentioned, finding the calibration for the current frame i can be accomplished by rotating the principal point O (or O0) at the reference angular position by angle δi around the rotation center Q. In this case, both the center Q and the angular displacement δi between frame i and frame 0, corresponding to the reference position, must be estimated.

[0061] FIG. 4A depicts the process of estimating Q from two frames i and j acquired at two different angular positions. This is performed by simply intersecting the bisectors of the line segments whose endpoints are respectively the centers Ci, Cj and the notches Pi, Pj that are determined by applying the steps of FIG. 3 to each frame. If the notches cannot be detected in the images, due to occlusions, poor lighting, over-exposure with light dispersion, etc., then it is possible to estimate Q using solely the centers of the circular boundaries detected in three frames acquired at different angular positions. This process is illustrated in FIG. 4B, where it can be seen that Q is the intersection of the bisectors of the line segments obtained by joining the centers of the boundaries detected in frames i, j and k.
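In homogeneous coordinates, the construction of FIG. 4A reduces to two perpendicular bisectors and a cross product (a sketch; inputs are 2D points as numpy arrays):

    import numpy as np

    def bisector(a, b):
        # Perpendicular bisector of segment ab as a homogeneous line.
        mid, d = (a + b) / 2.0, b - a        # d is the line's normal
        return np.array([d[0], d[1], -d @ mid])

    def rotation_center(Ci, Cj, Pi, Pj):
        # Q = intersection of the bisectors of segments CiCj and PiPj.
        Q = np.cross(bisector(Ci, Cj), bisector(Pi, Pj))
        return Q[:2] / Q[2]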

[0062] If the rotation center Q is known and the center and notch at the reference angular position are respectively C0 and P0, then the angular displacement δi can be inferred from the notch Pi, the boundary center Ci, or both simultaneously (δi = ∠P0QPi = ∠C0QCi), with their positions being determined by applying the steps of FIG. 3 to the current frame i. If the notch P is not visible, δi can be determined from C0, Ci.
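The displacement itself is the signed angle at Q, for example:

    import numpy as np

    def angular_displacement(Q, P0, Pi):
        # delta_i = angle P0-Q-Pi, wrapped to (-pi, pi]. Illustrative sketch.
        a0 = np.arctan2(P0[1] - Q[1], P0[0] - Q[0])
        ai = np.arctan2(Pi[1] - Q[1], Pi[0] - Q[0])
        return (ai - a0 + np.pi) % (2.0 * np.pi) - np.pi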

[0063] Since the distance from the rotation center Q to the notch P is significantly larger than that between Q and C, estimations using the notch P are in general more robust and accurate, and thus it is important that it can be detected in all frames. Since its detection is mostly affected by situations of occlusion, one solution is to consider multiple notches in the FSM to ensure that at least one is always visible in the frame. FIG. 5 presents one exemplary FSM containing multiple marks with different shapes, allowing their identification, with one of these marks being used as the point of reference or standard notch P. An alternative solution is to consider an FSM that projects onto a black frame that renders an image boundary with a shape that does not have circular symmetry, in which case this lack of circular symmetry of the detected shape can be used to infer the location of a notch or point of reference P that is not visible. FIG. 6 presents one exemplary FSM that renders an elliptic shaped boundary with the major axis going through the notch P, which enables its position to be inferred at all times.

[0064] The algorithm described in §§ [0050]-[0057] for detecting the notch can be extended to the case when the FSM contains multiple notches. For this, the last step in FIG. 3A is modified by determining the correlation signal for each notch independently, fusing all signals together using the known relative location of the notches, and finding the point of highest correlation. With this approach it is guaranteed that at least one notch is detected, even if one or more of them are occluded.

[0065] Without loss of generality, it is assumed in the remainder of this patent that the FSM has only one notch P that is always visible and that the rotation center Q is determined from two frames.

[0066] Whenever more than two frames acquired at different angular positions are available, and in order to filter out possible noisy estimations of the rotation center Q and the relative rotation δi, a filtering approach can be applied. The filter can take as input the previous estimation for Q and the current boundary and notch, and output the updated location of Q and an estimation for the relative rotation δi. This filtering technique can be implemented using any temporal filter such as a Kalman filter or an Extended Kalman filter.
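As an example of such a filter, a scalar Kalman filter on δ under a random-walk motion model could look as follows; the noise levels are illustrative, and this is only one of many admissible temporal filters:

    import numpy as np

    class AngleFilter:
        def __init__(self, q=1e-4, r=1e-2):
            self.x, self.p = 0.0, 1.0    # state (angle) and its variance
            self.q, self.r = q, r        # process and measurement noise

        def update(self, measured):
            self.p += self.q                                           # predict
            innov = (measured - self.x + np.pi) % (2 * np.pi) - np.pi  # wrap-aware
            k = self.p / (self.p + self.r)                             # Kalman gain
            self.x += k * innov                                        # correct
            self.p *= 1.0 - k
            return self.x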

[0067] 2.4 Offline calibration at the reference angular position

[0068] FIG. 7 gives a schematic description of the procedure for obtaining the camera calibration at reference angular position i = 0, which can correspond to any relative angle between the rigid endoscope and the camera-head. With the lens scope at the reference angular position, the user starts by acquiring K ≥ 1 calibration images. Intrinsic camera calibration as described in §§ [0045]-[0049] (Module A) is then performed by extracting 2D-3D correspondences xk, Xk for each image, and retrieving the calibration object poses Rk, tk, as well as a set of intrinsic parameters K(fi, Oi) and distortion ξi. The circular boundary with center Ci, radius ri and notch Pi is also detected by following the method described in §§ [0050]-[0057] (Module B). This data is stored in memory, giving the user the option of changing the angular position by rotating the lens scope with respect to the camera-head, and repeating the processes of image acquisition, intrinsic calibration and boundary/notch detection. This is performed for a total of N ≥ 1 distinct angular positions i, with i = 0, 1, ..., N−1 and i = 0 being the reference angular position, for which the final calibration is obtained after a global optimization step that fuses the estimates at each position i and minimizes the re-projection error for all calibration images simultaneously (FIG. 8).

[0069] The off-line calibration method of FIG. 7 can be carried out using frames acquired at a single angular position, in which case N=1 and the position is the reference position i=0, or at multiple angular positions, in which case N>1. For each position, either a single calibration frame (K=1) or multiple calibration frames (K>1) can be acquired.

[0070] The case N=1 and K=1 is the one that requires minimum user effort, being particularly well suited for fast calibration in the OR, where the surgeon just has to acquire a single image of the checkerboard pattern after assembling the endoscope in the camera-head. The accuracy in the estimation of the calibration parameters tends to improve with an increasing number K of frames.

[0071] The use of information from two or more angular positions (N>1) makes it possible to estimate the rotation center Q in conjunction with f, ξ and O0, independently of the number K of frames acquired at each position. This can be accomplished by following the approach depicted in FIG. 4 and disclosed in §§ [0058]-[0064]. For N > 1, the calibrations obtained at different angular positions are fused in a large-scale optimization step that enforces the rotation model, as illustrated in FIG. 8 for N = 3. It can be observed that for any two angular positions, the principal point O and notch P rotate by the same amount δ around the rotation center Q. The optimization scheme serves to estimate the distortion and the calibration parameters at the reference position, while minimizing the re-projection error in all acquired calibration images and enforcing this model for the scope rotation for all sampled angular positions simultaneously. The expression in FIG. 8 provides the mathematical formulation of this optimization scheme for the case of N = 3 angular positions, and is straightforward to extend to a generic value N. Function r computes the squared reprojection error by projecting points Xk onto the image plane, yielding x̂k, and outputting the squared distances d(xk, x̂k)^2, with d being the Euclidean distance between points xk and x̂k. Ki is the number of calibration images acquired with the scope in the angular position i. As evinced by the mathematical expression, the proposed optimization scheme finds the values of the distortion ξ, rotation center Q, intrinsic parameters f and O0, as well as the calibration object poses Rk, tk, that minimize the sum of the reprojection error computed over all frames k and angular positions i.

[0072] 2.5 Online update of the calibration parameters
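The structure of this objective can be sketched with a generic least-squares solver; the parameter packing, the fixed poses and the projection function below are illustrative simplifications of the scheme of FIG. 8, not its exact formulation:

    import numpy as np
    from scipy.optimize import least_squares

    def project(X, R, t, f, xi, O):
        # Pinhole + first-order polynomial distortion, as in FIG. 1 (sketch).
        Xc = X @ R.T + t
        u = Xc[:, :2] / Xc[:, 2:3]
        u = u * (1.0 + xi * (u ** 2).sum(1, keepdims=True))
        return f * u + O

    def residuals(theta, frames):
        # theta packs f, xi, O0, Q and one delta_i per extra angular position;
        # each frame dict carries its position index i, pose R, t, 3D points X
        # and detections x. Poses are held fixed here for brevity, although
        # the full scheme refines them jointly.
        f, xi, Ox, Oy, Qx, Qy = theta[:6]
        deltas = np.concatenate([[0.0], theta[6:]])   # delta = 0 at reference
        res = []
        for fr in frames:
            c, s = np.cos(deltas[fr["i"]]), np.sin(deltas[fr["i"]])
            O_i = np.array([c * (Ox - Qx) - s * (Oy - Qy) + Qx,
                            s * (Ox - Qx) + c * (Oy - Qy) + Qy])
            res.append((project(fr["X"], fr["R"], fr["t"], f, xi, O_i)
                        - fr["x"]).ravel())
        return np.concatenate(res)

    # With theta0 from the per-position calibrations:
    # sol = least_squares(residuals, theta0, args=(frames,))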

[0073] During operation, every time a new frame j is acquired, an on-the-fly procedure must detect and measure the angular displacement with respect to the reference position and update the calibration and camera model accordingly. FIG. 9 gives a schematic description of this procedure. Frame j is processed for the detection of the circular boundary center C_j and the notch P_j, which can be accomplished by following the steps disclosed in FIG. 3 and §§ [0050]-[0057]. Afterwards, the estimation of the angular displacement d_j is performed as disclosed in §§ [0058]-[0064], for which the rotation center Q must be known.
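A minimal sketch of this per-frame update (using the update formula given in the next paragraph, and hypothetical function names): the angular displacement is measured from the notch position around a known rotation center Q and then applied to the reference principal point.

import numpy as np

def angular_displacement(P0, Pj, Q):
    # Signed angle d_j between the vectors Q->P_0 and Q->P_j.
    v0 = np.asarray(P0, float) - np.asarray(Q, float)
    vj = np.asarray(Pj, float) - np.asarray(Q, float)
    return np.arctan2(vj[1], vj[0]) - np.arctan2(v0[1], v0[0])

def update_principal_point(O0, d, Q):
    # O_j = R(d_j, Q) O_0: 2D rotation of O_0 by angle d around Q.
    c, s = np.cos(d), np.sin(d)
    R = np.array([[c, -s], [s, c]])
    Q = np.asarray(Q, float)
    return R @ (np.asarray(O0, float) - Q) + Q

# Per frame j: detect C_j and P_j, then
# d_j = angular_displacement(P_0, P_j, Q)
# O_j = update_principal_point(O_0, d_j, Q)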

[0074] There are two possible modes of operation for retrieving Q: in mode 1, the rotation center is known in advance from the offline calibration step that used frames acquired in N > 1 angular positions, as disclosed in §§ [0065]-[0069]; in mode 2, the rotation center is not known a priori but is estimated on-the-fly from successive frames for which the notch P and/or the center of the circular boundary C are determined, such that the methods disclosed in §§ [0058]-[0064] and FIG. 4 can be employed. As illustrated in FIG. 9, the current notch P_j and center C_j are used in conjunction with the ones detected on the previous frame j-1, which are accessed through a delay operation, P_{j-1} and C_{j-1}, to estimate the rotation center Q. As a final step, the calibration parameters are updated by applying a plane rotation to the principal point O_0 corresponding to the reference position, i.e., by computing the updated principal point as O_j = R(d_j, Q) O_0.

[0075] 3. Off-site lens calibration to avoid explicit calibration steps in the OR

[0076] A method has been disclosed for determining the calibration of an endoscopic camera at all times that comprises two steps or stages: an offline step that aims to estimate the focal length f, the distortion x, and the principal point O for an arbitrary reference angular position, and an online step that determines, at every frame time instant, the angular displacement with respect to the reference and updates the position of the principal point to provide the calibration for the current frame. Since the lens of the endoscopic camera is exchangeable, both the offline and online steps are carried out on-site in the OR after the surgeon assembles the endoscope in the camera-head. While the online step is meant to run on-the-fly, in parallel with image acquisition and in a manner seamless to the user, the offline step requires explicit user intervention to acquire one or more calibration frames, which is undesirable.

[0077] In order to minimize disruption to the existing surgical workflow, US 9438897 B2 describes a method that corresponds to the particular situation of N = 1 and K = 1 of the offline step disclosed in §§ [0065]-[0069] and FIG. 7. The effort of the surgeon is minimized by requiring the acquisition of a single frame at the reference position, and the rotation center Q is determined in the online step as in mode 2 of FIG. 9. Nevertheless, the method still requires surgeon intervention in the OR, which is time-consuming and a disruption to the workflow, and it requires the use of a sterile calibration object (in this case, a checkerboard pattern), which is not always easy to produce and adds cost.

[0078] This patent overcomes these problems by disclosing a method for calibrating the endoscopic lens alone, which can be performed off-site (e.g., at manufacture) with the help of a camera or other means, and that provides a set of parameters that fully characterize the rigid endoscope, leading to a lens descriptor Φ that can be used for different purposes. One of these purposes is to accomplish calibration of any endoscopic camera system that is equipped with the lens, in which case the descriptor Φ is loaded in the Camera Control Unit (CCU), to be used as input to an online method that runs on-the-fly and outputs the complete calibration of the camera-head + lens arrangement at every frame time instant.

[0079] Since the lens calibration can be carried out off-site, namely at the factory at the time of manufacture, and the online calibration runs on-the-fly in a manner seamless to the user, there is no action to be carried out in the OR by the surgeon, which means that endoscopic camera calibration is accomplished at all times with no change or disruption to the established routines. Moreover, and differently from what is possible with the method disclosed in US 9438897 B2, calibration is accomplished even in situations of variable zoom and/or translation of the lens with respect to the camera-head.

[0080] The method of off-site, offline calibration of the rigid endoscope to generate the descriptor Φ is disclosed in §§ [0079]-[0083], while the online method to accomplish calibration of an endoscopic camera comprising a camera-head and optics is described in §§ [0084]-[0085].

[0081] 3.1 Off-site, offline lens calibration

[0082] The rigid endoscope is assembled in an arbitrary camera-head, henceforth referred to as the Characterization Camera, and the offline calibration method of FIG. 7, §§ [0065]-[0069], is employed, which requires acquiring K calibration images at N distinct angular positions. This enables calibration at the reference angular position i = 0, which includes knowing the intrinsic parameters K(f, O), the distortion x, the notch P, the circular boundary Ω with center C and radius r, and, if N > 1, the rotation center Q.

[0083] The calibration result refers to the compound arrangement of camera-head with rigid endoscope, with the measurements depending on the particular camera-head in use, as well as on the manner in which the lens is mounted in the camera-head. Since the objective is to characterize the endoscope alone, the influence of the camera-head must be removed such that the final descriptor only depends on the lens and is invariant to the camera and/or equipment employed to generate it.

[0084] The method herein disclosed accomplishes this objective by building on two key observations: (i) the camera-head usually follows an orthographic (or nearly orthographic) projection model, which means that it only contributes to the imaging process with magnification and conversion of metric units to pixels; and (ii) the images of the Field-Stop-Mask (FSM) always relate by a similarity transformation, which means the FSM can be used as a reference to encode information about the lens that is invariant to rigid motion and scaling.

[0085] Let the calibration result after applying the offline method of FIG. 7 comprise the focal length f, the distortion x, the principal point O, the notch P, and the circular boundary Ω with center C and radius r. The lens descriptor is Φ = (f̄, x, Ō), with f̄ = f/r, where the division by r works as a normalization to account for the magnification introduced by the camera-head; x is as measured in the offline calibration step, because it is a characteristic intrinsic to the optics that is not influenced by the camera-head; and Ō is the principal point referenced in a system of coordinates attached to the circular boundary (the lens coordinate system), with center in C and x axis aligned with the segment joining the center C and the notch P, after being scaled by r (FIG. 10). For this particular choice of lens reference frame, the change of coordinates between image and lens is performed by a similarity transformation A such that Ō = A O, with

A = [ cos b / r    −sin b / r    −(C_x cos b − C_y sin b) / r ]
    [ sin b / r     cos b / r    −(C_x sin b + C_y cos b) / r ]
    [ 0             0             1                           ]

where C = [C_x, C_y, 1]^T and b is the angle between the x axes of the image and boundary reference frames. If the rotation center Q is known, then it can also be represented in lens coordinates by making Q̄ = A Q and appended to the descriptor, which becomes Φ = (f̄, x, Ō, Q̄). These particular choices of image and lens reference frames are arbitrary, and other reference frames, related by rigid transformations with the chosen ones, could have been considered without compromising the disclosed methods.
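A minimal sketch of how such a descriptor could be assembled from the calibration result, using the sign conventions of the matrix A above; the function name is hypothetical and the angle convention depends on the chosen image axes.

import numpy as np

def lens_descriptor(f, x, O, C, P, Q=None):
    # Boundary radius r: the notch P lies on the circular boundary.
    CP = np.asarray(P, float) - np.asarray(C, float)
    r = np.linalg.norm(CP)
    # Angle b chosen so that the notch maps to (1, 0) in lens coordinates;
    # the sign depends on the image-axis convention.
    b = -np.arctan2(CP[1], CP[0])
    c, s = np.cos(b), np.sin(b)
    A = np.array([[c / r, -s / r, -(c * C[0] - s * C[1]) / r],
                  [s / r,  c / r, -(s * C[0] + c * C[1]) / r],
                  [0.0,    0.0,   1.0]])          # image -> lens similarity
    to_lens = lambda p: (A @ np.array([p[0], p[1], 1.0]))[:2]
    f_bar, O_bar = f / r, to_lens(O)
    Q_bar = None if Q is None else to_lens(Q)
    return f_bar, np.asarray(x), O_bar, Q_bar     # entries of the descriptor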

[0086] 3.2 Online camera calibration using the lens descriptor

[0087] When the lens with descriptor Φ is mounted on an arbitrary camera-head, henceforth referred to as the Application Camera, it is possible to automatically obtain the calibration of the full camera + lens arrangement by proceeding as follows: for each frame j, apply the method of FIG. 3 to detect the position of both the notch P_j and the circular boundary with center C_j and radius r_j; find the location of the lens reference frame in the image and determine the similarity transformation B that maps lens coordinates into current image coordinates, with

B = [ r_j cos a     r_j sin a    C_jx ]
    [ −r_j sin a    r_j cos a    C_jy ]
    [ 0             0            1    ]

where C_j = [C_jx, C_jy, 1]^T and a is the angle between the x axes of the two reference frames; finally, the calibration of the endoscopic camera for the current angular position can be determined by decoding the different descriptor entries, in which case the focal length becomes f_j = r_j f̄, the principal point is now O_j = B Ō, and the distortion x is the same because it is inherent to the optics. If the descriptor also comprises the rotation center, then its position in frame j can be determined in a similar manner by making Q_j = B Q̄.
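Conversely, a sketch of the per-frame decoding on the Application Camera, consistent with the matrix B above; the name is hypothetical, and the detection of C_j and P_j is assumed to be available from the boundary/notch detector.

import numpy as np

def decode_descriptor(f_bar, O_bar, Q_bar, Cj, Pj):
    CP = np.asarray(Pj, float) - np.asarray(Cj, float)
    rj = np.linalg.norm(CP)               # current boundary radius
    a = -np.arctan2(CP[1], CP[0])         # same sign convention as encoding
    c, s = np.cos(a), np.sin(a)
    B = np.array([[rj * c,  rj * s, Cj[0]],   # lens -> image similarity
                  [-rj * s, rj * c, Cj[1]],
                  [0.0,     0.0,    1.0]])
    to_img = lambda p: (B @ np.array([p[0], p[1], 1.0]))[:2]
    fj = rj * f_bar                       # f_j = r_j * normalized focal length
    Oj = to_img(O_bar)                    # O_j = B @ O_bar
    Qj = None if Q_bar is None else to_img(Q_bar)
    return fj, Oj, Qj                     # the distortion x is unchanged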

[0088] 3.3 Relevant considerations

[0089] Off-site offline lens calibration using a single image: One important consideration is that the calibration approach disclosed in this patent does not require knowledge of the rotation center Q for determining the calibration of the endoscopic camera at every frame time instant. Thus, if the time and effort of the off-site calibration procedure are a concern, the lens descriptor can be generated by acquiring a single calibration image, in which case the offline method of FIG. 7, §§ [0065]-[0069], is run with K = 1 and N = 1. In this case, the descriptor will not include the entry Q̄.

[0090] Accommodation of relative rotation (calibration by detection or by tracking): Since a rotation of the endoscope with respect to the camera-head by d_j causes a similar rotation of the lens reference frame in the image, the update of the calibration at every frame j can be performed implicitly, without having to compute an angular displacement d_j and explicitly rotate the principal point around the center Q. In this case, the disclosed approach based on the lens descriptor can be used alone, with Φ being decoded at every frame time instant by the online method of §§ [0084]-[0085] (calibration by detection). An alternative is to employ the method of FIG. 9, §§ [0070]-[0072], in which case the lens descriptor is used to obtain the calibration at an arbitrary reference position, and this calibration is then updated by determining angular displacements d_j and rotating the principal point (calibration by tracking).

[0091] Adaptation to optical zoom and/or translation of the lens scope along the plane orthogonal to the mechanical axis: In the disclosure, the focal length f_j is determined at each frame time instant by scaling the normalized focal length f̄ by the magnification introduced by the Application Camera, which is inferred from the radius r_j of the circular boundary. If the magnification is constant, then f_j is also constant across successive frames j. However, if the Application Camera has optical zoom that varies, then f_j will vary accordingly. Thus, and unlike the method described in US 9438897 B2, the method herein disclosed can cope with changes in zoom, as well as with translations of the lens scope along the plane orthogonal to the mechanical axis. The adaptation to the former stems from the fact that changes in zoom lead to changes in the radius r_j of the boundary that is used to decode the relevant entries in the lens descriptor, namely f_j, O_j and Q_j, providing the desired adaptation. The adjustment to the latter arises from the fact that the circular boundary, to which the lens coordinate system is attached, translates with the lens, and the image coordinates of the decoded O_j and Q_j translate accordingly.

[0092] Alternative means to generate the lens descriptor: The descriptor Φ = (f̄, x, Ō, Q̄) characterizes the lens through parameters or features that have a clear physical meaning. For example, the mechanical axis of the endoscope, which is in general defined by the symmetry axis of the eye-piece at the proximal end of the lens, should go through the center of the circle defined by the FSM. If this condition holds, then the center C and the rotation center Q are coincident and Q̄ = [0, 0, 1]^T. In general, the condition is not verified, as illustrated in FIG. 11, because of mechanical tolerances in the manufacturing process, in which case the non-zero Q̄ accounts for the misalignment between the eye-piece and the FSM. Since it is a mechanical misalignment, it can potentially be measured by means other than the ones disclosed in §§ [0079]-[0083], which use camera calibration and image processing to generate Φ. Such alternative means include using a caliper, micrometer, protractor, gauges, robotic measurement apparatuses, or any combination thereof, to physically measure the distance between the axis and the center of the FSM.

[0093] Transmission of the lens descriptor to the Application Camera: In the disclosed embodiment, the lens descriptor is generated off-site with the help of a Characterization Camera, and must then be communicated to the CCU or computer platform connected to the Application Camera that will execute the online method of §§ [0084]-[0085] (FIG. 10). This transmission or communication can be accomplished through a multitude of methods that include, but are not limited to, manual insertion of the calibration parameters into the CCU by means of a keyboard or other input interface, network connection and download from a remote server, retrieval from a database of lens descriptors, reading from a USB flash drive or any other storage medium, visual reading and decoding of a QR code, or visual reading and decoding of information engraved in the FSM, such as digits or binary codes, as disclosed in PCT/US2018/048322.

[0094] Descriptor for a batch of lenses: The descriptor Φ can either characterize a specific lens or be representative of a batch of lenses with similar characteristics. In the latter case, the descriptor can be generated either by using as input to the off-line calibration method of FIG. 7 calibration frames acquired with different lenses in the batch, in which case a single descriptor is enforced in the final global optimization step, or, alternatively, by generating a descriptor for each lens in the batch and averaging their entries to obtain a single representation. The characterization of a batch of lenses by a single average descriptor can avoid the need to load a specific descriptor for each lens used in a certain Application Camera, or be used for the purpose of quality control in production, in which case the variance of the parameters in the descriptor is a measurement of the repeatability of the manufacturing processes.

[0095] 4. Detection of anomalies in the endoscopic camera

[0096] While the calibration approach presented in §§ [0041]-[0072] always provides a correct calibration of the endoscopic camera, as it is assembled and explicitly calibrated in the OR, the calibration method disclosed in this patent (§§ [0073]-[0092]) relies on prior assumptions, such as the correct retrieval of stored calibration information and the proper assembly of the lens in the camera-head. In case these assumptions are not satisfied, the camera + lens arrangement will not be accurately calibrated, and malfunctions can occur in systems that use the calibration information for performing distortion correction, virtual view rendering, enhanced visualization, surgical navigation, etc.

[0097] This patent discloses a method that makes use of the lens descriptor Φ for detecting anomalies in the endoscopic camera calibration caused by a mismatch between the loaded calibration information and the lens in use, an incorrect assembly of the lens in the camera-head, or a defect in any of these components.

[0098] FIG. 12 provides a schematic description of this method for anomaly detection. For each acquired frame j, detection of the boundary and notch is performed both for obtaining the updated calibration from the loaded lens descriptor, as described in §§ [0084]-[0085], and for estimating the rotation center, as described in §§ [0058]-[0064]. This yields two different estimates for the rotation center (Q_j and Q'_j in FIG. 12) that can be compared to detect an anomaly. The intuition behind this approach is the following: two lenses do not rotate in the exact same manner because of mechanical tolerances in building the optics. Thus, the way each lens rotates with respect to any camera-head can work as a signature that distinguishes it from the others. In addition, if the lens is incorrectly assembled or damaged, because of a defect/damage in the eye-piece connector, a defect/damage to the lens itself, or any other aspect that leads to a defective fit between the camera and the lens, it will also rotate differently from how it rotates when properly assembled.

[0099] This change in the lens motion model can be used to detect the existence of an anomaly, as well as to quantify how serious the anomaly is, and to warn the user to verify the assemblage and/or replace the lens.
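A schematic sketch of this comparison; the pixel threshold is an illustrative assumption, not a disclosed value, and the function name is hypothetical.

import numpy as np

def anomaly_detected(Qj_decoded, Qj_estimated, tol_px=5.0):
    # Distance between the rotation center decoded from the descriptor and
    # the one estimated on-the-fly from successive boundary/notch detections.
    d = np.linalg.norm(np.asarray(Qj_decoded, float) -
                       np.asarray(Qj_estimated, float))
    return d > tol_px, d  # flag plus a score quantifying the severity

In practice the score could be averaged over recent frames before being compared against the tolerance, which filters out isolated detection errors.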

[0100] This method only provides information on the existence of an anomaly; it does not specify which type of anomaly is occurring, information that would allow the system to provide the user with specific instructions for fixing it. To accomplish this, the approach for anomaly detection schematized in FIG. 12 can be complemented with another method for identifying the cause of the anomaly. Since an incorrect assembly of the lens in the camera-head causes a modification of the projection of the FSM in the image plane, a feature that quantifies the difference between the boundaries detected at calibration and operation time can be used to distinguish between an anomaly caused by a calibration-optics mismatch and one caused by a deficient assembly.

[0101] In particular, if the FSM is projected in the image plane onto a circle when the lens is properly assembled in the camera-head, this circle tends to evolve into an ellipse when the optics are not correctly assembled. Thus, in this case, the eccentricity of the boundary detected during operation can be measured to verify whether the assemblage is correct, and it is not required to know the specific shape of the boundary detected during calibration of the lens.
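A sketch of this eccentricity check, assuming OpenCV's ellipse fitting; the threshold in the usage note is illustrative only.

import cv2
import numpy as np

def boundary_eccentricity(contour_points):
    # contour_points: N x 2 array of boundary samples, N >= 5.
    pts = np.asarray(contour_points, np.float32)
    (cx, cy), (d1, d2), angle = cv2.fitEllipse(pts)
    a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0   # semi-axes
    return float(np.sqrt(1.0 - (b / a) ** 2))     # 0 for a perfect circle

For instance, an assemblage warning could be raised when the eccentricity exceeds an empirically chosen threshold such as 0.25.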

[0102] This approach is valid if the FSM has a shape that can be represented parametrically, such as an ellipse or any other geometric shape. In addition, template matching or machine learning techniques can be used to compare the boundary detected during operation with the known shape.

[0103] Summarizing, there exist two important features that can be used for detecting and identifying anomalies. The first one is the difference between the rotation center estimates obtained at calibration time and during operation, Q_j and Q'_j, respectively. The second consists of the difference between the boundary contours detected at calibration time and during operation. While the first allows the detection of an anomaly, whether it is a mismatch between the loaded calibration and the camera + lens arrangement in use, an incorrect assembly of the lens in the camera-head, or a defect in any of these components, the second provides information on the type of anomaly, since it only occurs when there is a deficient assemblage.

[0104] Thus, the disclosed method for the detection and identification of anomalies that makes use of these two distinct features can be implemented using a cascaded classifier that starts by using the first feature for the anomaly detection stage and then discriminates between a calibration mismatch and an incorrect camera + lens assembly by making use of the second feature, as sketched below. As an alternative to the cascaded classifier, other methods, such as different types of classifiers, machine learning, statistical approaches, or data mining, can be employed. In addition, depending on the desired application, these features can be used individually, in which case the first feature would allow the detection of an anomaly, without identification of its type, and the second feature would solely serve to detect incorrect assemblages.
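A minimal sketch of such a cascaded classifier, with illustrative thresholds rather than disclosed values.

def classify_anomaly(center_discrepancy_px, eccentricity,
                     t_center=5.0, t_ecc=0.25):
    # Stage 1: the rotation-center discrepancy detects the anomaly.
    if center_discrepancy_px <= t_center:
        return "no anomaly"
    # Stage 2: a non-circular boundary points to a deficient assemblage.
    if eccentricity > t_ecc:
        return "incorrect camera+lens assembly"
    return "mismatch between loaded calibration and lens in use"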

[0105] FIG. 13 is a diagrammatic view of an illustrative computing system that includes a general purpose computing system environment 1200, such as a desktop computer, laptop, smartphone, tablet, or any other such device having the ability to execute instructions, such as those stored within a non-transient, computer-readable medium. Furthermore, while described and illustrated in the context of a single computing system 1200, those skilled in the art will also appreciate that the various tasks described hereinafter may be practiced in a distributed environment having multiple computing systems 1200 linked via a local or wide-area network in which the executable instructions may be associated with and/or executed by one or more of multiple computing systems 1200. Computing system environment 1200, or portions thereof, may find use for the processing, methods, and computing steps of this disclosure.

[0106] In its most basic configuration, computing system environment 1200 typically includes at least one processing unit 1202 and at least one memory 1204, which may be linked via a bus 1206. Depending on the exact configuration and type of computing system environment, memory 1204 may be volatile (such as RAM 1210), non-volatile (such as ROM 1208, flash memory, etc.) or some combination of the two. Computing system environment 1200 may have additional features and/or functionality. For example, computing system environment 1200 may also include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks, tape drives and/or flash drives. Such additional memory devices may be made accessible to the computing system environment 1200 by means of, for example, a hard disk drive interface 1212, a magnetic disk drive interface 1214, and/or an optical disk drive interface 1216. As will be understood, these devices, which would be linked to the system bus 1206, respectively, allow for reading from and writing to a hard disk 1218, reading from or writing to a removable magnetic disk 1220, and/or for reading from or writing to a removable optical disk 1222, such as a CD/DVD ROM or other optical media. The drive interfaces and their associated computer-readable media allow for the nonvolatile storage of computer readable instructions, data structures, program modules and other data for the computing system environment 1200. Those skilled in the art will further appreciate that other types of computer readable media that can store data may be used for this same purpose. Examples of such media devices include, but are not limited to, magnetic cassettes, flash memory cards, digital videodisks, Bernoulli cartridges, random access memories, nano-drives, memory sticks, other read/write and/or read-only memories and/or any other method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Any such computer storage media may be part of computing system environment 1200.

[0107] A number of program modules may be stored in one or more of the memory/media devices. For example, a basic input/output system (BIOS) 1224, containing the basic routines that help to transfer information between elements within the computing system environment 1200, such as during start-up, may be stored in ROM 1208. Similarly, RAM 1210, hard drive 1218, and/or peripheral memory devices may be used to store computer executable instructions comprising an operating system 1226, one or more applications programs 1228 (such as an application that performs the methods and processes of this disclosure), other program modules 1230, and/or program data 1232. Still further, computer-executable instructions may be downloaded to the computing environment 1200 as needed, for example, via a network connection.

[0108] An end-user, e.g., a customer, retail associate, and the like, may enter commands and information into the computing system environment 1200 through input devices such as a keyboard 1234 and/or a pointing device 1236. While not illustrated, other input devices may include a microphone, a joystick, a game pad, a scanner, etc. These and other input devices would typically be connected to the processing unit 1202 by means of a peripheral interface 1238 which, in turn, would be coupled to bus 1206. Input devices may be directly or indirectly connected to processor 1202 via interfaces such as, for example, a parallel port, game port, firewire, or a universal serial bus (USB). To view information from the computing system environment 1200, a monitor 1240 or other type of display device may also be connected to bus 1206 via an interface, such as via video adapter 1242. In addition to the monitor 1240, the computing system environment 1200 may also include other peripheral output devices, not shown, such as speakers and printers.

[0109] The computing system environment 1200 may also utilize logical connections to one or more computing system environments. Communications between the computing system environment 1200 and the remote computing system environment may be exchanged via a further processing device, such as a network router 1252, that is responsible for network routing. Communications with the network router 1252 may be performed via a network interface component 1254. Thus, within such a networked environment, e.g., the Internet, World Wide Web, LAN, or other like type of wired or wireless network, it will be appreciated that program modules depicted relative to the computing system environment 1200, or portions thereof, may be stored in the memory storage device(s) of the computing system environment 1200.

[0110] The computing system environment 1200 may also include localization hardware 1256 for determining a location of the computing system environment 1200. In embodiments, the localization hardware 1256 may include, for example only, a GPS antenna, an RFID chip or reader, a Wi-Fi antenna, or other computing hardware that may be used to capture or transmit signals that may be used to determine the location of the computing system environment 1200.

[0111] In a first aspect of this disclosure, a method for calibrating an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera, wherein the rigid endoscope or lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle d around a mechanical axis that intersects the image plane in point Q, in which case C, P and the principal point O undergo a 2D rotation of the same angle d around Q, and wherein calibration consists in determining the focal length f, distortion x, rotation center Q and principal point O_0 for a chosen angular position of the lens scope with respect to the camera-head, henceforth referred to as the reference angular position i = 0. The method comprises: acquiring one or more calibration images of a calibration object with the endoscopic camera at angular position i without rotating the lens with respect to the camera-head; determining a first estimate of the calibration parameters f, x and O_i of the endoscopic camera, as well as the 3D pose (rotation and translation) of the calibration object with respect to the camera for each calibration image; detecting a boundary with center C_i and notch P_i on the calibration images using an image processing method; rotating the lens scope with respect to the camera-head to a new angular position i and repeating the previous steps, with i being incremented to take successive values i = 0, 1, ..., N-1, where N > 1 is the number of different angular positions used for the calibration; determining a first estimate for the rotation center Q and for the angular displacements d_i between the reference position i = 0 and the successive calibration positions i = 1, ..., N-1; and refining the calibration parameters f, x, Q, and O_0 through a final optimization step that enforces the model of the principal point, boundary center, and notch undergoing a rotation by an angle d_i around the center Q for successive calibration positions i = 0, ..., N-1.

[0112] In an embodiment of the first aspect, the calibration object is either a 2D plane with a checkerboard pattern or any other known pattern, a known 3D object, or is non-existent, with the calibration input being a set of point correspondences across images, in which case the first estimate of the calibration parameters is respectively obtained by a camera calibration algorithm from planes, a camera calibration algorithm from objects, or a suitable auto-calibration technique.

[0113] In an embodiment of the first aspect, the final optimization step is performed using an iterative non-linear minimization of a reprojection error, a photogeometric error, or any other suitable optimization approach.

[0114] In an embodiment of the first aspect, the first estimate of the calibration parameters is determined from any calibration method in the literature.

[0115] In an embodiment of the first aspect, the first estimate of the calibration parameters includes any distortion model known in the literature such as Brown’s polynomial model, the rational model, the fish-eye model, or the division model with one or more parameters, in which case x is a scalar or a vector, respectively.

[0116] In an embodiment of the first aspect, the rotation center Q is either known in advance, in which case the calibration can be accomplished from images acquired in one or more angular positions (N ≥ 1); determined from the image positions of boundary centers C_i and notches P_i, in which case the calibration is accomplished from images acquired in two or more angular positions (N ≥ 2); or determined solely from the image positions of boundary centers C_i or notches P_i, in which case the calibration is accomplished from images acquired in three or more angular positions (N ≥ 3).

[0117] In a second aspect of the present disclosure, a method for updating, at every frame time instant, the calibration parameters of an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope or lens scope has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and can rotate with respect to the camera-head by an angle d around a mechanical axis that intersects the image plane in point Q, in which case C, P and the principal point O undergo a 2D rotation of the same angle d around Q, and wherein the calibration parameters focal length f, distortion x, rotation center Q and principal point O_0, as well as a boundary with center C_0 and a notch P_0, for a reference angular position i = 0 of the lens scope with respect to the camera-head are known. The method comprises: acquiring a new frame j by the endoscopic camera and detecting a boundary center C_j and a notch P_j; estimating an angular displacement d_j of the endoscopic lens with respect to the camera-head according to notch P_0, notch P_j and Q; and estimating an updated principal point O_j of the endoscopic camera by performing a 2D rotation of the principal point O_0 around Q by an angle d_j.

[0118] In an embodiment of the second aspect, the calibration parameters focal length f, distortion x, rotation center Q and principal point O_0, as well as a boundary with center C_0 and a notch P_0, at the reference angular position i = 0, are obtained by calibrating the endoscopic camera with the lens scope at the reference position or by retrieval from the CCU.

[0119] In an embodiment of the second aspect, the rotation center Q is determined using two or more boundary centers C_j and/or notches P_j.

[0120] In an embodiment of the second aspect, the angular displacement of the endoscopic lens is estimated by mechanical means and/or by making use of optical tracking, in which case the boundary centers C_0 and C_j and the notches P_0 and P_j do not have to be known.

[0121] In an embodiment of the second aspect, the method further comprises employing a technique for filtering the estimation of the rotation center Q and the angular displacement d including, but not limited to, any recursive or temporal filter known in the literature such as a Kalman filter or an Extended Kalman filter.

[0122] In a third aspect of the present disclosure, a method is provided for characterizing a rigid endoscope with a Field Stop Mask (FSM) that induces an image boundary with center C and a notch P, by obtaining a descriptor Φ comprising a normalized focal length f̄, a distortion x, a normalized principal point Ō and a normalized rotation center Q̄. The method comprises: combining the rigid endoscope with a camera to obtain an endoscopic camera, referred to as the characterization camera; estimating the calibration parameters (focal length f, distortion x, principal point O_0 and rotation center Q) of the characterization camera at a reference position; detecting a boundary with center C_0 and a notch P_0 at the reference position; and determining the normalized focal length f̄, normalized principal point Ō and normalized rotation center Q̄ according to center C_0, notch P_0, focal length f, principal point O_0 and rotation center Q.

[0123] In an embodiment of the third aspect, the normalized focal length f̄ is computed as f̄ = f/r, with r being the distance between the center C_0 = [C_x, C_y, 1]^T and the notch P_0, and the normalized principal point Ō and rotation center Q̄ are obtained by computing Ō = A O_0 and Q̄ = A Q, respectively, with

A = [ cos b / r    −sin b / r    −(C_x cos b − C_y sin b) / r ]
    [ sin b / r     cos b / r    −(C_x sin b + C_y cos b) / r ]
    [ 0             0             1                           ]

and b being the angle between the line C_0P_0 and the down direction.

[0124] In a fourth aspect of the present disclosure, a method for calibrating an endoscopic camera is provided. The endoscopic camera results from combining a rigid endoscope with a camera comprising a camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ comprising a normalized focal length f̄, a distortion x, a normalized principal point Ō and a normalized rotation center Q̄, and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P, and wherein calibration consists in determining the focal length f, distortion x, rotation center Q and principal point O for a particular angular position of the lens scope with respect to the camera-head. The method comprises: acquiring frame i by the endoscopic camera, detecting a boundary center C_i = [C_x, C_y, 1]^T and a notch P_i, and determining a radius r = ||C_iP_i||; and estimating the calibration parameters of the endoscopic camera, focal length f, rotation center Q and principal point O, according to center C_i, notch P_i, the normalized focal length f̄, the normalized principal point Ō and the normalized rotation center Q̄.

[0125] In an embodiment of the fourth aspect, the focal length f, the principal point O and the rotation center Q are computed by f = r f̄, O = B Ō and Q = B Q̄, respectively, with

B = [ r cos a     r sin a    C_x ]
    [ −r sin a    r cos a    C_y ]
    [ 0           0          1   ]

and a being the angle between the line C_iP_i and the down direction.

[0126] In an embodiment of the fourth aspect, the endoscopic lens descriptor Φ is obtained by using a camera, by measuring the endoscopic lens using a caliper, a micrometer, a protractor, gauges, robotic measurement apparatuses or any combination thereof, or by using a CAD model of the endoscopic lens.

[0127] In an embodiment of the fourth aspect, the endoscopic lens descriptor Φ is obtained by loading information into the CCU from a database or using QR codes, USB flash drives, manual insertion, engravings in the FSM, RFID tags, an internet connection, etc.

[0128] In an embodiment of the fourth aspect, frame i comprises two or more frames, wherein the rotation center Q is determined using two or more boundary centers C_i and/or notches P_i.

[0129] In an embodiment of the fourth aspect, the endoscopic camera can have an arbitrary angular position of the lens scope with respect to the camera-head and an arbitrary amount of zoom.

[0130] In a fifth aspect of the present disclosure, a method is provided for detecting an anomaly caused by defects or incorrect assembly of a rigid endoscope in a camera-head, or by a mismatch between a considered calibration and the endoscopic lens in use, in an endoscopic camera that results from combining the rigid endoscope with a camera comprising the camera-head and a Camera Control Unit (CCU), wherein the rigid endoscope has a descriptor Φ comprising a normalized rotation center Q̄ and has a Field Stop Mask (FSM) that renders an image boundary with center C and a notch P. The method comprises: acquiring at least two frames by the endoscopic camera, having the rigid endoscope in different positions with respect to the camera-head, and detecting boundary centers C_i and notches P_i for each frame; estimating a rotation center Q' using the detected boundary centers C_i and/or notches P_i; estimating a rotation center Q according to the normalized rotation center Q̄, boundary centers C_i and notches P_i; and comparing the two rotation centers Q and Q' and deciding about the existence of an anomaly.

[0131] In an embodiment of the fifth aspect, the endoscopic lens descriptor Φ is obtained by loading information into the CCU from a database or using QR codes, USB flash drives, manual insertion, engravings in the FSM, RFID tags, an internet connection, etc.

[0132] In an embodiment of the fifth aspect, the comparison between the two rotation centers Q and Q' and the decision about the existence of an anomaly are performed by making use of one or more of algebraic functions, classification schemes, statistical models, machine learning algorithms, thresholding, or data mining.

[0133] In an embodiment of the fifth aspect, the method further comprises comparing the boundaries detected at calibration time and during operation to identify the cause of the anomaly.

[0134] In an embodiment of the fifth aspect, the method further comprises providing an alert message to the user, wherein the cause of the anomaly, whether it is a mismatch between the lens in use and the considered calibration or a physical problem with the rigid endoscope and/or the camera-head, is identified.

[0135] In a sixth aspect of the present disclosure, a method is provided for detecting an image boundary with center C and a notch P in a frame acquired using a rigid endoscope that has a Field Stop Mask (FSM) that induces the image boundary with center C and the notch P. The method comprises: using an initial estimate of the boundary with center C and notch P for rendering a ring image, which is an image obtained by interpolating and concatenating image signals extracted from the acquired frame at concentric circles centered in C, wherein the notch P is mapped to the center of the ring image; detecting salient points in the ring image; repeating the following until the detected salient points are collinear: mapping the salient points into the space of the acquired frame and fitting a circle with center C to the mapped points, rendering a new ring image by making use of the fitted circle, and detecting salient points in the new ring image; and detecting the notch P in the final ring image using correlation with a known template.
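A minimal sketch of the ring-image rendering step, assuming OpenCV remapping; the sampling parameters are illustrative, and the angular offset that would map the notch to the center of the ring image is omitted for brevity.

import cv2
import numpy as np

def render_ring_image(frame, C, r, width=40, n_angles=720):
    # Sample the frame along concentric circles centered in C, covering
    # radii in [r - width/2, r + width/2], and unroll them into an image
    # whose rows are radii and whose columns are angles.
    radii = np.linspace(r - width / 2.0, r + width / 2.0, width)
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    rr, aa = np.meshgrid(radii, angles, indexing="ij")
    map_x = (C[0] + rr * np.cos(aa)).astype(np.float32)
    map_y = (C[1] + rr * np.sin(aa)).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)

The notch can then be localized along the angle axis of the ring image, for example by correlating the unrolled boundary signal with a known template, as recited above.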

[0136] In an embodiment of the sixth aspect, the FSM contains more than one notch, all having different shapes and/or sizes so that they can be identified, in which case the template comprises a combination of notches whose relative location is known.

[0137] In an embodiment of the sixth aspect, the notches can have any desired shape.

[0138] In an embodiment of the sixth aspect, the notch that is mapped in the center of the ring image is an arbitrary notch.

[0139] In an embodiment of the sixth aspect, the initial estimation of the boundary with center C and notch P can be obtained from a multitude of methods which include, but are not limited to, deep/machine learning, image processing, statistical-based and random approaches.

[0140] In an embodiment of the sixth aspect, a generic conic is fitted to the mapped points.

[0141] In an embodiment of the sixth aspect, the detected center C and/or notch P are used for the estimation of the angular displacement of the rigid endoscope with respect to a camera-head it is mounted on.

[0142] While various embodiments have been described for purposes of this disclosure, such embodiments should not be deemed to limit the teaching of this disclosure to those embodiments. Various changes and modifications may be made to the elements and operations described above to obtain a result that remains within the scope of the systems and processes described in this disclosure. All patents, patent applications, and published references cited herein are hereby incorporated by reference in their entirety. It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, set forth for a clear understanding of the principles of the disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the disclosure. It will be appreciated that several of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. All such modifications and variations are intended to be included within the scope of this disclosure, insofar as they fall within the scope of the appended claims.

[0143] The described embodiments are to be considered in all respects only as illustrative and not restrictive and the scope of the presently disclosed embodiments is, therefore, indicated by the appended claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosed systems and/or methods.
