Title:
METHODS AND SYSTEMS FOR DETERMINING AN OPTICAL AXIS AND/OR PHYSICAL PROPERTIES OF A LENS AND USE OF THE SAME IN VIRTUAL IMAGING AND HEAD-MOUNTED DISPLAYS
Document Type and Number:
WIPO Patent Application WO/2017/134275
Kind Code:
A1
Abstract:
A method is proposed for determining an optical axis of a lens when the lens is provided at an unknown position and/or orientation. The method comprises: a) obtaining at least one direct image of a background comprising identifiable features; b) providing a lens between the background and a camera such that light rays pass from the background, through the lens before arriving at the camera; c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens; d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and e) using the correspondences from d) to determine an optical axis of the lens without aligning the optical axis of the lens with respect to the camera.

Inventors:
LAFFONT PIERRE-YVES (CH)
MARTIN TOBIAS OSKAR (CH)
Application Number:
PCT/EP2017/052464
Publication Date:
August 10, 2017
Filing Date:
February 03, 2017
Assignee:
EIDGENÖSSISCHE TECHNISCHE HOCHSCHULE ZÜRICH (CH)
International Classes:
G01M11/02; G06T15/20
Foreign References:
JPS59176644A 1984-10-06
US20030015649A1 2003-01-23
US5307141A 1994-04-26
DE102007057260A1 2009-06-04
FR3000233A1 2014-06-27
KR20060093596A 2006-08-25
EP2830001A1 2015-01-28
US6142628A 2000-11-07
US20120313955A1 2012-12-13
US201414000666A
US201213522599A 2012-08-27
US201113634954A 2011-03-18
US201213681402A 2012-11-19
US201413976142A
US201414348044A
US7653221B2 2010-01-26
US7391900B2 2008-06-24
US70723707A 2007-02-16
US7016824B2 2006-03-21
Other References:
AFIFI, M.; KORASHY, M: "Eyeglasses shop: Eyeglasses replacement system using frontal face image", ICMIS 2015 THE 4TH INTERNATIONAL CONFERENCE ON MATHEMATICS AND INFORMATION SCIENCE, ZEWAIL CITY OF SCIENCE AND TECHNOLOGY, 2015
AGARWAL, S.; MALLICK, S. P.; KRIEGMAN, D. J.; BELONGIE, S: "On refractive optical flow", ECCV, vol. 3022, 2004, pages 483 - 494, XP019005846
EYEMIO APP., 2015, Retrieved from the Internet
BEN-EZRA, M.; NAYAR, S: "What Does Motion Reveal about Transparency?", IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV, vol. 2, 2003, pages 1025 - 1032, XP010662494, DOI: doi:10.1109/ICCV.2003.1238462
BORZA, D.; DARABANT, A. S.; DANESCU, R: "Eyeglasses lens contour extraction from facial images using an efficient shape description", SENSORS, vol. 13, no. 10, 2013, pages 13638 - 13658, XP055227455, DOI: doi:10.3390/s131013638
"Brandimage designs the new retail concept for optic 2000", OPTICIAN STORES, 2015, Retrieved from the Internet
DÉNIZ, O.; CASTRILLÓN, M.; LORENZO, J.; ANTÓN, L.; HERNÁNDEZ, M.; BUENO, G.: "Computer vision based eyewear selector", JOURNAL OF ZHEJIANG UNIVERSITY SCIENCE C, vol. 11, no. 2, 2010, pages 79 - 91
DU, C.; SU, G.: "Eyeglasses removal from facial images", PATTERN RECOGN. LETT., vol. 26, no. 14, 2005, pages 2215 - 2220, XP025292304, DOI: doi:10.1016/j.patrec.2005.04.002
ESSILOR, M'EYE FIT TOUCH, 2015, Retrieved from the Internet
NETROMETER., 2015, Retrieved from the Internet
FACESHIFT., 2015, Retrieved from the Internet
EYEWEAR INTELLIGENCE TALKS ABOUT US!, 2014, Retrieved from the Internet
ONLINE EYEGLASSES & CONTACT LENS SALES IN THE US: MARKET RESEARCH REPORT, 2015, Retrieved from the Internet
IHRKE, I.; KUTULAKOS, K. N.; LENSCH, H. P. A.; MAGNOR, M.; HEIDRICH, W.: "State of the art in transparent and specular object reconstruction", STAR PROCEEDINGS OF EUROGRAPHICS, 2008, pages 87 - 108
"LENSCRAFTERS", ACCUFIT., 2015, Retrieved from the Internet
LIU, D.; CHEN, X; YANG, Y.-H.: "Frequency-based 3d reconstruction of transparent and specular objects", COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014 IEEE CONFERENCE ON, 2014, pages 660 - 667, XP032649255, DOI: doi:10.1109/CVPR.2014.90
3D SENSOR MARKET BY TECHNOLOGY, PRODUCTS, TYPE, APPLICATION, AND GEOGRAPHY - ANALYSIS & FORECAST, 2014
"Intel says laptops and tablets with 3-d vision are coming soon", MIT TECHNOLOGY REVIEW, 2014, Retrieved from the Internet
VISUREAL., 2015, Retrieved from the Internet
OPTIC2000, 2015, Retrieved from the Internet
OPTIKAM TECH INC., 2015, Retrieved from the Internet
J. KITTLER AND M. S. NIXON: "Lecture Notes in Computer Science", vol. 2688, 2003, SPRINGER, article PARK, J.-S.; OH, Y. H.; AHN, S. C.; LEE, S.-W: "Glasses removal from facial image using recursive pca reconstruction. In AVBPA", pages: 369 - 376
PARK, J.-S.; OH, Y. H.; AHN, S. C.; LEE, S.-W.: "Glasses removal from facial image using recursive error compensation", PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE TRANSACTIONS ON, vol. 27, no. 5, 2005, pages 805 - 811, XP011128404, DOI: doi:10.1109/TPAMI.2005.103
RIBEIRO, J. L. P: "Myopia glasses and optical power estimation: An easy experiment", THE PHYSICS TEACHER, vol. 53, no. 2, 2015, pages 101 - 102
RODIN, A.; KHADER, L.; RAJAN, S: "Lensmeter: Power of eyeglasses measuring application", FINAL REPORT, ECE 1778 - CREATIVE APPLICATIONS FOR MOBILE DEVICES, 2014
FREEFORM VERIFIER, 2015, Retrieved from the Internet
EYEWEAR MARKET -GLOBAL INDUSTRY ANALYSIS, SIZE, SHARE, GROWTH AND FORECAST 2012-2018, 2013, Retrieved from the Internet
VIDEO CONFERENCING MARKET - GLOBAL INDUSTRY ANALYSIS, SIZE, SHARE, GROWTH, TRENDS AND FORECAST 2014 - 2020, 2015, Retrieved from the Internet
VENTUREBEAT, GLASSES.COM'S MOBILE APP SCANS YOUR FACE IN 3D, LETS YOU TRY ON SUNGLASSES VIRTUALLY, 2013, Retrieved from the Internet
OTTO - ONE TOUCH TO OPTICAL, 2015, Retrieved from the Internet
WEISE, T.; BOUAZIZ, S.; LI, H.; PAULY, M.: "Realtime performance-based facial animation", ACM TRANS. GRAPH., vol. 30, no. 4, July 2011, article 77
WU, C.; LIU, C.; SHUM, H.-Y.; XU, Y.-Q.; ZHANG, Z.: "Automatic eyeglasses removal from face images", PATTERN ANALYSIS AND MACHINE INTELLIGENCE, IEEE TRANSACTIONS ON, vol. 26, no. 3, 2004, pages 322 - 336, XP011106115, DOI: doi:10.1109/TPAMI.2004.1262319
YE, M.; ZHANG, C.; YANG, R: "Video enhancement of people wearing polarized glasses: Darkening reversal and reflection reduction", COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2013 IEEE CONFERENCE ON, 2013
YU, J.; QIU, Z.; FANG, F: "Mathematical description and measurement of refractive parameters of freeform spectacle lenses", APPL. OPT., vol. 53, 2014, pages 4914 - 4921, XP001590863, DOI: doi:10.1364/AO.53.004914
ZEISS, I.TERMINAL., 2015, Retrieved from the Internet
BOUGUET, J.-Y., CAMERA CALIBRATION TOOLBOX FOR MATLAB, 2008
BRADSKI, G.: "The OpenCV Library", DR. DOBB'S JOURNAL OF SOFTWARE TOOLS, 2000
OLSON, E.: "Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)", 2011, IEEE, article "AprilTag: A robust and flexible visual fiducial system", pages: 3400 - 3407
GLASSNER, A. S.: "An Introduction to Ray Tracing", 1989, ACADEMIC PRESS LTD.
NELDER, J. A.; MEAD, R.: "A simplex method for function minimization", THE COMPUTER JOURNAL, vol. 7, no. 4, 1965, pages 308 - 313
DEBEVEC, P: "Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques", 1998, ACM, article "Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography", pages: 189 - 198
HUANG, F.-C.; WETZSTEIN, G.; BARSKY, B. A.; RASKAR, R.: "Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays", ACM TRANS. GRAPH., vol. 33, no. 4, July 2014, article 59
MATSUMOTO, M.; PAMPLONA, V. F.; HOFFMANN, M.; UZEJKA, G.; SHARPE, N.: "Frontiers in Optics", 2015, OPTICAL SOCIETY OF AMERICA, article "High-order power map and low order lensmeter using a smartphone add-on"
PEDROTTI, F. L.; PEDROTTI, L. M.; PEDROTTI, L. S: "Introduction to Optics, 3rd ed.", 2006, article "Chapter 18: Matrix Methods in Paraxial Optics."
WU, C.; ZOLLHÖFER, M.; NIESSNER, M.; STAMMINGER, M.; IZADI, S.; THEOBALT, C.: "Real-time shading-based refinement for consumer depth cameras", ACM TRANSACTIONS ON GRAPHICS, vol. 33, 2014, XP058060815, DOI: doi:10.1145/2661229.2661232
FINK, W.; MICOL, D.: "simeye: computer-based simulation of visual perception under various eye defects using Zernike polynomials", JOURNAL OF BIOMEDICAL OPTICS, vol. 11, no. 5, 2006, pages 054011 - 12
KONRAD, R.; COOPER, E. A.; WETZSTEIN, G: "Novel optical configurations for virtual reality: Evaluating user preference and performance with focus-tunable and monovision neareye displays", CHI '16., vol. 10, 2016, pages 11
KONRAD, R.; PADMANABAN, N.; COOPER, E.; WETZSTEIN, G: "Computational focus-tunable near-eye displays", SIGGRAPH '16 EMERGING TECHNOLOGIES, 2016
MALACARA-HERNÁNDEZ, D.; MALACARA-HERNÁNDEZ, Z.: "Handbook of optical design, 2nd ed.", vol. 4, 2003, CRC PRESS.
MEISTER, D.; SHEEDY, J. E., INTRODUCTION TO OPHTHALMIC OPTICS, vol. 3, no. 4, 2000, pages 7
MOHAN, K.; SHARMA, A.: "How often are spectacle lenses not dispensed as prescribed?", INDIAN JOURNAL OF OPHTHALMOLOGY, vol. 60, 2012, pages 553 - 555
VON HELMHOLTZ, H.; GULLSTRAND, A.; VON KRIES, J.; NAGEL, W.: "Handbuch der Physiologischen Optik", 1909, pages: 14
WEI, Q.; PATKAR, S.; PAI, D. K.: "Fast ray-tracing of human eye optics on graphics processing units", COMPUT. METHODS PROG. BIOMED., vol. 114, 2014, pages 302 - 314, XP028643863, DOI: doi:10.1016/j.cmpb.2014.02.003
Attorney, Agent or Firm:
OXLEY, Robin (GB)
Claims:
Claims

1) A method for determining an optical axis of a lens, when the lens is provided at an unknown position and/or orientation, the method comprising:

a) obtaining at least one direct image of a background comprising identifiable features;
b) providing a lens between the background and a camera such that light rays pass from the background, through the lens before arriving at the camera;
c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens;
d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
e) using the correspondences from d) to determine an optical axis of the lens without aligning the optical axis of the lens with respect to the camera.

2) The method according to claim 1 wherein step e) comprises using the correspondences to determine at least one plane of symmetry of the lens and using said plane of symmetry to determine the optical axis of the lens.

3) The method according to claim 1 or 2 wherein step e) comprises identifying at least one meridional plane.

4) The method according to claim 3 wherein multiple meridional planes are identified in multiple images and combined to determine the optical axis.

5) The method according to claim 4 wherein the multiple images are obtained from multiple viewpoints.

6) The method according to any preceding claim further comprising estimating a position of the lens along the optical axis.

7) The method according to any preceding claim further comprising ray-tracing to refine the determination of the optical axis and/or the position of the lens along the optical axis.

8) The method according to any preceding claim comprising measuring the deviation of rays intersecting the lens and identifying the lens optical centre at a position where the ray deviation is minimal.

9) The method according to any preceding claim further comprising estimating an optical power of the lens.

10) The method according to claim 9 wherein a model is employed to estimate the optical power using a thin-lens assumption and feature correspondences.

11) The method according to any preceding claim further comprising estimating physical properties of the lens.

12) The method according to claim 11 comprising ray-tracing to estimate the physical properties of the lens.

13) The method according to any preceding claim comprising extracting the at least one direct image from a video and/or extracting the at least one indirect image from a video.

14) The method according to any preceding claim comprising obtaining multiple direct images and/or multiple indirect images and/or extracting multiple corresponding identifiable features.

15) The method according to any preceding claim wherein the at least one direct image and the at least one indirect image are obtained from multiple cameras arranged at multiple viewpoints.

16) The method according to any preceding claim comprising using a lens model to compute and/or refine physical properties of the lens so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

17) The method according to claim 16 comprising iteratively refining the lens model using a ray-tracing based simulation.

18) The method according to any preceding claim comprising moving at least one of the lens, the background, or camera when obtaining each image.

19) The method according to any preceding claim wherein the lens is spherical, toroidal, cylindrical, aspherical or atoroidal.

20) The method according to any preceding claim wherein the lens is a compound lens.

21) The method according to any preceding claim wherein the lens is rotationally-symmetric.

22) The method according to any preceding claim wherein the background is provided on an electronic display.

23) The method according to any preceding claim wherein the background comprises a textured or patterned surface.

24) The method according to claim 23 wherein the background is printed on a sheet of paper or card.

25) The method according to any one of claims 1 to 22 wherein the background comprises a whole or a portion of a user's face.

26) The method according to any preceding claim wherein the lens is an eyeglasses lens.

27) A method for determining at least one property of a lens comprising:

a) obtaining at least one direct image of a background comprising identifiable features;
b) providing the lens between the background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at a known or estimated position and/or orientation;
c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens;
d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
e) applying a ray-tracing technique to iteratively determine at least one parameter of a lens model, representing the at least one property of the lens being determined, so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

28) The method according to claim 27 wherein the at least one parameter of the lens model is a physical property of the lens.

29) The method according to claim 28 wherein the physical property comprises one or more of lens thickness, front curvature, back curvature and refractive index.

30) The method according to claim 27 wherein step e) is employed to refine the estimated position and/or orientation and/or optical power of the lens.

31) Use of physical lens properties obtained from the method of claim 11 or 27 and/or lens optical power obtained from the method of claim 9 or 30 in a virtual imaging application.

32) A method for virtual try-on of eyeglasses such that a user image or video is altered to create a virtual image taking into account physical properties of an eyeglasses lens obtained from the method of claim 11 or 27 and/or optical power of an eyeglasses lens obtained from the method of claim 9 or 30.

33) The method according to claim 32 wherein shading on a wearer's face caused by eyeglasses frames is included in the virtual image.

34) A method for virtually removing or modifying prescription eyeglasses in an image or video wherein the removal or modification takes into account a refractive effect of the prescription eyeglasses.

35) The method according to claim 34 comprising applying a ray-tracing and/or warping technique to determine where identifiable features viewed through an eyeglasses lens would be if the refractive effect of the eyeglasses is removed or modified.

36) The method according to any one of claims 34 or 35 wherein physical properties and/or optical power of an eyeglasses lens are modified in a virtual image/video.

37) The method according to any one of claims 34 to 36 wherein one or more physical properties or optical power of a lens of the eyeglasses is obtained using the method of any one of claims 1 to 30.

38) A non-transitory computer-readable medium having stored thereon program instructions for causing at least one processor to perform the method according to any preceding claim.

39) A system for determining an optical axis of a lens, when the lens is provided between a background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at an unknown position and/or orientation, the system comprising:
a) a memory into which a direct image is stored of a background comprising identifiable features and an indirect image is stored comprising the background when viewed through the lens; and
b) a processor configured for:
i. identifying at least one identifiable feature in the indirect image and a corresponding identifiable feature in the direct image; and
ii. using the correspondences from i) to determine an optical axis of the lens without aligning the optical axis of the lens with respect to the camera.

40) The system according to claim 39 further comprising a camera for obtaining the direct image and/or indirect image.

41) The system according to claim 40 wherein multiple cameras are employed to obtain multiple direct and/or indirect images from multiple viewpoints.

42) The system according to claim 40 or 41 wherein the camera comprises a still image camera, a video camera or an RGBD colour and depth sensor.

43) The system according to any one of claims 39 to 42 further comprising a moveable member on which the lens, background or camera is mounted such that at least one of the lens, background, or camera can be moved when obtaining multiple images.

44) The system according to any one of claims 39 to 43 wherein step ii) comprises using the correspondences to determine at least one plane of symmetry of the lens and using said plane of symmetry to determine the optical axis of the lens.

45) The system according to any one of claims 39 to 44 wherein step ii) comprises identifying at least one meridional plane.

46) The system according to claim 45 wherein multiple meridional planes are identified in multiple images and combined to determine the optical axis.

47) The system according to claim 46 wherein the multiple images are obtained from multiple viewpoints.

48) The system according to any one of claims 39 to 47 wherein step ii) comprises ray-tracing.

49) The system according to any one of claims 39 to 48 wherein the processor is further configured for estimating a position of the lens along the optical axis.

50) The system according to any one of claims 39 to 49 wherein the processor is further configured for measuring the deviation of rays intersecting the lens and identifying the lens optical centre at a position where the ray deviation is minimal.

51) The system according to any one of claims 39 to 50 wherein the processor is further configured for estimating an optical power of the lens.

52) The system according to claim 51 wherein a model is employed to estimate the optical power using a thin-lens assumption and feature correspondences.

53) The system according to any one of claims 39 to 52 wherein the processor is further configured for estimating physical properties of the lens.

54) The system according to claim 53 comprising ray-tracing.

55) The system according to any one of claims 39 to 54 wherein the processor is further configured for extracting the at least one direct image from a video and/or extracting the at least one indirect image from a video.

56) The system according to any one of claims 39 to 55 wherein multiple direct images and/or multiple indirect images are employed and/or multiple corresponding identifiable features are extracted.

57) The system according to any one of claims 39 to 56 wherein the processor is further configured to use a lens model to compute and/or refine physical properties of the lens so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

58) The system according to claim 57 comprising iteratively refining the lens model using a ray-tracing based simulation.

59) The system according to any one of claims 39 to 58 wherein the lens is spherical, toroidal, cylindrical, aspherical or atoroidal.

60) The system according to any one of claims 39 to 59 wherein the lens is a compound lens.

61) The system according to any one of claims 39 to 60 wherein the lens is rotationally-symmetric.

62) The system according to any one of claims 39 to 61 wherein the background is provided on an electronic display.

63) The system according to any one of claims 39 to 62 wherein the background comprises a textured or patterned surface.

64) The system according to claim 63 wherein the background is printed on a sheet of paper or card.

65) The system according to any one of claims 39 to 62 wherein the background comprises a whole or a part of a user's face.

66) The system according to any one of claims 39 to 65 wherein the lens is an eyeglasses lens.

67) A system for determining at least one property of a lens, when the lens is provided between a background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at an unknown position and/or orientation, the system comprising:
a) a memory into which at least one direct image is stored of a background comprising identifiable features and at least one indirect image is stored comprising the background when viewed through the lens; and
b) a processor configured for:

i. identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
ii. applying a ray-tracing technique to iteratively determine at least one parameter of a lens model, representing the at least one property of the lens being determined, so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

68) A method for determining an eyeglasses prescription comprising using any or all of the methods and systems of claims 1 to 67.

69) Use of an eyeglasses prescription determined according to claim 68 to compensate for vision deficiency in a head-mounted display (HMD).

70) A vision correcting system for a head-mounted display comprising:

a) a receiver for receiving an eyeglasses prescription determined according to claim 68; and
b) a reconfigurable optical system configured to automatically adapt to compensate for the eyeglasses prescription such that a user will experience corrected vision whilst wearing the head-mounted display without wearing their eyeglasses.

71) The vision correcting system according to claim 70 wherein the eyeglasses prescription is determined using a smartphone.

72) The vision correcting system according to claim 71 wherein the head-mounted display is configured to receive the eyeglasses prescription from the smartphone.

73) The vision correcting system according to any one of claims 70 to 72 wherein a processor in the head-mounted display determines an optimum configuration for the optical system and moves one or more optical elements in the optical system so as to correct the user's vision in accordance with the eyeglasses prescription.

74) The vision correcting system according to any one of claims 70 to 73 wherein the optical system is configured to change a focal point of a lens in the optical system.

75) The vision correcting system according to any one of claims 70 to 74 wherein the optical system is configured to change an optical power of a lens in the optical system.

76) The vision correcting system according to any one of claims 70 to 75 wherein the optical system is configured to change a lens position in the optical system.

77) The vision correcting system according to any one of claims 70 to 76 wherein the optical system is configured to change an optical power or lens position of an auxiliary lens incorporated in the HMD.

78) The vision correcting system according to any one of claims 70 to 76 wherein the optical system is configured to change an optical power or lens position of an existing lens provided in the HMD.

79) A head-mounted display (HMD) comprising the vision correcting system according to any one of claims 70 to 78.

Description:
METHODS AND SYSTEMS FOR DETERMINING AN OPTICAL AXIS AND/OR PHYSICAL PROPERTIES OF A LENS AND USE OF THE SAME IN VIRTUAL IMAGING AND HEAD-MOUNTED DISPLAYS

Field of the Invention

[0001] The present invention relates to methods and systems for determining an optical axis and/or physical properties of a lens and use of the same in virtual imaging and head-mounted displays (HMDs).

Background

Estimation of refractive surfaces

[0002] An optical lens is made of a transparent material that deviates light rays due to refraction. This behavior can be described using the Snell–Descartes law, which relates the angles of incident and refracted rays within a plane containing the incident ray and a local surface normal. A large body of work on virtual reconstruction of refractive surfaces exists in academic research. It is important to note that most research focuses on general transparent objects (such as glass statues, bottles, vases, etc.), and not on the reconstruction of optical elements such as eyeglasses lenses. A state of the art report [Ihrke et al. 2008] presents a taxonomy and reviews the most important reconstruction papers relating to this subject.
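For reference, the law just mentioned can be stated compactly (a standard physics statement, added here for clarity rather than taken from the original text). At an interface between media of refractive indices $n_1$ and $n_2$,

$$ n_1 \sin\theta_i = n_2 \sin\theta_t, $$

and the equivalent vector form, convenient for the ray tracing discussed later, is

$$ \mathbf{t} = \eta\,\mathbf{d} + \Big(\eta\cos\theta_i - \sqrt{1 - \eta^2\big(1 - \cos^2\theta_i\big)}\Big)\,\mathbf{n}, \qquad \eta = n_1/n_2, \quad \cos\theta_i = -\mathbf{n}\cdot\mathbf{d}, $$

where $\mathbf{d}$ is the unit incident direction, $\mathbf{n}$ is the unit surface normal on the incident side, and $\mathbf{t}$ is the transmitted direction (no real solution exists under total internal reflection).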

[0003] The methods of [Agarwal et al. 2004] and [Ben-Ezra and Nayar 2003] both utilise correspondences extracted from a background captured through a moving transparent object. The method of [Agarwal et al. 2004] generalises the optical flow equation to account for warping and attenuation caused by the refractive object. They further propose algorithms to compute the warp from a moving background.

[0004] The model-based approach of [Ben-Ezra and Nayar 2003] allows reconstructing the complete shape of a transparent surface. It assumes an unknown, distant background pattern and a known parametric model for the object as well as its refractive index. The method differs from other techniques in that it can recover the surface shape of complete objects and not just single surfaces. The method is applied to reconstruct various objects such as a sphere and a plano-convex lens. In addition, it is applied to synthetic data of bi-convex and bi-concave lenses. However, the paper makes simplifying assumptions, such as that the background is very distant in comparison to the size of the lens.

[0005] The system recently proposed in [Liu et al. 2014] comprises a camera and a background onto which a structured light pattern is projected. The deformation of this pattern observed through the lens is used to reconstruct the surface. However, the system only allows for partial reconstruction of the transparent object, namely it reconstructs the surfaces that are facing the camera.

Prescription measurement of eyeglasses

[0006] A prescription for eyeglasses is usually written by an eyewear prescriber such as an optometrist, and specifies the value of all parameters necessary to construct corrective lenses appropriate for a patient (e.g. spherical correction for near-sightedness or far-sightedness, cylinder power and axis for astigmatism). Lens characteristics describe the lens position, the lens geometry, and the lens material properties (i.e. index of refraction). After eyeglasses have been manufactured, the prescription for each lens is traditionally measured with existing devices known as lensmeters.

[0007] Lensmeters are widely used to measure the prescription of eyeglasses in a "clinical" environment, or a room in an optician's shop. A visit to an optometrist often starts with a professional measuring the prescription of the patient's current eyeglasses using a lensmeter, into which the glasses are inserted. However, lensmeters are dedicated devices and are generally large and expensive, require operator training, and need physical access to the eyeglasses.

[0008] A traditional lensmeter operates in two main steps: 1) the lens is inserted into the lensmeter's lens holder, and is translated and adjusted until the lens optical axis aligns with the lensmeter measurement axis; and 2) once the lens is positioned appropriately, its prescription is measured by adjusting the lensmeter focus. Earlier lensmeters required the operator to manually adjust the lens position and focus, while some recent models can perform both steps automatically. However, in both cases the lensmeter is a dedicated device which is only available to professionals.

[0009] EyeNetra [Matsumoto et al. 2015] overcomes some of the limitations of traditional lensmeters, through the release of a portable lensmeter device called Netrometer [EyeNetra Inc. 2015]. This device has a "gunlike" form factor and is driven by a smartphone, which is embedded into the device during operation. The device acts as a portable lensmeter, into which the glasses of the patient are inserted and held in a fixed position via a clamping mechanism. The eyeglasses prescription is then estimated based on the processing of image data captured by the smartphone camera in conjunction with the smartphone flashlight.

[0010] Note that all traditional lensmeters, and the Netrometer, require physical access to the eyeglasses.

[0011] A semester project [Rodin et al. 2014] and short teaching study [Ribeiro 2015] demonstrate how prescriptions for spherical eyeglass lenses can be estimated from still image data by analysing the lens minification or magnification. However, this estimation is coarse and far from the optical precision obtained with traditional lensmeters.

On-face measurement of eyeglasses

[0012] A recent trend in the optical industry is the use of portable electronic devices to measure eyeglass frame and lens centration parameters. For example, the pupillary distance is traditionally measured using a simple ruler and marker. The visuReal application [Ollendorf N.A. LLC 2015] is a portable measurement system, which estimates centration parameters based on a digital photograph of the user wearing the eyeglasses with a specific calibration clip. Another application, which runs on portable tablet devices, is one-touch-to-optical [VSP Optics Group 2015] by VSP Optics Group. This offers a range of tools related to optometry such as capturing photos and measuring pupillary distance, segment height, pantoscopic tilt, vertex distance, and wrap angle. Related tablet applications are the OptikamPad [Optikam Tech Inc. 2015], i.Terminal [ZEISS 2015], m'Eye Fit [Essilor 2015], EyeMio [BBGR 2015], and AccuFit [LensCrafters 2015]. While these mobile diagnostic products are competing with each other for the digital measurement of specific parameters related to lens centration, none has attempted to tackle the prescription of eyeglasses.

Virtual try-on of eyeglasses

[0013] A virtual try-on application in the domain of optometry allows placing virtual eyewear, such as conventional eyeglasses frames or sunglasses, onto an image of a user's face captured by a color camera. A few low-profile academic publications exist on this topic (e.g., see [Déniz et al. 2010; Afifi and Korashy 2015]). Different commercial applications allow adding eyewear to captured images or directly into live video streams, for example, glasses.com [VentureBeat 2013] and FittingBox [FittingBox 2014].

Virtual manipulation of eyeglasses in images

[0014] Methods for the removal of eyeglasses from still images have been described in the scientific literature and in two associated patents [Kim et al. 2008; Gu 2010]. The methods proposed in [Park et al. 2003; Afifi and Korashy 2015; Borza et al. 2013; Du and Su 2005; Park et al. 2005; Wu et al. 2004] all attempt to remove glasses from a still, frontal, facial image. The main focus in these methods is on the detection of the eyeglasses frames and resolving the occlusion problem caused by them, and on removing reflections caused by the lenses.

[0015] Very little work on manipulating eyeglasses in videos has been published. One method [Ye et al. 2013] takes a video as input, compensates the darkening caused by the lenses in the eye region, and reduces reflections on the lenses.

Head-Mounted Displays

[0016] Most head-mounted displays (HMDs), such as those employed as virtual reality (VR) headwear, are designed for users with perfect eyesight. However, a majority of the population wears prescription eyeglasses (e.g., 65% of the American population as measured in 2007) to correct vision defects such as myopia, hyperopia or astigmatism. In order to use HMDs, users who wear eyeglasses can either try to wear their eyeglasses underneath the HMD (which is generally awkward and uncomfortable and may even cause injury) or they can use the HMD without their eyeglasses (in which case, their vision will be compromised). While some HMDs allow users to adjust the focus or distance between the eyepieces, these adjustments are manual, subjective and need to be re-done for each user.

[0017] It is an aim of the present invention to provide methods and systems for determining an optical axis and/or physical properties of a lens and use of the same in virtual imaging and head-mounted displays.

Summary of the invention

[0018] Aspects of the invention relate to an image-based approach for estimating the optical power and physical properties of ophthalmic/prescription lenses, along with their pose. The approach takes as input a sequence of images captured with a camera (e.g. smartphone) from multiple viewpoints. As in ray optics, the approach represents light propagation in terms of individual rays. The approach is based on analysing the observed deviation of rays that pass through the prescription lenses, using pixel correspondences extracted from the captured images and leveraging the multiple viewpoints.

[0019] Specific aspects of the invention include:

• a method for locating the optical axis of a rotationally symmetric lens

• a method for estimating position of a spherical lens along its optical axis and its optical power, under a thin-lens assumption

• a physically-based method for refining the lens pose and lens power, and for estimating the physical properties of a thick lens (which may be spherical, cylindrical, or toroidal), using 3D ray tracing; and

• applications of the above in i) on-face measurement of eyeglasses; ii) virtual try-on of eyeglasses; iii) virtual manipulation of eyeglasses in images; and iv) vision-corrected head-mounted displays, which automatically adjust lens settings according to a user's prescription measured using the above methods.

[0020] In accordance with a first aspect of the invention there is provided a method for determining an optical axis of a lens, when the lens is provided at an unknown position and/or orientation, the method comprising:

a) obtaining at least one direct image of a background comprising identifiable features;
b) providing a lens between the background and a camera such that light rays pass from the background, through the lens before arriving at the camera;
c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens;
d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
e) using the correspondences from d) to determine an optical axis of the lens without aligning the optical axis of the lens with respect to the camera.

[0021] Embodiments of the invention therefore provide a method for determining the optical axis of a lens when the lens is provided at an unknown position and/or orientation. The method proposed can be used with eyeglasses and does not require inserting each of the lenses of the eyeglasses into a dedicated device or positioning each lens such that its optical axis is aligned with a camera measurement axis. This may be useful for a variety of applications and, in particular, in a method for determining at least one property of a lens in accordance with the second aspect of the invention described below.

[0022] Embodiments of the invention use a contactless image-based approach which can be performed anywhere. This is significantly different to all known techniques that require physical access to lenses and eyeglasses in particular. Advantageously, no specialist lensmeter or other equipment is required. Accordingly, the cost of implementing the method is reduced along with the cost of operator training. Furthermore, as the information concerning a lens property, pose or optical power (or eyeglasses prescription) is obtained from images, the technique can be performed remotely, for example, over the internet. Moreover, embodiments of the present invention allow the lens properties to be measured from images without requiring an operator to position the lens inside a device or at a specific location with respect to the background. Embodiments of the invention leverage images from multiple views to estimate lens pose (i.e. position and orientation) and approximate power by analysing how light rays are deviated at the lens surfaces due to refraction. These values may then be further refined through use of a physically-based lens model to obtain further parameters such as lens thickness, index of refraction and front and back surface curvature.
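By way of illustration of the correspondence-extraction step, the sketch below matches ORB features between the direct and indirect images using OpenCV (cited in the references); the detector choice, the function name find_correspondences and its parameters are assumptions made for this example, and fiducial markers such as AprilTags (also cited) are an equally valid source of correspondences:

```python
# Hedged sketch: one possible way to obtain feature correspondences between
# the direct image and the indirect (through-the-lens) image, using ORB
# keypoints and brute-force Hamming matching from OpenCV.
import cv2

def find_correspondences(direct_img, indirect_img, max_matches=200):
    orb = cv2.ORB_create(nfeatures=2000)
    kp_d, des_d = orb.detectAndCompute(direct_img, None)
    kp_i, des_i = orb.detectAndCompute(indirect_img, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_d, des_i), key=lambda m: m.distance)
    # Each pair records how the lens displaced one background feature:
    # (position in the direct image, position in the indirect image).
    return [(kp_d[m.queryIdx].pt, kp_i[m.trainIdx].pt)
            for m in matches[:max_matches]]
```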

[0023] The identifiable features may comprise points, lines, regions or other shapes.

[0024] Step e) may comprise using the correspondences to determine at least one plane of symmetry of the lens and using said plane of symmetry to determine the optical axis of the lens. Advantageously, the symmetrical properties of the lens can be used to determine the location of the optical axis.

[0025] Step e) may comprise identifying at least one meridional plane. This is useful for the spherical lens case where meridional planes are also planes of symmetry. Multiple meridional planes may be identified in multiple images and combined to determine the optical axis.
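To make the combination step concrete, the following minimal numpy sketch (the least-squares formulation, function name and inputs are illustrative assumptions) recovers the axis as the line shared by a set of estimated meridional planes n_i . x = d_i:

```python
import numpy as np

def axis_from_planes(normals, offsets):
    """Each meridional plane n_i . x = d_i contains the optical axis, so the
    axis direction is (nearly) orthogonal to every plane normal and an axis
    point satisfies all plane equations in the least-squares sense."""
    N = np.asarray(normals, dtype=float)   # shape (k, 3), unit plane normals
    d = np.asarray(offsets, dtype=float)   # shape (k,)
    _, _, Vt = np.linalg.svd(N)
    direction = Vt[-1]                     # smallest right singular vector
    point = np.linalg.lstsq(N, d, rcond=None)[0]
    return point, direction / np.linalg.norm(direction)
```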

[0026] Multiple images may be obtained from multiple viewpoints.

[0027] The method may further comprise estimating a position of the lens along the optical axis.

[0028] The method may further comprise ray-tracing to refine the determination of the optical axis and/or the position of the lens along the optical axis.

[0029] The method may comprise measuring the deviation of rays intersecting the lens and identifying the lens optical centre at a position where the ray deviation is minimal.

[0030] The method may further comprise estimating an optical power of the lens. A model may be employed to estimate the optical power using a thin-lens assumption and feature correspondences. For example, correspondences between image regions can be identified, from which approximate scaling due to the lens optical power can be calculated and the actual optical power can then be inferred from the amount of scaling.
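A minimal sketch of such a thin-lens scaling model follows, under geometric assumptions not fixed by the text: the camera at the origin, the lens at distance L metres on the viewing axis, a planar background at distance B, and a lens image that forms in front of the camera; the root-finding bracket assumes a diverging (myopic) lens:

```python
from scipy.optimize import brentq

def predicted_scale(P, L, B):
    """Apparent scaling of background features seen through a thin lens of
    power P dioptres, relative to the direct (lens-free) view."""
    s = B - L                  # object distance from the lens (m)
    f = 1.0 / P                # focal length (m)
    s_img = s * f / (s - f)    # image distance from the lens (+ = camera side)
    m = f / (f - s)            # lateral magnification of the lens image
    d_img = L - s_img          # image distance from the camera
    return abs(m) * B / d_img  # ratio of angular sizes: through-lens vs direct

def estimate_power(observed_scale, L, B, lo=-10.0, hi=-0.25):
    # Solve predicted_scale(P) = observed_scale for the power P (dioptres).
    return brentq(lambda P: predicted_scale(P, L, B) - observed_scale, lo, hi)
```

For example, a -4 dioptre lens 0.3 m from the camera with the background at 0.8 m predicts a minification of roughly 0.57, and feeding that scale back into estimate_power recovers -4 D.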

[0031] The method may further comprise estimating physical properties of the lens. The method may comprise ray-tracing to estimate the physical properties of the lens.

[0032] The method may comprise extracting the at least one direct image from a video and/or extracting the at least one indirect image from a video.

[0033] The method may comprise obtaining multiple direct images and/or multiple indirect images and/or extracting multiple corresponding identifiable features.

[0034] The at least one direct image and the at least one indirect image may be obtained from multiple cameras arranged at multiple viewpoints (e.g. in an array).

[0035] The method may comprise using a lens model to compute and/or refine physical properties of the lens so that a deviation of light rays produced by the model substantially matches that observed in the indirect image. The method may comprise iteratively refining the lens model using a ray-tracing based simulation.
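The sketch below makes this refinement loop concrete for one simple case: a bi-convex, bi-spherical thick-lens model centred on the z-axis, with the camera at the origin looking towards +z and the background on the plane z = z_bg. The geometry, parameter ordering and names are illustrative assumptions rather than the method as claimed; the minimiser is the Nelder-Mead simplex method cited in the references:

```python
# Minimal sketch of iterative, ray-tracing-based lens refinement: fit the
# parameters of a bi-convex bi-spherical thick-lens model so that simulated
# rays land where the feature correspondences say they should.
import numpy as np
from scipy.optimize import minimize

def refract(d, n, eta):
    """Vector Snell law: unit direction d, unit normal n opposing d,
    eta = n_incident / n_transmitted; None on total internal reflection."""
    cos_i = -np.dot(n, d)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    return None if k < 0.0 else eta * d + (eta * cos_i - np.sqrt(k)) * n

def hit_sphere(o, d, c, r, far=False):
    """Near (default) or far intersection of ray o + t*d with sphere (c, r)."""
    oc = o - c
    b = np.dot(oc, d)
    disc = b * b - np.dot(oc, oc) + r * r
    if disc < 0.0:
        return None
    t = -b + np.sqrt(disc) if far else -b - np.sqrt(disc)
    return o + t * d

def trace_to_background(d, params, z_bg):
    """Camera ray (origin, direction d) -> hit point on background plane."""
    r_front, r_back, thick, n_glass, z_lens = params
    c_back = np.array([0.0, 0.0, z_lens - thick / 2 + r_back])    # camera side
    c_front = np.array([0.0, 0.0, z_lens + thick / 2 - r_front])  # far side
    p1 = hit_sphere(np.zeros(3), d, c_back, r_back)
    if p1 is None: return None
    d1 = refract(d, (p1 - c_back) / r_back, 1.0 / n_glass)        # air -> glass
    if d1 is None: return None
    p2 = hit_sphere(p1, d1, c_front, r_front, far=True)
    if p2 is None: return None
    d2 = refract(d1, -(p2 - c_front) / r_front, n_glass)          # glass -> air
    if d2 is None: return None
    return p2 + d2 * (z_bg - p2[2]) / d2[2]                       # plane hit

def energy(params, rays, observed, z_bg):
    """Squared mismatch between simulated and observed background hits."""
    err = 0.0
    for d, q in zip(rays, observed):
        p = trace_to_background(d, params, z_bg)
        err += 1e6 if p is None else float(np.sum((p - q) ** 2))
    return err

# rays: unit directions through matched pixels; observed: the background
# points those pixels map to in the direct image; x0 is an initial guess
# [r_front, r_back, thickness, refractive_index, lens_distance].
# fit = minimize(energy, x0, args=(rays, observed, z_bg), method='Nelder-Mead')
```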

[0036] The method may comprise moving at least one of the lens, the background, or camera when obtaining each image.

[0037] The lens may be spherical, toroidal, cylindrical, aspherical or atoroidal.

[0038] The lens may comprise a compound lens.

[0039] The lens may be rotationally-symmetric.

[0040] The lens may be an eyeglasses lens having an eyeglasses prescription comprising one or more of the optical axis (e.g. pose or lens orientation), lens type, physical properties (e.g. front and back surface curvature, lens thickness and refractive index) and optical power of the lens.

[0041] The background may be provided on an electronic display.

[0042] The background may comprise a textured or patterned surface. The background may be printed on a sheet of paper or card.

[0043] The background may comprise a whole or a portion of a user's face. Accordingly, the user may wear his/her eyeglasses in the indirect image, in which case the identifiable features may be provided by eye features such as eyelashes, wrinkles or other facial markings in the proximity of the user's eyes. Thus, embodiments of the invention provide for an on-face measurement of lens properties which may include an eyeglasses prescription.

[0044] The lens may be an eyeglasses lens.

[0045] In accordance with the second aspect of the invention there is provided a method for determining at least one property of a lens comprising:

a) obtaining at least one direct image of a background comprising identifiable features;
b) providing the lens between the background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at a known or estimated position and/or orientation;
c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens;
d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
e) applying a ray-tracing technique to iteratively determine at least one parameter of a lens model, representing the at least one property of the lens being determined, so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

[0046] The at least one parameter of the lens model may be a physical property of the lens.

[0047] The physical property may comprise one or more of lens thickness, front curvature, back curvature and refractive index.
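As a quick consistency check between these physical parameters and optical power, the standard thick-lens lensmaker's equation can be used (compare the cited chapter on matrix methods in paraxial optics); the numbers in the example below are illustrative assumptions:

```python
def lens_power(R1, R2, t, n):
    """Power in dioptres of a thick lens in air. R1, R2 are the front and
    back surface radii (m), signed positive when the centre of curvature lies
    on the outgoing side; t is the centre thickness (m); n the refractive
    index."""
    return (n - 1.0) * (1.0 / R1 - 1.0 / R2 + (n - 1.0) * t / (n * R1 * R2))

# Example: a bi-concave CR-39 lens (n = 1.498) with R1 = -0.25 m,
# R2 = +0.25 m and 2 mm centre thickness comes out at roughly -4 dioptres.
print(round(lens_power(-0.25, 0.25, 0.002, 1.498), 2))   # -> -3.99
```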

[0048] Step e) may be employed to refine the estimated position and/or orientation and/or optical power of the lens.

[0049] A third aspect of the invention relates to use of physical lens properties obtained from one or more of the methods above and/or lens optical power obtained from one or more of the methods above in a virtual imaging application.

[0050] A fourth aspect of the invention relates to a method for virtual try-on of eyeglasses such that a user image or video is altered to create a virtual image taking into account physical properties of an eyeglasses lens obtained from any one of the methods above and/or optical power of an eyeglasses lens obtained from any one of the methods above.

[0051] This is advantageous because refraction artifacts, as they occur on real eyeglasses lenses, change how a wearer perceives his/her potentially new pair of eyeglasses. However, most virtual try-on solutions ignore the effects of refraction caused by eyeglasses lenses. Notably, the present virtual try-on technique can be applied to still images or video images.

[0052] Shading on a wearer's face caused by eyeglasses frames may be included in the virtual image. This may be achieved using a re-lighting technique based on [Debevec 1998].

[0053] A fifth aspect of the invention relates to a method for virtually removing or modifying prescription eyeglasses in an image or video wherein the removal or modification takes into account a refractive effect of the prescription eyeglasses.

[0054] The prior art on virtual eyeglasses removal does not aim to compensate for the significant distortion caused by refraction effects due to the corrective lenses in eyeglasses. The prior art methods described above for removing eyeglasses in still images cannot be trivially extended to the video case. Applying these methods independently to each video frame would cause severe flickering artifacts due to a lack of temporal coherency. Furthermore, the fact that those methods are designed solely for frontal face images implies that the methods will fail when the person wearing the eyeglasses in the video moves from a frontal view to a profile view. This is natural behavior in video conferencing, as during a conversation, participants tend to gaze around and not look straight into the camera at every point in time. However, embodiments of the present invention can address these deficiencies by applying a ray-tracing technique to determine where points viewed through the eyeglasses lens would be if the refraction effect of the eyeglasses were removed. In other embodiments, the prescription of the eyeglasses may be modified (e.g. lessened) so as to reduce the refraction effect while maintaining the eyeglasses frames in the video.
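One way to realise the compensation warp is sketched below: assuming a per-pixel displacement field has already been computed by ray tracing the fitted lens model (the field name flow, the lens mask and the function name are assumptions of this sketch), the frame is resampled with OpenCV:

```python
# Hedged sketch: undo the refraction displacement inside the lens region by
# resampling the frame. flow[y, x] holds, in pixels, where the content of
# undistorted pixel (x, y) appears in the refracted frame.
import cv2
import numpy as np

def remove_refraction(frame, flow, lens_mask):
    """frame: HxWx3 image; flow: HxWx2 displacement field (pixels);
    lens_mask: HxW bool array, True inside the lens outline."""
    h, w = frame.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # Sample each output pixel from where refraction had displaced it;
    # pixels outside the lens are copied unchanged.
    map_x = np.where(lens_mask, xs + flow[..., 0], xs).astype(np.float32)
    map_y = np.where(lens_mask, ys + flow[..., 1], ys).astype(np.float32)
    return cv2.remap(frame, map_x, map_y, interpolation=cv2.INTER_LINEAR)
```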

[0055] The method may comprise applying a ray-tracing and/or warping technique to determine where identifiable features viewed through an eyeglasses lens would be if the refractive effect of the eyeglasses is removed or modified.

[0056] The physical properties and/or optical power of an eyeglasses lens may be modified in a virtual image/video.

[0057] One or more of the physical properties or optical power of a lens of the eyeglasses may be obtained using any one of the methods above.

[0058] A sixth aspect of the invention relates to a non-transitory computer-readable medium having stored thereon program instructions for causing at least one processor to perform one or more of the methods above.

[0059] A seventh aspect of the invention relates to a system for determining an optical axis of a lens, when the lens is provided between a background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at an unknown position and/or orientation, the system comprising:

a) a memory into which a direct image is stored of a background comprising identifiable features and an indirect image is stored comprising the background when viewed through the lens; and
b) a processor configured for:
i. identifying at least one identifiable feature in the indirect image and a corresponding identifiable feature in the direct image; and
ii. using the correspondences from i) to determine an optical axis of the lens without aligning the optical axis of the lens with respect to the camera.

[0060] The system may further comprise a camera for obtaining the direct image and/or indirect image.

[0061] Multiple cameras may be employed to obtain multiple direct and/or indirect images from multiple viewpoints.

[0062] The camera may comprise a still image camera, a video camera or an RGBD colour and depth sensor.

[0063] The system may further comprise a moveable member on which the lens, background or camera is mounted such that at least one of the lens, background, or camera can be moved when obtaining multiple images.

[0064] Step ii) may comprise using the correspondences to determine at least one plane of symmetry of the lens and using said plane of symmetry to determine the optical axis of the lens.

[0065] Step ii) may comprise identifying at least one meridional plane.

[0066] Multiple meridional planes may be identified in multiple images and combined to determine the optical axis.

[0067] Multiple images may be obtained from multiple viewpoints.

[0068] Step ii) may comprise ray-tracing.

[0069] The processor may be further configured for estimating a position of the lens along the optical axis.

[0070] The processor may be further configured for measuring the deviation of rays intersecting the lens and identifying the lens optical centre at a position where the ray deviation is minimal.

[0071] The processor may be further configured for estimating an optical power of the lens.

[0072] A model may be employed to estimate the optical power using a thin-lens assumption and feature correspondences.

[0073] The processor may be further configured for estimating physical properties of the lens.

[0074] The processor may be further configured for ray-tracing.

[0075] The processor may be further configured for extracting the at least one direct image from a video and/or extracting the at least one indirect image from a video.

[0076] Multiple direct images and/or multiple indirect images may be employed and/or multiple corresponding identifiable features may be extracted.

[0077] The processor may be further configured to use a lens model to compute and/or refine physical properties of the lens so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

[0078] The processor may be further configured for iteratively refining the lens model using a ray-tracing based simulation.

[0079] The lens may be spherical, toroidal, cylindrical, aspherical or atoroidal.

[0080] The lens may comprise a compound lens.

[0081] The lens may be rotationally-symmetric.

[0082] The background may be provided on an electronic display.

[0083] The background may comprise a textured or patterned surface.

[0084] The background may be printed on a sheet of paper or card.

[0085] The background may comprise a whole or a part of a user's face.

[0086] The lens may be an eyeglasses lens having an eyeglasses prescription.

[0087] The eyeglasses prescription may comprise the optical axis, optical power, physical properties, lens type and lens orientation for each lens in the eyeglasses.

[0088] The images may be obtained from an image capture device (e.g. camera).

[0089] The at least one direct image may be provided in the form of a still image or photograph. Alternatively, the at least one direct image may be extracted from a background video.

[0090] Similarly, the at least one indirect image may be provided in the form of a still image or photograph. Alternatively, the at least one indirect image may be extracted from an eyeglasses video.

[0091] The methods described above may comprise moving at least one of the eyeglasses, the background image, or the image capture device (i.e. in both the background video and the eyeglasses video). This is advantageous in that it will result in a number of different images which can be extracted in order to obtain correspondences.

[0092] An eighth aspect of the invention relates to a system for determining at least one property of a lens, when the lens is provided between a background and a camera such that light rays pass from the background, through the lens before arriving at the camera, and wherein the lens is located at an unknown position and/or orientation, the system comprising:

a) a memory into which at least one direct image is stored of a background comprising identifiable features and at least one indirect image is stored comprising the background when viewed through the lens; and
b) a processor configured for:
i. identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and
ii. applying a ray-tracing technique to iteratively determine at least one parameter of a lens model, representing the at least one property of the lens being determined, so that a deviation of light rays produced by the model substantially matches that observed in the indirect image.

[0093] A ninth aspect of the invention relates to a method for determining an eyeglasses prescription comprising using any or all of the above methods and systems.

[0094] A tenth aspect of the invention relates to use of an eyeglasses prescription determined using any or all of the above methods and systems to compensate for vision deficiency in a head-mounted display (HMD) (e.g. virtual reality headwear).

[0095] An eleventh aspect of the invention relates to a vision correcting system for a head-mounted display comprising:

a) a receiver for receiving an eyeglasses prescription determined using any or all of the above methods and systems; and

b) a reconfigurable optical system configured to automatically adapt to compensate for the eyeglasses prescription such that a user will experience corrected vision whilst wearing the head-mounted display without wearing their eyeglasses.

[0096] A twelfth aspect of the invention relates to a head-mounted display comprising the vision correcting system above.

[0097] Notably, the adjustment of the optical system is automatic as opposed to manual and is not subjective since it is based on a known eyeglasses prescription. Accordingly, eyeglasses wearers can experience virtual reality without wearing their eyeglasses whilst still enjoying a sharp image regardless of their prescription. A single head-mounted display can be used by multiple users with different eyesight prescriptions and can be quickly and automatically adjusted to suit the optical requirements of each individual user.

[0098] It should be noted that whenever head-mounted displays are referred to in this specification, they encompass virtual reality (VR) headwear, augmented reality headwear, mixed reality headwear and headwear or headsets for other visual technologies.

[0099] The eyeglasses prescription may be determined using any or all of the above methods and systems. In some embodiments, the eyeglasses prescription can be determined and/or stored using a smartphone. The head-mounted display is configured to receive the eyeglasses prescription from the smartphone. A processor in the head-mounted display may determine an optimum configuration for the optical system and may move one or more optical elements in the optical system so as to correct the user's vision in accordance with the eyeglasses prescription.

[00100] The optical system may be configured to change a focal point of a lens in the optical system for myopia/hyperopia correction. In other words, the optical power of the lens may be adjusted according to the prescription.

[00101] The optical system may be configured to change a lens position to change the focal point of the lens and/or, for example, to account for interpupillary distance (IPD) so as to provide an optimum eye to lens to screen alignment.
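As a worked example of the translation implied here (a thin-lens approximation that neglects eye relief; the 40 mm eyepiece focal length is an assumed, illustrative value), the eyepiece-to-display distance x that places the virtual image at the wearer's far point satisfies 1/x = 1/f - Rx, where Rx is the spherical prescription in dioptres:

```python
def display_offset(f_eyepiece, rx_sphere):
    """Eyepiece-to-display distance (m) so the virtual image vergence matches
    a spherical prescription rx_sphere (dioptres); thin-lens approximation."""
    return 1.0 / (1.0 / f_eyepiece - rx_sphere)

f = 0.040                              # assumed 40 mm eyepiece focal length
print(display_offset(f, 0.0))          # emmetrope: x = f, image at infinity
print(f - display_offset(f, -4.0))     # -4 D myope: display ~5.5 mm closer
```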

[00102] The optical system may be configured to change an optical power or lens position of an auxiliary lens incorporated in the HMD. Alternatively, the optical system may be configured to change an optical power or lens position of an existing lens provided in the HMD.

Brief description of the drawings

[00103] Embodiments of the invention will now be described, by way of example only, with reference to the following drawings, in which:

Figure 1 shows a method for determining an optical axis of a lens in accordance with an embodiment of the present invention;

Figure 2(a) shows an embodiment of the invention in which eyeglasses lenses remain stationary in front of a background while a video camera is moved;

Figure 2(b) shows an embodiment of the invention in which the eyeglasses are moved in front of a background while the background and video camera remain stationary;

Figure 3(a) illustrates input images to a method according to an embodiment of the invention;

Figures 3(b) and 3(b') illustrate extraction of ray deviation in accordance with an embodiment of the invention;

Figures 3(c) and 3(c') illustrate a lens pose estimation step in accordance with an embodiment of the invention;

Figures 3(d) and 3(d') illustrate ray tracing optimisation in accordance with an embodiment of the invention;

Figure 3(e) illustrates an output lens model in accordance with an embodiment of the present invention;

Figure 4(a) illustrates the shape of a typical ophthalmic lens;

Figure 4(b) shows the refraction of light through a lens in accordance with some embodiments of the invention;

Figure 5 shows a meridional plane through a lens which includes an off-axis camera position;

Figure 6 shows a non-meridional plane through a lens which includes an off-axis camera position;

Figure 7(a) shows a first view of an energy function for a bi-spherical lens;

Figure 7(b) shows a second view of the energy function shown in Figure 7(a);

Figure 8 shows observed optical flow for a first toroidal lens with an optical axis aligned with the observer and a second toroidal lens with an optical axis tilted with respect to the observer;

Figure 9(a) shows an overview of a virtual scene in accordance with an embodiment of the invention;

Figure 9(b) shows an overview of a real scene corresponding to that in Figure 9(a) in accordance with an embodiment of the invention;

Figure 9(c) shows an enlarged view illustrating the refraction effect by a virtual lens;

Figure 9(d) shows an enlarged view illustrating the refraction effect by a real lens;

Figure 9(e) shows a blend of the images from Figures 9(c) and 9(d) where the virtual lens and the real lens have the same refractive properties;

Figure 10 shows an optical rail measurement set-up to measure a ground truth lens pose of real lenses;

Figure 11(a) shows a comparison of an embodiment of the present invention with respect to a prior art method for a bispherical lens with a power of -2 Dioptres;

Figure 11(b) shows a comparison of an embodiment of the present invention with respect to a prior art method for a bispherical lens with a power of +2 Dioptres;

Figure 12 shows a further embodiment in which a camera array is employed;

Figure 13 shows an embodiment of the invention in which the eyeglasses prescription is measured on the wearer's face;

Figure 14 shows an embodiment of the invention in which a user's eyeglasses are virtually removed from a video stream;

Figure 15(a) shows a first step of determining a user's eyeglasses prescription in accordance with an embodiment of the invention;

Figure 15(b) shows the eyeglasses prescription determined in Figure 15(a) being stored on a smartphone;

Figure 15(c) shows transmission of the eyeglasses prescription to a head-mounted display for automatic adjustment of the head-mounted display;

Figure 16(a) shows a ray diagram for a standard head-mounted display and a user with perfect eyesight;

Figure 16(b) shows a ray diagram for a standard head-mounted display and a user with myopia;

Figure 16(c) shows a ray diagram for a head-mounted display comprising an additional lens according to an embodiment of the invention for a user with myopia;

Figure 16(d) shows a ray diagram for a head-mounted display with a translated lens according to an embodiment of the invention for a user with myopia;

Figure 17(a) shows a rear view of an embodiment of the invention in which additional adjustable lenses are provided in a head-mounted display to replace a user's eyeglasses;

Figure 17(b) shows a front view of an embodiment of the invention in which additional adjustable lenses are provided in a head-mounted display to replace a user's eyeglasses;

Figure 18(a) shows a rear view schematic of an embodiment of the invention in which lenses are translated in a head-mounted display to account for a user's eyeglasses prescription;

Figure 18(b) shows a rear view of the set-up of Figure 18(a) in a prototype model;

Figure 18(c) shows a rear view of the set-up of Figure 18(b) when housed in a head-mounted display;

Figure 19(a) shows a ray tracing simulation with 0 Dioptre correction;

Figure 19(b) shows a ray tracing simulation with -6 Dioptre correction;

Figure 20 shows a 2D overview of a Gullstrand eye model showing six refractive surfaces;

Figure 21 shows an optical system for a head-mounted display and a model of the human eye;

Figure 22 shows a device for adjusting the separation of a pair of lenses to account for interpupillary distance (IPD);

Figure 23(a) shows an uncorrected image of a simulated VR headset session for a user with -2D myopia in both eyes;

Figure 23(b) shows a corrected image of a simulated VR headset session for a user with -2D myopia in both eyes;

Figure 24(a) shows an uncorrected image of a simulated VR headset session for a user with -4D myopia in both eyes;

Figure 24(b) shows a corrected image of a simulated VR headset session for a user with -4D myopia in both eyes;

Figure 25(a) shows an uncorrected image of a simulated VR headset session for a user with -6D myopia in both eyes; and

Figure 25(b) shows a corrected image of a simulated VR headset session for a user with -6D myopia in both eyes.

Detailed description of certain embodiments

[00104] In accordance with a first embodiment of the present invention there is provided a method 10 for determining an optical axis of a lens, when the lens is provided at an unknown position and/or orientation, as illustrated in Figure 1(a). In particular, the method 10 comprises the following steps:

Step 12: obtaining at least one direct image of a background comprising identifiable features;

Step 14: providing a lens between the background and a camera such that light rays pass from the background, through the lens, before arriving at the camera;

Step 16: using the camera to obtain at least one indirect image comprising the background when viewed through the lens; and

Step 18: identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and

Step 19: using the correspondences from step 18 to determine an optical axis of the lens without aligning the optical axis of the lens with respect to an optical axis of the camera.

[00105] Thus, embodiments of the invention provide a method for determining an optical axis of a lens, when the lens is provided at an unknown position and/or orientation. The optical axis can then be used as input to a method for determining at least one property of a lens in accordance with a second aspect of the invention. The property may comprise optical power and/or physical characteristics such as lens thickness, curvature or refractive index.

[00106] In one embodiment, the method of determining at least one property of a lens comprises:

a) obtaining at least one direct image of a background comprising identifiable features;

b) providing the lens between the background and a camera such that light rays pass from the background, through the lens, before arriving at the camera, wherein the lens is located at a known or estimated position and/or orientation;

c) using the camera to obtain at least one indirect image comprising the background when viewed through the lens;

d) identifying at least one identifiable feature in the direct image and a corresponding identifiable feature in the indirect image; and

e) applying a ray-tracing technique to iteratively determine at least one parameter of a lens model, representing the at least one property of the lens being determined, so that rays traced towards the identifiable features in (a) and refracted according to the lens model pass through the corresponding identifiable features extracted in (d).

[00107] Thus, embodiments of the invention utilize correspondences between a background image viewed directly and the same image viewed through a lens to determine lens properties (including pose, optical power and physical characteristics). Advantageously, the embodiments of the invention can be implemented using a lightweight, contactless video set-up, with no dedicated lensmeter equipment required.

[00108] Embodiments of the invention may further comprise estimating the lens characteristics and the lenses may be those in eyeglasses. Consequently, it is possible to build a virtual model of the lens/eyeglasses taking into account the prescription for each lens. The virtual model can then be used to manipulate digital images (e.g. videos) in order to virtually insert, modify or remove prescription lenses/eyeglasses, taking into account the refractive effects of the lenses. This results in a more realistic simulation for virtual try-on and removal of eyeglasses from images or videos. Embodiments of the invention may further comprise adapting video streams to compensate for the refractive power of eyeglasses (e.g. to reduce or eliminate the refractive effect of the lenses).

[00109] Embodiments of the invention may comprise estimating an eyeglasses prescription using video inputs. This may comprise estimating a position and orientation of an optical axis and constructing a virtual model of each lens.

[00110] Embodiments of the invention relate to systems and methods that solve various problems as detailed below. It should be noted that although some embodiments of the invention are described in the context of eyeglasses lenses, the principles of the invention can be applied to any lenses, not just those in eyeglasses.

[00111] The description of the methods herein assumes a single lens. The method could be generalised to estimate lens pose and lens characteristics of a compound lens. A compound lens is a stack of lenses whose optical axes coincide. Therefore, the same technique for lens pose estimation can be used. For a compound lens, a parameter vector P used for the ray tracing based refinement would represent parameters for each individual lens in the lens compound. An objective function would be generalised in such a way that an incident ray is traced through each individual lens in the compound lens, until it hits a reference object. An optimisation procedure would assert that the lenses in the lens compound do not overlap.

Contactless measurement of eyeglasses lenses

[00112] One aspect of the invention addresses the problem of estimating eyeglasses prescriptions and eyeglasses lens characteristics based on observations of eyeglasses in videos. The method includes estimating lens properties by analysing how corrective lenses deviate light rays. A lightweight solution is proposed that can be implemented using a smartphone or tablet and a printed planar patterned background. No other equipment is required.

[00113] Prescription eyeglasses leverage refraction to correct a wearer's vision by using corrective lenses that bend light according to the wearer's prescription. Corrective lenses not only modify how the wearer sees the world, but also affect how the world sees the wearer. For example, the eyes of a far-sighted person wearing prescription eyeglasses appear larger, whereas those of a near-sighted person wearing prescription eyeglasses appear smaller.

[00114] In general, embodiments of this aspect of the invention are based on an analysis of refraction effects observed in a video in which corrective lenses are placed in front of a background from which identifiable points (or fiducial points) can be extracted. By analysing such a video, the method can estimate the lens characteristics (including lens pose) and eyeglasses prescription. The estimation of lens characteristics enables a more realistic simulation for virtual image manipulation in relation to eyeglasses. For example, when applied to the virtual try-on of eyeglasses, a pair of prescription eyeglasses can be virtually inserted into a video where the user is not wearing glasses, and the refraction effects that will result from the corrective lenses can be simulated according to the estimated prescription. Furthermore, embodiments of the invention can be applied to edit videos of a user wearing eyeglasses, either to virtually remove the eyeglasses or to manipulate the characteristics of the corrective lenses (e.g. to reduce the refractive effect).

[00115] Two embodiments of the invention are shown in Figures 2(a) and 2(b). In the embodiment of Figure 2(a) a pair of eyeglasses 2 to be measured are mounted in a stationary position in front of a background (or reference object) 3 while a video capture device (i.e. camera) 1 (having the eyeglasses and background viewed through the eyeglasses in its field of view) is moved in unconstrained movement (i.e. side to side and/or back and forth and/or up and down and/or with rotation). In Figure 2(b) the eyeglasses 2 themselves are moved in front of the background 3 while the background 3 and video capture device 1 remain stationary. In Figure 2(b) there is also a second planar board 4 clipped onto and mounted above the eyeglasses to help in tracking the position and orientation of the handheld eyeglasses 2. The second planar board 4 also provides a convenient surface for a user to hold the eyeglasses 2 without obscuring the lenses from the video capture equipment 1. In principle any one or more of the video capture device, the eyeglasses and the background may be moved during the video capture while the other components remain static. It should be noted that the reason for the movement of at least one component relative to the other components is to provide a larger number of possible images to use to find correspondences, than if all of the components remained stationary. However, in some embodiments it may not be necessary for one of the components to be moved relative to the others as long as at least one pair of images can be extracted with identifiable corresponding features viewed directly and indirectly through the eyeglasses.

[00116] The video capture device 1 may be constituted by any video capture equipment. This may typically be provided by a camera of a smartphone or tablet which can be used to capture and process the videos to determine the eyeglasses prescription. Alternatively, the videos may be sent to a computer or server (not shown) for processing.

[00117] The background 3 in Figures 2(a) and 2(b) is of known geometry and comprises a planar board having printed thereon a grid of coded markers (i.e. predefined black-and-white patterned squares) to provide fiducial points to aid identification of corresponding features when viewed with and without the eyeglasses. The correspondences are used to measure the amount of refraction induced by the corrective lenses as will be described in more detail below.

[00118] In other embodiments, other backgrounds may be employed (e.g. including different features, markers or texture to provide correspondences). In some embodiments the background may be non-planar (e.g. curved) and/or may be provided on an electronic display such as a tablet, smartphone or computer. As will be described below, in a particular embodiment, the background may be constituted by a user's face (i.e. the user may wear the eyeglasses while a measurement video is captured).

[00119] The background 3 is located in front of the video capture device 1 and behind the eyeglasses 2, such that light rays coming from the background 3 towards the video capture device 1 pass through the transparent lenses of the eyeglasses 2.

[00120] Figure 3 gives an overview of a prescription estimation process in accordance with an embodiment of the invention. Pixel correspondences extracted from input images (Figure 3(a)) are used to estimate the camera pose and extract the deviation of rays (Figures 3(b) and 3(b')) passing through each lens. The method estimates the lens pose (Figures 3(c) and 3(c')), followed by the lens power based on a thin lens model. A ray tracing-based approach then refines the lens pose and the lens power, and estimates the physical lens properties (Figures 3(d) and 3(d')) by optimizing a physically-based thick lens model. As illustrated in Figure 3(e), the output is a lens model comprising refined lens pose and power, surface properties, lens thickness and index of refraction. Each of these steps will be described in more detail below.

[00121] In Figure 3(a) a first video is captured of the background 3 alone (i.e. without eyeglasses 2 present) and a second video is captured of the background 3 as viewed through the eyeglasses 2. These videos may be captured using the set-up in either of Figures 2(a) or 2(b) (i.e. with one of the components being moved relative to the other components) or with all components relatively static as explained above. Multiple still images are then extracted from each of the videos (i.e. direct background images 20 and indirect eyeglasses images 22). In other embodiments, still images (i.e. photographs) may be provided from the outset rather than extracting images from videos. However, an advantage of using videos is that a large number of images can be extracted from slightly different viewpoints giving a greater opportunity for correspondences to be identified.

[00122] It should be understood that the images obtained with the eyeglasses present will include a refracted image of the background when viewed through the corrective lenses. The images will be processed as explained below in order to yield parameter values for a computerised model of each eyeglasses lens and an estimation of the eyeglasses prescription.

[00123] As illustrated in Figures 3(b) and (b'), the captured images are processed in order to measure the deviation of light rays passing through each one of the corrective lenses 24. One lens will be analysed at a time and the process repeated for the second lens independently.

[00124] In this particular embodiment, the intrinsic parameters of the image capture device (i.e. camera) are estimated in a calibration procedure conducted in a pre-processing step, as explained below, and each image is then undistorted to compensate for radial distortion using known techniques.

[00125] In an initial step, an estimation of the scene layout and geometry is made by extracting two-dimensional (2D) feature points on the background image and using these to estimate the pose (i.e. position and orientation) of the camera with respect to the background, the geometry of the background, and the pose of the eyeglasses with respect to the background in each image frame, as explained in more detail below. In particular, the pose of the camera may be obtained using a known structure-from-motion technique.

Camera calibration and 3D reconstruction of markers

[00126] In a particular embodiment of the invention, a few images of the reference object or background are captured before inserting the eyeglasses into the scene. Point correspondences are extracted by detecting and identifying AprilTag markers [Olson 2011] printed on the reference object. These correspondences are used to calibrate the camera automatically [Bouguet 2008] and to estimate the 3D position corresponding to each marker corner m. It is assumed that the camera intrinsic and radial distortion parameters are fixed during the capture. All images captured after the eyeglasses are inserted are then undistorted to compensate for radial distortion. The camera is considered as a pinhole thereafter.

Camera pose estimation in images with glasses

[00127] In one embodiment, the 2D position q_im of each AprilTag marker corner m is extracted from each image i captured after inserting the eyeglasses. In order to estimate the camera pose (i.e., position and orientation with respect to the reference object) in each image, these 2D positions are compared with the known 3D positions of the markers using OpenCV's solvePnP() function [Bradski 2000]. Since rays passing through a lens are deviated due to refraction, the corresponding markers are ignored by using a RANSAC process that flags them as outliers during the camera pose estimation.
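By way of example only, this pose estimation with outlier rejection can be sketched in Python using OpenCV; the function name, reprojection threshold and variable names below are illustrative assumptions and do not form part of the claimed method:

```python
import cv2
import numpy as np

# Illustrative sketch: estimate the camera pose from marker-corner
# correspondences, letting RANSAC flag refracted corners as outliers.
# object_points are the known 3D marker corners (Nx3), image_points their
# detected 2D positions q_im (Nx2), and K/dist the calibrated intrinsics.
def estimate_camera_pose(object_points, image_points, K, dist):
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        object_points.astype(np.float32),
        image_points.astype(np.float32),
        K, dist,
        reprojectionError=2.0,  # pixels; corners seen through a lens exceed this
        flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)      # rotation matrix from Rodrigues vector
    C = (-R.T @ tvec).ravel()       # camera centre in reference-object frame
    return R, tvec, C, inliers
```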

Extraction of ray correspondences

[00128] Each marker m visible behind a lens in an indirect image i with eyeglasses allows the extraction of an incident ray (C_r, u_r) and a point P_r along the ray refracted by the lens. These 3D points and vector are defined as follows:

a) ray origin C_r corresponds to the 3D position of the camera in the image i which captures this ray;

b) refracted point P_r is the 3D position of marker m; it corresponds to the intersection of the ray deviated by the lens with the reference object;

c) incident direction u_r is the unit vector originating from the camera centre C_r and passing through the centre of the corresponding pixel q_im in the image i with eyeglasses.

[00129] In every image, rays passing through each lens are separated from those intersecting the reference object without passing through a lens. K-means clustering is applied (with K = 3 for a pair of eyeglasses with two lenses), based on the ray incident direction and amount of deviation. For each lens, the ray correspondences passing through that lens are gathered into a set Γ; rays that do not intersect either lens are discarded.
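By way of illustration, such a clustering step might be sketched as follows; the feature construction, the deviation weighting and the choice of scikit-learn are assumptions made for the purpose of the example:

```python
import numpy as np
from sklearn.cluster import KMeans

# Illustrative sketch: separate rays passing through the left lens, the
# right lens, and no lens at all. Each ray is described by its incident
# direction u_r (unit 3-vector) and its deviation, approximated here by the
# distance between u_r and the direction v_r towards the refracted point.
def cluster_rays(u, v, n_clusters=3, w_dev=10.0):
    deviation = np.linalg.norm(v - u, axis=1)       # ~ deviation angle
    # w_dev is an assumed weight balancing direction against deviation
    features = np.column_stack([u, w_dev * deviation])
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    return labels  # the cluster with near-zero mean deviation is "no lens"
```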

Identifying Correspondences

[00130] For each image with eyeglasses, a partner image without eyeglasses is selected by identifying a background image that was captured from the most similar viewpoint as the eyeglasses image. The background image is then warped or adapted in order to align it to the eyeglasses image, resulting in a pixel-accurate alignment.

[00131] For each pair of images identified above (i.e. taken with and without eyeglasses from the same viewpoint), pixel correspondences are extracted across the two images in the region behind the corrective lens. This can be done either by using physical markers on the background, or by leveraging textural information (e.g. wrinkles on a face) and applying a dense correspondence extraction method such as optical flow. Unreliable correspondences (e.g. in uniform regions with no texture) are filtered out and discarded.
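A minimal sketch of such a dense correspondence extraction, assuming Farneback optical flow as the correspondence method and an illustrative gradient threshold for discarding textureless regions:

```python
import cv2
import numpy as np

# Illustrative sketch: dense pixel correspondences between the aligned
# background image and the eyeglasses image via Farneback optical flow,
# keeping only pixels with enough local texture for the flow to be reliable.
def dense_correspondences(img_no_glasses, img_glasses, min_grad=5.0):
    g0 = cv2.cvtColor(img_no_glasses, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(img_glasses, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    gx = cv2.Sobel(g0, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(g0, cv2.CV_32F, 0, 1)
    reliable = np.hypot(gx, gy) > min_grad   # min_grad is an assumed threshold
    return flow, reliable
```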

[00132] Further details of the process used in specific embodiments of the invention are described in later sections.

Lens Model

[00133] In some embodiments of the invention, a computerised model of each lens of the eyeglasses is created. The following description focuses on single-vision eyeglasses lenses, which have the same corrective properties over the entire area of each lens. These lenses are commonly used to correct for near-sightedness (myopia), far-sightedness (hyperopia), and/or astigmatism in the eye. Bi-focal and progressive lenses could be handled with an extension of this model, as will be understood by those skilled in the art, but are not considered here.

[00134] The model is based on an ophthalmic lens made of transparent material with two refractive surfaces: a convergent (convex) front surface and a divergent (concave) rear surface. The difference in curvature between the front and rear surfaces leads to the corrective power of the lens. Each of the front and rear surfaces is either spherical (i.e. a portion of a sphere defined by a center and radius) or toroidal (i.e. a portion of a torus defined by a center, a major radius, a minor radius, and an orientation). The present model ignores aspherical and atoroidal lenses, although these could again be modeled by those skilled in the art.

[00135] The model of the spherical lens incorporates the following parameters: the position of the lens (e.g. the 3D position of the back vertex (3 parameters in meters), which corresponds to the intersection of the back surface with the optical axis), the orientation of the optical axis (2 parameters in degrees), the curvature of the front and rear surfaces (e.g. the sphere power of the front and back surfaces (2 parameters in diopters)), the thickness of the lens (1 parameter in meters), and the index of refraction of the lens material (1 dimensionless parameter, typically in the range from 1.49 to 1.74). The cylinder power of the front and back surfaces (2 parameters in diopters) and the cylinder axis (1 parameter in degrees) may also be incorporated into the model to provide for lenses that correct for astigmatism. These parameters allow for the virtual construction of a wide variety of eyeglasses lenses consisting of a front surface and a back surface, where each of the surfaces can be convex, concave, planar, spherical, cylindrical or toroidal.
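By way of example only, the parameter set described above can be collected in a simple data structure; the field names and default values below are illustrative and do not form part of the model definition:

```python
from dataclasses import dataclass
import numpy as np

# Illustrative container for the lens parameter vector described above.
@dataclass
class LensModel:
    back_vertex: np.ndarray          # 3D position of back vertex V_B (meters)
    axis: np.ndarray                 # optical axis direction (unit vector, 2 DoF)
    front_sphere_power: float        # S_F, diopters
    back_sphere_power: float         # S_B, diopters
    back_cylinder_power: float = 0.0 # Z_B, diopters (astigmatism correction)
    cylinder_axis_deg: float = 0.0   # A, degrees
    thickness: float = 0.001         # d, meters (centre thickness)
    refractive_index: float = 1.523  # n, dimensionless (e.g. crown glass)
```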

[00136] Notably, the model (along with its parameters) is physically based such that it can exhibit the same optical properties as the actual lenses observed in the videos described above, and can be virtually placed at the same location and orientation as the observed lenses.

[00137] The lens characteristics of the eyeglasses can be estimated based on the pose and an initial guess for the refractive power as will be explained in more detail below.

Comparison of methods for extracting point correspondences

[00138] The embodiments described herein use a planar board of coded markers for the background image. This enables a robust extraction and tracking of reliable correspondences in multiple images. In other embodiments the planar board could be replaced with a textured background and the correspondence extraction and tracking could be performed by using optical flow which is a known technique used to estimate correspondences between different images, or by other correspondence extraction methods.

[00139] Table 1 below provides a comparison of details for the embodiments described using a planar board with coded markers for the background (both for the case shown in Figure 2(a) where the eyeglasses are static and the camera is moved and for the case shown in Figure 2(b) where the camera is static and the eyeglasses are moved), for a textured background observed from a moving camera and for the case where the eyeglasses are worn on the user and the user's face constitutes the background.

[00140] The planar board is made from an acrylic board of A4 paper size, on which black-and-white patterned squares (also referred to as AprilTags) are printed such that they can be reliably detected and recognised under a variety of viewpoints and transformations. Notably, the corners of each square marker provide very accurate point correspondences (with an error <1 mm), such that they can be used to measure the amount of refraction introduced by the corrective lenses in accordance with the methods described herein.

[00141] For the case illustrated in Figure 2(b), the additional board attached to the eyeglasses is 20cm by 7cm in size and includes further coded markers in order to track the position of the eyeglasses which are moved in each frame of the video.

[00142] In the case where the eyeglasses were worn by a user one video was captured with the user wearing the eyeglasses and one video was captured of the user's face without the eyeglasses. The videos in this case could be captured either with a static camera and the user rotating his/her head while maintaining a static facial expression, or with the user remaining static and the camera being rotated around the user.

[00143] In one embodiment, face tracking software (Faceshift) was used in order to track the camera pose with respect to the user's face in each frame, along with the face geometry. As Faceshift requires depth in addition to color input images, an RGBD (red+green+blue+depth) sensor was used to capture the input. However, in other embodiments, other face tracking software may be employed using the same or a different type of input image.

Table 1 : Comparison of methods for extracting point correspondences for different embodiments

Table 2: Errors on real data based on mean error. Missing cells correspond to the real capture scenario for which the pose of the lenses are unknown.

Detailed Example

[00144] In this embodiment we describe an image-based approach for estimating the pose, optical power, and physical properties of a spherical lens. Spherical lenses in eyeglasses are used to correct refractive errors such as myopia (near-sightedness) and hyperopia (far-sightedness). A spherical lens consists of two surfaces, at the front and back, with each of these being a portion of a sphere or plane. Both surfaces are rotationally symmetric around the lens optical axis. The corrective power of a lens, formally called "back-vertex power" as it is measured from the back surface of the lens, entirely depends on a few physical properties of the lens: its index of refraction, thickness, and curvature of its front and back surfaces.

[00145] It will be understood that in other embodiments, the principles of the present invention can be applied to other types of lenses (e.g. toroidal, cylindrical, aspherical, atoroidal) and the method for locating the optical axis of the lens will be adapted accordingly, for example, by analysing the symmetrical nature or other properties of the lens.

[00146] As above, ray optics is employed where light propagation is represented in terms of individual rays. The convention used in ray tracing is adopted where light rays travel backwards, from the camera towards an object surface. Each stage of the method is based on analysing the observed deviation of rays that pass through the prescription lenses, using identifiable (or fiducial) correspondences extracted from the captured images and leveraging multiple viewpoints. The present embodiment employs an estimation process similar to that illustrated in Figure 3.

[00147] The proposed approach takes as input a sequence of images captured with a handheld camera from multiple viewpoints. Identifiable correspondences are used to estimate the camera pose, along with directions for the rays passing through each corrective lens. Rays are processed as passing through each lens independently. The method locates the lens' optical axis, which defines the pose of the lens up to an unknown translation along the optical axis. The position of the lens along its optical axis is then estimated, which allows lens power to be inferred with a thin lens model as will be described below. Finally, the lens pose and prescription are refined by fitting a physically-based lens model, using an approach based on ray tracing.

Capture and notations

[00148] The aim is to extract a set of rays for which the deviation introduced by the prescription lenses can be measured. To do so, the eyeglasses are placed in front of a reference object or background, and fiducial (i.e. identifiable) correspondences are identified across multiple images. In practice the eyeglasses can be placed on top of a small object, in order to create a distance between the lens surface and the reference object. A spacer, which may be in the form of a water bottle cap or toothpaste cap, can be employed for this purpose. The actual spacer object used does not matter: in experiments, good results have been obtained with a variety of household items.

[00149] A typical eyeglasses prescription includes a sphere power S (in diopters), a cylinder power Z (in diopters), and a cylinder axis A (in degrees) for each eye. These values are necessary to construct corrective lenses appropriate for a patient. The simplest lenses are rotationally symmetric and exhibit only sphere power (S ≠ 0, Z = 0); these lenses are typically used for correcting myopia or hyperopia. On the other hand, lenses that correct for astigmatism (Z ≠ 0) have a power that varies across meridians. These lenses are characterized by two principal meridians, which cross perpendicularly at the center of the lens: i) the sphere principal meridian M_0, which contains the sphere power S of the prescription; and ii) the combined principal meridian M, which contains the combined sphere and cylinder power S + Z of the prescription. The lens orientation is specified by the cylinder axis A in the prescription, which corresponds to the angle between the sphere meridian M_0 and the horizontal direction.

[00150] The corrective power of an ophthalmic lens depends entirely on a few physical properties of the lens: its index of refraction, thickness, and the curvatures of its front and back surfaces. These properties are carefully selected for each patient, in order to correct focusing defects of individual eyes. The front surface of modern lenses is usually spherical, while the back side may be spherical or toroidal if the prescription includes cylinder power (Z≠0). Figure 4(a) illustrates the geometry of such an ophthalmic lens 120 which comprises a spherical front surface 122 and a toroidal back surface 124. For more details on toroidal lenses and eyeglasses prescriptions, see [Meister and Sheedy 2000]. In the present embodiment, an ophthalmic lens is represented by a parametric model which combines the physical properties and the pose of the lens. The model parameters, concatenated in a parameter vector P, are listed in Table 3.

Lens pose                        Physical properties

optical axis direction a         front sphere power S_F
position of back vertex V_B      back sphere power S_B
cylinder axis A                  back cylinder power Z_B
                                 index of refraction n
                                 lens thickness d

Table 3: Set of parameters P determining the physical properties and position of a lens

[00151] Parameter vector P includes parameters which describe the position and orientation of the lens. The lens optical axis is central in defining the lens pose; it traverses the lens at its center, passing through front vertex V_F and back vertex V_B as shown in Figure 4(b). The pose of a rotationally symmetric lens is uniquely defined by the optical axis direction (unit vector a) and the position of the back vertex V_B. Non-rotationally symmetric lenses (e.g. toroidal lenses) require an extra parameter to fully define the lens orientation. In this case, the cylinder axis A (in degrees) can be employed, which describes the angle between the sphere meridian M_0 and the horizontal direction.

[00152] Parameter vector P includes five parameters which describe the lens shape and material: the refractive powers of the front and back surfaces S_F, S_B, and Z_B, in diopters (D); lens thickness d in meters; and the index of refraction n. A direct relation exists between the surface radii and the surface power [Meister and Sheedy 2000]: a spherical surface (e.g. the front surface) is parameterized by the sphere radius, which corresponds to (1 − n)/S_F; while a toroidal surface (e.g. the back surface) is described by two radii r_1 = (n − 1)/S_B and r_2 = (n − 1)/(S_B + Z_B), where r_1 is the radius of the torus and r_2 is the radius of its tube. Thickness d is measured at the center of the lens, as illustrated in Figure 4(b).
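The power-to-radius relations above translate directly into code; a minimal illustrative sketch (the degenerate cases of zero surface power, handled later in this description, are omitted here):

```python
# Illustrative helpers: convert surface powers (diopters) to surface radii
# (meters), following the relations given above.
def front_sphere_radius(S_F, n):
    return (1.0 - n) / S_F            # spherical front surface

def back_torus_radii(S_B, Z_B, n):
    r1 = (n - 1.0) / S_B              # radius of the torus
    r2 = (n - 1.0) / (S_B + Z_B)      # radius of its tube
    return r1, r2
```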

[00153] Formally, the aim is to extract for each lens a set of ray correspondences Γ, where each ray r ∈ Γ is defined by origin C_r and incident direction u_r. A ray originating from camera position C propagates in a straight line and would intersect the surface of a reference object at point Q_r, if it were not deviated by a lens. When prescription glasses are inserted into the scene, refraction effects deviate the incident ray and cause the refracted ray to intersect the reference object at point P_r. These notations are illustrated in Figure 4(b), which shows the path of a single ray originating from the camera centre C and travelling towards direction u. The ray is deviated by a corrective lens and intersects the reference object at point P. In the absence of a lens, the ray would travel in a straight line and intersect the reference object at point Q. The lens pose is defined by its optical axis (O, a).

[00154] Images are captured from multiple viewpoints, by moving the camera while the positions of the glasses and reference object are fixed. Identifiable correspondences on the reference object/background are used to estimate the camera pose in each view, and to measure (C_r, u_r, P_r) for each ray r, as described above.

[00155] In the present setup, the reference object consists of a planar paper sheet of known size (e.g., A4), onto which fiducial patterns are printed. While any textured object of known size could be used for extracting identifiable correspondences, along with a correspondence extraction method, the Applicants found that AprilTag markers [Olson 2011] were consistently detected and recognised under a variety of viewpoints, lighting conditions, and distortions. This setup therefore results in a simple, accessible, and repeatable approach for extracting ray correspondences.

Lens pose estimation

[00156] The first goal is to locate the lens optical axis (O, a), which is defined by a unit direction a and a point O lying somewhere on this axis. The optical axis describes the pose of the lens, up to an unknown translation along its optical axis. Note that an estimate of the lens position along the axis will be obtained by following the procedure outlined in a later section.

[00157] In the following, light rays are considered to originate from the camera position and are traced towards the reference object. Each ray is deviated twice due to refraction, when it intersects the front and the back surfaces of the lens. According to Snell-Descartes' law, at each refractive surface the deviated ray remains within the plane of incidence, which is spanned by the incident direction and the local surface normal. For most rays, the normal at the intersection with the second refractive surface is not included in the plane of incidence at the first refractive surface; as a result, rays entering and exiting the lens are generally not coplanar.
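By way of illustration, Snell-Descartes refraction at a single surface can be written in vector form as follows; this sketch uses the standard vector formulation and the function name is an assumption:

```python
import numpy as np

# Illustrative sketch of Snell-Descartes refraction, as applied when tracing
# a ray through the front and back lens surfaces. d is the unit incident
# direction, n the unit surface normal facing the incoming ray (d . n < 0),
# and eta = n1 / n2 the ratio of refractive indices across the surface.
def refract(d, n, eta):
    cos_i = -np.dot(d, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None                                 # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    # The refracted direction stays in the plane of incidence spanned by d and n.
    return eta * d + (eta * cos_i - cos_t) * n
```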

[00158] A closer look at the geometry of rotationally symmetric lenses reveals some specific planes which contain both incident rays and the deviated rays exiting the lens after refraction. A property of rotationally symmetric surfaces is that any plane M containing the object's symmetry axis will also include the local surface normal at every point where M intersects the surface. As a result, incident rays included within a plane containing the optical axis of a rotationally symmetric lens remain within the same plane throughout their trajectory. In optics, such a plane is traditionally called a meridional plane.

[00159] A key insight of the present approach is that meridional planes can be identified based on the ray correspondences extracted from captured images, and that combining meridional planes corresponding to multiple viewpoints allows the location of the lens optical axis to be identified.

[00160] It should be noted that the use of meridional planes in the present embodiments is appropriate due to the spherical nature of the lens. When the lens is non-spherical other techniques will be required, for example, based on the symmetrical properties of the lens in question.

Per-image estimation of meridional plane

[00161] For an off-axis camera position C_i, corresponding to image i, there exists a single meridional plane M_i which contains both the camera position and the unknown optical axis. Since C_i is known, identifying M_i reduces to finding its normal n̂_i. Following the above discussion on rotationally symmetric lenses, the following condition can be defined:

Condition 1: Let T denote the plane with normal n̂ passing through C_i. If T is the meridional plane for image i, then for every incident ray originating from C_i and propagating within T, the corresponding deviated ray also propagates within the same plane.

[00162] For a ray correspondence r ∈ Γ_i, the following angles can be defined under the small-angle approximation:

• α_{r,n̂} ≈ sin α_{r,n̂} = u_r · n̂ is the angle between the candidate meridional plane and the incident ray with direction u_r;

• β_{r,n̂} ≈ sin β_{r,n̂} = v_r · n̂, with v_r = (P_r − C_r)/‖P_r − C_r‖, is the angle subtended by the refracted point P_r on the reference object and its projection onto the candidate plane, as viewed from the ray origin C_r.

[00163] Figure 5 illustrates these angles for two rays, and shows the ground-truth meridional plane M_i that includes C_i and the optical axis (O, a). Since the incident direction of the first ray 60 is included in the meridional plane (α_{1,n̂} = 0), its refracted direction remains within that plane and β_{1,n̂} = 0. On the other hand, since the second ray's 62 incident direction is not contained within the plane (α_{2,n̂} ≠ 0), the corresponding refracted ray may propagate in other directions (β_{2,n̂} ≠ 0).

[00164] A different candidate plane 70 is shown in Figure 6. In this configuration, both incident rays are included in the candidate plane (α_{1,n̂} = α_{2,n̂} = 0). However, the corresponding deviated rays do not propagate within that plane (β_{1,n̂} ≠ 0 and β_{2,n̂} ≠ 0), and as a result, this candidate plane should be rejected according to Condition 1.

[00165] An energy that represents, for any candidate plane T with normal n̂, the cost of violating Condition 1 is defined as:

$$E_i(\hat{n}) = \sum_{r \in \Gamma_i} K\!\left(\frac{\alpha_{r,\hat{n}}}{\sigma_{\text{incident}}}\right)\big((v_r - u_r) \cdot \hat{n}\big)^2 \qquad (1)$$

where K is a Gaussian kernel of the form

$$K(x) = c_K \exp\!\left(-\tfrac{1}{2}x^2\right) \qquad (2)$$

c_K is a normalization constant so that the sum of the K(·) kernels over all rays equals one, and σ_incident is the bandwidth of the Gaussian kernel (σ_incident = 3 × 10⁻³ in the results presented).

[00166] Given a candidate meridional plane with normal n̂, the Gaussian kernel K(·) selects only incident rays that are nearly included in the candidate plane, while ignoring all other rays. The second factor represents the angular deviation of the refracted ray away from T. The energy E_i computes the weighted sum of the angular deviations of all rays, resulting in a higher cost when incident rays included in the candidate plane are refracted away from the plane of incidence.
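A minimal sketch evaluating the energy of Equations 1 and 2 for one image, assuming the rays are supplied as arrays of unit incident directions u and refracted-point directions v:

```python
import numpy as np

# Illustrative sketch of the per-image energy E_i for a candidate plane
# normal n_hat, given incident directions u (Nx3) and refracted-point
# directions v (Nx3) for the rays observed in this image.
def meridional_energy(n_hat, u, v, sigma_incident=3e-3):
    alpha = u @ n_hat                           # angle of incident ray to plane
    w = np.exp(-0.5 * (alpha / sigma_incident) ** 2)
    w /= w.sum()                                # c_K: kernel weights sum to one
    beta = (v - u) @ n_hat                      # deviation away from the plane
    return float(np.sum(w * beta ** 2))
```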

[00167] Equation 1 describes an energy that could be minimised independently for each image, in order to find a good meridional plane. However, several candidate planes may have a low and similar cost when the camera center lies close to the optical axis. When C_i lies exactly on the axis (O, a), infinitely many meridional planes exist for that image: they form a pencil of planes rotated around the optical axis. To alleviate these ambiguous cases, observations from multiple viewpoints are leveraged in order to simultaneously estimate the location of the optical axis.

[00168] The next goal is to find the optical axis direction a, as well as a point O lying on the optical axis, so that the corresponding meridional plane M_i satisfies Condition 1 in every image i. By definition, the meridional plane M_i in image i includes both the camera position C_i and the optical axis, and its normal can be defined as:

$$\hat{n}_i = \frac{(O - C_i) \wedge a}{\|(O - C_i) \wedge a\|} \qquad (3)$$

with ∧ indicating the cross-product of two vectors. E_i is accumulated over all images and the total energy is minimised with respect to the optical axis direction and position:

$$\operatorname*{argmin}_{a,\,O} \sum_i E_i(\hat{n}_i) = \operatorname*{argmin}_{a,\,O} \sum_i E_i\!\left(\frac{(O - C_i) \wedge a}{\|(O - C_i) \wedge a\|}\right) \qquad (4)$$

[00169] Without loss of generality, O can be constrained to lie on the plane z = 0. This results in a 4D non-linear optimisation that can be initialised with a coarse grid search.

[00170] The optical axis direction is initialized as the unit vector orthogonal to the plane containing the markers, in the capture setup described above. The position of O is initialized in two steps: (i) in each image i, combine multiple planes with low cost E_i(n̂) in order to identify the direction of minimal flow d_i, which is included in these planes and thus orthogonal to their normals; (ii) find the least-squares intersection of the directions d_i across all input images.
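By way of example only, the 4D optimisation of Equation 4 might be sketched as follows, reusing the meridional_energy sketch above; the angle parameterization of the axis and the choice of the Nelder-Mead method here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative sketch: optimise the axis direction (two angles) and the
# point O constrained to the plane z = 0, i.e. four unknowns in total.
def axis_from_params(theta, phi):
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def total_energy(x, cameras, rays_per_image):
    a = axis_from_params(x[0], x[1])
    O = np.array([x[2], x[3], 0.0])
    E = 0.0
    for C_i, (u, v) in zip(cameras, rays_per_image):
        n = np.cross(O - C_i, a)          # normal of meridional plane M_i, Eq. 3
        n /= np.linalg.norm(n)
        E += meridional_energy(n, u, v)   # per-image energy sketched above
    return E

def locate_optical_axis(x0, cameras, rays_per_image):
    # x0 comes from the coarse grid search / initialisation described above.
    res = minimize(total_energy, x0, args=(cameras, rays_per_image),
                   method="Nelder-Mead")
    return res.x
```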

[00171] Spherical and aspherical lenses are rotationally symmetric (i.e. they contain an infinite number of planes of symmetry, which all intersect at the lens optical axis). In contrast, toroidal and atoroidal surfaces are symmetrical with respect to only two planes of symmetry, which coincide with the principal meridians of the lens. These planar symmetries may enable a method for identifying the optical axis and principal meridians of toroidal/atoroidal lenses, in a manner similar to that for the rotationally symmetric lenses above. For the time being, for toroidal lenses the initial estimate of the optical axis is used as input to the following section, and the optimization step of Equation 4 is omitted.

Thin lens model

[00172] The previous section detailed how to locate the optical axis of a rotationally symmetric lens based on ray correspondences. A method for estimating the lens position along its optical axis and inferring an initial guess for the lens power is now described, based on a thin lens model. These steps may be performed for non-spherical lenses, for example, where the optical axis has been located through other techniques.

Lens position along axis

[00173] The simplest way to model a spherical lens is to use a thin lens model. Such a model considers the centre thickness and curvature of a lens to be negligible, drastically simplifying calculations. Under the thin lens model, the front vertex V_F and back vertex V_B coincide with the thin lens optical centre. While this model is approximate, it provides a first approximation of the lens position and power that can be refined below.

[00174] Any light ray passing through the optical center L of a thin lens (i.e., the intersection of the optical axis with the lens plane) remains undeviated after refraction through the lens. This property is used in order to locate the position of L along the optical axis. Since incident rays intersecting the lens near L should have a small deviation, measuring such deviation provides the following energy:

$$E_{\text{centre}}(L) = \sum_{r \in \Gamma} K\!\left(\frac{\angle(u_r, \overrightarrow{C_r L})}{\sigma_{\text{incident}}}\right)\big(\angle(u_r, v_r)\big)^2 \qquad (5)$$

[00175] with $\angle(u_r, \overrightarrow{C_r L}) \approx \sin\angle(u_r, \overrightarrow{C_r L}) = \left\|\left(I - u_r u_r^{\top}\right)\frac{\overrightarrow{C_r L}}{\|\overrightarrow{C_r L}\|}\right\|$. The Gaussian kernel K(·) selects only incident rays close to L, while the second factor accumulates the deviation of those rays. Minimising this energy with respect to the position of L yields the position of the thin lens optical centre. Furthermore, L can be constrained to lie on the optical axis (O, a) estimated above. The output of this process is the position of the optical centre along the lens optical axis, thus completing the determination of the lens pose.
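A sketch of the resulting one-dimensional search along the estimated axis; the kernel bandwidth and search range used here are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch of Equation 5: slide a candidate optical centre
# L = O + t*a along the estimated axis and keep the position where rays
# aimed towards L show the least deviation. C (Nx3) are ray origins, u and
# v (Nx3) the incident and refracted-point unit directions.
def centre_energy(t, O, a, C, u, v, sigma=3e-3):
    L = O + t * a
    to_L = L - C
    to_L /= np.linalg.norm(to_L, axis=1, keepdims=True)
    # sin of the angle between each incident ray u_r and the direction to L
    perp = to_L - np.sum(u * to_L, axis=1, keepdims=True) * u
    angle_to_L = np.linalg.norm(perp, axis=1)
    w = np.exp(-0.5 * (angle_to_L / sigma) ** 2)
    w /= w.sum()
    deviation = np.linalg.norm(v - u, axis=1)   # ~ angle(u_r, v_r), small angles
    return float(np.sum(w * deviation ** 2))

def locate_optical_centre(O, a, C, u, v, t_min=0.0, t_max=0.5):
    res = minimize_scalar(centre_energy, bounds=(t_min, t_max),
                          args=(O, a, C, u, v), method="bounded")
    return O + res.x * a
```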

Optical power of the thin lens

[00176] With the lens pose fully determined, an estimate of the spherical power of the lens can be made. This is a difficult problem because, given the same lens, the power varies depending on the reference position it is measured from. In ophthalmic optics, the back vertex power measured from the intersection of the lens optical axis with the back surface is generally used (for a thin lens, it is calculated from the position of the thin lens optical centre).

[00177] As a first approximation of the lens power, the spherical power can be estimated using a thin lens model placed at optical center L. Having estimated the lens pose above allows the selection of the meridional rays r ∈ M_i that lie in the meridional plane of any image. The coordinates are transformed into a meridional coordinate system in which the z axis corresponds to the optical axis, the origin is the thin lens optical center L, and the camera C_i lies on the plane (L, y, z).

[00178] Restricting the analysis to paraxial rays, which are the meridional rays with a small incident angle θ_Cr with the optical axis, it is possible to represent the propagation of a light ray and its deviation by the thin lens with power S_0 as a sequence of matrix operations [Pedrotti et al. 2006]:

$$\begin{bmatrix} \hat{y}_{P_r} \\ \hat{\theta}_{C_r} \end{bmatrix} = \begin{bmatrix} 1 & z_{P_r} - z_L \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ -S_0 & 1 \end{bmatrix} \begin{bmatrix} 1 & z_L - z_{C_r} \\ 0 & 1 \end{bmatrix} \begin{bmatrix} y_{C_r} \\ \theta_{C_r} \end{bmatrix} \qquad (6)$$

where ŷ_Pr represents the intersection of the refracted ray with the plane z = z_Pr and θ̂_Cr the angle of the refracted ray with the optical axis, as predicted by the thin lens model. All coordinates are expressed in the meridional coordinate system.

[00179] Since point P_r is known from the captured ray correspondences Γ, the lens power S_0 can be estimated by minimising the difference between the observed position y_Pr and the position ŷ_Pr predicted by the model:

$$\operatorname*{argmin}_{S_0} \sum_{r \in M_i} \left(\hat{y}_{P_r} - y_{P_r}\right)^2 \qquad (7)$$

$$= \operatorname*{argmin}_{S_0} \sum_{r \in M_i} \Big(S_0 (z_{P_r} - z_L)\big(y_{C_r} + \theta_{C_r}(z_L - z_{C_r})\big) - \theta_{C_r}(z_{P_r} - z_{C_r}) - (y_{C_r} - y_{P_r})\Big)^2 \qquad (8)$$

This simple formulation provides a first estimate of the sphere power S_0, based on the deviation of meridional rays. This estimate is based on a thin lens model and is refined in the next section.
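Because the residual in Equation 8 is linear in S_0, the least-squares estimate has a closed form; a minimal sketch, with inputs expressed in the meridional coordinate system described above:

```python
import numpy as np

# Illustrative closed-form estimate of the thin lens power: the residual of
# Equation 8 has the form (S_0 * a_r - b_r), so ordinary least squares gives
# S_0 = sum(a*b) / sum(a*a). Inputs are per-ray arrays of paraxial
# quantities (z along the optical axis, distances in meters).
def thin_lens_power(y_C, theta_C, z_C, z_P, y_P, z_L):
    a = (z_P - z_L) * (y_C + theta_C * (z_L - z_C))
    b = theta_C * (z_P - z_C) + (y_C - y_P)
    return float(np.dot(a, b) / np.dot(a, a))   # S_0 in diopters
```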

Raytracing-based optimisation

[00180] The previous sections describe a method to compute the pose of a lens, and the associated lens power, based on meridional rays, which are a subset of the rays observed in the image data. Based on this estimated pose and lens power, a physically based approach can be employed to construct a lens in 3D with an improved pose and lens power. In addition, its surface properties can be determined so as to be close to those of the lens observed during capture of the images, which is useful in a variety of applications as discussed below. The method comprises tracing rays through a virtual 3D lens model and measuring the differences, based on a specified error metric, to the correspondences extracted from the image data.

[00181] In the following, a physically based approach to construct a 3D lens from image correspondences is described. The approach is based on tracing rays through a virtual 3D lens model and measuring differences to the correspondences through a specified error metric. The method outputs the power, pose, and surface characteristics of the observed lens. In order to calculate intersections of geometric rays with the parametric lens described above, a strategy common in ray tracing [Glassner 1989] is followed, whereby each lens surface is described analytically by its algebraic equation. Ray/surface intersections are then performed by substituting the parametric ray equation into the respective surface equation. The roots of the resulting univariate equation correspond to the distances from the ray origin to the respective surface intersections. The intersection points are determined by evaluating the parametric ray equation at the computed roots. Surface normals are computed by evaluating the partial derivatives of the algebraic formulation.
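By way of illustration for the spherical case, this ray/surface intersection strategy might be sketched as follows; the quadratic solution and root selection shown are standard, and the function name is an assumption:

```python
import numpy as np

# Illustrative sketch: substitute the parametric ray o + t*d (d unit length)
# into the sphere's algebraic equation ||x - c||^2 = r^2 and solve the
# resulting quadratic in t; the surface normal is the normalized gradient
# of the algebraic equation at the hit point.
def intersect_sphere(o, d, c, r):
    oc = o - c
    b = 2.0 * np.dot(d, oc)
    cc = np.dot(oc, oc) - r * r
    disc = b * b - 4.0 * cc            # leading coefficient is 1 (d is unit)
    if disc < 0.0:
        return None                    # ray misses the sphere
    t = (-b - np.sqrt(disc)) / 2.0     # nearest root along the ray
    if t < 0.0:
        t = (-b + np.sqrt(disc)) / 2.0 # nearest root may lie behind the origin
        if t < 0.0:
            return None
    p = o + t * d
    normal = (p - c) / r               # gradient of ||x - c||^2 - r^2, normalized
    return p, normal
```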

Energy Function

[00182] The aim of the optimization algorithm, described below, is twofold: to estimate the lens power, and to search for a lens geometry and lens pose which best fit the observed data. To achieve this, an energy function f(P, Γ) is defined according to Equation 9:

$$f(P, \Gamma) = \sum_{i} \left\| P_i - P_i' \right\|^2 \qquad (9)$$

where P_i is the deviated point observed in an image for the ray $\overrightarrow{C_i Q_i}$, and P_i' is the deviated point computed by intersecting the ray $\overrightarrow{C_i Q_i}$ with the lens described by parameter vector P. Note that this involves the application of Snell's law (as illustrated in Figure 4(b)) when the ray enters and exits the front and back surfaces, respectively. The energy function in Equation 9 evaluates to zero when the lens described by P refracts rays such that the deviated points P_i' equal their corresponding observed points P_i.

[00183] Analyzing the energy landscape of f(P, Γ) indicates the existence of a single global minimum. Figures 7(a) and 7(b) show a visualization of the energy function for a bispherical lens, placed (for illustrative purposes) at its true pose, true index of refraction, and true lens thickness. As illustrated, the energy function f(P, Γ) is plotted along the front (F) and back (B) surface power in the range [-5D, +5D]. In this case, the lowest point of the white valley (black point) corresponds to the global minimum. A local minimum is the lowest point of a similar valley of f(P, Γ), where lens pose, index of refraction, or lens thickness differ from the true parameters.

[00184] In the case of a simpler lens, e.g. a spherical lens where fewer ambiguities between lens parameters exist, these properties may be determined through a single call to a nonlinear minimization routine. This has been demonstrated in the work of [Ben-Ezra and Nayar 2003]. However, such a strategy will not be successful in the present setup, due to the higher-dimensional parameter model which allows the specification of oriented toroidal lenses. This generality makes the energy landscape f(P, Γ) more complex, and the problem is further amplified by sparse discrete input.

[00185] Identifying lens parameters located close to the global minimum for such a lens is difficult and requires an accurate starting guess. Otherwise, a single call to a nonlinear minimization routine will determine a lens pose different from the true location. The result is a lens whose power and surface properties differ from those of the actual lens. Thus, the solution will sit close to the lowest point of a valley corresponding to a local minimum. Note that an accurate guess cannot be provided in the present setup, as the user places an unknown lens without special attention in front of a pattern, such that both are roughly parallel.

Algorithm

[00186] An algorithm to estimate the parameters in P given the image correspondences Γ is designed based on two key observations discussed below.

[00187] 1. Given an eyeglasses prescription, a family of lenses corresponding to this prescription exists, where each lens in the family has the same refractive power but a different front and back surface power. More formally, if S is the prescription's sphere power and S_F is a specific front surface power (both in diopters), the back surface power S_B can be determined through the thick lens formula [Meister and Sheedy 2000], i.e.,

$$S = \frac{S_F^2\, d + S_F\, n + S_B\, n}{n} \qquad (10)$$
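By way of example only, Equation 10 can be rearranged to give S_B from a chosen S_F; a minimal illustrative helper, under the assumption that Equation 10 takes the approximate back-vertex form shown above:

```python
# Illustrative helper inverting Equation 10: given the prescription sphere
# power S (diopters), a chosen front surface power S_F (diopters), centre
# thickness d (meters) and index of refraction n, return the back surface
# power S_B (diopters).
def back_surface_power(S, S_F, d, n):
    return S - S_F - (d / n) * S_F ** 2
```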

[00188] While each lens in the family has the same optical power, each one has its individual refractive characteristics. For instance, any point on the valley shown in Figures 7(a) and 7(b) corresponds to a lens, which has the same power as the observed lens. However, the lowest point of the valley corresponds to the actual observed lens. Identifying the correct front surface is key to finding lens parameters close to the parameters defining the observed lens.

[00189] 2. In a case where 1) the observer is aligned with the optical axis, 2) the distance δ_o to the observer is known, and 3) the distance δ_i to the background is known, the lens equation can be used to estimate the focal power f, i.e., 1/f = 1/δ_o + 1/δ_i. If any of these distances is not known, determining f is difficult. An additional ambiguity occurs for toroidal lenses: without further formalism, the thin lens formulation can be used to show that the estimated cylinder power is influenced when the lens is tilted. This is illustrated in Figure 8, in which the top row shows the observed lens flow or power 130 when the optical axis of a toroidal lens 132 is aligned with an observer 134 and the bottom row shows the observed lens flow or power 136 when the lens 132 is tilted with respect to the observer 134. It is therefore apparent that the resulting lens flow/power is reduced to cylinder power when a toroidal lens is tilted. Thus, if the pose is inaccurate, the estimated lens power may differ significantly from that of the observed lens. In summary, parameters defining the lens pose, i.e., (O, a), can stand in conflict with lens power parameters, which would cause ambiguities during optimization. Thus, in the following algorithm, pose estimation is separated from lens power estimation.

[00190] A challenge stems from the high dimensionality of the problem, which makes a uniform sampling of the parameter space infeasible. The algorithm is based on a hypothesis optical axis direction a' and a hypothesis lens prescription (S, Z, A). It relies on two assumptions: 1) the optical axis direction does not deviate by more than Θ degrees from the optical axis direction of the actual lens; and 2) a position O through which the optical axis passes is known. Such a position can in fact be estimated to lie near the back vertex of the lens. The description above discusses how these initial properties are determined from the image correspondences Γ.

[00191] The goal is to improve and update these hypothesised quantities iteratively, taking the observations made above into account. The proposed estimation scheme is presented in Algorithm 1, followed by a description of the individual components performed in a single iteration i.

[00192] First, in line 3 of the algorithm, a set L of M lenses is generated which all have the same optical power, corresponding to the current prescription hypothesis (S). The lenses in L differ by their front surface power S_F, which is sampled linearly between F_min and F_max. The sphere power S_B and cylinder power Z_B of the lens back surface are computed according to Equation 10. The remaining parameters, lens thickness and index of refraction, are set to d = 1 mm and n = 1.523, corresponding to crown glass (although other values may be chosen for other types of glass). The lenses in L are positioned at the current pose hypothesis, i.e., (O, a') with the optical centre L placed at O, and the cylinder axis A.

[00193] The lenses in L can be considered as initial guesses for a series of continuous nonlinear optimizations performed on the back surface parameters S_B, Z_B, and A (line 4). In order to avoid ambiguity during optimization, and for efficiency reasons, the remaining parameters, i.e., the front surface parameters, index of refraction, lens thickness, and pose, are held fixed. P_i describes the parameters of the optimized lens which corresponds to the smallest energy value, i.e.:

$$P_i = \operatorname*{argmin}_{P} f(P, \Gamma) \qquad (11)$$

See the discussion below for more details on the nonlinear optimization scheme employed.

[00194] Line 5 of the algorithm attempts to improve the optical axis direction hypothesis. A right circular cone with an apex angle of Θ degrees is constructed, such that the cone's apex and axis coincide with O and the current a', respectively. A number n_s of stratified samples are generated at the cone's aperture. For each sample, a candidate optical axis direction is generated, passing through O and the sample location. This axis is used to evaluate the energy function using the lens parameters of P_i. The new optical axis hypothesis corresponds to the sample which has the smallest energy value. This axis is assigned to P_i. Finally, in line 6, (S, Z, A) is updated from the lens power of P_i via the thick lens equation (Equation 10). P_i is then added to the list C of candidate lenses in line 7. In the present implementation of Algorithm 1, Θ = 20, n_s = 200, and N = 6.

[00195] The purpose of the loop in Algorithm 1 is to determine an accurate lens pose, lens power, and cylinder axis. Finally, in line 9, continuous optimization is performed on the candidate lenses in C. Since there are only a few candidate lenses, this final part requires comparatively little computation time in relation to the initial lens pose and lens power estimation. Nonlinear optimization is performed on the lens surface parameters S_F, S_B, Z, A, the index of refraction n, and the lens thickness d, allowing the lens to move along the (fixed) optical axis. The optimized lens P_min corresponding to the smallest energy value is the output of the algorithm. The final pose and prescription are derived from that lens.

[00196] The present embodiment performs the nonlinear optimization using the gradient-free simplex algorithm of [Nelder and Mead 1965], which applies geometric transformations to a k-simplex constructed on the parameter vector P. The termination criterion is based on the size of the simplex, namely the root mean square of the lengths of the vectors from the simplex center to the corner points. Note that variations of the gradient descent algorithm (e.g., nonlinear conjugate gradient) did not result in the desired convergence behaviour, which seems to be due to the discrete nature of the problem: the energy landscape may contain local bumps in which standard gradient descent based schemes can get stuck.
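A minimal sketch of this refinement using an off-the-shelf Nelder-Mead implementation is given below. Note that SciPy's termination criteria (xatol/fatol) differ from the RMS simplex-size criterion described above, and the energy callable stands in for f(P, Γ); both are assumptions of this sketch rather than the exact implementation.

```python
import numpy as np
from scipy.optimize import minimize

def refine_lens(energy, P0):
    """Gradient-free refinement of a candidate lens parameter vector P0.

    'energy' maps a parameter vector P to the scalar f(P, correspondences);
    Nelder-Mead is used because gradient-based schemes were found to stall
    on the bumpy, partly discrete energy landscape.
    """
    result = minimize(energy, np.asarray(P0, dtype=float),
                      method="Nelder-Mead",
                      options={"xatol": 1e-6, "fatol": 1e-9, "maxiter": 5000})
    return result.x, result.fun
```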

[00197] During the optimization process, three degenerate cases may occur: 1) the torus is identical to a sphere, i.e., S_B ≠ 0 and Z = 0; 2) the torus is identical to a plane, i.e., S_B = 0 and Z = 0; and 3) the torus is identical to a cylinder, i.e., S_B = 0 and Z ≠ 0. In these cases, the algebraic equation of the back surface (normally represented by a torus) is replaced with the algebraic equation of a (1) sphere, (2) plane, or (3) cylinder, respectively. The algebraic equation for the front surface is chosen accordingly.
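These case distinctions amount to a simple dispatch on the back-surface parameters, sketched below with an illustrative numerical tolerance for treating a power as zero.

```python
def back_surface_type(S_B, Z, eps=1e-9):
    """Select the algebraic surface used for the lens back surface.

    eps is an illustrative tolerance for deciding that a sphere or
    cylinder power is effectively zero; it is not from the text.
    """
    if abs(Z) < eps:
        return "plane" if abs(S_B) < eps else "sphere"
    return "cylinder" if abs(S_B) < eps else "torus"
```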

Discussion

[00198] Note that the nonlinear nature of the present approach conveniently permits it to be extended to other lens surface types (e.g., prism lenses, toroidal lenses, aspherical lenses, etc.) by extending the parameter vector P. In comparison to the work of [Ben-Ezra and Nayar 2003], whose method is also based on a parametric model, the formulation above does not make the simplifying assumption that feature points are at infinite distances. In addition, the present formulation does not need to know the index of refraction of the lens beforehand. These attributes are important for the applications discussed below, which are based on estimating lens properties where the lens indices are unknown and where the lens is positioned close to a reference object.

Ray Tracing Estimation

[00199] Figure 9(a) shows an overview of a virtual scene with virtual lenses 24 positioned in front of a background 3. Figure 9(b) shows the corresponding real scene including lenses 24 and background 3. Figure 9(c) shows a view of a portion of Figure 9(a) where the background 3 visible through one of the lenses 24 is enlarged due to the refractive power of the lens. Figure 9(d) shows a view of a portion of Figure 9(b) where the background 3 visible through one of the lenses 24 is reduced due to the refractive power of the actual lens 24.

[00200] As shown in Figure 9(e), the real and virtual images 9(c) and 9(d) are superimposed and the features (in this case, the grid) in each background are aligned. The lens model parameters are then refined (using ray-tracing, as explained above) until rays passing through the modeled virtual lens match those observed in the real images.

Results

[00201] The Applicants evaluated the methods proposed above following a three-tiered approach:

1. Synthetic data, which provides ground truth pose and lens properties (front and back surface power, thickness, and index of refraction) from which the prescription can be computed.

2. Real data captured on an optical bench, yielding a pose close to the actual optical axis.

3. Real data on the actual proposed capture setup shown in Figure 2A, for which there is no ground truth pose available.

[00202] Prescriptions for actual lenses were measured using an automatic Huvitz lens meter, which measures lens power in increments of 0.01 diopters. In all three scenarios, the tests were performed with lenses in the range of -8 to +8 diopters.

Prescription Estimation Results on Spherical Lenses

[00203] Synthetic data was generated by constructing a virtual scene that mimics the real capture scenario illustrated in Figure 10. Thus, a lens 140 was placed with a slight tilt (5 degrees) about 5cm in front of a marker plane 142. A camera 144 was positioned about 20cm in front of the lens 140 and was rotated in a slightly perturbed circular motion around the measurement axis.

[00204] A total of 26 real lenses were measured; most were spherical, though various aspherical lenses were also included. Table 2 gives an overview of three types of errors obtained for both the real and synthetic data: the error in pose (optical axis angular error), the error in optical axis distance to ground truth (GT) (i.e. the error in the distance from the true position of the lens optical axis obtained from the setup presented in Figure 10), and the error in prescription (which, in this case, corresponds to the optical power of the lens). For each error, the standard deviation is also calculated and provided in Table 2. Errors were measured based on existing ground truth data. In almost all cases, the lens model refinement improved the initial estimate.

[00205] From the data obtained, the present embodiments consistently estimated the prescription with an error of < 0.1 diopters; only for stronger plus lenses did the error increase when the lens model was used, mostly because plus lenses magnify the marker pattern, so fewer rays are available for the estimation. This data demonstrates that embodiments of the present invention are capable of competing with manual lens meters, whose power measurement is read in 0.25 diopter steps.

Quantitative analysis of the raytracing-based optimization

[00206] A goal of the algorithm described above is to estimate the power of an ophthalmic lens using a smartphone and a marker plane. In the following, a study is described to demonstrate that the algorithm is able to reliably estimate the power of ophthalmic lenses as they occur in practice. As described above, the study is divided into an experiment on synthetic data and one on real data.

[00207] In relation to the synthetic data experiment, the Applicants created a set of lenses based on a lens specification issued by Carl Zeiss. For a given prescription, the specification suggests a specific front surface power, in order to minimize aberrations for the user caused by the final lens. A total of 221 lenses were generated, where the sphere prescription (SPH) ranges from +8D to -8D and the cylinder prescription (CYL) ranges from 0D to +6D, covering the majority of prescriptions. These ranges are regularly sampled using an increment of 1D for the SPH range and 0.5D for the CYL range. Note that the set contains lenses which are spherical only, i.e., whose cylinder power is 0, as well as lenses which are cylindrical only, i.e., whose sphere power is 0. Indices of refraction lie in the range [1.5, 1.6], and lens thicknesses lie in the range [0.001, 0.002] (in meters). These two quantities are determined randomly for each lens. In order to virtually reproduce the capture setup, lenses are placed about 8cm in front of a virtual marker plane. The cylinder axis of each lens is also chosen randomly. Furthermore, lenses are tilted such that the angle between the optical axis and the background normal is 20 degrees. For each lens, a dataset was created where 32 cameras were randomly positioned about 75cm in front of the marker plane. For each image, 16 correspondences were chosen at random, as suggested by one of the experiments described above. The algorithm was then executed on all datasets. For this experiment, the initial optical axis direction of the lens was set to (0, 0, 1), which is also the normal of the background marker plane. The initial optical axis origin was determined by analysing lens flow images, which can be generated for each camera view. Note that on real data this is not feasible, as the generation of flow images requires the lens geometry, which is unknown in the real case. Table 4 presents the error statistics for the synthetic dataset, where the error is calculated using Equation 9.

error type   optical axis [deg]   lens origin [mm]   1st meridian [D]   2nd meridian [D]   cylinder axis [deg]
mean         0.4768               0.07               0.0352             0.039              0.4951
median       0.7301               0.02               0.0099             0.0125             0.0859
stdev        1.3318               0.28               0.1348             0.0835             4.0229

Table 4: Result summary of the raytracing-based optimization on 221 synthetic datasets.
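The prescription grid described in paragraph [00207] can be reproduced with a few lines. The dictionary keys and the random-number seed below are illustrative, but the sampling ranges and increments come directly from the text and yield exactly 221 lenses.

```python
import numpy as np

rng = np.random.default_rng(42)   # illustrative seed
lenses = []
for sph in np.arange(-8.0, 8.0 + 1e-9, 1.0):       # SPH: -8D .. +8D in 1D steps
    for cyl in np.arange(0.0, 6.0 + 1e-9, 0.5):    # CYL:  0D .. +6D in 0.5D steps
        lenses.append({
            "sph_d": float(sph),
            "cyl_d": float(cyl),
            "axis_deg": float(rng.uniform(0.0, 180.0)),       # random cylinder axis
            "n": float(rng.uniform(1.5, 1.6)),                # index of refraction
            "thickness_m": float(rng.uniform(0.001, 0.002)),  # lens thickness
            "tilt_deg": 20.0,   # tilt between optical axis and background normal
        })
print(len(lenses))  # 17 SPH values x 13 CYL values = 221
```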

[00208] In relation to the real data experiment, the Applicants captured datasets from real lenses using the set-up described above and shown in Figure 10. In this experiment, the lens was manually aligned with the optical rail axis with the help of a laser, in order to measure the ground truth optical axis direction and optical axis origin. Prescriptions for these lenses were measured using an automatic Huvitz lensmeter, which deviates from the true lens power by ±0.01 diopters. For this experiment, the initial optical axis direction was set to (0, 0, 1), and the initial optical axis origin was set to the optical axis origin from the manual lens alignment. Table 5 presents the corresponding error statistics for the real dataset.

Table 5: Result summary of the raytracing-based optimization on 39 real datasets.

[00209] In the present experiment, cylinder axis statistics are computed only for those lenses which have a cylinder power greater than 0.1D. For lenses below this threshold, the cylinder power does not have a significant effect. Overall, it can be concluded that, statistically, the achieved accuracy meets any practical need. It is important to point out that the set of real lenses also contains aspherical and atoroidal lenses, i.e., lenses whose surfaces are spherical or toroidal, respectively, only close to the center of the lens. Note that aspherical and atoroidal lenses cannot be represented with the lens model described above. However, the proposed algorithm is general enough to accurately estimate the power and pose of these lenses as well.

Data scalability

[00210] The amount of data, i.e., the number of images per dataset and the number of correspondences per image, directly influences both the computation time and the quality of the estimation. The amount of data necessary to obtain reasonable estimation results in a reasonable amount of computation time was investigated. The study was conducted based on 16 lenses (8 spherical and 8 toroidal), whose lens powers lie in the range encountered in practice. A sequence of synthetic datasets was created by varying the number of images and the number of correspondences per image. For each image, cameras were randomly positioned about 25cm in front of the lens and correspondences were chosen at random. For each dataset, the algorithm was executed on the 16 lenses and the mean error and standard deviation were recorded. The error is defined as the difference between the estimated meridian and the ground truth meridian. Note that toroidal lenses have two meridians, M_0 and M_1, as shown in Figure 4(a). The resulting error, depending on the number of correspondences and the number of images, is presented in Table 6 for both toroidal and spherical lenses. For this experiment, the initial optical axis direction of the lens was set to (0, 0, 1), which is also the normal of the background marker plane. The initial optical axis origin was determined by analysing lens flow images, which can be generated for each camera view.

[Table 6 comprises two sub-tables reporting the mean error and standard deviation (in diopters) of the estimated meridians: one for the toroidal lenses (with one value pair per cell, one per meridian) and one for the spherical lenses. Rows correspond to the number of correspondences per image (8, 16, 32, 64, 128) and columns to the number of images used (up to 32); the individual numerical values are not reliably legible in the source.]

Table 6: Mean error and standard deviation (in parentheses) for each data point, depending on the number of correspondences and the number of images used.

[00211] The tolerances for eyeglasses lenses vary from country to country. The chosen tolerances are generally determined by the differences in power a patient can perceive. For instance, current ANSI standards approve tolerances of ±0.13D for spherical lens powers below ±6.5D. For larger powers, larger tolerances are accepted. The same holds for cylinder axis tolerances, which vary between ±2 and ±14 degrees, depending on the prescribed cylinder power. More information can be obtained from [Mohan and Sharma 2012]. Given this information, the combination of 32 images with 16 correspondences per image yields estimates in Table 6 which satisfy the accepted tolerances.
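A tolerance check along these lines can be written as a one-liner. The ±0.13D bound below ±6.5D is taken from the text; the bound applied above ±6.5D here is an illustrative placeholder, since the text only states that larger tolerances apply.

```python
def within_sphere_tolerance(true_power_d, estimated_power_d,
                            base_tol_d=0.13, high_power_tol_d=0.26):
    """ANSI-style sphere power check: +/-0.13D below +/-6.5D (from the
    text); high_power_tol_d is an illustrative placeholder bound."""
    tol = base_tol_d if abs(true_power_d) <= 6.5 else high_power_tol_d
    return abs(estimated_power_d - true_power_d) <= tol

print(within_sphere_tolerance(-2.0, -2.08))  # True: within +/-0.13D
```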

Comparison

[00212] The nonlinear nature of the proposed approach allows it to be conveniently extended to other lens surface types (e.g., aspherical lenses, atoroidal lenses, etc.) by adjusting P with the required parameters. In comparison to the work of [Ben-Ezra and Nayar 2003], whose method is also based on a parametric model, the formulation above does not make the simplifying assumption that the index of refraction is known. Furthermore, the present formulation does not assume that feature points are at an infinite distance.

[00213] A comparison between the present method and the method in [Ben-Ezra and Nayar 2003] has been performed based on synthetic data using six bi-spherical lenses with lens powers of ±1, ±2, and ±4 diopters. Both methods were executed for five different distances (5cm, 10cm, 1m, 10m, and 100m) between a reference object and the respective lens. Figures 11(a) and 11(b) show the comparison data based on the bi-spherical -2 and +2 lenses, respectively. The shown error is the absolute difference between the estimated lens power and the ground truth lens power. Although not shown, the plots for the remaining lenses show an almost identical error behavior. The method in [Ben-Ezra and Nayar 2003] can estimate lens power when the distance between the lens and the reference object is large, with an average error of ≈1 diopter for distances > 10m. However, when the lens lies closer to the reference object, the error produced by [Ben-Ezra and Nayar 2003] increases significantly (average error ≈10 diopters for distances < 10cm). In contrast, the accuracy of the present method is maintained throughout, and for all of the distances studied, the error produced using the present method was significantly lower than that of [Ben-Ezra and Nayar 2003]. Furthermore, at a distance of around 10cm (which corresponds to that likely to be used in many applications of the present invention), the average error across all six lenses is only 0.05 diopters.

Conclusion

[00214] Some embodiments of the present invention provide a method for measuring the optical power and physical lens properties of prescription eyeglasses using a very light-weight setup based on a marker pattern (background) and a camera (e.g. a smartphone). The method allows, for the first time, anybody to measure their prescription at home. Of course, some embodiments of the invention may be realised using a machine or system set-up; such set-ups need not require the user to move the eyeglasses or camera when obtaining the input images. A particular application of the present techniques allows a user to simulate their appearance realistically in virtual try-on applications, as will be described in more detail below.

Additional embodiments for lens measurement

[00215] The embodiment illustrated in Figure 2 shows how a user moves a camera or eyeglasses, in freeform fashion, in order to acquire multiple images (with varying viewpoints) on which the prescription estimation pipeline described above is executed. A related embodiment uses multiple cameras (e.g., an array 40 of camera modules), as shown in Figure 12, to capture multiple images simultaneously from several viewpoints. The strength of this setup is that the eyeglasses (or lens) 42 do not have to be fixed into the device or moved during a period long enough to capture multiple images. Instead, the eyeglasses (or lens) 42 can be held by the operator while they are located between the camera array 40 and the background (i.e. marker board 44), and all the necessary images can be captured within a short time span. This embodiment would be useful for an optician who needs to rapidly capture a good set of images. This embodiment may require synchronisation of the cameras within the array. Of course, in other embodiments the eyeglasses (or lens) 42 may be mounted in the system shown.

[00216] In another embodiment, a turntable (or moving arm) may be employed and a camera may be fixed such that it points towards the turntable, which exhibits a marker pattern or background. The user places the eyeglasses onto the turntable such that they are approximately facing the camera. The turntable may be configured to complete one loop during which several images are captured and those images are used to estimate the eyeglasses lens characteristics as described above. The loop need not be continuous or complete. For example, the turntable may stop at a few discrete locations at which one image is taken, and may not cover an entire 360 degree rotation.

[00217] In a further embodiment the eyeglasses may be held in a fixed position and the camera may be placed on the turntable or moving arm.

[00218] In a yet further embodiment the camera may be aligned with the lens optical axis. In this case, the user may acquire several initial images, which are used to estimate the optical axis of the lens as explained above. The camera may then be placed (either automatically with some motors, or manually with the assistance of a human operator) along the optical axis. Once the camera center position coincides with the optical axis of the lens, at least one additional image is captured in order to estimate the eyeglass lens characteristics, as above.

[00219] In another embodiment, a magnification-based approach may be used. In this case, a camera with a fixed position may take two pictures: 1) a picture of a fixed textured pattern or background; and 2) a picture where an eyeglasses lens is placed between the fixed camera and the fixed textured pattern. Note that the textured pattern could be the user's face or body. In this case, the prescription of the lens may be determined based on the amount of magnification (i.e., the apparent change in size behind the lens). Distances could be estimated based on markers, based on automatically extracted pixel correspondences, or by using a depth sensor.
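One paraxial way to realise this is sketched below under strong simplifying assumptions: a thin lens, on-axis viewing, and a known camera-lens distance D and lens-pattern distance s. The relation m = 1/(1 - P·s) for the lateral magnification follows from the thin-lens equation; the size ratio the camera observes additionally accounts for the virtual image position, and the function names are illustrative.

```python
from scipy.optimize import brentq

def apparent_magnification(P, s, D):
    """Paraxial thin-lens prediction of the size ratio observed by a camera
    at distance D (m) from the lens, for a pattern at distance s (m) behind
    the lens; P is the lens power in diopters (assumes P < 1/s)."""
    m = 1.0 / (1.0 - P * s)        # lateral magnification of the image
    v = s / (P * s - 1.0)          # image position (negative => virtual image)
    return m * (D + s) / (D - v)   # ratio of angles subtended: image vs. pattern

def power_from_magnification(M_obs, s, D, P_lo=-10.0, P_hi=10.0):
    """Numerically invert the relation above for the lens power."""
    return brentq(lambda P: apparent_magnification(P, s, D) - M_obs, P_lo, P_hi)

# Example: pattern 5 cm behind the lens, camera 25 cm in front of it,
# and an observed magnification of 1.2 -> a plus lens of about 4 diopters.
print(power_from_magnification(1.2, 0.05, 0.25))  # ~4.0
```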

[00220] In further embodiments of the invention, the background may be provided on an electronic display such as a tablet, smartphone or computer. This is advantageous in that the image provided for the background can be chosen (or modified) to improve results. For example, marker size on a patterned background could be varied (i.e. enlarged or reduced) to provide more or better correspondences.

On-face measurement of eyeglasses lenses

[00221] In an extension of the general methods described above, embodiments of the invention address the problem of estimating an eyeglasses prescription and lens characteristics while the eyeglasses are being worn by a user. This is an adaptation of the above video-based estimation method, in order to perform the measurement of the eyeglasses directly on the user's face. It is advantageous that the method can be performed while the eyeglasses remain on the user's face and that the user is not required to give away the eyeglasses in order for the measurements to be made. This is in contrast to traditional methods based on lensmeters, which typically require the user to hand over his/her eyeglasses to an optician who inserts them into a dedicated measuring device.

[00222] As described above, the lens estimation method requires identifiable correspondences in order to measure the deviation induced by the corrective lenses. In order to determine such correspondences, the system captures two videos of the user: one with him/her wearing eyeglasses, and one without. In each video, the user should keep his/her face static while the camera is moving. Pairs of (static) images from the two videos are selected: for each image from the video with eyeglasses, the best matching image from the video without eyeglasses is determined based on a given error metric. For each image pair, feature points (e.g. small wrinkles, skin marks, or parts of eyelashes, instead of the coded markers used above) near the eye region of the image with eyeglasses are determined and mapped to their corresponding points in the image without eyeglasses, as described above. The distance between the corresponding points in the two videos is then determined as a measure of the deviation (i.e. refraction) introduced by the corrective lens of the eyeglasses.

[00223] Figure 13 shows an embodiment of the on-face eyeglasses lens measurement method. As above, the setup comprises three components: an image capture device 111 (typically a handheld smartphone or tablet with a camera), the eyeglasses 112 being measured, and the user 113 wearing the eyeglasses (i.e. in this case, the user's face constitutes the background above). Ideally, the user will be located at a distance from the capture device that allows the user's head to occupy substantially the entire vertical field of view of the capture device.
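The frame-pairing step in paragraph [00222] leaves the error metric open; the sketch below uses mean squared pixel difference as an illustrative stand-in. In practice the comparison would be restricted to rigid face regions away from the lenses.

```python
import numpy as np

def best_matching_frame(frame_with_glasses, frames_without):
    """Return the index of the without-eyeglasses frame that best matches
    a given with-eyeglasses frame under a simple error metric (mean squared
    pixel difference; an illustrative choice, not the one from the text)."""
    target = frame_with_glasses.astype(np.float64)
    errors = [np.mean((target - f.astype(np.float64)) ** 2)
              for f in frames_without]
    return int(np.argmin(errors))
```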

[00224] Embodiments may further comprise using a lens model to estimate the lens characteristics as described above.

[00225] If features extracted in the eye region are insufficient to reliably estimate the parameters of the lens in the model, additional cues may be incorporated, e.g. by turning on a lamp mounted on the camera and observing its reflections on the eyeglasses lenses.

Virtual try-on of eyeglasses

[00226] Further embodiments of the invention address the problem of plausibly inserting eyeglasses on a user's face in a still image or video, for virtual try-on applications. While several systems and methods have been recently developed for virtual try-on of eyeglasses, most of them ignore refraction effects due to the corrective lenses; therefore, the user appears to wear lenses with no optical correction. In contrast, once the user's prescription and lens characteristics are known, it is possible to account for the refraction effects which increase or decrease the apparent size of the wearer's eyes. Accordingly, the virtual images generated by the present virtual try-on method more faithfully simulate the appearance of a user wearing prescription eyeglasses.

[00227] One embodiment comprises a method for virtually inserting eyeglasses on a user's face in a video and simulating the refraction effects caused by the corrective lenses. The method takes as input: a) a textured 3D model of the face of the user, who is physically not wearing eyeglasses; such a 3D model can either be captured from multi-view images, or tracked and dynamically reconstructed in real-time by systems such as Faceshift [Faceshift 2015; Weise et al. 2011]; b) a textured 3D model of the desired eyeglasses frames, which are appropriately scaled relative to the size of the user's face; c) the position and orientation where the eyeglasses should be inserted, relative to the face of the user; these can be estimated based on facial feature points, or manually adjusted; d) a virtual representation of the surrounding lighting, which is either captured from photographs or defined manually.

[00228] Given the inputs above, a virtual scene can be created using known techniques to place the eyeglasses on the user's face and illuminate them with the specified surrounding lighting. An image of such a scene can be synthesised using traditional rendering techniques, such as ray-tracing.

[00229] In the present embodiments, the virtual try-on system further adapts the virtual scene to include refractive effects and provide a more realistic view of the wearer when wearing prescription eyeglasses. This is achieved by using a lens model such as that described above to modify the trajectory of rays passing through the corrective lenses during image rendering. Notably, embodiments of this aspect of the invention may be configured to produce a still image or video of a user wearing virtual prescription eyeglasses. Furthermore, embodiments of the invention allow the lens characteristics of the model to be modified and the modified lenses to be used to create another virtual image of the user wearing the virtual eyeglasses, so as to visualize how such a modification would affect their appearance.

[00230] Eyeglasses frames and prescription lenses make up a significant portion of the products sold at optical retail stores. A decision to buy new eyeglasses frames is largely based on how these frames look when the customers try them on. However, this traditional process of trying on and picking new eyeglasses frames has two well-known shortcomings: 1) in order to try on various eyeglasses frames, a customer has to take off his/her current eyeglasses; a customer with imperfect visual acuity will then have difficulties evaluating the new eyeglasses frames in the store, unless he/she wears contact lenses; 2) the customer purchases eyeglasses frames based on how he/she looks when he/she tries them on in the store. However, eyeglasses frames showcased at optical stores are equipped with lenses that have zero corrective power and thus exhibit no refraction. Once corrective lenses are mounted into the eyeglasses frame according to a personal prescription, the appearance of the customer wearing the glasses can be different due to the effects of refraction. Trying on eyeglasses with no optical power can therefore be misleading, and customers may experience disappointment and frustration as a result.

[00231] Embodiments of the invention overcome these two intrinsic try-on issues. The system first measures (or takes as input) the eyeglasses prescription of the patient, then allows the patient to virtually try on a variety of corrective eyeglasses, which the patient can visualize while wearing his/her own eyeglasses.

[00232] Embodiments of the invention may comprise a display and an RGBD video camera, which acts as a "virtual mirror" and lets the customer browse through eyeglasses frame collections from an optician's shop. The system may virtually remove the eyeglasses that the customer is physically wearing (as will be described in more detail below), and will virtually insert the desired eyeglasses frame and simulate the refraction effects due to the corrective lenses according to the patient's prescription. The virtual image will be displayed to the customer in real-time. This system can also benefit online optical retail stores, as they could let customers measure their existing prescription using the embodiments described above and then virtually try on eyewear that accurately takes into account the refractive power of the lenses required by the prescription, from the comfort of their home.

[00233] Eyeglasses lenses are often more expensive than eyeglasses frames. The price of prescription lenses varies widely, depending on optical power, index of refraction, manufacturer, design, and coatings. However, it is often difficult for an optician to convince customers to purchase corrective lenses with higher quality or different characteristics (e.g. thinner lenses, higher index of refraction), as they are more expensive and their benefits cannot traditionally be demonstrated before the purchase. Embodiments of the present invention will allow customers to virtually try different lenses and compare the resulting appearance, which would demonstrate the cosmetic benefits that upscale lenses offer.

Virtual try-on user study

[00234] The Applicants performed a user study to assess the perceived realism of virtual try-on videos generated using the above-described embodiment. To generate the stimuli, videos were captured of five actors from different ethnicities and one mannequin. In each capture session, the actor was asked to try on the prescription eyeglasses provided, with sphere powers ranging from -1 to -7.5 diopters. These were recorded as reference videos. A second video was then recorded of the actor without eyeglasses, and this was used as input to the virtual try-on method. Four variations of virtual try-on results were generated using the present approach, with/without reflections and with/without refraction, using lens model parameters corresponding to the prescription lenses in the reference video. A live virtual try-on session was also recorded for an online eyeglasses store with similar eyeglasses frames. The five stimuli videos were cropped to less than 3 seconds for each actor.

[00235] In each trial of the study, a subject was first shown the reference video of an actor wearing real eyeglasses. The five stimuli corresponding to this actor were then shown simultaneously, playing in a loop; the subject was asked to rank the videos according to how they looked, from "most real" to "least real", by dragging them over the screen into ranking bins. Each trial corresponding to one actor was repeated twice and all trials were ordered randomly; two training trials were added at the beginning of the session but were not used in the analysis. In total, each participant completed 14 trials.

[00236] Twenty individuals (12 male, 8 female) participated in the study, with ages ranging from 23 to 49 (average 28). Nine participants reported being familiar with computer graphics, and 17 of them wore eyeglasses. The average length of the study (including a break) was 16 minutes.

[00237] The overall opinion favoured video sequences which exhibit refraction and reflections, which increased the perceived realism in the virtual try-on results. Specifically:

• 65.42% of the votes favoured the videos with refraction compared to the videos with no refraction, for sequences with reflections; a paired t-test confirmed a statistically significant difference (t = 7.09, p < 0.005); a similar analysis can be made when reflections are disabled (68.33% of the votes, t = 8.62, p < 0.005);

• reflections were deemed significant in increasing perceived realism of the generated try-on sequences; videos with reflections were consistently favoured over videos without reflections, both on the videos with and without refractions (87.08% of the votes, t = 17.0937, p < 0.005; identical preference with and without refractions).

• Every one of the results was systematically favoured (> 98% of the votes) compared to those of the online virtual try-on; this does not come as a surprise, since all of the renderings account for the surrounding lighting and exhibit convincing shadows on the face.

[00238] Further, comments provided by some participants indicate that once they understood what the differences (i.e. refraction and reflection) between stimuli were, they were able to rank them quickly and consistently.

Virtual manipulation of eyeglasses worn in videos

[00239] Yet further embodiments of the invention address the problem of virtually modifying or removing prescription eyeglasses from an image or video. The system takes as input an image or video of a person wearing eyeglasses, and produces an output image/video where the person appears to wear eyeglasses with different lens characteristics (i.e. less refracting), or no eyeglasses at all. This is advantageous because eyeglasses can significantly alter a person's appearance. In particular, refraction due to corrective lenses can distort facial features and make the eyes appear significantly smaller or larger than they are in reality.

[00240] Embodiments of the invention enable the processing of images/videos to virtually modify the appearance of a user who is physically wearing eyeglasses. The embodiments relate to three different methods: 1) a method to virtually remove the lenses from the eyeglasses frames; 2) a method to virtually modify the characteristics of the lenses; and 3) a method to virtually remove the eyeglasses frames. All three methods are virtual, i.e. they modify images or a video stream.

[00241] All three methods take as input an image or video of the user who is physically wearing eyeglasses. The image/video data is used to track the face of the user in real-time. More specifically, for each video frame, a 3D face template is aligned with the user's face and deformed according to the user's facial expression and facial features, allowing the tracking of points on the user's face across video frames. This may be done using existing technology such as Faceshift [Faceshift 2015; Weise et al. 2011] to perform the face tracking. Each of the three methods is described in more detail below.

Removal of corrective eyeglasses lenses

[00242] In one embodiment, there is provided a method to simulate the appearance of a person wearing eyeglasses, where the corrective lenses are virtually removed or replaced by plano lenses with zero optical power. In a nutshell, this method alters an input image or video by moving pixels in each image to compensate for the bending of light introduced by the refractive lenses.

[00243] This method may be achieved either through a warping-based approach alone, or by using the 3D computerised model of the eyeglasses lenses described above and estimated using one of the presented embodiments, in combination with a warping-based approach: geometrical rays (i.e. incident rays) originating from the camera location may be traced so that each of them passes through the center of each pixel in the lens area of the image/video. However, these rays are deviated by the lens, yielding refracted rays that are computed according to the estimated 3D model of the lens. The intersections of each incident ray and each refracted ray with the 3D face template are computed for the current frame. Given this discrete information, a continuous vector field can be defined on the lens region through a warping-based approach, where the vector at a given pixel is the directional difference between the intersection point computed from the refracted ray and the intersection point computed from the incident ray. This vector field measures the amount of deviation induced by the lens at a given lens pixel. The vector field may then be used as a mapping to modify the colors of the pixels in the lens region to compensate for the lens refraction.
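The per-pixel deviation computation can be sketched as follows. The callables 'pixel_ray', 'refract_through_lens', and 'intersect_face' are hypothetical stand-ins for the camera model, the estimated 3D lens model, and the tracked 3D face template; pixels are assumed to be hashable (row, column) tuples.

```python
import numpy as np

def lens_compensation_field(lens_pixels, camera_origin,
                            pixel_ray, refract_through_lens, intersect_face):
    """Sparse deviation vectors over the lens region, in the spirit of
    paragraph [00243]; densification by interpolation/warping and the
    subsequent pixel-color remapping are omitted from this sketch."""
    field = {}
    for px in lens_pixels:
        incident = pixel_ray(px)                      # unit ray through the pixel center
        origin_r, dir_r = refract_through_lens(camera_origin, incident)
        p_refracted = intersect_face(origin_r, dir_r) # hit point of the refracted ray
        p_incident = intersect_face(camera_origin, incident)
        field[px] = np.asarray(p_refracted) - np.asarray(p_incident)
    return field
```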

Removal of eyeglasses frame

[00244] It may also be desirable to completely remove eyeglasses frames from an image or video so it appears that the user is not wearing any glasses at all. In this case, the image capture process described above where the user is recorded with and without eyeglasses may not only be used to estimate the eyeglasses lens characteristics, but also to locate eyeglasses frames and acquire texture and geometry of the regions potentially covered by the glasses frames in a video stream at runtime. This information, combined with known in-painting methods, can allow for removal of the eyeglasses frames from an image/video. In an optional, additional process, a relighting method may be used to remove shading caused by the eyeglasses on the face (e.g. shadows cast by the eyeglasses frames).

[00245] Embodiments of this aspect of the invention may be implemented using the system illustrated in Figure 14, comprising: an image capture device 31 which can capture color and depth videos (i.e. an RGBD sensor, for Red, Green, Blue and Depth) and transmit them to a computer for processing; the user 32, who is physically wearing eyeglasses and faces the image capture device 31 at a distance which depends on the model of the RGBD sensor (e.g., 25-45cm with an Intel RealSense sensor, and 45-80cm for a PrimeSense Carmine 1.09 sensor); and a computer 33 used to process the video and display the manipulated virtual video on a display or transmit it to a remote user in a videoconference session. As shown in Figure 14, the eyeglasses that the user is physically wearing have been removed from the video image displayed on the screen in accordance with the above-described embodiments. More specifically, the eyeglasses lenses have been removed by re-tracing the facial feature points behind the lenses to the positions they would occupy if the refraction of the lenses were eliminated, and the eyeglasses frames have also been removed, with the person's facial features behind the frames redrawn based on a reference video of the user not wearing the eyeglasses.

[00246] As explained previously, prescription eyeglasses can negatively affect a person's appearance. Corrective lenses distort eyes and change their size, while thicker eyeglasses frames can cast unpleasing shadows on the wearer's face in many lighting conditions. Prescription contact lenses are often favored over prescription eyeglasses for cosmetic reasons. However, not everyone can tolerate them, and thus eyeglasses are still widespread. Embodiments of the present invention allow a user to wear his/her comfortable eyeglasses, for example, during a videoconferencing call, while remote conversation partners can be presented with a virtual real-time video stream giving the impression that he/she does not wear any eyeglasses. Since this aspect of the invention does not depend on dedicated hardware and can be implemented using widely available sensors/cameras and computers, it has the potential to be integrated into widespread and popular video conferencing products and systems.

Modification of the lens characteristics

[00247] In accordance with a further embodiment of the invention, there is provided a method to simulate the appearance of a person wearing glasses, where the characteristics of this person's lenses or the eyeglasses prescription are virtually modified (e.g. to simulate the user's appearance with thinner lenses or a different prescription). In summary, the method may alter an input image/video by moving pixels in each image to simulate the bending of light introduced by refractive lenses with the desired characteristics.

[00248] A 3D computerised model of the eyeglasses lenses, such as that described above, may be used to generate a modified computerised lens model which has the desired lens characteristics while retaining the same 3D orientation and 3D location as the modeled lens. Incident rays originating at the camera location are traced as above, and refracted rays are computed according to both the estimated and the modified lens models. The intersections of the refracted rays with the 3D face template are computed for each frame. Given this information, a vector field can be defined on the lens region, where the vector at a given pixel is the directional difference between the intersection point computed from the ray refracted by the estimated lens and the intersection point computed from the ray refracted by the desired lens. This vector field measures the amount of lens deviation at a given lens pixel. As above for lens removal, this vector field can be used to modify the colors of the pixels in the lens region, which yields the impression that the user wears eyeglasses equipped with modified lenses that have the desired characteristics. Embodiments of this aspect of the invention may therefore be implemented to visualize the effect of varying the lens prescription or physical characteristics on the user's appearance.

[00249] The technology described in the present application aims to estimate the eyeglasses prescription and lens characteristics of a pair of corrective eyeglasses. Embodiments comprise estimating the parameter values of a physically-based computerised model which is used to simulate each of the two corrective lenses. The resulting model can be used in a variety of ways to add, remove or modify eyeglasses in images or videos.

Head-Mounted Displays

[00250] In a further application of the above embodiments, a user's prescription is communicated to a head-mounted display (HMD) and an optical system in the head-mounted display is automatically adjusted to account for the user's prescription, allowing the user to comfortably wear the HMD without the user's eyeglasses and have his/her vision corrected.

[00251] Thus, an aspect of the invention relates to use of an eyeglasses prescription which may be determined using any or all of the above methods and systems to compensate for vision deficiency in a head-mounted display (HMD) (e.g. virtual reality headwear). In accordance with embodiments of the invention, a vision correcting system for a head-mounted display comprises: a) a receiver for receiving an eyeglasses prescription from a user; and b) a reconfigurable optical system configured to dynamically adapt to compensate for the eyeglasses prescription such that a user will experience corrected vision whilst wearing the head-mounted display without wearing their eyeglasses.

[00252] Head-mounted displays for Virtual Reality (VR) or Augmented Reality (AR), such as the Oculus Rift or Samsung GearVR headset, are often designed for users with perfect eyesight, and offer few adjustments for users wearing prescription eyeglasses. The tight arrangement of the components, and the fact that the HMD is placed directly on the wearer's face, make it uncomfortable or even dangerous for users wearing prescription eyeglasses to use a HMD. Such a user trying a HMD without their usual eyeglasses experiences reduced comfort and a sub-par experience, as images observed through the HMD look blurred.

[00253] In order to realize the potential of immersive VR, there is a need to ensure that eyeglasses wearers can experience Virtual Reality comfortably while still seeing a clear image of the virtual world. While some headsets offer manual focus adjustment, which allows some users to wear VR headsets without their eyeglasses, they need to be manually adjusted for each user through trial and error; they may not fully correct the user's eyesight (e.g., if the left and right eyes have different prescriptions), and the adjustment needs to be re-done whenever the headset is passed between users. For example, the consumer version of the Oculus Rift allows manual adjustment of the interpupillary distance IPD (i.e. the distance between the eyes), but no adjustment of focus. The Samsung GearVR includes a focus wheel to manually adjust the position of the screen, which applies the same optical correction to both eyes, and the GO headset by Immersion VRelia enables the correction of myopia independently for each eye. However, all of these adjustments are manual and subjective.

[00254] On the contrary, embodiments of the present invention provide an end-to-end vision correcting system for a head-mounted display. In general, the approach comprises measurement of a user's eyeglasses prescription (as described above), storing of the user's eyeglasses prescription on a smartphone, transmitting the user's eyeglasses prescription to a head-mounted display and customized automatic adjustment of the head-mounted display according to the user's prescription. Thus, for each user, the head-mounted display is automatically adjusted to provide the optical correction required. Since the optical correction is provided by the head-mounted display itself, the user no longer needs to wear eyeglasses inside the head-mounted display or manually adjust the HMD using wheels and knobs. A key part of the proposed embodiment is the determination of the user's eyeglasses prescription using the light-weight approach described above, and its use to drive the automatic adjustment of the optical components in a HMD. The adjustment can be automatically re-done to meet each user's needs, enabling the HMD to be passed around whilst still providing a customized yet accurate adjustment. Notably, the users do not need to know their prescriptions in advance of using the HMD since their prescriptions can be easily measured using the methods described above just prior to using the HMD.

[00255] An overview of an embodiment of the present invention is illustrated in Figures 15(a), (b) and (c). More particularly, Figure 15(a) shows a user 150 measuring his/her eyeglasses prescription using a smartphone 152 in accordance with an embodiment of the invention as described above. In Figure 15(b), the prescription 154 is stored on the smartphone 152 in a user profile. In Figure 15(c), the smartphone 152 transmits the user's prescription 154 to a head-mounted display 156, which is then automatically adjusted according to the prescription 154, allowing the user 150 to use the HMD 156 without eyeglasses and to see the virtual world clearly.

Measurement of the user's eyeglasses prescription

[00256] As shown in Figure 15(a), the user uses the smartphone 152 camera to capture videos of himself/herself with and without his/her eyeglasses, and a processor located in the smartphone 152 or on a remote server extracts correspondences from each video and uses these in the methods described above to infer an objective estimation of the user's eyeglasses prescription 154.

[00257] As described above, the method first estimates the approximate pose and power of each of the observed lenses. These values are then further refined using a physically-based ray tracing optimization which also determines the physical lens characteristics, such as the front and back surface curvature, lens thickness, and index of refraction. In this embodiment, the focus is on single-vision ophthalmic lenses which are used to correct for myopia, hyperopia, and astigmatism, i.e., which may exhibit cylindrical correction.

[00258] Once the eyeglasses prescription 154 has been estimated, it is associated with the user's profile which is stored in a smartphone application (app) or on a cloud server, for future use by the user 150. As shown in Figure 15(c), the user's prescription 154 is transmitted from the smartphone 152 to the HMD 156 and is used to drive the automatic adjustment of the HMD 156, as will be described below. The transmission of the prescription from the smartphone 152 to the HMD 156 may be by any suitable communication means (e.g. SMS, Bluetooth, WiFi etc.).

Automatic adjustment of head-mounted display

[00259] A vision correcting system is proposed for a head-mounted display which comprises: a) a receiver for receiving an eyeglasses prescription determined using any or all of the above systems or methods; and b) a reconfigurable optical system configured to automatically adapt to compensate for the eyeglasses prescription such that a user will experience corrected vision whilst wearing the head-mounted display without wearing their eyeglasses. An aim is to automatically adjust the optical system of the HMD independently for each eye, based on the eyeglasses prescription estimated above.

[00260] A HMD typically comprises a display (e.g., a smartphone screen) and a pair of lenses, which transmit the images of the screen to the user's eyes with a wide field of view and provide a specific point of focus. Figure 16(a) illustrates, for one eye 160, a typical setup where the screen 162 is placed at the focal length of the lens 164, so that parallel light rays emerge from the lens 164 due to refraction. For an eye 160 with perfect vision, the light rays travelling towards the eye 160 come to a perfect focus on the retina 166 and the resulting image appears clear and sharp. If the eye 160 contains a refractive error, the light rays do not focus on the retina 166 and the image appears blurry. For example, as shown in Figure 16(b) for a myopic eye 160' the light rays focus in front of the retina 166' instead of directly on it.

[00261] In embodiments of the invention, the optical setup of the HMD is automatically adjusted in order to bring light rays into focus on the retina of a user's eye that includes a refractive error (e.g. myopia, hyperopia, or astigmatism), based on the eyeglasses lens properties measured in accordance with an embodiment of the invention as described above. The adjustment can be performed using any one of, or a combination of, four different methods:

1. By adjusting the optical power of the HMD lens. This may be done using focus-tunable liquid lenses having a focal length range that can be adjusted electronically. However, focus-tunable lenses commercially available today have a narrow diameter, drastically reducing the field of view and making them currently unsuitable for HMDs;

2. By translating the display screen, in order to adjust its distance to the lenses and the eyes. Some headsets such as the Samsung GearVR use this approach to manually adjust the focus of both eyes simultaneously. However, this approach is inadequate for users that have different prescriptions for the left and right eyes;

3. By inserting an additional corrective lens 168 between the HMD lens 164 and the eye 160' as illustrated in Figure 16(c), in order to replace the corrective lens from the user's prescription eyeglasses;

4. By translating each HMD lens 164 along its axis as shown in Figure 16(d), allowing independent focus adjustment for each eye 160'.

[00262] Two embodiments which describe modifications to existing HMDs, following the approaches numbered 3 and 4 and illustrated in Figures 16(c) and 16(d), are described in more detail below. In each case, physical prototypes were built by modifying a Samsung GearVR headset, keeping the original headset lens and electronics while adding a vision correcting system according to an embodiment of the present invention. However, any other HMD could be used.

[00263] Notably, a vision correcting system according to an embodiment of the present invention could be retro-fitted into an existing HMD, or a new HMD could be built to include the proposed vision correcting system.

Embodiment 1

[00264] In a first embodiment, an additional adjustable lens 168 is placed between the headset lens 164 and the eye 160' as shown in Figure 16(c). The additional lens 168 is placed approximately 14mm from the eye 160', at a position corresponding to where a prescription lens would be if the user was wearing his/her eyeglasses (i.e. 14mm corresponds to the average vertex distance between the back surface of prescription eyeglasses and the cornea of the eye). With the lens 168 placed at this position, its optical power should equal the optical power of the eyeglasses normally worn by the user, as measured above.

[00265] This embodiment is illustrated in Figures 17(a) and (b) and comprises a HMD 170 fitted with a vision correcting system 172 comprising two lenses 174 from the commercially available Adjustables range from Adlens. The optical power of the lenses 174 can be adjusted using wave-shaped sliding plates 176. Users would normally adjust the power of the lenses 174 by turning knobs 178. However, in the present embodiment, instead of requiring users to manually turn the knobs 178 to adjust the power, the knobs 178 are motorized using TowerPro SG90 Servomotors 180 through a gear mechanism 182. The servomotors 180 are controlled by an Arduino Nano 3.0 Atmel Atmega328P microcontroller board (not shown), powered via a USB-miniUSB cable. The microcontroller is wired to a HC-05 Serial Port Slave Transceiver Bluetooth Module for Arduino, which receives commands via BlueTooth. The whole assembly is attached to a jig 184, and placed in front of the HMD lenses 186.

[00266] In this embodiment, the power of each lens 174 is correlated with the angular position of the corresponding servomotor 180. In a pre-process step, a calibration table is built by measuring the optical power of each lens 174 for multiple angular positions of the corresponding servomotor 180. At runtime, in order to adjust the vision correcting system 172 for a user with a given prescription, a smartphone sends the desired prescription by Bluetooth to the Arduino microcontroller transceiver, which in turn adjusts the angular position of the servomotor 180 according to the calibration table.

[00267] The range of focal adjustment enabled by this embodiment is from -6 to +3 diopters; this enables users with different prescriptions within this range to experience Virtual Reality without wearing their eyeglasses. However, this embodiment is rather bulky and uncomfortable due to the provision of the additional adjustable lenses 174 inserted between the user's eyes and the HMD lenses 186. Misalignment between the eye, the adjustable lens 174, and the HMD lens 186 can also lead to a decrease in perceived optical quality.

Embodiment 2

[00268] In this embodiment no additional lenses are required. Instead, for each eye, the original lens from a standard HMD is translated mechanically along its optical axis, to a specific position so that users perceive the visual content displayed on the screen in focus, despite their imperfect vision. Each lens is translated using servomotors and gears, and its position is determined according to the eyeglasses prescription measured above using a smartphone-based approach.

[00269] Figure 18(a) shows an exploded view of a CAD diagram of a vision correcting system 190 according to this embodiment of the invention. In this case, each of the two original lenses 192 of a HMD is attached to a plastic lens holder 196, made of a first part 196(a) and a second part 196(b). Each lens holder 196 is attached to a rack 198. A servomotor 200 is fixed to a housing 202, and a pinion 204 moves the rack 198 in a channel 206 of the housing 202. As above, the servomotors 200 are controlled by an Arduino Nano 3.0 Atmel Atmega328P microcontroller board. The microcontroller is wired to a HC-05 Serial Port Slave Transceiver Bluetooth Module for Arduino, which receives commands via Bluetooth. The controller and Bluetooth modules are placed in a small box outside the HMD and powered via a USB-miniUSB cable. Figure 18(b) shows a photograph of the vision correcting system 190 in use with a HMD screen 208 in the form of a smartphone, while Figure 18(c) shows the vision correcting system 190 inserted into a HMD 210.

[00270] In this system, the power of each lens 192 is correlated with the angular position of the corresponding servomotor 200. Given a fixed position for the HMD screen 208 and the eye, the appropriate position of the lens 192 is determined in order to make light rays converge on the user's retina, based on the eyeglasses prescription estimated above. In a pre-process step, the geometry and index of refraction of the headset lens are measured and used in a ray tracing simulation to build a calibration table which indicates the position of the lens that yields the best focus for an eye model corresponding to different eyeglasses prescriptions.

[00271] Figures 19(a) and 19(b) illustrate the required lens position to focus the light rays corresponding to 0 diopter and -6 diopter correction, respectively, while Table 7 lists the obtained distances for prescriptions between -6D and +1D. This table is used to mechanically adjust the position of the lens given the user's eyeglasses prescription estimated previously.

[00272] At runtime, in order to adjust the headset for a user with a given prescription, a smartphone sends the desired prescription by Bluetooth to the Arduino microcontroller, which in turn adjusts the angular position of each servomotor 200 according to the calibration table. Using this approach, a commercial headset (Samsung GearVR) was modified with the ability to adjust the focus automatically based on the estimated prescription. The position of each lens in the headset was adjusted with the servomotors, which are controlled remotely to take into account the user's eyeglasses prescription. While the focus could be adjusted through electronically tunable lenses or by modifying the position of the screen, the present solution enables the independent correction of each eye and a wide field of view comparable to the original headset; it is also cost-effective, as it preserves the original headset lenses. The modified headset is fully functional and retains the original capabilities of the Samsung GearVR, including motion tracking sensors, touchpad, and compatibility with the Oculus software platform.

[00273] The range of focal adjustment enabled by this embodiment is from -6 to +1 diopters, but can be extended with different lenses or a different range of translation motion. This embodiment has two main drawbacks: 1) If the headset lens is rotationally symmetrical (such as the aspherical lenses used in most VR headsets), translating it along its axis can correct for myopia or hyperopia, but not for astigmatism; if the measured eyeglasses prescription includes astigmatism correction, it can be converted into an approximate spherical correction by averaging the optical power along the two principal meridians, i.e. a spherical equivalent of sphere plus half the cylinder (for example, a -2.00 D sphere with -1.00 D cylinder gives -2.50 D); alternatively, the lens could be tilted to introduce astigmatism correction. 2) A user with strong myopia would see a slightly smaller apparent field of view, as the headset lens is moved further away from the eye.

Correction [D]   Eye-lens distance [mm]   Lens-screen distance [mm]
+1               10.732                   36.538
 0               11.975                   35.295
-0.25            12.29                    34.98
-0.5             12.607                   34.663
-0.75            12.927                   34.343
-1               13.248                   34.022
-1.25            13.572                   33.698
-1.5             13.899                   33.371
-1.75            14.229                   33.041
-2               14.562                   32.708
-2.25            14.898                   32.372
-2.5             15.237                   32.033
-2.75            15.579                   31.691
-3               15.924                   31.346
-3.25            16.272                   31
-3.5             16.623                   30.647
-3.75            16.975                   30.295
-4               17.328                   29.942
-4.25            17.681                   29.589
-4.5             18.034                   29.236
-4.75            18.385                   28.885
-5               17.328                   29.942
-5.25            19.079                   28.191
-5.5             19.42                    27.85
-5.75            19.757                   27.513
-6               20.091                   27.178

Table 7: Position of the headset lens for different eyeglasses prescriptions (or corrections), as determined through ray tracing simulations.

Validation

[00274] It is difficult to reproduce what a human eye sees using a camera. Accordingly, the vision-correcting capabilities of the above embodiments have been validated through simulation. Images of a smartphone display, as seen through the headset lens, are generated using ray tracing and a virtual eye model which reproduces the optics of a human eye with refractive errors.

[00275] The simulation uses a standard Gullstrand schematic eye model [von Helmholtz et al. 1909] as shown in Figure 20. This is a well-known model in biomedical optics used to simulate the human eye and to better understand eye defects such as myopia, hyperopia, and astigmatism. As shown in Figure 20, the eye model comprises six refractive spherical surfaces (labelled 1) anterior cornea; 2) posterior cornea; and 3) to 6) crystalline lens surfaces). The model also includes the retina, represented as a single non-refractive spherical surface, the indices of refraction of the ocular media, and an iris. The model is initialized using the values from [Fink and Micol 2006], which correspond to an unaccommodated eye with perfect vision, and is adjusted to simulate different degrees of myopia.
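For concreteness, such a model can be represented as an ordered list of refractive surfaces. The sketch below uses textbook parameters of the unaccommodated Gullstrand exact schematic eye purely as illustrative placeholders; the embodiment itself is initialized from the values in [Fink and Micol 2006], which may differ.

```python
# Data-structure sketch of the six-surface eye model of [00275].
# ASSUMPTION: the numeric values are textbook Gullstrand exact
# schematic eye parameters, included only for illustration; the
# embodiment uses the values from [Fink and Micol 2006].
from dataclasses import dataclass

@dataclass
class RefractiveSurface:
    vertex_z: float   # axial position of the surface vertex [mm]
    radius: float     # signed radius of curvature [mm]
    n_behind: float   # refractive index of the medium behind the surface

GULLSTRAND_EYE = [
    RefractiveSurface(0.000, 7.700, 1.376),   # 1) anterior cornea
    RefractiveSurface(0.500, 6.800, 1.336),   # 2) posterior cornea (aqueous)
    RefractiveSurface(3.600, 10.000, 1.386),  # 3) anterior lens
    RefractiveSurface(4.146, 7.911, 1.406),   # 4) anterior lens core
    RefractiveSurface(6.565, -5.760, 1.386),  # 5) posterior lens core
    RefractiveSurface(7.200, -6.000, 1.336),  # 6) posterior lens (vitreous)
]
RETINA_Z = 24.0  # non-refractive surface; moved axially to simulate myopia
```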

[00276] Myopia occurs if the eyeball is too long, the curvature of the cornea is too large, or the refractive power of the eye's lens is too large. In any of these cases, light entering the cornea is focused not on the retina but in front of it, resulting in a blurred image. In the present simulation, myopia is modeled by elongating the eyeball depending on a given power. To estimate the retina position that simulates a myopic eye (e.g., with a -N diopter prescription), a virtual corrective lens of the corresponding power is placed 14 mm in front of the unaccommodated eye model with perfect vision. Then, rays parallel to the optical axis are traced through this lens and through the six refractive surfaces of the eye model. In the case of a myopic lens correction, the retina is moved to the resulting focal point where most of these rays intersect; this point is located on the optical axis, behind the retina of an ideal eye (i.e. one with perfect vision).
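The final step of this procedure, locating the axial point where the traced ray bundle is tightest, might be sketched as follows. The ray data are assumed to be available as (origin, direction) pairs in the vitreous, and the search range is illustrative.

```python
# Sketch of the retina-placement step of paragraph [00276]: after
# parallel rays have been traced through the trial lens and the six
# refractive surfaces, the retina is moved to the axial position where
# the bundle is tightest. rays is assumed to hold (origin, direction)
# pairs of 3-vectors in the vitreous; the z search range is illustrative.
import numpy as np

def focal_plane_z(rays, z_min=22.0, z_max=28.0, steps=600):
    """Return the axial position z [mm] minimizing the RMS transverse
    spread of the ray bundle, i.e. the simulated retina position."""
    best_z, best_spread = z_min, np.inf
    for z in np.linspace(z_min, z_max, steps):
        # intersect each ray with the plane at depth z
        pts = np.array([o[:2] + (z - o[2]) / d[2] * d[:2] for o, d in rays])
        spread = np.sqrt(np.mean(np.sum((pts - pts.mean(axis=0)) ** 2, axis=1)))
        if spread < best_spread:
            best_z, best_spread = z, spread
    return best_z
```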

[00277] Figure 21 illustrates a system 220 for retinal image rendering for the present simulation. The system comprises a human eye represented by the Gullstrand eye model (comprising a cornea 222, iris 224, crystalline lens 226 and retina 228), an aspherical HMD lens 230, and a smartphone display 232. Figure 21 also illustrates how rays for a given input pixel are constructed in order to determine the pixel's color. Similarly to [Wei et al. 2014], the color for a given pixel p on an image plane 234 is determined by first projecting the pixel center onto the retina 228. This projection is denoted q. Then, a number of random rays are generated, emanating from q and passing through the iris 224. Each ray is traced through the six refractive surfaces shown in Figure 20. The rays exiting the eye are then traced through the HMD lens 230 until they intersect the smartphone display 232. A texture lookup, using nearest neighbor interpolation, is performed for each intersection point on the display 232, and the average of these colors defines the final pixel color, simulating what the retina 228 would observe.
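In outline, the per-pixel loop just described might look like the sketch below, in which project_to_retina, sample_iris, trace_eye_and_lens and the display object are hypothetical stand-ins for the ray tracer and display model of the simulation.

```python
# Per-pixel retinal rendering loop of paragraph [00277]. The callables
# passed in are hypothetical stand-ins: project_to_retina maps a pixel
# to its retinal projection q, sample_iris draws a random point on the
# pupil, and trace_eye_and_lens traces a ray through the six ocular
# surfaces and the HMD lens, returning a display hit point or None.
import numpy as np

def render_pixel(p, n_samples, project_to_retina, sample_iris,
                 trace_eye_and_lens, display):
    """Average the display colors reached by random rays from the
    retinal projection of pixel p through the iris aperture."""
    q = project_to_retina(p)                  # pixel center on the retina
    colors = []
    for _ in range(n_samples):
        iris_point = sample_iris()            # random point on the pupil
        hit = trace_eye_and_lens(q, iris_point)
        if hit is not None:                   # ray reached the display
            colors.append(display.nearest_texel(hit))  # nearest-neighbor lookup
    return np.mean(colors, axis=0) if colors else np.zeros(3)
```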

[00278] In order to demonstrate that the modified HMD described in Embodiment 2 above can correct for myopia/hyperopia and produce images that are perceived as sharp by a user with a given prescription, two renderings for myopic eyes are made. The first has the HMD lens at its default position, so that all rays emerge parallel and remain uncorrected for the user's eyes. The second has the HMD lens moved along its axis to the appropriate position, determined by Table 7, in order to compensate for the refractive error of the eye and provide a corrected image.

[00279] Figures 23(a) and (b), 24(a) and (b), and 25(a) and (b) show screenshots of a simulated HMD session for a user with -2 D, -4 D and -6 D myopia, respectively, in both eyes. In this simulation, the left eye is uncorrected, and the resulting uncorrected images are simulated in Figures 23(a), 24(a) and 25(a). Accordingly, for the left eye, the HMD lens remains in its default position, which creates a virtual image of the screen at infinity for users with perfect vision, and the myopic user sees a blurred image. In contrast, Figures 23(b), 24(b) and 25(b) show simulated corrected images for the right eye, whereby the position of the HMD lens is adjusted according to the user's eyeglasses prescription, such that the user is able to see virtual content sharply in focus.

[00280] In addition to the embodiments presented for a HMD in the form of a Virtual Reality headset, embodiments of the invention could also be used in Augmented Reality and Mixed Reality headset applications. Thus, embodiments of the invention would not only allow the virtual content to be seen as sharp images by the user, but could also adjust the optics of the headset so that the surrounding real environment appears clear even when the user is not wearing eyeglasses.

[00281] In the present embodiments, the adjustment of the optical components in the HMD was based on a prescription estimated using the methods described earlier to correct for myopia, hyperopia and astigmatism. In other embodiments, users may obtain and enter their prescription using other methods instead of estimating it with our smartphone-based approach. In other words, embodiments of the invention relate to a vision-correcting system for a HMD configured to automatically adjust the optical components to account for any given prescription. In addition, other properties may be adjusted to improve the comfort for HMD users - these include inter-pupillary distance (IPD), pantoscopic tilt and others. These other properties could be estimated using a similar approach to the methods currently described. For example, Figure 22 shows a mechanical diagram of a device 240 that would enable the physical adjustment of the intra-lens distance in a way similar to the focus adjustment of Embodiment 2, in order to account for a user's personal IPD. Accordingly, the device 240 comprises a frame 242 on which the lenses 244 are movably mounted on a steel rack (gear bar) 246 which is driven by a servomotor 248 to adjust the intra-lens distance.

Conclusion

[00282] A method for measuring the optical power and physical lens properties of prescription eyeglasses using a lightweight setup that can be implemented using a smartphone has been described above. Using this method, an accurate and objective measurement of spherical and cylindrical prescriptions can be conducted without additional equipment, in a home environment. The Applicants have demonstrated through an extensive quantitative study that the method yields accurate prescription estimations for both real and synthetic data. A particular application to vision-corrected HMDs is also described in detail, tackling a significant problem that plagues the majority of HMD users.

Further Applications

[00283] Embodiments of the invention can lead to the development of several commercial products, all based on the same core technology. The technology may impact four key markets: 1) optical retail; 2) video conferencing; 3) 3D sensors; and 4) head-mounted displays.

[00284] Optician retail stores worldwide could be interested in a portable and contact-less video lensmeter enabled by embodiments of the invention as described above, in order to quickly estimate a patient's eyeglasses prescription with a sleek smartphone application.

[00285] A growing proportion of eyeglasses are purchased through online stores, which require customers to provide their personal eyeglasses prescription, as evaluated by a trained optometrist. Embodiments of the present invention allow users to measure their prescription at home based on a current pair of eyeglasses, and it is envisaged that this will make users more likely to purchase eyeglasses through e-commerce and thus boost online sales.

[00286] A virtual try-on application in the domain of optometry allows placing virtual eyewear, such as conventional eyeglasses frames or sunglasses, onto a user's face captured by a colour camera [Déniz et al. 2010; Afifi and Korashy 2015]. Different commercial applications, such as glasses.com or FittingBox, allow adding eyewear to captured images or directly into live video streams. However, it is believed that all available commercial virtual try-on solutions ignore the effects of refraction caused by eyeglasses lenses. Refraction effects, as they occur with real eyeglasses lenses, substantially change how a wearer perceives his/her potentially new pair of eyeglasses.

[00287] A similar issue arises when trying on eyeglasses in a brick-and-mortar shop: the eyeglasses on display are equipped with lenses that have zero corrective power, so customers cannot see how they will actually appear until the sale is closed and their prescription lenses are inserted. The appearance of a user wearing eyeglasses may change drastically once his/her prescription lenses are fitted, due to refraction effects which increase or decrease the apparent size of the eyes; this often causes buyer's remorse.

[00288] In contrast, once the user's prescription and lens characteristics are known, e.g. by measuring them using an embodiment of the present invention, it is possible to account for refraction and to more faithfully simulate the appearance of a user wearing prescription eyeglasses.

[00289] A "virtual mirror" application is also conceived where prescription eyeglasses are inserted on top of a user's face. A video of the user is captured with a camera such as a Blackmagic cinema camera and face-tracking software such as Faceshift [Weise et al. 201 1] and a RGBD sensor such as Primesense Carmine 1 .09 can be used to estimate the pose and geometry of the user's face in each frame. The incident illumination can also be captured before the sequence using a reflective sphere [Debevec 1998]; alternatively, real-time methods can be used to estimate a low- frequency approximation of the incident illumination in real time [Wu et al. 2014].

[00290] Embodiments of the present invention can output a parametric 3D lens model which best fits the observation data from an image sequence, i.e., the surface properties of the estimated lens are similar to those of the actual lens observed in the image sequence.

[00291] Embodiments of the invention also allow for virtual lens cutting, in which the estimated 3D lens may be virtually cut based on the contour of a frame 3D model (provided separately). The virtual cutting process may be based on tracing out the contour for each lens surface individually. The interior of each of the two lens regions is then tessellated, and the two tessellated surface patches are connected along their boundaries. This embodiment allows a virtual lens to be placed into a virtual eyeglasses frame.

[00292] Inspired by real lens cutting machines, it is possible to cut a virtual lens by first aligning the optical axis of the unconstrained lens with the z-axis in world space. A contour, constructed based on a frame design, is then placed in front of the lens depending on the location of the eyes with respect to the eyeglasses frames. Rays parallel to the z-axis and located on the contour are then intersected with the lens to determine its final shape; note that the optical axis of the lens is aligned with the optical axis of the eye. The Applicants have developed an interactive tool to perform the positioning step, where the user can conveniently adjust the 3D lens and position it along its optical axis using the Faceshift mesh model of the user. The cut lens is represented using a finely tessellated triangle mesh.
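A minimal sketch of the contour-intersection step is given below; intersect_surface stands in for the ray-surface intersection routine of the estimated lens model, and the contour representation is assumed for illustration.

```python
# Sketch of the virtual lens-cutting step of paragraph [00292]: rays
# parallel to the z-axis, cast from sample points on the frame contour,
# are intersected with a lens surface to trace out the cut boundary.
# intersect_surface is a hypothetical stand-in returning the 3D hit
# point on the axis-aligned lens surface, or None on a miss.
import numpy as np

def cut_contour(contour_xy, intersect_surface, z_start=-100.0):
    """contour_xy: (N, 2) array of frame-contour points in the plane
    perpendicular to the lens's optical axis (aligned with z).
    Returns the 3D boundary points of the cut on the given surface."""
    direction = np.array([0.0, 0.0, 1.0])      # ray parallel to the z-axis
    boundary = []
    for x, y in contour_xy:
        origin = np.array([x, y, z_start])     # start well in front of the lens
        hit = intersect_surface(origin, direction)
        if hit is not None:
            boundary.append(hit)
    return np.array(boundary)
```

Repeating this for each lens surface and tessellating the interiors, as described in [00291], would yield the cut lens mesh.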

[00293] The combination of embodiments of the invention in estimating lens parameters and virtual lens cutting allows the realization of two further embodiments: 1) virtual try-on using a virtual 3D lens model which is similar to the actual lens; and 2) lens duplication, e.g., once measured and virtually cut, the lens could be duplicated using a 3D printer.

[00294] Embodiments of the invention may also be employed for fabricating lenses suitable for virtual reality devices. The current enthusiasm for virtual reality (VR) equipment opens up a variety of applications for embodiments of the invention. A use case where embodiments of the present invention would prove useful is a myopic patient who wants to use a VR headset, which is often uncomfortable to use while wearing eyeglasses. Instead of consulting an optician to order a custom lens, the methods described above could perform the measuring and design of optical parts customised for each user. Since 3D printers are now widespread, the user could even print custom lens holders, which accurately position the lenses within the VR headset in order to match the user's prescription. In the near future, the appropriate 3D lens could be printed directly using transparent media.

[00295] Embodiments of the present invention can also be used for quality control as well as lens duplication. In this case, embodiments could be used in two steps of an optometrist workflow: 1) to measure the physical properties of existing lenses, in order to duplicate them; and 2) to assess the quality of purchased eyeglasses, e.g. by checking the prescription, base curve and centration.

[00296] Once a lens power has been decided by an optician and an appropriate lens for the prescription has been chosen, the lens is cut and positioned into the eyeglasses frames. Then, the lens has to be centered such that the optical axis of the lens is aligned with the optical axis of the eye. The lens pose estimation part of the proposed embodiment could be used as a tool for quality control, to verify whether the two optical axes are indeed aligned.

[00297] Although only certain embodiments of the present invention have been described in detail, many variations are possible in accordance with the appended claims.

References

[00298] The disclosures of the following references are incorporated herein in their entirety.

1) AFIFI, M., AND KORASHY, M. 2015. Eyeglasses shop: Eyeglasses replacement system using frontal face image. In ICMIS 2015, The 4th International Conference on Mathematics and Information Science, Zewail City of Science and Technology.
2) AGARWAL, S., MALLICK, S. P., KRIEGMAN, D. J., AND BELONGIE, S. 2004. On refractive optical flow. In ECCV (2), vol. 3022, 483-494.
3) BBGR, 2015. EyeMio App. http://goo.gl/3AYKdF.
4) BEN-EZRA, M., AND NAYAR, S. 2003. What does motion reveal about transparency? In IEEE International Conference on Computer Vision (ICCV), vol. 2, 1025-1032.
5) BORZA, D., DARABANT, A. S., AND DANESCU, R. 2013. Eyeglasses lens contour extraction from facial images using an efficient shape description. Sensors 13, 10, 13638-13658.
6) BRANDIMAGE, 2015. Brandimage designs the new retail concept for Optic 2000 optician stores. http://sgkinc.com/aboutus/news/Brandimage-designs-the-new-retail-concept-for-optic-2000-optician-stores.
7) CHOUKROUN, A., AND GALLOU, S., 2014. Method for detecting a predefined set of characteristic points of a face, Jan. 9. US Patent App. 14/000,666.
8) CHOUKROUN, A., 2012. Augmented reality method applied to the integration of a pair of spectacles into an image of a face, Dec. 13. US Patent App. 13/522,599.
9) CHOUKROUN, A., 2013. Method and device for measuring an interpupillary distance, Mar. 28. US Patent App. 13/634,954.
10) CUENTO, M., 2013. System, method and software product in eyewear marketing, fitting out and retailing, May 23. US Patent App. 13/681,402.
11) DEHAIS, C., MAMMOU, K., CHOUKROUN, A., AND GALLOU, S., 2014. Model and method for producing 3D photorealistic models, Feb. 27. US Patent App. 13/976,142.
12) DÉNIZ, O., CASTRILLÓN, M., LORENZO, J., ANTÓN, L., HERNÁNDEZ, M., AND BUENO, G. 2010. Computer vision based eyewear selector. Journal of Zhejiang University SCIENCE C 11, 2, 79-91.
13) DU, C., AND SU, G. 2005. Eyeglasses removal from facial images. Pattern Recogn. Lett. 26, 14 (Oct.), 2215-2220.
14) ESSILOR, 2015. M'eye Fit Touch. http://goo.gl/OcLe49.
15) EYENETRA INC., 2015. Netrometer. https://eyenetra.com/productnetrometer.html.
16) FACESHIFT, 2015. Faceshift. http://www.faceshift.com.
17) FITTINGBOX, 2014. Eyewear intelligence talks about us! http://www.fittingbox.com/blog/eyewear-intelligence-talks-about-us.html.
18) GALLOU, S., AND CHOUKROUN, A., 2014. Method for determining ocular and optical measurements, Sept. 11. US Patent App. 14/348,044.
19) GU, K., 2010. Method and apparatus for automatic eyeglasses detection and removal, Jan. 26. US Patent 7,653,221.
20) IBISWORLD, 2015. Online eyeglasses & contact lens sales in the US: Market research report. http://www.ibisworld.com/industry/online-eyeglasses-contactlens-sales.html.
21) IHRKE, I., KUTULAKOS, K. N., LENSCH, H. P. A., MAGNOR, M., AND HEIDRICH, W. 2008. State of the art in transparent and specular object reconstruction. In STAR Proceedings of Eurographics, 87-108.
22) KIM, H., AHN, S., AND OH, Y., 2008. Image processing method for removing glasses from color facial images, June 24. US Patent 7,391,900.
23) LENSCRAFTERS, 2015. AccuFit. http://goo.gl/Jlyl9J.
24) LIU, D., CHEN, X., AND YANG, Y.-H. 2014. Frequency-based 3D reconstruction of transparent and specular objects. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, 660-667.
25) MARKETSANDMARKETS.COM, 2014. 3D sensor market by technology, products, type, application, and geography - analysis & forecast (2014-2020). http://www.marketsandmarkets.com/PressReleases/3dsensors.asp.
26) MIT TECHNOLOGY REVIEW, 2014. Intel says laptops and tablets with 3-D vision are coming soon. http://www.technologyreview.com/news/530666/intel-says-laptops-and-tablets-with-3-d-vision-are-coming-soon/.
27) OLLENDORF N.A. LLC, 2015. visuReal. http://www.visureal.com/.
28) OPTIC2000, 2015. Optic2000. https://www.optic2000.com/.
29) OPTIKAM TECH INC., 2015. OptikamPad. http://www.optikam.com/.
30) PARK, J.-S., OH, Y. H., AHN, S. C., AND LEE, S.-W. 2003. Glasses removal from facial image using recursive PCA reconstruction. In AVBPA, Springer, J. Kittler and M. S. Nixon, Eds., vol. 2688 of Lecture Notes in Computer Science, 369-376.
31) PARK, J.-S., OH, Y. H., AHN, S. C., AND LEE, S.-W. 2005. Glasses removal from facial image using recursive error compensation. Pattern Analysis and Machine Intelligence, IEEE Transactions on 27, 5 (May), 805-811.
32) RIBEIRO, J. L. P. 2015. Myopia glasses and optical power estimation: An easy experiment. The Physics Teacher 53, 2, 101-102.
33) RODIN, A., KHADER, L., AND RAJAN, S. 2014. Lensmeter: Power of eyeglasses measuring application. Final report, ECE 1778 - Creative Applications for Mobile Devices, University of Toronto.
34) ROTLEX, 2015. FreeForm Verifier. http://www.rotlex.com/freeform-verifier-ffv.
35) SERIANI, J., AND WILSON, H., 2008. System and method self enabling customers to obtain refraction specifications for, and purchase of, previous or new fitted eyeglasses, Aug. 21. US Patent App. 11/707,237.
36) TRANSPARENCY MARKET RESEARCH, 2013. Eyewear market - global industry analysis, size, share, growth and forecast 2012-2018. http://www.transparencymarketresearch.com/eyewearmarket.html.
37) TRANSPARENCY MARKET RESEARCH, 2015. Video conferencing market - global industry analysis, size, share, growth, trends and forecast 2014-2020. http://www.transparencymarketresearch.com/pressrelease/videoconferencing-market.htm.
38) VENTUREBEAT, 2013. Glasses.com's mobile app scans your face in 3D, lets you try on sunglasses virtually. http://venturebeat.com/2013/04/09/glasses-coms-mobileapp-scans-your-face-into-3d-and-try-on-sunglasses-virtually/.
39) VSP OPTICS GROUP, 2015. otto - one touch to optical. http://goo.gl/vkoyr1.
40) WAUPOTITSCH, R., TSOUPKO-SITNIKOV, M., MEDIONI, G., MISHIN, O., SHAMGIN, V., CALLARI, F., AND GUIGONIS, D., 2006. Interactive try-on platform for eyeglasses, Mar. 21. US Patent 7,016,824.
41) WEISE, T., BOUAZIZ, S., LI, H., AND PAULY, M. 2011. Realtime performance-based facial animation. ACM Trans. Graph. (Proc. of SIGGRAPH) 30, 4 (July), 77:1-77:10.
42) WU, C., LIU, C., SHUM, H.-Y., XU, Y.-Q., AND ZHANG, Z. 2004. Automatic eyeglasses removal from face images. Pattern Analysis and Machine Intelligence, IEEE Transactions on 26, 3 (March), 322-336.
43) YE, M., ZHANG, C., AND YANG, R. 2013. Video enhancement of people wearing polarized glasses: Darkening reversal and reflection reduction. In Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on.
44) YU, J., QIU, Z., AND FANG, F. 2014. Mathematical description and measurement of refractive parameters of freeform spectacle lenses. Appl. Opt. 53, 22 (Aug), 4914-4921.
45) ZEISS, 2015. i.Terminal. http://goo.gl/Rkcqp8.
46) BOUGUET, J. Y., 2008. Camera calibration toolbox for Matlab.
47) BRADSKI, G. 2000. The OpenCV Library. Dr. Dobb's Journal of Software Tools.
48) OLSON, E. 2011. AprilTag: A robust and flexible visual fiducial system. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), IEEE, 3400-3407.
49) GLASSNER, A. S., Ed. 1989. An Introduction to Ray Tracing. Academic Press Ltd., London, UK.
50) NELDER, J. A., AND MEAD, R. 1965. A simplex method for function minimization. The Computer Journal 7, 4, 308-313.
51) DEBEVEC, P. 1998. Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography. In Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, ACM, New York, NY, USA, SIGGRAPH '98, 189-198.
52) HUANG, F.-C., WETZSTEIN, G., BARSKY, B. A., AND RASKAR, R. 2014. Eyeglasses-free display: Towards correcting visual aberrations with computational light field displays. ACM Trans. Graph. 33, 4 (July), 59:1-59:12.
53) MATSUMOTO, M., PAMPLONA, V. F., HOFFMANN, M., UZEJKA, G., AND SHARPE, N. 2015. High-order power map and low order lensmeter using a smartphone add-on. In Frontiers in Optics 2015, Optical Society of America, FW5E.2.
54) PEDROTTI, F. L., PEDROTTI, L. M., AND PEDROTTI, L. S. 2006. Introduction to Optics (3rd Edition), ch. 18: Matrix Methods in Paraxial Optics.
55) WU, C., ZOLLHÖFER, M., NIESSNER, M., STAMMINGER, M., IZADI, S., AND THEOBALT, C. 2014. Real-time shading-based refinement for consumer depth cameras. In ACM Transactions on Graphics (Proceedings of SIGGRAPH Asia 2014), vol. 33.
56) FINK, W., AND MICOL, D. 2006. simEye: computer-based simulation of visual perception under various eye defects using Zernike polynomials. Journal of Biomedical Optics 11, 5, 054011.
57) KONRAD, R., COOPER, E. A., AND WETZSTEIN, G. 2016. Novel optical configurations for virtual reality: Evaluating user preference and performance with focus-tunable and monovision near-eye displays. In CHI '16.
58) KONRAD, R., PADMANABAN, N., COOPER, E., AND WETZSTEIN, G. 2016. Computational focus-tunable near-eye displays. In SIGGRAPH '16 Emerging Technologies.
59) MALACARA-HERNÁNDEZ, D., AND MALACARA-HERNÁNDEZ, Z. 2003. Handbook of Optical Design, second edition. CRC Press.
60) MEISTER, D., AND SHEEDY, J. E. 2000. Introduction to Ophthalmic Optics.
61) MOHAN, K., AND SHARMA, A. 2012. How often are spectacle lenses not dispensed as prescribed? Indian Journal of Ophthalmology 60, 6 (Nov-Dec), 553-555.
62) VON HELMHOLTZ, H., GULLSTRAND, A., VON KRIES, J., AND NAGEL, W. 1909. Handbuch der Physiologischen Optik. Leopold Voss.
63) WEI, Q., PATKAR, S., AND PAI, D. K. 2014. Fast ray-tracing of human eye optics on graphics processing units. Comput. Methods Prog. Biomed. 114, 3 (May), 302-314.