

Title:
IMAGE ACQUISITION SYSTEM OPTIMIZATION IN VISION-BASED INDUSTRIAL AUTOMATION
Document Type and Number:
WIPO Patent Application WO/2024/035397
Kind Code:
A1
Abstract:
According to disclosed embodiments for configuring an image acquisition system with one or more cameras for vision-based inspection of parts on a production line, a simulation engine renders synthetic images of a part on the production line acquired by the one or more cameras, based on a 3D model of the part and a configuration of the image acquisition system defined by optimizable parameters. A surface coverage measurement engine uses an output of the simulation engine to measure blind spots on a part surface for individual cameras and therefrom determine a measure of visible surface coverage on the 3D model of the part. An optimization engine generates an updated configuration of the image acquisition system by updating the optimizable parameters based on evaluation of an optimization objective defined by the measured visible surface coverage. The above process is iteratively executed to determine a final configuration of the image acquisition system.

Inventors:
EROL BARIS (US)
KISLEY BENJAMIN (US)
BREU ANNEMARIE (US)
DUBE JASON (CA)
DÖRING TIMO (DE)
Application Number:
PCT/US2022/039894
Publication Date:
February 15, 2024
Filing Date:
August 10, 2022
Assignee:
SIEMENS AG (DE)
SIEMENS CORP (US)
International Classes:
G06T7/00; G06T7/60; G06T7/70; G06T15/20
Foreign References:
EP3937069A1 (2022-01-12)
Other References:
CHUANTAO ZANG ET AL: "A flexible visual inspection system combining pose estimation and visual servo approaches", ROBOTICS AND AUTOMATION (ICRA), 2012 IEEE INTERNATIONAL CONFERENCE ON, IEEE, 14 May 2012 (2012-05-14), pages 1304 - 1309, XP032450643, ISBN: 978-1-4673-1403-9, DOI: 10.1109/ICRA.2012.6224912
Attorney, Agent or Firm:
BASU, Rana (US)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for configuring an image acquisition system comprising one or more cameras for vision-based inspection of parts on a production line, the method comprising: over a number of iterations, performing:

executing a simulation engine for rendering synthetic images of a part on the production line acquired by the one or more cameras, based on a 3D model of the part and a configuration of the image acquisition system defined by optimizable parameters,

executing a surface coverage measurement engine for using an output of the simulation engine to measure blind spots on a part surface for individual cameras of the one or more cameras, and therefrom determine a measure of visible surface coverage on the 3D model of the part, and

executing an optimization engine for generating an updated configuration of the image acquisition system by updating the optimizable parameters based on evaluation of an optimization objective defined by the measured visible surface coverage,

whereby, a final configuration of the image acquisition system is determined for vision-based inspection on the production line.

2. The method according to claim 1, wherein the simulation engine is executed for rendering, for each individual camera, a plurality of synthetic images based on randomization of environmental and operational parameters pertaining to the production line.

3. The method according to claim 2, wherein the synthetic images rendered by the simulation engine include photorealistic images.

4. The method according to claim 3, wherein the output of the simulation engine utilized by the surface coverage measurement engine includes the photorealistic images, the visible surface coverage being measured by: for individual cameras, segmenting the respective rendered photorealistic images based on a defined range of color density levels to generate 2D surface coverage maps delineating visible regions from shadow regions corresponding to blind spots, mapping a pixel space of the 2D surface coverage maps for individual cameras into the 3D model of the part to create a global visible surface coverage map.

5. The method according to any of claims 1 to 4, wherein the output of the simulation engine utilized by the surface coverage measurement engine includes, for each individual camera, a region of the part surface defined by the 3D model of the part that lies within a field of view (FOV) of the camera, the visible surface coverage being measured by: for each individual camera, determining a measure of an angle between the part surface and an FOV plane of the camera at multiple locations in the region within the FOV of the camera, determining a visibility of each location by comparing the respective angle to a defined angle range, and mapping the visibility of the locations determined for the individual cameras into the 3D model of the part to create a global visible surface coverage map.

6. The method according to claim 5, wherein the multiple locations correspond respectively to planar surface elements defined by a polygon mesh of the 3D model of the part, which are contained in the region within the FOV of the respective camera.

7. The method according to claim 5, wherein the multiple locations are determined by sampling points on a UV map of the part surface defined by the 3D model of the part, for the region within the FOV of the respective camera.

8. The method according to claim 5, wherein the multiple locations include landmarks on planar surfaces formed by geometrical shapes on the part surface defined by the 3D model of the part, for the region within the FOV of the respective camera.

9. The method according to any of claims 3 to 8, further comprising: executing a defect generator to create defect data in the photorealistic images, creating a dataset using the photorealistic images including the defect data, training an artificial intelligence (AI) model for defect detection using the dataset, and measuring a performance of the trained AI model, wherein the optimization objective is defined by a combination of the measured visible surface coverage and the measured performance of the AI model.

10. The method according to any of claims 1 to 9, wherein the iterations are performed until the evaluation of the optimization objective reaches a predefined threshold.

11. The method according to any of claims 1 to 10, wherein the optimization engine comprises a genetic algorithm.

12. The method according to any of claims 1 to 11, comprising performing each of the iterations using 3D models of different parts having different nominal geometries, to determine a final generalized configuration of the image acquisition system for inspecting the different parts.

13. The method according to any of claims 1 to 12, wherein the optimizable parameters are selected from the group consisting of: position, angle, field of view, exposure, color mode, analog gain and depth of focus of the one or more cameras.

14. A non-transitory computer-readable storage medium including instructions that, when processed by a computing system, configure the computing system to perform the method according to any one of claims 1 to 13.

15. A system for configuring an image acquisition system comprising one or more cameras for vision-based inspection of parts on a production line, the system comprising: one or more processors, and a memory storing algorithmic modules executable by the one or more processors, the algorithmic modules comprising:

a simulation engine configured to render synthetic images of a part on the production line acquired by the one or more cameras, based on a 3D model of the part and a configuration of the image acquisition system defined by optimizable parameters,

a surface coverage measurement engine configured to use an output of the simulation engine to measure blind spots on a part surface for individual cameras of the one or more cameras, and therefrom determine a measure of visible surface coverage on the 3D model of the part, and

an optimization engine configured to generate an updated configuration of the image acquisition system by updating the optimizable parameters based on evaluation of an optimization objective defined by the measured visible surface coverage,

wherein the simulation engine, the surface coverage measurement engine and the optimization engine are executable over a number of iterations to determine a final configuration of the image acquisition system for vision-based inspection on the production line.

Description:
IMAGE ACQUISITION SYSTEM OPTIMIZATION IN VISION-BASED INDUSTRIAL AUTOMATION

TECHNICAL FIELD

[0001] The present disclosure relates to computer vision systems for industrial automation applications. Embodiments of the disclosure specifically relate to a technique for optimization of configuration of an image acquisition system for vision-based inspection of parts on a production line.

BACKGROUND

[0002] Computer vision has become central to various segments of modern manufacturing processes. For example, computer vision technologies can help in visual quality tasks, which are becoming increasingly important to reduce the number of defective products. One challenge to deploying these advanced technologies successfully is the correct determination of the configuration of the image acquisition system, such as the number of cameras, camera position and angle, etc., during the inspection process. The performance of an artificial intelligence (AI) model may depend heavily on the quality of the image data that it is trained on. Therefore, if the initial system design is not done correctly (e.g., cameras are positioned in the wrong locations, or the wrong lighting solution is utilized), the quality of the image data may be detrimentally reduced.

[0003] Currently, the installation or even modification of an image acquisition system for production line inspection can be a tedious manual process that usually requires full-time human expertise from different domains, such as lighting, camera, AI, and production experts. Furthermore, the design phase of most vision projects may involve a comprehensive experimentation stage, where domain experts may try different configurations (e.g., number of cameras, camera position and angle, etc.) to design an image acquisition system. However, the resulting designs, based on configuration parameters determined by experts, are often subjective and produce suboptimal results. For example, the design may be suitable only for specific parts and not generalizable enough.

SUMMARY

[0004] Briefly, aspects of the present disclosure provide a virtual commissioning framework to optimize a configuration of an image acquisition system for vision-based inspection of parts on a production line.

[0005] A first aspect of the disclosure provides a computer-implemented method for configuring an image acquisition system comprising one or more cameras for vision-based inspection of parts on a production line. The method comprises executing a simulation engine for rendering synthetic images of a part on the production line acquired by the one or more cameras, based on a 3D model of the part and a configuration of the image acquisition system defined by optimizable parameters. The method further comprises executing a surface coverage measurement engine for using an output of the simulation engine to measure blind spots on a part surface for individual cameras of the one or more cameras, and therefrom determine a measure of visible surface coverage on the 3D model of the part. The method further comprises executing an optimization engine for generating an updated configuration of the image acquisition system by updating the optimizable parameters based on evaluation of an optimization objective defined by the measured visible surface coverage. The method comprises performing the above activities over a number of iterations to determine a final configuration of the image acquisition system for vision-based inspection on the production line.

[0006] Other aspects of the disclosure implement features of the above-described method in computing systems and computer program products for configuring an image acquisition system for vision-based inspection of parts on a production line.

[0007] Additional technical features and benefits may be realized through the techniques of the present disclosure. Embodiments and aspects of the disclosure are described in detail herein and are considered a part of the claimed subject matter. For a better understanding, refer to the detailed description and to the drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] The foregoing and other aspects of the present disclosure are best understood from the following detailed description when read in connection with the accompanying drawings. To easily identify the discussion of any element or act, the most significant digit or digits in a reference number refer to the figure number in which the element or act is first introduced. For clarity, some of the images herein are schematically represented as line drawings.

[0009] FIG. 1 illustrates an example of an inspection system for a production line including an image acquisition system for vision-based inspection of parts.

[0010] FIG. 2 is a schematic illustration of a system for configuring an image acquisition system for vision-based inspection of parts on a production line according to an example embodiment.

[0011] FIG. 3 illustrates an example of a synthetic image of a part rendered by a simulation engine for a 2D image-based blind spot detector.

[0012] FIG. 4 illustrates an output of a global context mapper with a 2D image-based blind spot detector.

[0013] FIG. 5 is a schematic illustration of measurement of planar surface angles by a 3D geometry-based blind spot detector.

[0014] FIG. 6 illustrates an output of a global context mapper with a 3D geometry-based blind spot detector.

[0015] FIG. 7 is a schematic illustration of deployment of the disclosed methodology on a vision-based inspection system.

[0016] FIG. 8 illustrates a computing system that can support configuration of an image acquisition system for vision-based inspection of parts on a production line according to an example embodiment.

DETAILED DESCRIPTION

[0017] In the design of image acquisition systems for end-of-the-line inspection in production processes, a primary objective is to acquire the best quality image data (such as with low light reflection, the ability to resolve the minimum defect size, robustness against motion blur, etc.) and to capture the full part on the inspection conveyor system. Full part coverage may be achieved, for example, by using several cameras and/or multiple trigger times based on the dimensions of a specific part. Such measures can pose maintenance challenges resulting from the additional resources, data and hardware management challenges due to the added hardware complexity and bandwidth issues, and moreover may not be generalizable for different types of parts.

[0018] One of the technical problems caused by an ill-informed design is the occurrence of so-called blind spots. A blind spot is defined as a region on the surface of a part that is not visible to a camera lens due to the position of the camera with respect to the part. The present inventors recognize that full part coverage, for example using multiple cameras/trigger times as described above, may not necessarily address the problem of blind spots, since a blind spot may exist even within a camera’s field of view.

[0019] The disclosed methodology can generate optimized configuration parameters for an image acquisition system via surface coverage measurement based on blind spot detection using simulation. The disclosed methodology may be implemented as part of a virtual commissioning pipeline to ensure an optimal configuration for image acquisition in the physical environment without requiring real-world experimentation.

[0020] Turning now to the drawings, FIG. 1 illustrates an inspection system for a production line where the disclosed methodology can be suitably deployed. As shown, the inspection system 100 may include a conveyor 102 for moving a part 104 through an inspection area. The part 104 may be manufactured or assembled upstream on the production line and transferred to the conveyor 102, for example, by a gantry, a robotized system, or any other type of transfer mechanism. As the part 104 is moved by the conveyor 102, an image acquisition system 106 may be triggered to acquire images of the part 104 via one or more cameras. In the shown example, the image acquisition system 106 includes two cameras 108a, 108b mounted on a horizontal fixture 110 extending across the width of the conveyor 102. The position of the cameras 108a, 108b may be adjustable by moving them along the horizontal fixture 110 (in this example, by translation along the Y axis). The angle of the cameras 108a, 108b may be adjustable by rotation, for example by rolling, pitching and yawing (in this example, by rotation about the X, Y and Z axes respectively). Furthermore, depending on the dimensions of the part 104, each of the cameras 108a, 108b may be triggered once or multiple times, to capture the entire extent of the part 104. In various other embodiments of the inspection system, the inspection station may be stationary where there is no motion of the part during inspection and/or the image acquisition system may comprise one or more robots capable of manipulating the position and angle of one or more cameras.

[0021] The images captured by the image acquisition system 106 may be processed, for example, using artificial intelligence (AI) models and/or computer vision algorithms to detect the presence of a defect in the part. To that end, the image acquisition system 106 may be optimally configured in terms of parameters such as number of cameras, camera positions, angles, etc., which can be determined using the methodology described herein.

[0022] FIG. 2 illustrates an example embodiment of system 200 for configuring an image acquisition system for vision-based inspection of parts on a production line. The various engines described herein, including the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, may be implemented by a computing system in various ways, for example, as hardware and programming. The programming for the engines 204, 216 and 236 may take the form of processor-executable instructions stored on non-transitory machine-readable storage mediums and the hardware for the engines may include processors to execute those instructions. The processing capability of the systems, devices, and engines described herein, including the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, may be distributed among multiple system components, such as among multiple processors and memories, optionally including multiple distributed processing systems or cloud/network elements.

[0023] Referring to FIG. 2, the simulation engine 204 may be executed to render synthetic images of a part on the production line acquired by one or more cameras, based on a 3D model of the part and a configuration of the image acquisition system defined by optimizable parameters. The 3D model can include, for example, a CAD model of the part. The surface coverage measurement engine 216 may be executed to use an output 214 of the simulation engine 204 to measure blind spots on a part surface for individual cameras, and therefrom determine a measure 222 of visible surface coverage on the 3D model of the part. The optimization engine 236 may be executed to generate an updated configuration of the image acquisition system by updating the optimizable parameters based on evaluation of an optimization objective defined by the measured visible surface coverage 222. The above steps may be performed over a number of iterations, starting with an initial configuration 212 of the image acquisition system, to iteratively determine a final configuration 248 of the image acquisition system for vision-based inspection on the production line. The initial configuration 212 may be specified, for example, based on input from a domain expert. The final configuration 248 may be determined when a convergence criterion is met by the optimization engine 236.
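
The interplay of the three engines can be summarized as an optimization loop. The following minimal Python sketch illustrates that loop; the callables simulate, measure_coverage and propose_update are hypothetical placeholders standing in for the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, and the convergence test is only an example.

```python
# Minimal sketch of the iterative configuration loop described in [0023].
# simulate(), measure_coverage() and propose_update() are placeholder
# interfaces, not interfaces defined by this disclosure.

def optimize_acquisition_config(initial_config, part_model, simulate,
                                measure_coverage, propose_update,
                                objective_threshold=0.95, max_iterations=100):
    """Iteratively refine the optimizable camera parameters."""
    config = initial_config
    best_config, best_objective = config, float("-inf")
    for _ in range(max_iterations):
        images = simulate(part_model, config)             # render synthetic views
        objective = measure_coverage(images, part_model)  # visible surface coverage
        if objective > best_objective:
            best_config, best_objective = config, objective
        if objective >= objective_threshold:              # example convergence criterion
            break
        config = propose_update(config, objective)        # updated configuration
    return best_config
```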

[0024] In one implementation, the simulation engine 204 may perform a simple camera simulation using a camera simulator 206 to render the synthetic images based on the 3D model of the part and configuration parameters of the image acquisition system. The configuration parameters may include optimizable parameters, such as the number, position and angle of the cameras, as well as fixed parameters, such as exposure, color mode, resolution, analog gain, field of view, depth of focus, etc. In embodiments, some of the so-called fixed parameters mentioned above may also be defined as optimizable parameters.

[0025] In accordance with disclosed embodiments, a more realistic simulation may be performed using a digital twin 202 of the inspection system. The digital twin 202 may comprise a 3D model of the actual inspection system, for example, including the conveyor and other components of the transfer mechanism. The digital twin 202 may be created based on an overall system design, for example, using information pertaining to the image acquisition system, the part(s) to be inspected, line configuration and process parameters. The image acquisition system information may include information such as location and design of the fixture to mount the cameras (e.g., array or grid, distance from inspection station, etc.), available number of cameras, candidate camera trigger locations, camera fixed parameters (such as mentioned above), among others. The part information may include a 3D model, such as a CAD model, of each part to be inspected. The line configuration information may include, for example, inspection station background, conveyor speeds, ambient lighting conditions, line/station dimensions, part orientation, etc. The process parameters may include parameters related to the production process, such as shift or time of day, number of operators, etc. Having created the digital twin 202, the simulation may be started using an initial configuration 212 of the image acquisition system. The initial configuration 212 may comprise preferred initial values of the optimizable parameters, for example, specified by a domain expert.

[0026] Consistent with disclosed embodiments, the simulation engine 204 may comprise an environmental simulator 208 that can be used in combination with the camera simulator 206 to render, for each individual camera, a plurality of synthetic images based on randomization of environmental and operational parameters pertaining to the production line. The environmental simulator 208 may utilize the digital twin 202 to generate a large number of scenarios by randomizing environmental and operational parameters, such as ambient lighting conditions, part orientations, conveyor speeds, vibrations, the presence of operators next to the inspection station, conveyor/inspection station background, among many others. The camera simulator 206 may be executed to render a synthetic image corresponding to each camera for each scenario generated by the environmental simulator 208, using the optimizable parameters of the image acquisition system for the given iteration of optimization. The randomization implemented by the environmental simulator 208 may be distinct from traditional domain randomization because it is focused on variables in the physical environment of the production line and can thus simulate realistic changes that can be seen on the actual production line. The variations achieved by randomization can generalize the camera positions and make the optimization more robust.
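
By way of illustration, the randomization performed by the environmental simulator 208 can be thought of as sampling a set of scenario parameters before each rendering pass. The parameter names and value ranges in the sketch below are assumptions for illustration only; they are not values prescribed by the disclosure.

```python
import random
from dataclasses import dataclass

# Illustrative only: the scenario parameters and value ranges below are
# assumptions, not values taken from the disclosure.

@dataclass
class Scenario:
    ambient_lux: float         # ambient lighting level
    part_yaw_deg: float        # part orientation on the conveyor
    conveyor_speed_mps: float  # conveyor speed
    operator_present: bool     # operator next to the inspection station
    background_id: int         # inspection-station background variant

def randomize_scenarios(n, seed=0):
    """Generate n randomized environmental/operational scenarios for rendering."""
    rng = random.Random(seed)
    return [
        Scenario(
            ambient_lux=rng.uniform(200.0, 1200.0),
            part_yaw_deg=rng.uniform(-10.0, 10.0),
            conveyor_speed_mps=rng.uniform(0.1, 0.5),
            operator_present=rng.random() < 0.3,
            background_id=rng.randrange(4),
        )
        for _ in range(n)
    ]
```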

[0027] The surface coverage measurement engine 216 may comprise a blind spot detector 218 that can use an output 214 of the simulation engine 204 to measure blind spots on the part surface for each camera being simulated, and a global context mapper 220 that can map the blind spots of individual cameras on the 3D model of the part to determine a measure of the overall visible surface coverage. The measured visible surface coverage 222 may be used as a metric to optimize the camera configuration.

[0028] As stated above, a blind spot, in relation to a camera, may be defined as a region on the part surface that is not visible to the camera lens due to the camera’s position. For example, usually for a top-down camera, any vertical surface can be a potential blind spot. In the image space, the blind spots can show up as shadow regions due to the reflection pattern from part to the camera. In this disclosure, two different modalities of blind spot detectors are described.

[0029] A first modality of the blind spot detector 218 may involve a 2D image-based blind spot detector. In this case, the output 214 of the simulation engine 204 that defines an input to the blind spot detector 218 may include photorealistic images. A photorealistic image is an image rendering that is based on simulation of the behavior of light, for example, using techniques such as ray tracing. A photorealistic image may thus have realistic lighting reflections as can be expected on a real shop floor. According to disclosed embodiments, the photorealistic images may comprise RGB images (having red, green and blue channels of pixel intensities).

[0030] An example of a rendered image that can be processed by a 2D image-based blind spot detector 218 is schematically shown in FIG. 3. Here, the image 300 corresponds to a portion of the part surface captured by a single camera with a top-down field of view (FOV). For clarity, the image 300 is represented as a line drawing without photorealistic effect. However, in the photorealistic rendering of the image, the surfaces perpendicular to the FOV plane, such as the inside vertical walls 302, 304, 306 in this example, would appear as shadow regions. Such shadow regions may have different pixel color densities than surfaces that face the camera FOV plane, such as surfaces 308, 310, 312. Hence, there may exist a one-to-one correlation between shadow regions and blind spots.

[0031] For implementing the 2D image-based blind spot detector 218, a range of color densities may be defined for categorizing blind spots and visible surfaces. In some embodiments, multiple ranges of color densities may be defined to even categorize different levels of visible surfaces. Density may be defined as a measure of “darkness” or “lightness” of a region in an image which may be measured in terms of RGB values of the pixels. For each individual camera that is simulated, the 2D image-based blind spot detector 218 may segment the respective rendered photorealistic images based on the defined range of color density levels to generate 2D surface coverage maps delineating visible regions from shadow regions corresponding to blind spots.

[0032] In embodiments, the 2D image-based blind spot detector 218 may be executed as follows. At the outset, each image may be processed to remove the background from the image. In one embodiment, the background-subtracted image may be segmented into regions of different levels of visibility based on comparison of each pixel’s RGB values to the defined color density range. In another embodiment, a color-wise intensity map may be computed, and the different color groups may be clustered using a clustering technique, such as k-means. The RGB values of the cluster centers may be used as reference points for determining the visibility level of every pixel in the respective clusters, based on a comparison with the defined color density range. The result can include a 2D surface coverage map that may be indicative of the percentages of different levels of visible surfaces (including invisible surfaces or blind spots). The visibility levels in the 2D surface coverage map corresponding to a given camera may be determined by combining results from multiple photorealistic images rendered by environmental and operational randomization.
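
A simplified version of this segmentation step is sketched below. It uses plain density thresholds rather than the clustering variant, and the threshold values are illustrative assumptions; the disclosure only requires that one or more ranges of color density be defined.

```python
import numpy as np

# Sketch of threshold-based segmentation into visibility levels. LOW_MAX and
# MID_MAX are illustrative assumptions; the disclosure also contemplates a
# clustering variant (e.g., k-means), which is not shown here.

LOW_MAX = 60    # darkest pixels -> shadow regions / blind spots
MID_MAX = 150   # mid-density pixels -> medium visibility

def coverage_map_2d(rgb_image, part_mask):
    """Per-pixel visibility map: -1 = background, 0 = blind spot, 1 = medium, 2 = high.

    rgb_image: HxWx3 uint8 rendered photorealistic image.
    part_mask: HxW bool array, True where the pixel belongs to the part.
    """
    density = rgb_image.astype(np.float32).mean(axis=2)   # simple grayscale "density"
    levels = np.full(density.shape, -1, dtype=np.int8)
    levels[part_mask & (density <= LOW_MAX)] = 0
    levels[part_mask & (density > LOW_MAX) & (density <= MID_MAX)] = 1
    levels[part_mask & (density > MID_MAX)] = 2
    return levels

def visibility_fractions(levels):
    """Fraction of part pixels at each visibility level (0, 1, 2)."""
    total = max(float((levels >= 0).sum()), 1.0)
    return {lvl: float((levels == lvl).sum()) / total for lvl in (0, 1, 2)}
```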

[0033] Once the above-described process is repeated for each individual camera, the results may be forwarded to the global context mapper 220. The global context mapper 220 may map the pixel space of the 2D surface coverage maps for individual cameras into the 3D model of the part to create a global visible surface coverage map. For overlapping pixels (i.e., pixels viewed by multiple cameras), the highest visibility level may be considered to create the global visible surface coverage map.

[0034] FIG. 4 illustrates a global visible surface coverage map 400 for a 2D image-based blind spot detector. As shown, the global visible surface coverage map 400 indicates three levels of visibility on a 3D model of the part. In the shown illustration, the shadings 402, 404, 406 respectively indicate low (blind spots), medium and high level of visibility. The visible surface coverage may be measured by determining an area excluding the blind spots (in this case, the medium and high visibility regions), for example, as a fraction or percentage of the total part surface.

[0035] Turning back to FIG. 2, a second modality of the blind spot detector 218 may involve a 3D geometry-based blind spot detector. In this case, the output 214 of the simulation engine 204 that defines an input to the blind spot detector 218 may include, for each individual camera, a region of the part surface defined by the 3D model of the part that lies within the FOV of the camera. As is typical, the 3D model of the part may be constructed by generating a polygon mesh defining planar surface elements. The planar surface elements may comprise, for example, tessellated triangles, quadrilaterals or other simple convex polygons. In one embodiment, the output 214 used by the 3D geometry-based blind spot detector 218 may include, for each camera, a collection of planar surface elements contained within the FOV of the camera. In this modality, the blind spot detector 218 does not necessarily require photorealistic renderings.

[0036] The 3D geometry-based blind spot detector 218 may operate by determining a visibility level at multiple locations on the part surface based on an angle between the local part surface and an FOV plane of the camera. The FOV plane of a camera refers to a plane defined by the image acquiring surface (lens) of the camera. Thus, the 3D geometry-based blind spot detector 218 may determine, for each camera, a measure of an angle between the part surface and an FOV plane of the camera at multiple locations in the region within the FOV of the camera. For each location, a visibility level may be determined by comparing the respective angle to a defined angle range. In some embodiments, multiple angle ranges may be defined to categorize multiple levels of visibility.

[0037] In a simple implementation, the locations for angle measurement may be defined by using landmarks. The landmarks may be placed on planar surfaces formed by geometrical shapes on the part surface defined by the 3D model of the part. A landmark may be representative of all points on a given planar surface. The landmarks may be specified, for example, in the part information used for creating the digital twin 202. This approach can be computationally efficient and particularly suitable for part surfaces that are not complex (e.g., including regular geometrical shapes).

[0038] Referring to FIG. 5, a part 504 may comprise several planar surfaces, such as 504a, 504b, 504c, 504d, etc., which can have landmarks placed on them. For each individual camera 502a, 502b, 502c, an angle between the FOV plane of the camera and the planar surfaces within the FOV of that camera may be measured at the respective landmark points. In the shown illustration, θa denotes an angle between the FOV plane of the camera 502a and the surface 504b, θb denotes an angle between the FOV plane of the camera 502b and the surface 504a, and θc denotes an angle between the FOV plane of the camera 502c and the surface 504a.

[0039] The angles described herein are 3D angles. In practice, the measure of each angle may be determined by measuring an angle between the normal of the planar surface (surface normal) and the normal of the camera FOV plane (camera normal). Thus, visibility may be highest when the angle is 180 degrees (planar surface faces camera FOV plane), and almost zero when the angle drops to about 90 degrees. In case of overlapping planar regions that are within the FOV of multiple cameras, the highest measured angle may be stored.

[0040] In another embodiment, instead of landmarks, the multiple locations may correspond respectively to planar surface elements defined by the polygon mesh on the 3D model of the part. In this case, for a given camera, surface normals may be computed for each planar surface element within the FOV of the camera. Based on this, an angle may be measured between each planar surface element and the FOV plane of the camera by measuring an angle between the respective surface normal and the camera normal. This process may be repeated for each camera being simulated. Thus, for each planar surface element on the 3D model of the part, an angle may be stored that is indicative of its visibility. In case of overlapping planar surface elements that are within the FOV of multiple cameras, the highest measured angle may be stored. The stored angles for each planar surface element may be compared with a defined angle range to determine a visibility level of each planar surface element. This technique may be particularly suitable for part surfaces that have complex shapes, as it can be used to accurately determine gradients of planar surface angles along the surface of the part.

[0041] In yet another embodiment, the multiple locations may be determined by sampling points on a UV map of the part surface defined by the 3D model of the part, for the region within the FOV of the respective camera. UV mapping is a known technique to map (unwrap) a 3D mesh into a 2D image. The 2D image may be defined by UV coordinates that correspond with vertex information from the 3D mesh. This embodiment may involve generating a UV map using the 3D mesh of the part. Points may be sampled on the UV map, for example using random or blue noise sampling. The sampled points may already include UV projection information, including surface normals. For each sampled point in the FOV of a camera, an angle may be directly determined based on the knowledge of the surface normal and the camera normal. This process may be repeated for each camera being simulated. Thus, for each sampled point on the UV map, an angle may be stored that is indicative of its visibility. In case of overlapping points that are within the FOV of multiple cameras, the highest measured angle may be stored. The stored angles for each point may be compared with a defined angle range to determine a visibility level of each point.
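
The angle-based visibility test common to the embodiments above can be sketched as follows, assuming each planar surface element (or sampled point) is represented by a unit surface normal and that FOV membership has already been determined per camera. The function name, array shapes and parameter names are illustrative, not defined by the disclosure.

```python
import numpy as np

# Sketch of the angle-based visibility test: keep, per face, the largest angle
# between its surface normal and the FOV-plane normal of any camera that sees it.

def per_face_visibility_angles(face_normals, camera_normals, fov_masks):
    """Largest angle (degrees) between each face normal and any viewing camera's normal.

    face_normals:   (F, 3) unit outward normals of mesh faces or sampled points.
    camera_normals: (C, 3) unit normals of the camera FOV planes.
    fov_masks:      (C, F) bool, True if face f lies within the FOV of camera c.
    """
    cosines = camera_normals @ face_normals.T                      # (C, F)
    angles = np.degrees(np.arccos(np.clip(cosines, -1.0, 1.0)))    # 180 deg = face-on
    angles = np.where(fov_masks, angles, 0.0)                      # ignore faces outside FOV
    return angles.max(axis=0)                                      # highest angle per face
```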

[0042] The global context mapper 220 may map the visibility level of the locations determined for individual cameras into the 3D model of the part to create a global visible surface coverage map. In one embodiment, the visibility levels corresponding to a given camera may be determined by combining results from multiple images rendered by environmental and operational randomization.

[0043] FIG. 6 shows an example of a global visible surface coverage map 600 for a 3D geometry-based blind spot detector. As shown, the global visible surface coverage map 600 indicates three levels of visibility on a 3D model of the part. In the shown illustration, the shadings 602, 604, 606 respectively indicate low (blind spots), medium and high level of visibility. In an example implementation, the shading 602 indicating low visibility may include points or planar surface elements having an angle in the range of [0 to 100 degrees], the shading 604 indicating medium visibility may include points or planar surface elements having an angle in the range of [100 to 130 degrees] and the shading 606 indicating high visibility may include points or planar surface elements having an angle in the range of [130 to 180 degrees]. The visible surface coverage may be measured by determining an area excluding the blind spots (in this case, the medium and high visibility regions), for example, as a fraction or percentage of the total part surface.
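
Given per-face (or per-point) angles and the example ranges above, the coverage metric might be computed as in the following sketch; weighting each face by its mesh area is an assumption made here for illustration.

```python
import numpy as np

# Sketch of the coverage metric using the example angle ranges from [0043].

def visible_surface_coverage(face_angles_deg, face_areas):
    """Fraction of total surface area with medium or high visibility (angle >= 100 deg)."""
    face_angles_deg = np.asarray(face_angles_deg)
    face_areas = np.asarray(face_areas)
    visible = face_angles_deg >= 100.0          # excludes blind spots (0 to 100 degrees)
    return float(face_areas[visible].sum() / face_areas.sum())
```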

[0044] Continuing with reference to FIG. 2, the output 222 of the surface coverage measurement engine 216 may be indicative of a measure of the visible surface coverage determined using the global context mapper 220. In one embodiment, the surface coverage measurement engine 216 may incorporate both a 2D image-based blind spot detector and a 3D geometry-based blind spot detector. In this case, the output 222 may include a combination of the visible surface coverage determined using both modalities of blind spot detection. Doing so may define a metric for optimizing the camera configuration that takes into account the geometry of the part as well as the behavior of light in the physical environment of the inspection system. In examples, the output 222 may be determined by assigning appropriate weights to the results generated using the 2D image-based blind spot detector and the 3D geometry-based blind spot detector.

[0045] As a further feature according to disclosed embodiments, the visible surface coverage determined by the surface coverage measurement engine 216 may be integrated with a measure of an AI model performance, to define the optimization objective. To that end, the system 200 may include an AI engine 226. The AI engine 226 can provide another realistic metric to quantitatively measure the robustness of the camera configuration parameters being optimized.

[0046] The AI engine 226 may receive, as input, photorealistic images 224 from the simulation engine 204. In this case, the simulation engine 204 may incorporate a defect generator 210 to create defect data in the photorealistic images 224. The defect generator 210 may be configured to extract defects on high-risk areas of the part. Such high-risk areas may be defined, for example, in the part information used to create the digital twin 202. The defect generator may randomize the defects and apply them to the 3D model of the part, based on which photorealistic images may be rendered by executing the camera simulator 206 and the environmental simulator 208.

[0047] The photorealistic images 224 received by the AI engine 226 may include both defective and defect-free images. The images with defects may be automatically labeled, for example, by localizing the defects with bounding boxes. The photorealistic images 224 may be used to create a labeled dataset 228. In some embodiments, the size of the dataset 228 may be increased by using data augmentation techniques. A model trainer 230 may be executed to train an AI model for defect detection using the dataset 228. The AI model may include, for example, a deep neural network. The training can involve a supervised learning process based on the defect labels. A model performance evaluator 232 may be used to test the trained AI model and measure a performance thereof. The measure of the performance may include an accuracy or success rate of the trained AI model to correctly predict defects in unlabeled input images during the evaluation. The output 234 of the AI engine 226, indicative of the measured performance of the trained AI model, may be incorporated into the optimization objective along with the output 222 of the surface coverage measurement engine 216.
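
A highly simplified sketch of this performance measurement is shown below; the fit/predict model interface and the train/test split are placeholders standing in for the model trainer 230 and the model performance evaluator 232, and accuracy is used as the example performance measure.

```python
import random

# Simplified sketch of the AI-engine metric: train a defect detector on the
# labeled synthetic dataset and report accuracy on a held-out split. The
# fit/predict interface is a placeholder, not an API defined in this disclosure.

def ai_performance(labeled_images, model, test_fraction=0.2, seed=0):
    """labeled_images: list of (image, has_defect) pairs rendered by the simulation engine."""
    rng = random.Random(seed)
    data = labeled_images[:]
    rng.shuffle(data)
    split = int(len(data) * (1.0 - test_fraction))
    train, test = data[:split], data[split:]
    model.fit([img for img, _ in train], [label for _, label in train])
    predictions = model.predict([img for img, _ in test])
    correct = sum(int(p == label) for p, (_, label) in zip(predictions, test))
    return correct / max(len(test), 1)          # accuracy as the performance measure
```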

[0048] The optimization engine 236 may be used to determine an optimized configuration of the image acquisition system by generating an updated configuration at each iteration based on evaluation of the optimization objective. The optimization objective may include an objective function defined based on a measure of the visible surface coverage. However, the optimization objective may be open-ended to incorporate other metrics as desired. For example, according to disclosed embodiments, the objective function may be defined based on a combination of a measure of the visible surface coverage and a measure of the performance of the AI model trained using the rendered synthetic images.

[0049] The optimization engine 236 may comprise an objective evaluator 238 to evaluate the optimization objective at each iteration. According to disclosed embodiments, the optimization objective may be evaluated based on the outputs 222 and 234 of the surface coverage measurement engine 216 and the AI engine 226 respectively. The optimization objective may be evaluated by assigning respective weights to the measured visible surface coverage and the measured AI model performance. For example, in one embodiment, the optimization objective may be evaluated as a weighted sum of the outputs 222 and 234. Depending on the weights, the optimization may be biased toward achieving a higher visible surface coverage or a higher AI model performance.

[0050] The optimization engine 236 may further comprise a configuration generator 240 to generate an updated configuration of the image acquisition system based on the evaluation of the optimization objective. The configuration generator 240 may receive, as input, the current values of the optimizable parameters 242 from the simulation engine 204 and the evaluated optimization objective from the objective evaluator 238, to determine updated values of the optimizable parameters 246. To determine the updated configuration, the configuration generator 240 may be informed of the constraints and search spaces 244 for each parameter to be optimized. The constraints and search spaces 244 may be obtained from the digital twin 202. For example, in the illustration shown in FIG. 1, if camera position is a parameter to be optimized, the search space may be limited to the horizontal fixture length. Additionally, the cameras may not be placed on top of each other, such that each camera must be constrained to have a unique position. Similarly, if camera angle is a parameter to be optimized, the search space may be limited to [-180 to 180 degrees].

[0051] In one suitable embodiment, the optimization engine 236 may comprise a genetic algorithm that can generate new populations of camera configurations, for example, by implementing selections, crossovers and mutations, in a direction to maximize the optimization objective. Other suitable embodiments of the optimization engine 236 can include Reinforcement Learning (RL) algorithms, Particle Swarm Optimization (PSO) algorithms, among others.
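
A compact genetic-algorithm sketch of the configuration generator is given below. The gene layout (one position/angle pair per camera), search spaces, objective weights and evolutionary hyperparameters are illustrative assumptions loosely based on the FIG. 1 example; the uniqueness constraint on camera positions is noted but not enforced here.

```python
import random

# Genetic-algorithm sketch: selection, crossover and mutation over candidate
# camera configurations, maximizing a weighted coverage/AI-performance objective.
# All ranges, weights and hyperparameters below are illustrative assumptions.

Y_RANGE = (0.0, 1.5)           # metres along the horizontal fixture (assumed length)
ANGLE_RANGE = (-180.0, 180.0)  # degrees
W_COVERAGE, W_AI = 0.7, 0.3    # example weights on the two objective terms

def objective(config, coverage_of, ai_performance_of):
    """Weighted sum of measured surface coverage and AI model performance."""
    return W_COVERAGE * coverage_of(config) + W_AI * ai_performance_of(config)

def random_config(n_cameras, rng):
    return [(rng.uniform(*Y_RANGE), rng.uniform(*ANGLE_RANGE)) for _ in range(n_cameras)]

def crossover(a, b, rng):
    return [a[i] if rng.random() < 0.5 else b[i] for i in range(len(a))]

def mutate(config, rng, sigma_y=0.05, sigma_angle=5.0):
    clip = lambda v, lo, hi: min(max(v, lo), hi)
    return [(clip(y + rng.gauss(0.0, sigma_y), *Y_RANGE),
             clip(a + rng.gauss(0.0, sigma_angle), *ANGLE_RANGE)) for y, a in config]

def evolve(n_cameras, coverage_of, ai_performance_of,
           generations=30, pop_size=20, seed=0):
    rng = random.Random(seed)
    population = [random_config(n_cameras, rng) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, reverse=True,
                        key=lambda c: objective(c, coverage_of, ai_performance_of))
        parents = ranked[: pop_size // 2]                      # selection
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]   # crossover + mutation
        population = parents + children
    return max(population, key=lambda c: objective(c, coverage_of, ai_performance_of))
```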

[0052] The updated values of the optimizable parameters 246 may be communicated back to the simulation engine 204 to commence the next iteration of the steps described above. The iterations may be executed until a convergence criterion is met. For example, the convergence criterion may be met when the evaluated optimization objective has reached a predefined threshold value. Upon achieving the convergence criterion, the final values of the optimizable parameters may be stored as a final configuration 248 of the image acquisition system to be used in a deployment phase.

[0053] FIG. 7 illustrates deployment of the disclosed methodology on a vision-based inspection system according to one embodiment. As shown, the vision-based inspection system 700 may comprise an image acquisition system 702 including a number of cameras that may be triggered for capturing images of individual parts on an end-of-the-line conveyor system 704. The images captured by the image acquisition system 702 may be processed by a defect detector 704 that may include a computing system which can use an AI model and/or a computer vision algorithm to detect the presence of a defect in the parts.

[0054] The inspection system 700 may be capable of visually inspecting different types of parts which may have different nominal geometries. A database 706 may store, for each type of part, the respective optimized configuration of the image acquisition system. In some embodiments, the database 706 may further store, for each type of part, a respective trained AI model specific to the part (e.g., produced by the AI engine 226). When a batch of parts of a given type is introduced on the line, a part ID may be specified, for example, by an operator or a controller, to retrieve the corresponding camera configuration and AI model from the database 706. The retrieved camera configuration can be used to configure the image acquisition system 702. The retrieved AI model may be deployed to the defect detector 704.

[0055] In an alternate embodiment, the optimization of the camera configuration can be generalized for different types of parts. In this case, the above-described methodology may be implemented by using 3D models of different parts having different nominal geometries at each iteration including simulation, surface coverage measurement (in some embodiments, further including AI model performance measurement) and optimization. Thereby, a final generalized configuration of the image acquisition system may be determined for vision-based inspection of the different parts. This can obviate the need to change the camera configuration every time a new batch of parts arrives at the inspection system.

[0056] FIG. 8 shows an example of a computing system 800 that can support visual quality inspection of manufactured parts on a shop floor according to disclosed embodiments. In examples, the computing system 800 may be configured as a powerful multi-GPU workstation, among other types of computing devices. The computing system 800 includes at least one processor 810, which may take the form of a single or multiple processors. The processor(s) 810 may include one or more CPUs, GPUs, microprocessors, or any hardware devices suitable for executing instructions stored on a memory comprising a machine-readable medium. The computing system 800 further includes a machine-readable medium 820. The machine-readable medium 820 may take the form of any non-transitory electronic, magnetic, optical, or other physical storage device that stores executable instructions, such as simulation instructions 822, surface coverage measurement instructions 824 and optimization instructions 826, as shown in FIG. 8. As such, the machine-readable medium 820 may be, for example, Random Access Memory (RAM) such as a dynamic RAM (DRAM), flash memory, spin-transfer torque memory, an Electrically-Erasable Programmable Read-Only Memory (EEPROM), a storage drive, an optical disk, and the like.

[0057] The computing system 800 may execute instructions stored on the machine-readable medium 820 through the processor(s) 810. Executing the instructions (e.g., the simulation instructions 822, the surface coverage measurement instructions 824 and the optimization instructions 826) may cause the computing system 800 to perform any of the technical features described herein, including according to any of the features of the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, described above.

[0058] The systems, methods, devices, and logic described above, including the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, may be implemented in many different ways in many different combinations of hardware, logic, circuitry, and executable instructions stored on a machine-readable medium. For example, these engines may include circuitry in a controller, a microprocessor, or an application specific integrated circuit (ASIC), or may be implemented with discrete logic or components, or a combination of other types of analog or digital circuitry, combined on a single integrated circuit or distributed among multiple integrated circuits. A product, such as a computer program product, may include a storage medium and machine-readable instructions stored on the medium, which when executed in an endpoint, computer system, or other device, cause the device to perform operations according to any of the description above, including according to any features of the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236. Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network.

[0059] The processing capability of the systems, devices, and engines described herein, including the simulation engine 204, the surface coverage measurement engine 216 and the optimization engine 236, may be distributed among multiple system components, such as among multiple processors and memories, optionally including multiple distributed processing systems or cloud/network elements. Parameters, databases, and other data structures may be separately stored and managed, may be incorporated into a single memory or database, may be logically and physically organized in many different ways, and may be implemented in many ways, including data structures such as linked lists, hash tables, or implicit storage mechanisms. Programs may be parts (e.g., subroutines) of a single program, separate programs, distributed across several memories and processors, or implemented in many different ways, such as in a library (e.g., a shared library).

[0060] Although this disclosure has been described with reference to particular embodiments, it is to be understood that the embodiments and variations shown and described herein are for illustration purposes only. Modifications to the current design may be implemented by those skilled in the art, without departing from the scope of the patent claims.