


Title:
AUTONOMOUS SOLAR INSTALLATION USING ARTIFICIAL INTELLIGENCE
Document Type and Number:
WIPO Patent Application WO/2024/035917
Kind Code:
A1
Abstract:
A system and method for installing solar panels are provided. The method includes obtaining images of solar panels and an installation structure during installation. The method also includes pre-processing the images by compensating for camera intrinsics or distortions, rectifying the images, and/or determining depth information. The method also includes detecting the solar panels by inputting the images into neural networks. The method also includes a first post-processing to compute a first panel pose based on an output of the neural networks. The method also includes generating control signals, based on the first panel pose, for operating a robotic controller for installing the solar panels. In some embodiments, the method also includes homography transforms to obtain a second panel pose, based on the first panel pose and visual patterns or fiducials on a solar panel, and generating the control signals further based on the second panel pose.

Inventors:
LUAN SCOTT T (US)
PIPER MICHAEL JASON (US)
RUSHITI BARDH (US)
ASAMI DEISE YUMI (US)
AVERY ALEXANDER (US)
KIGGINS JACOB (US)
SHELTON JOHN CHRISTOPHER (US)
Application Number:
PCT/US2023/030048
Publication Date:
February 15, 2024
Filing Date:
August 11, 2023
Assignee:
THE AES CORP (US)
International Classes:
H02S50/15; G06T5/50; G06T3/40
Foreign References:
US20220069770A1 (2022-03-03)
US20220164983A1 (2022-05-26)
US20210264141A1 (2021-08-26)
US20150059158A1 (2015-03-05)
US20130146123A1 (2013-06-13)
US20180241343A1 (2018-08-23)
Attorney, Agent or Firm:
KILLIAN, Jeffrey G. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method for autonomous solar panel installation, the method comprising: obtaining one or more images during installation, wherein the one or more images comprises an image of one or more solar panels and an installation structure; pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the images, and determining depth information; detecting the one or more solar panels by inputting the one or more images into one or more neural networks that are trained to detect solar panels; a first post-processing to compute a first panel pose based on an output of the one or more neural networks; and generating control signals, based on the first panel pose, for operating a robotic controller for installing the one or more solar panels.

2. The method of claim 1, further comprising: a second post-processing comprising one or more homography transforms to obtain a second panel pose for the one or more solar panels, based on the first panel pose, wherein the second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel, and wherein the control signals for operating the robotic controller for installing the one or more solar panels is further based on the second panel pose.

3. The method of claim 2, wherein the visual patterns comprise a grid pattern on the solar panel.

4. The method of claim 2, wherein the output of the one or more neural networks comprises panel segmentation, and wherein the second post-processing comprises: filtering out background in an image using the panel segmentation as a mask, to obtain a masked panel; identifying grid intersections in the masked panel as corners using a corner finding algorithm; computing a homography matrix H1 using the corners; determining locations of grid intersections in millimeter-space, based on H1; computing a homography matrix H2 to transform the grid intersections in image-space to their locations in the millimeter space; and back-projecting corners in the millimeter space to image-space, using inverse of H2.

5. The method of claim 4, wherein determining locations of the grid intersections comprises an association in H1-space based on Euclidean distance.

6. The method of claim 2, wherein the output of the one or more neural networks comprises panel segmentation, and wherein the second post-processing comprises: filtering out background in an image using the panel segmentation as a mask, to obtain a masked panel; identifying grid intersections in the masked panel in millimeter space using one or more artificial intelligence techniques; computing a homography matrix H1 based on the grid intersections; and back-projecting corners in the millimeter space to image-space, using inverse of H1.

7. The method of claim 2, wherein the second post-processing comprises: estimating four corners of a solar panel in an image-space ki; computing a homography matrix H1 that maps ki to a millimeter-space; identifying (i) pixel locations pi of panel features in image-space, and (ii) corresponding locations qi for the pixel locations pi in millimeter-space, based on H1; computing a homography matrix H2 that maps pi to qi; and back-projecting corners (0, 0), (H, 0), (H, W), (0, W) from millimeter-space to image-space based on H2^-1.

8. The method of any one of claims 1-7, wherein the one or more neural networks is trained to output bounding boxes, segmentation, keypoints, depth and/or a 6DoF pose.

9. The method of any one of claims 1-8, wherein the pre-processing comprises compensating for a camera distortion, rectifying the image, and/or determining depth information based on a single-baseline stereo camera, a multi-baseline stereo camera, a time-of-flight sensor, or a LiDAR sensor.

10. The method of any one of claims 1-9, wherein the first post-processing comprises one or more computer vision algorithms for processing the output of the one or more neural networks based on invariant structures in the images to determine locations of panel keypoints.

11. The method of any one of claims 1-10, wherein the first post-processing further comprises solving for Perspective-n-Point based on panel dimensions and panel keypoints.

12. The method of claim 11, wherein the panel keypoints are four corners of the panel frame.

13. The method of any one of claims 1-12, wherein the installation structure includes a torque tube and a clamp, and wherein the method further comprises a third post-processing comprising processing one or more images of the torque tube and/or clamp.

14. The method of claim 13, wherein the one or more images of the torque tube and/or clamp is obtained with a high-resolution camera and structured lighting.

15. The method of claim 14, wherein the structured lighting is a laser line that is approximately orthogonal or parallel with respect to the torque tube.

16. The method of claim 13, wherein the processing of the one or more images of the torque tube and/or clamp is performed by one or more neural networks and/or a computer vision pipeline.

17. The method of any one of claims 13-16, further comprising locating a nut associated with the clamp by using high-intensity illumination and computer vision algorithms.

18. The method of claim 17, wherein high-intensity illumination is a ring light.

19. A system for installing solar panels, the system comprising: a camera system for obtaining one or more images during installation, wherein the one or more images comprises an image of one or more solar panels and an installation structure; one or more devices for (i) pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the images, and determining depth information, estimating panel poses for the one or more solar panels, based on the solar panel segments; (ii) detecting the one or more solar panels based on the one or more images; and (iii) a first post-processing to compute a first panel pose based on an output of the one or more neural networks; and a controller for generating control signals, based on the first panel pose, for operating a robotic controller for installing the one or more solar panels.

20. The system of claim 19, further comprising: the one or more devices for a second post-processing comprising one or more homography transforms to obtain a second panel pose for the one or more solar panels, based on the first panel pose, wherein the second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel, and wherein the control signals for operating the robotic controller for installing the one or more solar panels is further based on the second panel pose.

Description:
AUTONOMOUS SOLAR INSTALLATION USING ARTIFICIAL INTELLIGENCE

RELATED APPLICATION DATA

[0001] This application is based on and claims priority under 35 U.S.C. §119 to U.S. Provisional Application No. 63/397,125, filed August 11, 2022, the entire contents of which is incorporated herein by reference.

BACKGROUND

Field of the Invention

[0002] The present disclosure generally relates to a solar panel handling system, and more particularly, to a system and method for installation of solar panels on installation structures.

Discussion of the Related Art

[0003] In the discussion that follows, reference is made to certain structures and/or methods. However, the following references should not be construed as an admission that these structures and/or methods constitute prior art. Applicant expressly reserves the right to demonstrate that such structures and/or methods do not qualify as prior art against the present invention.

[0004] Installation of a photovoltaic array typically involves affixing solar panels to an installation structure. This underlying support provides attachment points for the individual solar panels, as well as assists with routing of electrical systems and, when applicable, any mechanical components. Because of the fragile nature and large dimensions of solar panels, the process of affixing solar panels to an installation structure poses unique challenges. For example, in many instances the solar panels of a photovoltaic array are installed on a rotatable structure which can rotate the solar panels about an axis to enable the array to track the sun. In such instances, it is difficult to ensure that all of the solar panels in an array are coplanar and leveled relative to the axis of the rotatable structure. Additionally, the installation costs for a photovoltaic array can be a considerable portion of the total build cost for the photovoltaic array. Thus, there is a need for a more efficient and reliable solar panel handling system for installing solar panels in a photovoltaic array. Conventional computer vision techniques may be used when the environment is ideal. However, glare and over- or under-exposure can negatively affect object detection algorithms.

SUMMARY

[0005] Accordingly, the present invention is directed to a solar panel handling system that substantially obviates one or more of the problems due to limitations and disadvantages of the related art.

[0006] The solar panel handling system disclosed herein facilitates the installation of solar panels of a photovoltaic array on a pre-existing installation structure such as, for example, a torque tube. Installing solar panels can be made more efficient and reliable by combining tooling for handling the solar panel with components that enable mating of the solar panel to the solar panel support structure. Some embodiments use machine learning techniques to overcome environmental inconsistencies. The system can learn from examples with glare and illumination issues, and can generalize to new data during inference.

[0007] Additional features and advantages of the invention will be set forth in the description which follows, and in part will be apparent from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.

[0008] To achieve these and other advantages and in accordance with the purpose of the present invention, as embodied and broadly described, a system for installing a solar panel may comprise an end of arm assembly tool comprising a frame and suction cups coupled to the frame, and a linear guide assembly coupled to the end of arm assembly tool, wherein the linear guide assembly includes: a linearly moveable clamping tool including an engagement member configured to engage a clamp assembly slidably coupled to an installation structure, a force torque transducer configured to move the clamping tool along the installation structure, and a junction box coupled to the frame and including a controller configured to control the force torque transducer and the suction cups, and a power supply.

[0009] In another aspect, a method of installing a solar panel may comprise engaging an end of arm assembly tool with a solar panel, the end of arm assembly tool comprising a frame and suction cups coupled to the frame, positioning the solar panel relative to an installation structure having a clamp assembly slidably coupled thereto, engaging a linear guide assembly coupled to the end of arm assembly tool with the clamp assembly, the linear guide assembly comprising a linearly moveable clamping tool including an engagement member configured to engage the clamp assembly and a force torque transducer configured to move the clamping tool along the installation structure, and actuating the force torque transducer to move the clamp assembly along the installation structure so as to engage with a side of the solar panel, thereby fixing the solar panel relative to the installation structure.

[0010] In another aspect, a method of training a neural network for autonomous solar installation is provided, according to some embodiments. The method includes obtaining one or more images during installation. The one or more images includes an image of one or more solar panels and an installation structure. The method also includes pre-processing the one or more images including one or more of compensating for camera intrinsics or distortions, rectifying the images, and determining depth information. The method also includes detecting the one or more solar panels by inputting the one or more images into one or more neural networks that are trained to detect solar panels. The method also includes a first post-processing to compute a first panel pose based on an output of the one or more neural networks. The method also includes generating control signals, based on the first panel pose, for operating a robotic controller for installing the one or more solar panels. In some embodiments, the method also includes a second post-processing including one or more homography transforms to obtain a second panel pose for the one or more solar panels, based on the first panel pose. The second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel. The control signals for operating the robotic controller for installing the one or more solar panels are further based on the second panel pose.
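
The following sketch illustrates how the stages described above could be composed in software; it is an illustrative outline only, not the claimed implementation. The function and parameter names (preprocess, estimate_first_pose, detector, panel_corners_3d) are hypothetical, and the OpenCV calls show one possible realization of the undistortion and Perspective-n-Point steps.

import cv2
import numpy as np

def preprocess(image, camera_matrix, dist_coeffs):
    # Compensate for camera intrinsics/distortions; stereo rectification and
    # depth estimation could be added here for stereo or LiDAR inputs.
    return cv2.undistort(image, camera_matrix, dist_coeffs)

def estimate_first_pose(image, detector, panel_corners_3d, camera_matrix, dist_coeffs):
    # 'detector' stands in for the trained neural network(s); it is assumed
    # to return the four panel corners in pixel coordinates, shape (4, 2).
    corners_2d = np.asarray(detector(image), dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(panel_corners_3d, corners_2d,
                                  camera_matrix, dist_coeffs)
    return rvec, tvec  # first panel pose (rotation vector, translation)

def generate_control_signals(rvec, tvec, install_offset_mm):
    # Placeholder: a real robotic controller would consume a richer message.
    target = tvec.ravel() + np.asarray(install_offset_mm)
    return {"target_xyz_mm": target.tolist(),
            "rotation_rvec": rvec.ravel().tolist()}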

[0011] It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are intended to provide further explanation of the invention as claimed.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] The accompanying drawings, which are incorporated herein and form part of the specification, illustrate the present invention and, together with the description, further serve to explain principles of the invention and to enable a person skilled in the relevant arts to make and use the invention. The exemplary embodiments are best understood from the following detailed description when read in conjunction with the accompanying drawings. It is emphasized that, according to common practice, the various features of the drawings are not to scale. On the contrary, the dimensions of the various features are arbitrarily expanded or reduced for clarity. Included in the drawings are the following figures:

[0013] Figure 1 shows a perspective view of a solar panel handling system along with a container of solar panels, in accordance with an embodiment of the present disclosure.

[0014] Figures 2A-2C show a top, front, and side view, respectively, of the solar panel handling system and container of solar panels of Figure 1.

[0015] Figure 3A shows a top, front, and side view, respectively, of the solar panel handling system coupled to a single solar panel, in accordance with an embodiment of the present disclosure.

[0016] Figures 4A and 4B show perspective views of the solar panel handling system, in accordance with an embodiment of the present disclosure.

[0017] Figures 5A and 5B show a top view and a front view, respectively, of a solar panel handling system, in accordance with an embodiment of the present disclosure.

[0018] Figure 5C shows a side view with a clamping tool of the solar panel handling system in a retracted position, in accordance with an embodiment of the present disclosure.

[0019] Figure 5D shows a side view with a clamping tool in an extended or advanced position, in accordance with an embodiment of the present disclosure.

[0020] Figures 6A and 6B show perspective views of the clamping tool of a solar panel handling system in engagement with a clamp assembly coupled to an installation structure, in accordance with an embodiment of the present disclosure.

[0021] Figure 7A shows a top view of the clamping tool of the solar panel handling system in engagement with a clamp assembly coupled to an installation structure, in accordance with an embodiment of the present disclosure.

[0022] Figure 7B shows a front view of the clamping tool of the solar panel handling system in engagement with a clamp assembly coupled to an installation structure, in accordance with an embodiment of the present disclosure.

[0023] Figure 7C shows a side view of the clamping tool of the solar panel handling system in engagement with a clamp assembly coupled to an installation structure, in accordance with an embodiment of the present disclosure.

[0024] Figure 7D shows a back view of the clamping tool of the solar panel handling system in engagement with a clamp assembly coupled to an installation structure, in accordance with an embodiment of the present disclosure.

[0025] Figure 8 schematically illustrates, in an overhead view, the solar panel handling system during the process of installing a solar panel, in accordance with an embodiment of the present disclosure.

[0026] Figure 9 illustrates the solar panel handling system including the assembly tool coupled with an assembly moving robot using a robotic arm.

[0027] Figure 10 illustrates the solar panel handling system having two robotic arms in which two assembly tools are coupled with an assembly moving robot using respective robotic arms.

[0028] Figures 11A to 11C illustrate a process for installing the solar panels.

[0029] Figures 12A and 12B illustrate an arrangement for a moving robot system including two module vehicles and a ground vehicle having two robotic arms.

[0030] Figure 13 schematically illustrates installation achieved using computer vision registration.

[0031] Figure 14 schematically illustrates an arrangement wherein the module vehicles are exchanged with new module vehicles having additional solar panels for replenishment.

[0032] Figures 15-34 provide detailed illustrations of an example configuration for a system for installing solar panels according to an embodiment of the present disclosure.

[0033] Figure 35A shows a block diagram of an example image processing pipeline, according to some embodiments.

[0034] Figure 35B shows an example rectified acquired image, according to some embodiments.

[0035] Figure 35C shows an example output for neural network image segmentation for the acquired rectified image shown in Figure 35B, according to some embodiments.

[0036] Figure 35D shows an example panel corner detection, according to some embodiments.

[0037] Figure 36 shows examples for images of a road, under different lighting conditions, and segmentation masks for the images, according to some embodiments.

[0038] Figure 37A shows an example of a captured image that includes a solar panel and a torque tube, according to some embodiments.

[0039] Figure 37B shows an example of an annotated image for the captured image shown in Figure 37A, according to some embodiments.

[0040] Figure 38A shows an example of image classification.

[0041] Figure 38B shows an example of object localization for the image shown in Figure 38A.

[0042] Figure 38C shows an example of semantic segmentation, according to some embodiments.

[0043] Figure 39 shows an example of instance segmentation for solar panels, according to some embodiments.

[0044] Figure 40 shows an example image processing system, according to some embodiments.

[0045] Figure 41 shows a trailer system, according to some embodiments.

[0046] Figures 42A and 42B show histograms of pose error norms for the neural network with and without coarse position, according to some implementations.

[0047] Figures 43A and 43B show examples for coarse positions of solar panels using a B Mask R-CNN model, according to some embodiments.

[0048] Figure 44 shows a system 4400 for solar panel installation, according to some embodiments.

[0049] Figure 45A shows a vision system for tracking trailer position, and Figure 45B shows an enlarged view of the vision system, according to some embodiments.

[0050] Figure 46A shows a vision system for module pick, and Figure 46B shows an enlarged view of the vision system, according to some embodiments.

[0051] Figures 47A, 47B, and 47C show a system 4700 for distance measurement at module angle, according to some embodiments.

[0052] Figure 48A shows a system for laser line generation for detecting tube and clamp position, according to some embodiments.

[0053] Figure 48B shows an enlarged view of the laser line generation system shown in Figure 48A, and Figure 48C shows a view of laser line generation (horizontal line detects a clamp, and a vertical line detects a tube), according to some embodiments.

[0054] Figure 49A shows a vision system 4900 for estimating tube and clamp position, and Figure 49B shows an enlarged view of the vision system, according to some embodiments.

[0055] Figure 50A shows a flowchart of a method for autonomous solar installation, according to some embodiments.

[0056] Figure 50B shows a flowchart of a method of training a neural network for autonomous solar installation, according to some embodiments.

[0057] Figure 51 shows an example application of Hough transforms for identifying corners of a solar panel, according to some embodiments.

[0058] Figure 52 shows an example application of segmentation, according to some embodiments.

[0059] Figure 53 shows an example application of a corner detection algorithm, according to some embodiments.

[0060] Figure 54 shows grid intersections that can be detected using a two-pass homography algorithm, according to some embodiments.

[0061] Figure 55 shows a schematic diagram of region-of-interest segmentation, according to some embodiments.

[0062] Figure 56 shows multiple-data-channel capabilities of LiDAR used by some embodiments.

[0063] Figure 57 is a schematic diagram of an example computer vision/artificial intelligence (AI) architecture for estimating six degrees-of-freedom (6DoF) pose of a solar panel, according to some embodiments.

[0064] Figure 58 is a schematic diagram of another example computer vision/AI pipeline for estimating the 6DoF pose of a solar panel, according to some embodiments.

[0065] Figure 59 is a schematic diagram of sensor fusion architecture for determining the 6DoF pose of a solar panel, according to some embodiments.

[0066] Figure 60 is a schematic diagram of another sensor fusion architecture for determining the 6DoF pose of a solar panel, according to some embodiments.

[0067] Figure 61 is a schematic diagram of a sensor fusion architecture, according to some embodiments.

[0068] Figure 62 is a schematic diagram of a solar panel installation, according to some embodiments.

[0069] Figure 63 shows a flowchart of a method for autonomous solar installation, according to some embodiments.

[0070] The features and advantages of the present invention will become more apparent from the detailed description set forth below when taken in conjunction with the drawings, in which like reference characters identify corresponding elements throughout. In the drawings, like reference numbers generally indicate identical, functionally similar, and/or structurally similar elements.

DETAILED DESCRIPTION

[0071] Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings.

[0072] Figure 1 shows a perspective view of a solar panel handling system along with a box of solar panels, in accordance with an embodiment of the present disclosure. The solar panel handling system may include an end of arm assembly tool 100 which can couple to individual solar panels 120 from a box of solar panels and move them to a position relative to an installation structure for installation.

[0073] The end of arm assembly tool 100 may include a frame 102 and one or more attachment devices 104 coupled to the frame 102. Example attachment devices 104 include suction cups or other structures that can be releasably attached to the surface of the solar panel 120 and, at least in the aggregate, maintain attachment during manipulation of the solar panel 120 by the end of arm assembly tool 100. The frame 102 may consist of several trusses 102-A for providing structural strength and stability to the frame 102. The frame 102 also functions as a base for the end of arm assembly tool 100 and other related components of the solar panel handling system disclosed herein.

[0074] Other related components of the solar panel handling system disclosed herein may be coupled to the frame 102 so as to fix a relative position of the components on the end of arm assembly tool 100. One or more of the various components of the solar panel handling system may be coupled to one or more of the trusses 102-A so as to fix a relative position of the components on the end of arm assembly tool 100.

[0075] The attachment devices 104 are configured to reliably attach to a planar surface such as, for example, a surface of a solar panel, such as by using vacuum. In a suction cup embodiment, the suction cups can be actuated by pushing the cup against the planar surface, thereby pushing out the air from the cup and creating a vacuum seal with the planar surface. As a consequence, the planar surface adheres to the suction cup with an adhesion strength that is dependent on the size of the suction cup and the integrity of the seal with the planar surface. In some embodiments, the suction cups engage with the solar panel to create an air-tight seal, and then a vacuum pump sucks the air out of the suction cups, generating the vacuum required for the proper adhesion to the solar panel. In some embodiments, an air inlet (not shown) provides air onto the planar surface when the planar surface is sealed to the suction cup so as to deactivate the vacuum and release the planar surface from the suction cup.

[0076] The system may further include a linear guide assembly 106 coupled to the end of arm assembly tool 100. The linear guide assembly 106 includes a linearly movable clamping tool 108 with an engagement member 108-A configured to engage a clamp assembly coupled to an installation structure. The linear guide assembly 106 can be actuated to move the clamping tool 108 along an axis between, for example, an extended position and a retracted position. The axis of movement of the clamping tool 108 may be parallel to an axis of the installation structure. Thus, the linear guide assembly 106 can move the clamping tool 108 and the engagement member 108-A along the installation structure.

[0077] In some embodiments, the engagement member 108-A may include electromagnets which may be actuated to grasp a clamp assembly 602 (see Figure 6A, 6B). Alternatively, or additionally, the engagement member 108-A may include a gripper to prevent disengagement between the clamp assembly 602 and the engagement member 108-A when the linear guide assembly 106 is actuated to move the clamping tool relative to the installation structure as described in more detail elsewhere herein.

[0078] The linear guide assembly 106 is actuated using a force torque transducer 110. In some embodiments, the linear guide assembly 106 and the force torque transducer 110 may form a rack and pinion structure such that the rotation of the force torque transducer 110 results in advancement or retraction of the clamping tool 108. In some embodiments, the linear guide assembly 106 may be a hydraulic assembly including a telescoping shaft coupled to the clamping tool 108. In such embodiments, the force torque transducer 110 may be configured in the form of a pump for pumping a hydraulic fluid. In other embodiments, the force torque transducer 110 may be configured in the form of or coupled to a linear drive motor that engages a surface of the telescoping shaft coupled to the clamping tool 108.

[0079] In some embodiments, the linear guide assembly 106 may include an electric rod actuator to move the clamping tool 108 parallel to an axis of the installation structure.

[0080] In some embodiments, the guide assembly 106 may include a roller 606 to facilitate the movement of the clamping tool 108 along the installation structure 604. The roller may, for example, include a bearing or other components designed for reducing friction while the clamping tool 108 moves relative to the installation structure. The roller may be coupled with a sensor, such as a force sensor or rotation sensor, to provide feedback to a controller.

[0081] In some embodiments, the guide assembly may include a spring mechanism 608 that enables small amounts of tilting (up to 15 degrees of tilt) of the clamping tool 108 relative to the installation structure 604. Such tilting may occur when the orientation assembly 804 tilts the end of arm assembly tool 100 relative to the installation structure 604 in order to appropriately level the solar panel.

[0082] The system may further include a junction box 112 coupled to the frame 102. The junction box 112 may include a controller configured to control the force torque transducer 110 and the attachment devices 104. In some embodiments, the junction box 112 may also include a power supply or a power controller for controlling the power supply to various components.

[0083] In some embodiments, the controller 112 may include a processor operationally coupled to a memory. The controller 112 may receive inputs from sensors associated with the solar panel handling system (e.g., an optical sensor or a proximity sensor 108-B described elsewhere herein). The controller 112 may then process the received signals and output a control command for controlling one or more components (e.g., the linear guide assembly 106, the clamping tool 108, or the attachment devices 104). For example, in some embodiments, the controller 112 may receive a signal from a proximity sensor determining that the clamp assembly is approaching a trailing edge of a solar panel being installed and accordingly reduce the speed of the linear guide assembly 106 to reduce excessive forces and impacts on the solar panel.
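
As a minimal sketch of the speed-reduction behavior described above (the distance threshold and speed values are hypothetical):

def clamp_advance_speed(distance_to_edge_mm, full_speed_mm_s=100.0,
                        slow_speed_mm_s=15.0, slow_zone_mm=50.0):
    # Reduce the linear guide speed as the clamp assembly approaches the
    # trailing edge of the panel, limiting impact forces on the solar panel.
    if distance_to_edge_mm <= 0.0:
        return 0.0                      # edge reached: stop the clamping tool
    if distance_to_edge_mm < slow_zone_mm:
        return slow_speed_mm_s          # within the slow zone: creep speed
    return full_speed_mm_s              # otherwise: full advance speed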

[0084] Referring to Figure 8, in some embodiments, the solar panel handling system may further include an optical sensor 802 such as, for example, a camera, a photodetector, or any other optical imaging or light sensing device. The optical sensor is suitably located on the frame 102, for example, at an outer or lower surface of an edge member indicated by position 802-A in Figure 8, or at an interior location of the frame 102 that has a field of view that includes the leading edge of the solar panel, such as indicated by position 802-B in Figure 8. The optical sensor may be configured to sense an orientation of the solar panel relative to the installation structure during the operation of the end of arm assembly tool. In some embodiments, the optical sensor may be configured in the form of one or more light guided levels (not shown). In such embodiments, one or more light beams (e.g., laser beams) may be projected along or parallel to the axis of the installation structure 604 from one end of the end of arm assembly tool 100, such as first locations on the frame 102. One or more photodetectors may be positioned at another end of the end of arm assembly tool 100, such as second locations on the frame 102, so as to detect the one or more laser beams. Thus, if the solar panel 120 being installed is not appropriately oriented or properly level relative to the installation structure 604, the solar panel 120 may obstruct some or all of the one or more laser beams, resulting in varying signals from the one or more photodetectors, indicating that the solar panel 120 is not appropriately oriented or properly level relative to the installation structure 604.

[0085] In some embodiments, one or more sensors, such as optical sensors 802, may be used to detect and recognize objects to position and control the installation with improved accuracy. The sensor(s) may be implemented together with a neural network of, for example, an artificial intelligence (AI) system. For example, a neural network can include acquiring and correcting images related to the solar panel handling system, the solar panels (both installed and to be installed), and the installation environment (both natural environment, such as topography, and installed equipment, such as structures related to the solar panel array). Also, for example, a neural network can include acquiring and correcting positional or proximity information. The corrected images and/or the corrected positional or proximity information are input into the neural network and processed to estimate movement and positioning of equipment of the solar panel handling system, such as that related to autonomous vehicles, storage vehicles, robotic equipment, and installation equipment. The estimated movement and positioning are published to a control system associated with the individual equipment of the solar panel handling system or to a master controller for the solar panel handling system as a whole.

[0086] In some embodiments, the signal from the optical sensor may be input to the controller. In some embodiments, the solar panel handling system may further include an orientation assembly 804 (see Figure 8) configured to tilt the end of arm assembly tool 100 relative to the installation structure 604. In such embodiments, the controller 112 may control the orientation in response to an input from the optical sensor indicating that the solar panel being installed is not appropriately oriented or properly level relative to the installation structure, such as a torque tube 604. It will be appreciated that while the orientation assembly 804 is shown as being coupled to the force torque transducer 110, those of ordinary skill in the art will readily recognize other means of implementing the orientation assembly 804.

[0087] In some embodiments, the controller 112 may also be configured to control the attachment devices 104 so as to activate or deactivate the attachment/detachment thereof. For embodiments in which the attachment devices 104 are suction cups, a vacuum can enable coupling or release of the solar panels 120 with the end of arm assembly tool 100.

[0088] In some embodiments, the installation structure 604 may have an octagonal cross-section, as shown, e.g., in Figures 6A, 6B, and 7A-7D, to form a torque tube preventing inadvertent slipping of the clamp assembly 602. However, other cross-sectional shapes may be used, such as square, oval, or another shape. Further, the installation structure 604 may use a circular cross-sectional shape.

[0089] In some embodiments, the assembly tool 100 may be configured to couple with an assembly moving robot 903 (an example of which is shown in Figures 9 and 10). The assembly moving robot 903 may be configured to position the end of arm assembly tool 100 relative to a stack or storage container 905 of solar panels, move a selected solar panel and position the selected solar panel relative to the installation structure 604. In some embodiments, the assembly moving robot 903 may be operationally coupled with the end of arm assembly tool 100 via the force transducer 110 (or where applicable, the orientation assembly 804). In some embodiments, the assembly moving robot may also be operationally coupled to the controller, enabling an operator of the assembly moving robot to control the various functions of the end of arm assembly tool 100, such as, for example, activation and/or deactivation of the attachment devices 104, advancement and/or retraction of the clamping tool, and/or activation and/or deactivation of the engagement member relative to the clamp assembly.

[0090] Referring now to Figures 1, 6A, 6B, 7A-7D, 9, and 10, in operation, a solar panel 120 is obtained and positioned over the installation structure 604. The solar panel is then tilted relative to the installation structure 604 so that a leading edge of the solar panel (i.e., an edge that will be adjacent an edge of the previously installed solar panel or, for a first solar panel, an edge that will be adjacent a stop affixed to the installation structure 604) is oriented closer to the installation structure 604 than an opposite, trailing edge. The leading edge is then placed in a receiving channel (either a receiving channel positioned along the edge of the previously installed solar panel, i.e. as part of a clamp assembly, or a receiving channel in the stop) and the tilt of the solar panel is reduced to an installed position on the installation structure. The tilt angle is reduced while the solar panel is biased into the receiving channel so that in the installed position the edge region of the top planar surface of the solar panel (i.e., the photovoltaically active surface that is oriented to the sun) is captured within the receiving channel. An example embodiment of a receiving channel 610 on a clamp assembly 602 is shown in Figures 6A and 6B.

[0091] Once the solar panel is in position on the installation structure, the force torque actuator 110 actuates the guide assembly 106 of the end of arm assembly tool 100 to contact the engagement member 108-A of the clamping tool 108 with a clamp assembly 602. This clamp assembly was originally positioned on the installation structure outside the area to be occupied by the solar panel being installed, but also sufficiently close so as to be reached by the relevant components of the end of arm assembly tool 100. Surfaces and features of the engagement member 108-A may be located and sized so as to mate with complementary features on the clamp assembly 602. After this contact, the force torque actuator 110 is actuated (either continued to be actuated or actuated in a second mode) to axially slide the clamp assembly 602 along a portion of the length of the installation structure 604. Axial sliding of the clamp assembly 602 engages a receiving channel of the clamp assembly 602 with the trailing edge of the just installed solar panel. Sensors, such as in the force torque actuator 110 or in the clamping tool 108, can provide feedback to the controller indicating full engagement of the receiving channel of the clamp assembly 602 with the trailing edge of the solar panel. Once the clamp assembly 602 is positioned, the guide assembly 106 is retracted and installation of the next solar panel can occur.

[0092] In some embodiments, the linear guide assembly 106 may include a proximity sensor 108-B configured to sense a distance between the engagement member 108 and the trailing edge of the solar panel 120 during an operation of installation of the solar panel 120. An output from the proximity sensor 108-B may be used to suitably control the speed of the clamping tool 108 during the operation of linear guide assembly 106 so as to avoid excessive forces and impacts on the solar panel 120. In some embodiments, the proximity sensor 108-B may be, for example, an optical or an audio sensor (e.g., sonar) that detects a distance between the leading edge of the solar panel 120 and the engagement member 108; in other embodiments, the proximity sensor 108-B may be a limit switch that is retracted by contact.

[0093] With further reference to Figures 9 and 10, the assembly moving robot 903 may be implemented using a ground vehicle 907. For example, the ground vehicle 907 may be implemented as an electric vehicle (EV). The ground vehicle 907 may autonomously move adjacent to the installation structure 604. While not shown, the ground vehicle 907 may move along a track or a rail that is attached to or separate from the installation structure. In some embodiments, the ground vehicle 907 may be controlled using sensors or be controlled based on input or feedback from sensors. The sensors can be, for example, optical sensors or proximity sensors. In further embodiments, a neural network using artificial intelligence may be used in controlling movement of the ground vehicle 907, such as by analyzing the operating environment and developing instructions for movement of the ground vehicle.

[0094] Figure 10 illustrates an embodiment of a solar panel handling system having two robotic arms in which two assembly tools are coupled with an assembly moving robot using respective robotic arms.

[0095] As shown in Figure 9, the storage container 905 containing the solar panels to be installed may be disposed on the ground vehicle. Here, Figure 9 illustrates the solar panel handling system including an arm assembly tool 100 coupled with an assembly moving robot using a robotic arm. Alternatively, as shown in Figure 10, one or more storage containers 905 may be disposed on respective one or more of module vehicles 1005 adjacent to the ground vehicle 907. As such, Figure 10 illustrates the solar panel handling system having two robotic arms in which two assembly tools are coupled with an assembly moving robot using respective robotic arms. In embodiments of the disclosure, the robotic arm(s) may be an articulated arm having two or more sections coupled with joints, or alternatively may be a truss arm. Illustrations herein are intended to disclose the use of any type of arm in accordance with the present disclosure.

[0096] In accordance with Figure 9, for example, the robotic arm of the arm assembly tool 100 having an upper section 908 and a lower section 909 may offer increased flexibility in operation while maintaining lightweight and simple operation. As additionally illustrated in Figure 9, a second robotic arm 911 may be provided with the arm assembly tool 100 having a nut runner or nut driver at an end thereof to secure the solar panel to the installation structure 604. While any type of robotic arm may be used for the second robotic arm 911, Figure 9 illustrates an example using an articulated arm with the nut runner or nut driver at an end thereof. Here, the robotic arms 100 and 911 may be autonomously operated using computer vision with a neural network and artificial intelligence control. Alternatively, the robotic arms 100 and 911 may be manually operated or operated by remote control.

[0097] In some embodiments, the ground vehicle 907 may be an autonomous vehicle in which the neural network and artificial intelligence control the movement and operation and the module vehicles 1005 are towed or coupled to the ground vehicle 907. In other embodiments, the module vehicles 1005 may be an autonomous vehicle in which the neural network and artificial intelligence control the movement and operation and the ground vehicle 907 is towed or coupled to the module vehicles 1005. Also, in some embodiments, the assembly moving robot 903 is mounted on one of the ground vehicles 907 and the module vehicles 1005. In other embodiments, the assembly moving robot 903 can be mounted on a dedicated robot vehicle.

[0098] A process for installing the solar panels is shown in Figures 11A to 11C. As shown in Figure 11A, a pallet of solar panels may be delivered via truck. In some embodiments, the pallet may comprise the storage container 905 of solar panels. The pallet may include machine-readable signage, such as a bar code, a QR-code, or other manufacturing reference, that can be read to provide information regarding the solar panels, the installation instructions or other information to be used in the installation process, particularly information to be used by the neural network and artificial intelligence control. Such information can include, for example, the number of solar panels, the type of solar panels, physical characteristics of the solar panel such as size, characteristics related to installation, such as hardware type and location, installation instructions, or other characteristics of the solar panels, the storage of the solar panels on the pallet, and information related to installation. Further, using the machine-readable signage, the system may control feeding or replenishing the panel boxes in the right order and/or to ensure panels with similar impedance from the factory are used.
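
Purely to illustrate the kind of pallet metadata involved, the sketch below parses a hypothetical JSON payload from the machine-readable signage; the field names and values are assumptions, not a defined label format.

import json

def parse_pallet_label(qr_payload):
    # Hypothetical payload, e.g.:
    # {"panel_count": 31, "panel_type": "XYZ-550", "width_mm": 1134,
    #  "length_mm": 2278, "impedance_bin": "B2"}
    info = json.loads(qr_payload)
    return {
        "panel_count": int(info["panel_count"]),
        "panel_type": info["panel_type"],
        "size_mm": (info["width_mm"], info["length_mm"]),
        # Used to group panels with similar impedance from the factory.
        "impedance_bin": info.get("impedance_bin"),
    }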

[0099] As shown in Figure 11B, mechanized equipment such as a forklift may be used to move and position the pallet on the ground vehicle. Here, the forklift may be manually operated, remotely operated, or autonomous. In Figure 11B, the pallet is positioned on the ground vehicle. Alternatively, the pallet may be positioned on a module vehicle. Then, as shown in Figure 11C, the arm of the robot is used to install the solar panels. In the illustrated example, two arms are used to handle respective solar panels to be installed on respective installation structures. Here, the ground vehicle moves between two respective installation structures. Further, one module vehicle is provided, which may be separated from the ground vehicle.

[0100] As one of ordinary skill in the art would recognize, modifications and variations in implementation may be used. For example, as shown in Figures 12A and 12B, two module vehicles may be provided for the respective robot arms. In a further alternative, the module vehicles may be connected with the ground vehicle instead of being separated. Thus, as shown in Figure 12A, the robot arms may engage respective solar panels to be installed, as illustrated in Figure 12B.

[0101] In some embodiments, as illustrated in Figure 13, installation may be achieved using computer vision registration. For example, as mentioned above, optical sensors or the like may be utilized with a neural network for artificial intelligence.

[0102] In some embodiments, as illustrated in Figure 14, if module vehicles are used with the ground vehicle, the module vehicles may be exchanged with replenished module vehicles when all solar panels of the module vehicle are installed. Here, the computer vision process may be used to communicate with and to control an autonomous independent vehicle, such as a forklift, to bring additional solar panel boxes. Thus, the supply of solar panels may be replenished.

[0103] In the replenishment operation using the example of a forklift, the forklift (whether autonomous, remote controlled or manually operated) may be used to return empty boxes or containers of the solar panels to a waste area, remove straps, open lids, or cut away box faces from boxes being delivered, pick up boxes to correct rotation/orientation of the solar panels, or other tasks. Further, the forklift may be maintained near the ground vehicle to wait for the system to deplete the next box of solar panels. Thus, the forklift may manually or autonomously discard a depleted box, position a next box on the ground vehicle or the module vehicle, open the box (including removing straps, opening lids, or cutting away box faces) and back away from the ground vehicle/module vehicle. As described, the replenishment may be autonomous, remote controlled, or manually operated, for example.

[0104] Figures 15-34 provide detailed illustrations of an example configuration for a system for installing solar panels according to an embodiment of the present disclosure.

Computer Vision and Artificial Intelligence (AI) Techniques

[0105] Computer vision and AI techniques may be used to determine a location where a solar panel can be installed along a mounting structure, such as a torque tube, according to some embodiments. The installation location of a panel may be based on the location of a previously installed panel. Some embodiments use the six degrees-of-freedom (6DoF) pose of the previously installed panel. The term "6DoF" represents six degrees of freedom, which are the three rotational axes (yaw, pitch, and roll) and the three translational axes (x, y, and z). An estimation of a 6DoF pose of a panel includes the position (x, y, z) in 3D-space of a point (or keypoint) on the panel, such as a corner of the frame or some other visual fiducial, and the rotational angles of the panel about the three Cartesian axes (rx, ry, rz). In some embodiments, computer vision or AI is used to estimate the 6DoF pose of the previously installed panel to determine where the next panel is to be installed along the torque tube. The next panel to be installed along the torque tube is typically some constant offset from the position (x, y, z) of a keypoint of the previously installed panel with the same rotational angles (rx, ry, rz).
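
As a simple numerical sketch of this offset rule (the axis choice and pitch value are hypothetical):

def next_install_pose(prev_pose, panel_pitch_mm):
    # prev_pose: (x, y, z, rx, ry, rz) of a keypoint on the previously
    # installed panel. The next panel is placed at a constant offset along
    # the torque tube (taken here as the x axis) with the same rotational angles.
    x, y, z, rx, ry, rz = prev_pose
    return (x + panel_pitch_mm, y, z, rx, ry, rz)

# Example with hypothetical numbers: 1134 mm panel width plus a 20 mm gap.
print(next_install_pose((0.0, 250.0, 1800.0, 0.0, 0.35, 0.0), 1134.0 + 20.0))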

Estimation of 6DoF Pose for Panel Placement and Panel Pick

[0106] Although some embodiments described herein are directed to panel placement (for installation along, e.g., a torque tube), those embodiments rely on techniques that are also applicable to panel pick, e.g., from a storage location, such as a module box or cradle. For panel pick, as for panel placement, the 6DoF pose of a panel in question is determined. For panel placement, the panel in question may be a previously installed panel, as discussed above. By contrast, for panel pick, the panel in question may be a current (or outermost) panel (in a storage location) to be picked for installation. For panel placement, the 6DoF pose of the previously installed panel may be used to determine an (offset) location of the next panel to be installed along the torque tube, as discussed above. By contrast, for panel pick, the 6DoF pose of the current panel to be installed may be used to determine the pick point (usually the center point) of the panel to be picked. For panel pick, a challenge is to pick a panel such that the EOAT is centered with respect to the panel. This centering of the EOAT with respect to the picked panel ensures equal load distribution for the EOAT. For panel placement, as discussed above, the challenge is to install the next panel such that it has a certain pose (typically: x, y plus panel width plus [e], z, rx, ry, rz) with respect to the previously installed panel. However, for both panel placement and panel pick, a goal of CV/AI is to determine the 6DoF pose of a panel in question. The 6DoF pose can be determined by techniques and technologies based on those described herein.
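
A minimal sketch of the pick-point computation, assuming the four panel corners in world coordinates are already available from the 6DoF estimate:

import numpy as np

def pick_point(corners_world_mm):
    # corners_world_mm: (4, 3) array of panel corner coordinates.
    # Taking the pick point as the panel center keeps the EOAT centered,
    # which distributes the load evenly across the suction cups.
    return np.asarray(corners_world_mm, dtype=float).mean(axis=0)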

[0107] In the following description, the term “computer vision” is used to refer to “classical” computer vision techniques and algorithms, and the term “artificial intelligence (AI)” is used instead of “machine learning (ML)” or “deep learning (DL)” because the instant disclosure generalizes well to current and newly-developed technologies.

[0108] The term “image” is used to include not just camera images but also data and imagery from other types of sensors, such as time-of-flight sensors, LiDAR, or other sensors that image or scan a field-of-view (FoV); and the terms “pre-processing” and “post-processing” may involve different hardware (e.g., computing devices, sensors) with respect to what is initially or subsequently processed.

Example Non-AI Methods for Pose Estimation

[0109] In some embodiments, the 6DoF pose is estimated using non-AI techniques. The 6DoF pose of panels may be determined using different sensors (e.g., stereo cameras, LiDAR) and computer vision algorithms. These different approaches are described below as different pre-processing and post-processing methods with respect to the AI processing. These pre-processing and post-processing steps, on their own (without AI), may be combined to form a computer vision pipeline that can be used to determine 6DoF poses of panels.

Example AI Models and Inferences

[0110] Turning now to the different types of AI models and inferences that are applicable for determining the 6DoF pose of solar panels: AI-based inferences may be based on bounding boxes, segmentation, keypoints, depth, and/or 6DoF poses. Each of these different techniques is discussed in turn below.

Segmentation

[0111] Segmentation in AI refers to the process of dividing an image or a video into meaningful and semantically coherent regions. A goal of segmentation is to partition the visual data into distinct regions based on their shared characteristics, such as color, texture, or object boundaries. In computer vision, traditional segmentation techniques include methods like thresholding, region growing, edge-based segmentation, and clustering algorithms such as k-means or mean-shift. These methods rely on manual features and heuristics to segment images.

[0112] In AI, deep learning methods, particularly Convolutional Neural Networks (CNNs), have shown remarkable performance in segmentation tasks. Fully Convolutional Networks (FCNs), U-Net, Mask R-CNN, and DeepLab are popular architectures for semantic and instance segmentation. These models leverage their ability to learn and extract complex features from images, enabling accurate and efficient segmentation.

[0113] Different types of segmentation techniques can be used in AI, including semantic segmentation and instance segmentation. Semantic segmentation involves labeling each pixel in an image or video frame with a corresponding class label. The output is a pixel-wise classification map where each pixel is assigned a semantic category or class label. Semantic segmentation focuses on capturing the semantic meaning of the scene and is used for scene parsing, object recognition, and high-level understanding. Instance segmentation goes beyond semantic segmentation and aims to separate and identify individual objects within an image. It assigns a unique label or identifier to each pixel belonging to a particular object instance. In instance segmentation, each object is segmented separately, allowing for precise delineation and separation of object boundaries. This technique is used for object detection, tracking, counting, and detailed object analysis. For purposes of the discussion below, the term “segmentation” refers to instance segmentation (unless noted otherwise).
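
For instance segmentation of panels with one of the architectures mentioned above, an off-the-shelf Mask R-CNN can be run roughly as follows; this is a generic torchvision sketch (not the trained model of this disclosure) and assumes the network has been fine-tuned on solar panel images.

import torch
import torchvision

# Generic Mask R-CNN; in practice the heads would be fine-tuned on
# annotated solar panel images.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def segment_panels(image_tensor, score_threshold=0.7):
    # image_tensor: float tensor of shape (3, H, W) with values in [0, 1].
    with torch.no_grad():
        output = model([image_tensor])[0]
    keep = output["scores"] > score_threshold
    # Each mask is a (1, H, W) soft mask; threshold to binary instance masks.
    return output["masks"][keep][:, 0] > 0.5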

[0114] In some embodiments, segmentation is used to determine a 6DoF pose of panels. With a panel segmentation as input, various computer vision algorithms can find pixel locations in an image of the four corners of the panel frame, locations that are then used as input for a Perspective-n-Point solver to determine the 6DoF pose of the panel in 3D-space (or "world-space").

[0115] In some embodiments, the pixel locations of the panel corners are determined, based on the segmentation, by one or more of a variety of computer vision techniques, such as Hough transforms, the Ramer-Douglas-Peucker algorithm, or some other computer vision post-processing. For instance, the Hough transform can yield four Hough lines that are fitted to the four edges of the segmentation which, in turn, correspond to the four edges of the panel, and the four intersections of the four Hough lines yield the four corners of the panel. Figure 51 shows an example application 5100 of Hough transforms for identifying corners of a solar panel 5110, according to some embodiments. Intersection of the Hough lines 5102 and 5106 yields the corner A, intersection of the Hough lines 5102 and 5108 yields the corner D, intersection of the Hough lines 5104 and 5106 yields the corner B, and intersection of the Hough lines 5104 and 5108 yields the corner C.
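
A sketch of this Hough-based corner finding using OpenCV on a binary panel mask; the Canny and accumulator thresholds are illustrative and it is assumed that the dominant lines correspond to the panel edges.

import cv2
import numpy as np

def hough_corners(panel_mask):
    # panel_mask: uint8 binary mask from the segmentation (panel pixels = 255).
    edges = cv2.Canny(panel_mask, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return []
    corners = []
    # Intersect each pair of lines given in (rho, theta) form.
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (r1, t1), (r2, t2) = lines[i][0], lines[j][0]
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            if abs(np.linalg.det(A)) < 1e-6:
                continue          # nearly parallel lines: no usable corner
            corners.append(np.linalg.solve(A, np.array([r1, r2])))
    return corners               # candidate corner pixel locations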

[0116] In some embodiments, the Ramer-Douglas-Peucker algorithm is used to determine the pixel locations of the four panel corners. In some embodiments, the algorithm is constrained to yield a four-sided polygon approximation. For example, in the OpenCV implementation of the algorithm (the approxPolyDP() function), the parameter epsilon can be optimized to yield a four-sided polygon approximation based on a binary search over epsilon, for instance. Other applicable computer vision algorithms, as recognized by those skilled in the art, may also be used for post-processing of the segmentation to find panel corners or other keypoints. In some embodiments, those keypoints, such as panel corners, are used as input for a Perspective-n-Point solver to determine the 6DoF pose of the panel.
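
A sketch of this approxPolyDP-based variant, with a simple binary search over epsilon followed by the Perspective-n-Point step; the OpenCV 4.x API is assumed, and the corner ordering must match the object-point model.

import cv2
import numpy as np

def quad_from_mask(panel_mask, max_iter=30):
    # Largest external contour of the segmentation mask (OpenCV 4.x API).
    contours, _ = cv2.findContours(panel_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contour = max(contours, key=cv2.contourArea)
    lo, hi = 0.0, cv2.arcLength(contour, True)
    # Binary search for an epsilon that yields a four-sided approximation.
    for _ in range(max_iter):
        eps = 0.5 * (lo + hi)
        approx = cv2.approxPolyDP(contour, eps, True)
        if len(approx) == 4:
            return approx.reshape(4, 2).astype(np.float32)
        lo, hi = (eps, hi) if len(approx) > 4 else (lo, eps)
    return None

def panel_pose(corners_px, panel_w_mm, panel_h_mm, K, dist):
    # Panel corner model in millimeters (panel-local frame); corners_px must
    # be ordered consistently with these object points.
    obj = np.array([[0, 0, 0], [panel_w_mm, 0, 0],
                    [panel_w_mm, panel_h_mm, 0], [0, panel_h_mm, 0]],
                   dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj, corners_px, K, dist)
    return rvec, tvec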

[0117] In some instances, the above post-processing methods (e.g., Hough transforms, the Ramer-Douglas-Peucker algorithm) that fit a single straight line along the lengthwise edge of the panel may yield an inaccurate corner location, either at the far half or at the near half of the panel. The reason is that the solar panel is not a perfectly rigid body. When installed on, for example, the torque tube, the panel is cantilevered on the tube, deflecting or deforming under its own weight. Accordingly, any approximation of the lengthwise edge by a single line may be inaccurate for one corner or another. To compensate for this inaccuracy, instead of fitting a single straight line, in some embodiments, two lines are fitted to the lengthwise edge(s) of the panel, one line for the far half and one line for the near half of the panel. A Perspective-n-Point solver may then solve for the 6DoF pose based on coordinates of panel corners (x, y, z) that depart from those of a perfectly rigid body (e.g., with z as the height variation due to the panel deformation).

[0118] Another post-processing method that may be especially well-suited for AI-based segmentation is one that analyzes the pixel intensities of the shadows between adjacent panels. AI-based segmentation may perform worse on images with multiple panels (as compared to images with a single panel). For example, for images with multiple panels, the segmentation of a panel in question may retreat into the interior of the panel (rather than coincide with the outer edge of the panel frame, as expected).

[0119] Such a discrepancy can be observed along the length of the panel in question that is near an adjacent panel. To compensate for this discrepancy, in some embodiments, the shadow between the panel in question and the adjacent panel serves as a visual fiducial from which to identify the true edge of the panel in question. The shadow between adjacent panels is largely consistent and distinct across different lighting conditions. In some embodiments, computer vision algorithms, such as binarization, thresholding, and Hough transforms, are used to identify the true edge of the panel in question.
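As a non-limiting illustration of the shadow-based correction, the sketch below thresholds a grayscale image to isolate the dark inter-panel shadow band and fits a probabilistic Hough line to it as the true panel edge; the threshold value, morphology kernel, and Hough parameters are illustrative assumptions.

import cv2
import numpy as np

def shadow_edge_line(gray):
    """Return the dominant line segment through the inter-panel shadow, or None."""
    # Shadows are the darkest consistent structure; isolate them by inverse thresholding.
    _, shadow = cv2.threshold(gray, 40, 255, cv2.THRESH_BINARY_INV)
    # Remove small dark speckles so only the shadow band remains.
    shadow = cv2.morphologyEx(shadow, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    lines = cv2.HoughLinesP(shadow, 1, np.pi / 180, 80,
                            minLineLength=200, maxLineGap=10)
    if lines is None:
        return None
    # The longest segment is taken as the edge shared with the adjacent panel.
    return max(lines[:, 0, :], key=lambda l: np.hypot(l[2] - l[0], l[3] - l[1]))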

Depth Estimation

[0120] In some embodiments, depth estimation is used to determine the 6DoF pose of panels. Depth estimation is the determination of the distance of a given object within a field-of-view (FoV) with respect to a sensor (e.g., camera, LiDAR, time-of-flight sensor).

[0121] Depth estimation can be understood as a sort of segmentation: the object in question is segmented or distinguished from a (more) distant background. The panel thus appears as a contiguous segmentation in a disparity map (with computer vision) or in a depth visualization (with AI). Figure 52 shows an example application 5200 of segmentation, according to some embodiments. The result of the segmentation can be used as input for the post-processing to find panel corners or other keypoints for purposes of the 6DoF pose estimation, as described above.

[0122] With AI, depth estimation can be performed using, for example, neural networks for monocular depth estimation (MDE). A comprehensive survey of state-of-the-art approaches to MDE is the following reference, incorporated by reference herein: J. Spencer, et al., The monocular depth estimation challenge, in Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (pp. 623-632) (2023).

[0123] With computer vision, depth estimation can be performed using two cameras (or a stereo camera) to yield a disparity map based on correspondence points and the baseline distance between the two cameras. The disparity, or perspectival discrepancy, between the images from the two (stereo) cameras indicates the distances of objects from the cameras. With CV, depth estimation is problematic under certain lighting or surface conditions, such as texture-less regions and specular reflections on the panel. Conventional techniques, such as photo-consistency methods and active stereo (using IR structured light projections), can be employed to improve the accuracy of depth estimation. Multi-baseline stereo, involving three or more cameras (with multiple baseline distances between cameras), with trinocular or quadocular configurations, for instance, and associated algorithms (such as Semi-Global Matching and iterative matching algorithms) can also be employed to improve the resolution and accuracy of depth estimation. Comprehensive surveys of state-of-the-art approaches to multi-baseline stereo are the following references, incorporated by reference herein: H. Hirschmuller, Stereo Processing by Semiglobal Matching and Mutual Information, in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 2, pp. 328-341, Feb. 2008, doi: 10.1109/TPAMI.2007.1166; S. Patil et al., A Comparative Evaluation of SGM Variants (including a New Variant, tMGM) for Dense Stereo Matching, ArXiv, abs/1911.09800 (2019).
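By way of non-limiting illustration, a disparity map of the kind described above can be computed with OpenCV's semi-global block matcher, in the spirit of Hirschmuller's SGM; the parameter values below are illustrative assumptions and are not tuned for any particular camera pair.

import cv2

def disparity_map(left_gray, right_gray):
    """Compute a floating-point disparity map from a rectified stereo pair."""
    block = 5
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,        # search range; must be divisible by 16
        blockSize=block,
        P1=8 * block * block,      # smoothness penalties (grayscale input assumed)
        P2=32 * block * block,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # StereoSGBM returns fixed-point disparities scaled by 16.
    return matcher.compute(left_gray, right_gray).astype("float32") / 16.0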

[0124] Whether with AI or CV, depth estimation (as a type of segmentation) uses the same post-processing (as described above) for purposes of locating panel corners in an image. For example, the determination of corner locations, whether by depth estimation or segmentation, can be corrected or refined (before being used as input for a PnP solver) by what is called the "two-pass homography" post-processing, described below.

Keypoint Detection

[0125] Some embodiments use a type of AI-based inference called keypoint detection to determine the 6DoF pose of panels. Keypoint detection is a fundamental task in computer vision and AI that includes identifying and localizing specific points or landmarks in an image or a video. These keypoints represent distinctive features in the visual data, such as corners, edges, or other regions of interest. The goal of keypoint detection is to accurately locate and describe these keypoints, enabling various applications like object recognition, tracking, pose estimation, and image alignment.

[0126] Keypoint detection typically includes the following steps or algorithms. Preprocessing: the input images or video frames are typically preprocessed to enhance their quality and reduce noise. Common preprocessing steps include resizing, normalization, and grayscale conversion. Various algorithms can be used for keypoint detection, depending on the specific requirements and characteristics of the data. Some embodiments use corner detection algorithms, such as the Harris Corner Detector or the Shi-Tomasi Corner Detector, to identify corners in an image based on local intensity variations. Some embodiments use scale-space extrema detection, including methods such as Difference of Gaussians (DoG) or Laplacian of Gaussian (LoG), to detect keypoints at different scales by looking for local extrema in the image's scale-space representation. Some embodiments use interest point detectors that include algorithms, such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features), to identify keypoints based on local image gradients and their responses to different scales and orientations. Some embodiments use deep learning-based methods, such as Convolutional Neural Networks (CNNs), that can be trained to directly detect keypoints by learning from annotated datasets. Models like DenseNet, CornerNet, or OpenPose employ deep learning techniques for keypoint detection. Once the keypoints are detected, their precise locations within the image need to be determined. This process may include refining the initial detection by using techniques like subpixel interpolation or optimization algorithms to increase localization accuracy.
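As a non-limiting illustration of the classical route mentioned above, the sketch below detects Shi-Tomasi corners and refines them to sub-pixel accuracy; the parameter values are illustrative assumptions.

import cv2
import numpy as np

def detect_keypoints(gray, max_corners=50):
    """Shi-Tomasi corner detection followed by sub-pixel refinement."""
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                      qualityLevel=0.01, minDistance=10)
    if corners is None:
        return np.empty((0, 2), dtype=np.float32)
    # Refine each detection within a small window to sub-pixel accuracy.
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.01)
    cv2.cornerSubPix(gray, corners, (5, 5), (-1, -1), criteria)
    return corners.reshape(-1, 2)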

[0127] Keypoints may be the four corners of the solar panel frame, as discussed above. Keypoints may also be any visual fiducial afforded by a single panel, such as the common grid pattern, as discussed below. Keypoints may also be points on the solar panel, such as the center point.

[0128] As with segmentation models/inferences, keypoint-detection models/inferences can use their own computer vision methods of post-processing. For example, a Combination Of Shifted Filter Responses (COSFIRE) filter may be optimized for pattern recognition of a corner, distinguishing it from the background. A keypoint-detection model may yield a rough region-of-interest (ROI) for input into the COSFIRE filter which, in turn, produces a more fine-grained corner detection.

[0129] Other applicable computer vision algorithms and techniques, as recognized by those skilled in the art, may also be used for post-processing of the keypoint detection to find panel corners for the purposes of 6DoF pose estimation.

Bounding Boxes

[0130] Some embodiments use bounding boxes, another AI-based inference technique, to determine the 6DoF pose of panels. Bounding box inference in AI refers to the process of predicting and localizing objects in an image or a video by drawing a rectangular bounding box around them. The bounding box provides an approximation of the object's location and extent within the visual data. This technique is widely used in object detection, localization, and tracking tasks. Bounding box inference typically relies on an object detection model that has been trained using machine learning algorithms. Popular object detection models include Single Shot MultiBox Detector (SSD), You Only Look Once (YOLO), and Faster R-CNN (Region-based Convolutional Neural Network). These models are designed to detect and classify objects in images or video frames. The object detection model is trained on a labeled dataset, where each object of interest is annotated with a bounding box that tightly encloses the object. The training data also includes the corresponding class labels for each object category. During training, the model learns to identify objects and predict their bounding boxes based on visual features extracted from the input data. During the inference phase, the trained object detection model takes an image or video frame as input and processes it to detect objects and predict their bounding boxes. The model analyzes the visual features and makes predictions about the presence, class, and location of objects within the image. The object detection model predicts the coordinates of the bounding box for each detected object. The bounding box is typically represented by four values: the x-coordinate and y-coordinate of the top-left corner, as well as the width and height of the box. These values are used to draw a rectangle around the object, indicating its estimated location in the image or frame. Since multiple bounding box predictions can overlap or enclose the same object, a post-processing step called non-maximum suppression is often applied. Non-maximum suppression aims to remove redundant or duplicate bounding boxes, ensuring that only the most accurate and confident bounding boxes for each object remain. This step helps eliminate duplicate detections and improves the precision of the bounding box inference.
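By way of non-limiting illustration, the non-maximum suppression step described above can be performed with OpenCV's NMSBoxes helper; the score and IoU thresholds and the helper name are illustrative assumptions.

import cv2
import numpy as np

def suppress_duplicates(boxes, scores, score_thr=0.5, iou_thr=0.45):
    """boxes: list of [x, y, w, h]; scores: list of confidences. Returns kept boxes."""
    keep = cv2.dnn.NMSBoxes(boxes, scores, score_thr, iou_thr)
    return [boxes[i] for i in np.array(keep).flatten()]

In this sketch, overlapping detections of the same panel collapse to the single highest-confidence box before any corner or pose computation.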

[0131] In some embodiments, bounding boxes are used to identify a given solar panel by bounding or encompassing its entirety within an image. Bounding boxes can be understood as a special case of keypoint detection for the purposes of determining the 6DoF pose of panels. The reason is that, given the typical top-down perspective view, two corners of the bounding box may likely coincide with two physical corners of the panel frame. Specifically, the top-right and bottom-left corners of the bounding box coincide with the far-right and near-left corners of the panel frame. Such a correspondence allows bounding-box inferences to be interpreted as keypoint detection (of two corners). Such a correspondence also allows the substitution of bounding-box models for keypoint-detection models in approaches that do not require the identification of all four panel corners, a substitution of models that may be advantageous for training and/or inferencing.

[0132] Whether as keypoint detection or as bounding boxes, AI-based inferences for determining keypoints, such as panel corners, can use the same post-processing (such as the "two-pass homography" algorithm described below) for purposes of determining the 6DoF pose of a panel.

Six Degrees-of-Freedom

[0133] Six Degrees-of-Freedom (6DoF) pose estimation refers to the task of estimating the position and orientation of an object in 3D space using machine learning models. The term "6DoF" represents six degrees of freedom, which are the three rotational axes (yaw, pitch, and roll) and the three translational axes (x, y, and z). Pose estimation is essential in various computer vision applications, such as robotics, augmented reality, and object tracking, where accurate knowledge of an object's position and orientation is critical.

[0134] Some embodiments use AI models for 6DoF pose estimation. The AI models can be categorized into two main types: feature-based methods and direct regression methods.

[0135] First, feature-based methods involve extracting keypoints or features from the object or scene and matching them between the 3D model and the input image. The pose is then estimated based on the spatial relationship between the matched 3D-2D feature correspondences. Feature extraction and matching can be done using traditional computer vision techniques or deep learning-based methods. Traditional feature-based methods use techniques like SIFT (Scale-Invariant Feature Transform), ORB (Oriented FAST and Rotated BRIEF), or SURF (Speeded-Up Robust Features) to detect and match keypoints between the 3D model and the 2D image. RANSAC (Random Sample Consensus) or PnP (Perspective-n-Point) algorithms are then used to estimate the 6DoF pose from the matched correspondences. In deep learning-based feature-based methods, instead of using handcrafted features, deep learning models like CNNs can be used to learn feature representations directly from the input image. PoseNet and PoseCNN are examples of deep learning-based feature-based methods that predict the 6DoF pose using CNNs.
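A non-limiting sketch of the PnP step follows: given the four panel corners in the image and the known panel dimensions W and L (in millimeters), OpenCV's solvePnP recovers the rotation and translation of the panel relative to the camera. The corner ordering, the use of the IPPE planar solver, and the assumption of an already-calibrated camera matrix are illustrative assumptions.

import cv2
import numpy as np

def panel_pose_pnp(image_corners, W, L, camera_matrix, dist_coeffs=None):
    """image_corners: 4x2 pixel coordinates ordered to match object_points below."""
    # Panel corners in the panel's own frame (planar object, Z = 0), in millimeters.
    object_points = np.array([[0, 0, 0], [W, 0, 0], [W, L, 0], [0, L, 0]],
                             dtype=np.float64)
    image_points = np.asarray(image_corners, dtype=np.float64)
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix,
                                  dist_coeffs, flags=cv2.SOLVEPNP_IPPE)
    if not ok:
        raise RuntimeError("PnP failed")
    return rvec, tvec  # rotation (Rodrigues vector) and translation of the panel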

[0136] Second, direct regression methods directly predict the 6DoF pose parameters (translation and rotation) from the input image, bypassing the need for feature extraction and matching. CNN-based regression models use CNN architectures to directly regress the 6DoF pose parameters from the input image. The network takes the image as input and outputs the pose values as continuous numbers. Models like DeepIM and PVNet are examples of CNN-based regression methods. Some hybrid approaches combine feature-based and direct regression techniques. They use CNNs to predict initial pose estimates, and then refine the pose using feature-based methods to achieve higher accuracy.

[0137] AI models for 6DoF inference, if sufficiently accurate, may be used (without corrective post-processing) to determine the 6DoF pose of solar panels. However, if 6DoF models are not sufficiently accurate, then those models may still provide information that can be interpreted in terms of other sorts of inferences, such as segmentation, keypoint detection, and bounding boxes, which can be subsequently corrected or refined by post-processing. For example, an initial (coarse) 6DoF estimation of panel pose may serve as a rough identification of corners for the purposes of keypoint detection or bounding-box inferences. An initial 6DoF pose estimation may also serve to (roughly) identify or segment a panel from its background, for instance.

[0138] To summarize, described above are different types of AI models and inferences that are applicable for determining the 6DoF pose of solar panels. The AI models may be zero-shot, one-shot, or few-shot, and may be trained differently, using real images or synthetic images, or using supervised learning or unsupervised learning, for instance.

[0139] In some embodiments, each AI model or inference uses its own methods of post-processing (e.g., analyzing shadows between adjacent panels, filters). In some embodiments, different AI models/inferences use a same method of post-processing (e.g., the "two-pass homography" described below).

[0140] The AI models may be combined such that the output of one model is used as input for another model. For example, a first model (e.g., YOLO) outputs bounding boxes which are, in turn, used as input for a second model (e.g., YOLO, SSD, RCNN, SAM) for segmentation. A single type of model can be used twice, with the second instance of the same model taking the output of the first instance as input. For example, the output of YOLO for bounding boxes may be used as input for YOLO for keypoint detection. The AI pipeline may comprise any plurality or ensemble of AI models. Ensemble learning in machine learning refers to the technique of combining multiple individual models, called base models or weak learners, to form a more powerful and accurate model known as an ensemble model. The main idea behind ensemble learning is that by aggregating the predictions of multiple models, the ensemble can make more reliable predictions than any individual model alone. There are several ensemble methods used to combine the predictions of base models. Base models are the individual models that form the ensemble. They can be any machine learning algorithm, such as decision trees, random forests, support vector machines, neural networks, or any other model. Each base model is trained on a subset of the training data or with some variations introduced to create diversity among the models. For example, in voting-based ensembles, each base model independently makes predictions, and the final prediction is determined based on majority voting (for classification problems) or averaging (for regression problems) among the predictions. Bagging (Bootstrap Aggregating) involves training multiple base models on different random subsets of the training data with replacement. The final prediction is obtained by averaging (regression) or voting (classification) the predictions of individual models. Boosting algorithms, such as AdaBoost, Gradient Boosting, or XGBoost, train base models sequentially, where each subsequent model focuses on the instances that previous models struggled with. The predictions of all models are combined to form the final prediction, often by weighted voting. Stacking combines the predictions of multiple base models by training a meta-model that learns to make predictions based on the outputs of the individual models. The base models' predictions are used as features, and the meta-model is trained on this augmented dataset.

[0141] The strength of an ensemble lies in the diversity among its base models. Diversity is achieved through various means, such as using different algorithms, varying the model architectures, training on different subsets of the data, or introducing randomness during training. Diverse models make different types of errors, and when combined, they can compensate for each other's weaknesses, leading to improved overall performance. Ensembles often outperform individual models, as they can capture different aspects of the data and combine their strengths to make more accurate predictions. Ensembles are typically more robust to noise and overfitting compared to individual models, as errors made by some models can be compensated for by others. Ensembles can generalize to unseen data by reducing the impact of individual models' biases and errors, leading to better performance.

[0142] Described above are similarities or overlapping applicability among different types of AI models/inferences, allowing one type of model/inference to be interpreted as a special case of another (more general) type of model/inference. Such generalizability is in virtue of the particular characteristics of the imaged scene in the use-case of the instant invention. Conversely, that these different types of models/inferences admit of the same post-processing (e.g., the "two-pass homography" algorithm) is in virtue of their generalizability.

[0143] Described below are ways different AI models/inferences can be combined with different computer vision algorithms. Computer vision algorithms may be integrated within the AI pipeline as post-processing or as pre-processing. Computer vision algorithms may also be integrated within the AI pipeline based on an architecture that allows for a deeper complementarity between computer vision and AI, taking advantage of the strengths of each. Computer vision algorithms may also be combined on their own, without using AI.

[0144] Parenthetically, the disclosed deeper complementarity between CV and AI exemplifies what O'Mahony et al. (2020) characterized as "mixing hand-crafted approaches with [deep learning] DL for better performance." As they observe, "[t]here are clear trade-offs between traditional CV and deep learning-based approaches. Classic CV algorithms are well-established, transparent, and optimized for performance and power efficiency, while DL offers greater accuracy and versatility at the cost of large amounts of computing resources." However, CV and AI/DL can be fruitfully combined in applications where DL is (still) not yet well established (e.g., 3D vision), as O'Mahony et al. observed. See N. O'Mahony et al., Deep learning vs. traditional computer vision, in Advances in Computer Vision: Proceedings of the 2019 Computer Vision Conference (CVC), Springer Nature Switzerland AG, Volume 11 (pp. 128-144) (2020).

[0145] Whether as post-processing or pre-processing, whether as part of a deeper integration with AI or as a self-contained computer vision pipeline without AI, computer vision algorithms rely on invariant structures that are discernible across different images. Such invariant structures may be attributed to physical structures, such as the grid patterns formed by photovoltaic cells and their electrical connections. Such invariant structures may also be attributed to the immaterial structure of illumination, such as the shadows cast between adjacent solar panels as a sort of negative space. Whether physical or immaterial, whether as positive or negative space, the invariant structures that underpin CV algorithms provide consistent visual patterns that can be analyzed for their semantic content.

Post-Processing with the “Two-Pass Homography” Algorithm

[0146] A post-processing that compensates or corrects for the inaccuracies of (AI-based or computer vision-based) segmentation is referred to as the "two-pass homography" (TPH) algorithm. The algorithm relies on regular visual patterns or fiducials on the solar panel, such as the grid pattern. The grid pattern (or other visual fiducials) can be used to compensate/correct for inaccuracies in the segmentation because it provides reference points by which the pixel locations of panel corners can be deduced indirectly through homography transforms. In some embodiments, the TPH algorithm includes the following steps:

(1) Using a panel segmentation as a mask, filter out the background in an image (leaving only the foregrounded panel).

(2) Identify grid intersections in the masked panel from (1) as "corners" using the Shi-Tomasi algorithm.

(3) Use the "corners" from (2) to compute homography matrix H1.

(4) Based on H1 from (3), determine where grid intersections from (2) should be in millimeter-space (described below).

(5) Determine homography matrix H2 to transform grid intersections from (2) in image-space to their locations in millimeter space in (4).

(6) With the inverse of H2 from (5), back-project corners in millimeter-space to image-space.

[0147] Note that the term "corners," a term of art in computer vision, refers to (roughly) points of interest at which intensity gradients are maximal in all directions. "Corners" (in quotation marks, or italicized) are to be distinguished from physical corners (of, say, the panel frame).

[0148] For (1), the filtering operation may be as simple as a binary AND between the mask and the raw image.

[0149] For (2), other corner-finding algorithms (e.g., the Harris-Stephens algorithm) may be used. Note that the grid intersections or "corners" found in (2) may not coincide perfectly with the intersections of grid lines. The reason is that the grid intersections may comprise various shapes (e.g., a diamond, or two triangles with a common vertex) with gradient intensities that result in an off-centered "corner" as detected by the corner-detection algorithm, an example (5300) of which is shown in Figure 53. However, the (random) off-centeredness of the detected "corners" cancels out, as it were, with an increasing number of detected "corners" as input for the homography computation. In Figure 53, the more pronounced, thicker grid lines (sometimes referred to as "major" grid lines) are useful for the TPH algorithm. "Major" grid lines can be distinguished from (fainter) "minor" grid lines using various computer vision algorithms, such as a Gaussian blur with a kernel size of, say, 5.
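As a non-limiting illustration of steps (1) and (2), including the suggestion above of a Gaussian blur to suppress the fainter "minor" grid lines, a sketch follows; the kernel size, quality level, and spacing are illustrative assumptions.

import cv2
import numpy as np

def grid_intersections(raw_bgr, panel_mask, max_points=200):
    """Detect grid intersections as Shi-Tomasi "corners" on the masked panel."""
    # Step (1): binary AND between the segmentation mask and the raw image.
    panel_only = cv2.bitwise_and(raw_bgr, raw_bgr, mask=panel_mask)
    gray = cv2.cvtColor(panel_only, cv2.COLOR_BGR2GRAY)
    # Blur so that only the thicker "major" grid lines drive the corner response.
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    # Step (2): Shi-Tomasi corner detection restricted to the panel mask.
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_points,
                                  qualityLevel=0.02, minDistance=15,
                                  mask=panel_mask)
    return np.empty((0, 2)) if pts is None else pts.reshape(-1, 2)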

[0150] In some embodiments, AI models may also be used to find grid intersections. AI models may use as input the raw image or the projected (or warped) image based on the ("first-pass") homography matrix H1. In the H1-space, the panel is projected to an approximate top view. In this view, because perspectival variations are minimized, the AI model may be trained more easily. However, if the AI model takes as input the raw image, and if the inference yields all the grid intersections, then the "two-pass homography" algorithm reduces to a "single-pass" algorithm. The reason is that, with all the grid intersections detected by the AI model, the association in (4), as discussed below, reduces to simple enumeration (i.e., the relative positions of grid intersections are identical).

[0151] For (3), the ("first-pass") homography matrix H1 may yield a projection of the panel that is not perfectly square in "millimeter-space." The term "millimeter-space" refers to the 2D projective space in which the four (physical) corners of the solar panel have the following (x, y) coordinates: (0,0), (0, L), (W, 0), (W, L), where W and L are the width and length, respectively, of the solar panel. However, homography matrix H1 should be accurate enough to allow an association (in H1-space) of grid intersections found in (2) to where they should be in millimeter-space.

[0152] For (4), that association (in H1-space) may be based on Euclidean distance (e.g., a fraction of the shortest cell dimension, such as width) or some other measure. For example, a Euclidean distance of a fraction of the shortest cell dimension, such as cell width, may ensure that the association (between grid intersections found in (2) and where they should be in millimeter-space) is unique. Too large an associative Euclidean distance (e.g., greater than approximately half the width or height of individual cells) may result in non-unique or ambiguous associations, or in associations that implicate an adjacent panel.

[0153] For (5), the ("second-pass") homography matrix H2 is recalculated based on the more accurate (associated) grid-intersection locations in millimeter-space, as determined in (4).

[0154] For (6), the more accurate homography matrix H2 is used to back-project the corners with coordinates (0,0), (0, L), (W, 0), (W, L) in millimeter-space to image-space, shown in Figure 53 as crosshairs.
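By way of non-limiting illustration, a compact sketch of steps (3) through (6) follows. It assumes that the grid intersections have already been detected in image-space (e.g., as in steps (1)-(2) above), that a rough estimate of the four panel corners is available, and that the nominal millimeter-space locations of the grid intersections are known from the panel geometry; the association radius and function names are illustrative assumptions.

import cv2
import numpy as np

def two_pass_homography(corner_estimate_px, grid_px, grid_mm_nominal, W, L,
                        assoc_radius_mm=25.0):
    """corner_estimate_px: rough 4x2 corner estimate; grid_px: Nx2 detected
    intersections in image-space; grid_mm_nominal: Mx2 nominal intersections
    in millimeter-space. Returns refined 4x2 corner locations in image-space."""
    panel_mm = np.array([[0, 0], [W, 0], [W, L], [0, L]], dtype=np.float64)

    # Step (3): first-pass homography H1 from the rough corner estimate.
    H1, _ = cv2.findHomography(np.asarray(corner_estimate_px, dtype=np.float64),
                               panel_mm)

    # Step (4): project detections into millimeter-space via H1 and associate
    # each with its nearest nominal grid intersection within a sub-cell radius.
    proj = cv2.perspectiveTransform(
        grid_px.reshape(-1, 1, 2).astype(np.float64), H1).reshape(-1, 2)
    src, dst = [], []
    for p_px, p_mm in zip(grid_px, proj):
        d = np.linalg.norm(grid_mm_nominal - p_mm, axis=1)
        if d.min() < assoc_radius_mm:
            src.append(p_px)
            dst.append(grid_mm_nominal[d.argmin()])
    if len(src) < 4:
        raise RuntimeError("too few associated grid intersections")

    # Step (5): second-pass homography H2 from the associated pairs.
    H2, _ = cv2.findHomography(np.array(src, dtype=np.float64),
                               np.array(dst, dtype=np.float64), cv2.RANSAC, 3.0)

    # Step (6): back-project the millimeter-space corners with the inverse of H2.
    refined = cv2.perspectiveTransform(panel_mm.reshape(-1, 1, 2),
                                       np.linalg.inv(H2))
    return refined.reshape(4, 2)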

[0155] Figure 54 is a schematic diagram of detection 5400 of grid intersections 5402 using the two-pass homography algorithm described herein, according to some embodiments.

[0156] Some embodiments use a form of the TPH algorithm that is generalized beyond grid intersections to any panel features as visual fiducials, as follows:

1. Estimate in image-space the four corners of the panel, ki.

2. Find homography matrix H1 from ki to millimeter-space, i.e., (0, 0), (H, 0), (H, W), (0, W).

3. Find pi and qi, where pi are pixel locations of panel features (as fiducials) in image-space and qi are their corresponding locations in millimeter-space based on H1.

4. Find homography matrix H2 from pi to qi.

5. Back-project corners (0, 0), (H, 0), (H, W), (0, W) from millimeter-space to image-space based on the inverse of H2.

[0157] Note that, in the generalized formulation above, pi and qi may be not only grid intersections but also the centroid of a diamond or the common vertex of two triangles, for instance. And pi and qi may be some other pixel structure, not just "corners."

[0158] Note further that, in the generalized formulation above, the association of corresponding source and destination pixel locations, as given in (3) and (4), is still ultimately based on the four corners (not "corners") of the panel, ki, as given in (1) and (2), as the fundamental datum. Other datums may be used as reference depending on the geometry and features of solar panels.

Pre-Processing with Multi-Baseline Stereo

[0159] As discussed above, depth estimation can be used for segmentation. Depth estimation can also be used for determining bounding boxes that encompass the entire extent of a panel. As such, bounding boxes can be used to extract or segment from a raw image a region-of-interest (ROI) for input into an AI pipeline.

[0160] Figure 55 shows a schematic diagram of ROI segmentation 5500, according to some embodiments. The ROI segmentation is performed on a raw (full-sized) image from, for instance, two 12 MP RGB cameras (5502 and 5504), before the (segmented) image is passed as input to an AI pipeline 5506.

[0161] As pre-processing, the ROI segmentation may be based on any of the following:

• d1: a disparity map from a single-baseline stereo.

• d2: a disparity map from a multi-baseline stereo.

• d3, d4: monocular depth estimation from neural networks.

• f: any (weighted) combination of the above.

• m: onboard processing (e.g., processing provided by Luxonis OAK-D and Stereolabs ZED cameras) that runs bounding-box-as-ROI inferences.

[0162] Figure 55 is an illustration not of a design so much as a design space. Figure 55 captures the pre-processing design space described above, depicting a maximalist design with all the elements. All the elements need not be used in practice. The design space allows for structure from motion (with movement of the EOAT), an ensemble (h) of AI models (model 1, ..., model n), and a distributed Robot Operating System (ROS) architecture (e.g., ROS node i, ..., ROS node j).

Sensor Fusion with LiDAR and Monocular Al

[0163] Described above are embodiments that use computer vision techniques for pre-processing and post-processing with respect to monocular AI (i.e., using images from a single camera). As described below, in some embodiments, computer vision is used for more than a preparation or a corrective step in the determination of the 6DoF pose. Computer vision can be more tightly integrated (or baked in) with AI.

[0164] Figure 56 shows multiple-data-channel capabilities 5600 of LiDAR used by some embodiments. Range 5602 is the distance of a point from the LiDAR sensor, calculated by using the time of flight of the laser pulse. Signal 5604 is the strength of the laser return from an object (commonly represented by the point-cloud coloring). Ambient 5606 is the camera return capturing the strength of ambient light at a predetermined wavelength (e.g., an 865 nm wavelength). Reflectivity 5608 is the reflectivity of the surface (or object) that was detected by the LiDAR sensor. Some embodiments use LiDAR that provides access to data or images of reflectivity, ambient near-infrared (NIR), and (2D-projected) range, and the LUT mapping between these images. The same pixel sensors may be utilized for the various data channels. A single LUT or lookup table (that captures the physical dimensions of the hardware) may be sufficient. No extrinsic calibration may be needed, and there is no need to inquire whether a calibrated mapping generalizes. The LUT mapping allows analyses to move fluidly from one image-space to another, from the NIR image to the point-cloud, for instance.

[0165] Figure 57 is a schematic diagram of an example computer vision/AI architecture 5700 for determining the 6DoF pose of a solar panel, according to some embodiments. The CV/AI pipeline 5700 takes as input two sorts of data or images from a LiDAR: an ambient NIR grayscale image 5702 and range data 5706 in the form of a point-cloud. Transform 5704 transforms NIR to range data. First, the ambient NIR image 5702 is used as input to a CV/AI pipeline (pre-processing 5708 and AI 5710) that locates the coordinates (xpixel, ypixel) 5718 in the image-space of a keypoint on the panel, such as a corner. The CV/AI pipeline may comprise any of the pre-processing, AI models (e.g., segmentation, keypoint detection, bounding box), and post-processing described above. Also, the point-cloud range data 5706 is used as input to a CV pipeline that identifies the panel by planar segmentation 5712 (i.e., fitting a plane to the points in the point-cloud). The resulting fitted plane, in Hessian normal form, for instance, provides the rotational angles (rx, ry, rz). Additionally, the coordinates (xpixel, ypixel) 5718 in the NIR image-space of the relevant keypoint are transformed (T) or mapped (via the LUT, as discussed above) (5714) to the point-cloud to yield the coordinates (x, y, z) of the keypoint in world-space. The combined (5716) output (x, y, z, rx, ry, rz) is the 6DoF pose of the panel in question.
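A non-limiting sketch of the planar-segmentation building block follows: a total-least-squares plane is fitted to LiDAR points already attributed to the panel, and its normal constrains the panel's out-of-plane rotations. The conversion from the normal to rotational angles depends on the chosen axis convention and is an illustrative assumption here; the in-plane rotation is typically resolved from keypoints or edges.

import numpy as np

def fit_plane_normal(points_xyz):
    """points_xyz: Nx3 panel points in world-space; returns the unit plane normal."""
    centroid = points_xyz.mean(axis=0)
    # The right singular vector with the smallest singular value of the centered
    # points is the normal of the best-fit (total least squares) plane.
    _, _, vt = np.linalg.svd(points_xyz - centroid)
    normal = vt[-1]
    return normal / np.linalg.norm(normal)

def rotations_from_normal(normal):
    """Two rotational angles of the panel plane from its normal (one convention)."""
    nx, ny, nz = normal
    rx = np.arctan2(ny, nz)                 # rotation about the x-axis
    ry = np.arctan2(-nx, np.hypot(ny, nz))  # rotation about the y-axis
    return rx, ry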

[0166] Figure 58 is a schematic diagram of another example computer vision/AI pipeline 5800 for determining the 6DoF pose of a solar panel, according to some embodiments. This architecture is similar to the one described above in reference to Figure 57, except that the pipeline 5700 relies only on data or images from the LiDAR. By contrast, the pipeline 5800 relies on both the LiDAR 5806 and another sensor, a camera 5802. The principles of operation and the architecture of the pipelines 5700 and 5800 are similar. For example, the pre-processing steps 5708 and 5808 are similar, the planar segmentation steps 5712 and 5812 are similar, the AI steps 5710 and 5810 are similar, and the combinations 5716 and 5816 are similar. The pipeline 5800 similarly combines AI-based keypoint detection with point-cloud segmentation to arrive at a combined 6DoF pose estimation. The only difference is that the CV/AI pipeline depicted in Figure 58 uses a transform (T) 5814 that is not a lookup table (LUT) as in the pipeline depicted in Figure 57. Rather, the transform or mapping (T) 5814 between the image-space of the camera and the LiDAR is obtained by calibration.

[0167] Figure 59 is a schematic diagram of a sensor fusion architecture 5900 for determining the 6DoF pose of a solar panel, according to some embodiments. The architecture 5900 combines the two building blocks shown in Figures 57 and 58. The embodiment combines AI-based keypoint detection for both NIR imagery 5918 from a LiDAR and an RGB image from a camera 5908 with point-cloud segmentation to arrive at a combined 6DoF pose estimation. The transforms or mappings between image-spaces TNIR (T-nir) 5930 and TRGB (T-rgb) 5940 are obtained from the LUT or by calibration, respectively, as described above. The embodiment allows a heuristic or meta-model 5904 to determine how the various components of the 6DoF pose from different sources (5916, 5928) are combined (5926) to form the final 6DoF pose estimation. Depending on lighting conditions or other environmental information 5902, the meta-model 5904 may use the coordinates (x, y, z) of a panel keypoint that is based on either the camera image 5908 or the LiDAR NIR image 5918 (or some other LiDAR data channel, e.g., LiDAR range 5932). For example, the meta-model 5904 may configure the CV/AI pipeline to rely only on the RGB camera 5908 for indoor operation, the LiDAR reflectivity data channel 6008 (Figure 60) for night-time outdoor operation, and the LiDAR NIR data channel 5918 for daytime outdoor operation. The meta-model 5904 may also appropriately weight both sets of keypoint coordinates or even optimize the AI models (e.g., upper-robot AI model 5912 for RGB, lower-robot AI models 5938) for certain environmental conditions (e.g., by selecting appropriate model weights). The meta-model 5904 may also invoke a post-processing correction 5924 by the lower-robot CV/AI pipeline, as described below. The pre-processing steps 5910 and 5920 and the planar segmentation step 5934 are as described above, according to some embodiments. A database 5906 can store data from the environmental sensors 5902 and provide data to the meta-model 5904.

[0168] Figure 60 is a schematic diagram of another sensor fusion architecture 6000 for determining the 6DoF pose of a solar panel, according to some embodiments. This example shows a sensor fusion architecture fusing traditional analyses of LiDAR imagery 6008 with monocular AI 6004 based on images from an RGB camera 6002. Three possible methods to calculate the 6DoF pose are shown. One method uses a PnP solver 6006 with the pixel locations of four panel corners (x1, y1), ..., (x4, y4) as input; this method may be used indoors. Another method combines the rotational angles (rx, ry, rz) with the world-space coordinates (x, y, z) of a panel keypoint based on the LUT mapping 6014 (from the NIR or reflectivity image-space of the LiDAR to the point-cloud). This method may be used outdoors. Yet another method is extrinsic calibration 6018 (for mapping between the image-space of the camera and the point-cloud). These different methods of calculating the 6DoF pose may be combined in various ways (e.g., for indoors/outdoors). Reflectivity 6008, NIR 6010, and range 6012 may be used for night, day, and outdoors, respectively. Other LiDAR data channels, such as the signal channel, although not shown, may be used additionally or alternatively to the other data channels.

[0169] Figure 61 is a schematic diagram of a sensor fusion architecture 6100, according to some embodiments. A raw image from an NIR LiDAR 6102 is pre-processed (6104) using one of the techniques described above, and processed by an upper-robot AI 6106. A first-stage post-processing 6108 provides a layer of redundancy. The first stage may include the following steps: [1] determine corners from the NIR image (and/or other channels) in pixel space; [2] for each corner in pixel space, transform to world-space coordinates and calculate the other corners, given the panel dimensions; [3] for each set of four corners in world space, transform back to pixel space; [4] from the four sets of (four) corners in pixel space from [3] above, select a single corner based on some metric of convergence (e.g., Euclidean distance); [5] transform the selected corner from [4] above in pixel space to world space, along with its associated set of (other) corners; and [6] from the set of corners in [5] above, use the datum corner (i.e., near-left). In some embodiments, [7] if the variance in pixel locations for corners in [3] above exceeds a threshold, then lower-robot vision is used for a third estimate of the 6DoF pose. The pixel location for a selected corner is transformed (T) (6124) to obtain the (x, y, z) coordinates of a datum panel corner. This may be combined (6126) with the output of a planar and cylindrical segmentation 6122 (which produces its output based on a point-cloud output from the LiDAR range channel 6120), to obtain a first estimate of the 6DoF pose (X, Y, Z, RX, RY, RZ) of the panel. In a second-stage post-processing 6134, based on the first estimate, the locations of the four panel corners 6128 in coordinates (x, y, z) are refined further based on the two-pass homography algorithm described above. The locations 6128 are inverse-transformed (T⁻¹) (6116) to obtain the pixel locations 6110 of the four corners in image space. These pixel locations are input to a two-pass homography algorithm 6112 (an example of which is described above). This includes [8] determining a first-pass homography matrix H1, [9] determining a second-pass homography matrix H2, and [10] back-projecting the four panel corners based on the inverse of H2. The output of step [10] is input to a PnP solver 6114 to obtain a second estimate of the 6DoF pose of the panel. This estimate may be input to a lower-robot vision (AI) 6130, according to [7] described above. Shown explicitly in Figure 61 are layers of redundancy afforded by the first-stage and second-stage post-processing and the lower-robot CV/AI pipeline vision. Although not shown in Figure 61, in some embodiments, the architecture can provide further layers of redundancy, including an RGB camera image, other LiDAR data channels, such as signal and reflectivity, and complementary AI models.

[0170] The range data channel from a LiDAR (e.g., an Ouster LiDAR) can be used for depth estimation as a type of segmentation, which is, in turn, refined by the "two-pass homography" post-processing before input into a PnP solver. Depth estimation or segmentation may also be performed using stereo cameras, whether in the single-baseline or multi-baseline configuration, as described above.

[0171] Whether with stereo cameras or with LiDAR, segmentation of the panel can be achieved by extrinsic calibration of the camera with respect to the robotic system (hand-eye). Calibration can also be performed by mapping two image-spaces with homography transforms. For example, the image-space of the LiDAR range image can be mapped to the image-space of the camera in a calibration process that is purely image-based (i.e., without involving robotic movements).

[0172] Alternatively, the 6DoF pose of a panel may be determined solely by LiDAR. The rotational angles (rx, ry, rz) of a panel in question can be determined by planar segmentation of the point-cloud data from a LiDAR. And the (x, y, z) location of a keypoint of the panel (e.g., a corner) may be determined by mapping the keypoint found in other data channels (e.g., NIR, reflectivity). To improve the latter determination of the keypoint location (x, y, z), the LiDAR may be calibrated (e.g., by extrinsics or homography, as described above) with respect to a high-resolution camera such that the keypoint(s) found in the camera image may be mapped or translated to the image space of the LiDAR.

[0173] Moreover, the various approaches to 6DoF pose estimation (and its associated post-processing) described above can be supplemented by additional post-processing (of the initial 6DoF pose estimation) involving a different CV/AI pipeline with a different sensor (e.g., camera) attached to the lower robot, for instance. For example, the CV/AI pipeline for the lower robot may refine or correct an initial estimation of the 6DoF pose of a panel by profiling the surface contour of the torque tube and/or clamp with structured light, such as laser-line or grid-line projections. The profiles of the torque tube, clamp, or other relevant structures, with their known geometry and dimensions, can be the basis by which the initial 6DoF pose of the panel can be refined. Such a refinement assumes that a panel installed on the torque tube is aligned rotationally along the tube and that the midpoint of the clamp coincides with the clamped edge of the installed panel.

[0174] Figure 62 is a schematic diagram of a solar panel installation 6200, according to some embodiments. Shown are Cartesian axes X, Y, and Z, and a panel datum 6202 for six degrees-of-freedom (6DoF) pose estimation.

[0175] Figure 63 shows a flowchart of a method 6300 for autonomous solar installation, according to some embodiments. The method includes obtaining (6302) one or more images during installation. The one or more images includes an image of one or more solar panels and an installation structure.

[0176] The method also includes pre-processing (6304) the one or more images, including one or more of compensating for camera intrinsics or distortions, rectifying the images, and determining depth information. In some embodiments, the pre-processing includes compensating for a camera distortion, rectifying the image, and/or determining depth information based on a single-baseline stereo camera, a multi-baseline stereo camera, a time-of-flight sensor, or a LiDAR sensor. Examples of pre-processing techniques are described above in reference to non-AI methods for pose estimation and example AI models and inferences, according to some embodiments. AI models may be trained on images that are not compensated for lens distortions and images that are not rectified.

[0177] The method also includes detecting (6306) the one or more solar panels by inputting the one or more images into one or more neural networks (in series or in parallel) that are trained to detect solar panels. In some embodiments, the one or more neural networks is trained to output bounding boxes, segmentation, keypoints, depth, and/or a 6DoF pose. These techniques are described above in the section Example AI Models and Inferences, according to some embodiments.

[0178] The method also includes a first post-processing (6308) to compute a first panel pose based on an output of the one or more neural networks. The first post-processing may include a different AI/CV pipeline than the one used for pre-processing and/or the neural networks. A goal may be to refine further an initial estimated panel pose.

[0179] In some embodiments, the first post-processing includes one or more computer vision algorithms (e.g., Ramer-Douglas-Peucker, Hough Transform, Shi-Tomasi, homography transforms) for processing the output of the one or more neural networks based on invariant structures (e.g., invariant structures include grid lines and other visual fiducials on the panel, shadows between panels, projection perspectival lines, relative position of torque tube and panels) in the images to determine locations of panel keypoints. Examples are described above in reference to Figure 51, according to some embodiments.

[0180] In some embodiments, the first post-processing further includes solving for Perspective-n-Point based on panel dimensions and panel keypoints. In some embodiments, the panel keypoints are the four corners of the panel frame.

[0181] In some embodiments, the method also includes a second post-processing (6310) including one or more homography transforms (e.g., the two-pass homography algorithm described above) to obtain a second panel pose for the one or more solar panels, based on the first panel pose. The second post-processing compensates or corrects for inaccuracies in the first panel pose based on visual patterns or fiducials on a solar panel. In some embodiments, the visual patterns comprise a grid pattern on the solar panel.

[0182] In some embodiments, the output of the one or more neural networks includes panel segmentation. Figure 52 shows an example application 5200 of segmentation, according to some embodiments. Figure 55 shows a schematic diagram of ROI segmentation 5500, according to some embodiments. The second post-processing includes filtering out background (leaving only the foregrounded panel) in an image using the panel segmentation as a mask, to obtain a masked panel. The second post-processing also includes identifying grid intersections in the masked panel as corners using a corner-finding algorithm (e.g., the Shi-Tomasi algorithm, the Harris-Stephens algorithm). The second post-processing also includes computing a homography matrix H1 using the corners. The second post-processing also includes determining locations of grid intersections in millimeter-space, based on H1. The second post-processing also includes computing a homography matrix H2 to transform the grid intersections in image-space to their locations in the millimeter-space (a two-dimensional (2D) projective space in which the four (physical) corners of the solar panel have the following (x, y) coordinates: (0,0), (0, L), (W, 0), (W, L), where W and L are the width and length, respectively, of the solar panel). The second post-processing also includes back-projecting corners in the millimeter-space to image-space, using the inverse of H2.

[0183] In some embodiments, the filtering includes a binary AND between the mask and a raw image of the solar panel. In some embodiments, determining locations of the grid intersections includes an association in H1-space based on Euclidean distance (e.g., a fraction of the shortest cell dimension, such as width).

[0184] In some embodiments, the second post-processing includes estimating four corners of a solar panel in an image-space, ki. The second post-processing also includes computing a homography matrix H1 that maps ki to a millimeter-space. The second post-processing also includes identifying (i) pixel locations pi of panel features (as fiducials) in image-space, and (ii) corresponding locations qi for the pixel locations pi in millimeter-space, based on H1. The second post-processing also includes computing a homography matrix H2 that maps pi to qi. The second post-processing also includes back-projecting corners (0, 0), (H, 0), (H, W), (0, W) from millimeter-space to image-space based on the inverse of the homography matrix H2 (H2⁻¹).

[0185] In some embodiments, the pi and qi include locations of grid intersections, a centroid of a diamond or a common vertex of two triangles, based on the grid intersections, and/or a pixel structure other than corners of the solar panel.

[0186] Various examples of using the homography transforms and sensor fusion architectures for 6DoF pose estimation of solar panels are described above in reference to Figures 56 through 61, according to some embodiments. In particular, the pre-processing and post-processing steps described above in reference to Figures 56 through 61 may be used in the pre-processing, the first post-processing, and/or the second post-processing steps of the method 6300. For example, Figure 61 shows how the two-pass homography algorithm may be used for post-processing in any sensor fusion architecture that is based on corner detection, according to some embodiments.

[0187] In some embodiments, the installation structure includes a torque tube and a clamp, and the method further includes a third post-processing including processing one or more images of the torque tube and/or clamp. In some embodiments, the one or more images of the torque tube and/or clamp is obtained with a high-resolution camera and structured lighting. In some embodiments, the structured lighting is a laser line that is approximately orthogonal or parallel with respect to the torque tube. In some embodiments, the processing of the one or more images of the torque tube and/or clamp is performed by one or more neural networks and/or a computer vision pipeline. In some embodiments, the method further includes locating a nut associated with the clamp by using high-intensity illumination and computer vision algorithms. In some embodiments, the high-intensity illumination is a ring light.

[0188] Referring back to Figure 63, the method also includes generating (6312) control signals, based on the first panel pose, for operating a robotic controller for installing the one or more solar panels. In some embodiments, control signals are generated (6314) further based on the second panel pose. In various embodiments, the control signals are generated solely based on the first panel pose or solely based on the second panel pose, or the control signals can be generated based on both the first panel pose and the second panel pose.

[0189] Figure 35A shows a block diagram of an example image processing pipeline 3500, according to some embodiments. The pipeline 3500 includes a module 3502 for acquiring images, a module 3504 for rectifying the images, a module 3506 for neural network image segmentation of the rectified images, a module 3508 for post-processing the output of the module 3506 using computer vision techniques, a module 3510 for performing a Hough transform on the output of the module 3508, a module 3512 for filtering and segmenting Hough lines output by the module 3510, a module 3514 to identify horizontal and vertical Hough line intersections output by the module 3512, a module 3516 to estimate panel poses based on the horizontal and vertical Hough line intersections (e.g., using 3D panel geometry and the location of corners in the image), and a module 3518 to publish pose estimates. Figure 35B shows an example rectified acquired image 3520 (output of the modules 3502 and 3504) that includes an image of a solar panel 3522 and other objects 3524-2 (e.g., tapes) and 3524-4 (e.g., wires). Figure 35C shows an example output 3526 (output of the module 3506) for neural network image segmentation for the acquired rectified image shown in Figure 35B, according to some embodiments. Figure 35D shows an example panel corner detection 3528 (output of the module 3514), according to some embodiments. In this example, corners 3530-2 and 3530-4 are detected based on horizontal lines 3532-4 and 3532-8 and vertical lines 3532-2 and 3532-6.

[0190] Figure 36 shows examples 3600 of images 3602, 3606, and 3610 of a road, under different lighting conditions, and segmentation masks 3604, 3608, and 3612 for the images, according to some embodiments. Conventional computer vision techniques are useful when the environment is ideal. However, glare and over- or under-exposure can negatively affect object detection algorithms. Machine learning techniques can overcome environmental inconsistencies, learn from examples with glare and illumination issues, and generalize to new data during inference. Non-AI methods, e.g., using multi-baseline stereo or LiDAR/ToF, may be used for non-ideal conditions. Further, non-AI methods may be combined with AI-based methods.

Example Solar Panel Segmentation

[0191] Some embodiments perform solar panel segmentation by capturing images of solar panels and torque tubes under varying lighting conditions. Figure 37A shows an example of a captured image 3700 that includes a solar panel 3702 and a torque tube 3704, according to some embodiments. Some embodiments annotate the captured image of the solar panel. Figure 37B shows an example of an annotated image 3706 (sometimes called an annotated ground truth mask) for the captured image 3700, according to some embodiments. The annotated image includes a black background 3708, contours of a torque tube 3712 shown in dark grey, and contours of a solar panel 3710 shown in light grey. Some embodiments create a dataset based on the annotated images, train an image segmentation model using the dataset, and use the trained model to detect solar panels and torque tubes in poor lighting conditions. Figure 37C shows an example prediction 3714 by the trained model, according to some embodiments. The trained model predicts the background 3708, the solar panel 3710, the torque tube 3712, and objects 3716 in the background (not shown in Figures 37A and 37B).

[0192] Some embodiments continuously collect images (and build datasets) and use the images for improving accuracy of the models. Some embodiments use human annotations to increase accuracy of the models. Some embodiments allow users to tune parameters of the segmentation model.

[0193] Some embodiments include separate models for semantic segmentation and instance segmentation. Figure 38A shows an example of image classification. In this example, the image classification detects a presence of a bottle 3802, a cup 3806, and cubes 3804. Figure 38B shows an example of object localization 3816 for the image shown in Figure 38A. In this example, a rectangle 3808 localizes the bottle 3802, a rectangle 3810 localizes a first cube, a rectangle 3812 localizes the cup 3806, and rectangles 3814-2 and 3814-4 localize the cubes 3804. Figure 38C shows an example of semantic segmentation 3818, according to some embodiments. Semantic segmentation helps identify a label 3820 for the bottle 3802, a label 3822 for the cubes 3804, and a label 3824 for the cup 3806. Figure 38D shows an example of instance segmentation 3826, according to some embodiments. Instance segmentation is able to distinguish between the instances of the cubes 3804, determining labels 3830, 3834, and 3836 for the cubes 3804, apart from identifying labels 3828 and 3832 for the bottle 3802 and the cup 3806, respectively. Instance segmentation can distinguish between multiple solar panel instances in a single image. Instance segmentation generates masks for each class instance in a camera frame, enables individual panel identification and localization, utilizes the same data collected for semantic segmentation, and supports lighting invariance. Figure 39 shows an example of instance segmentation 3900 for solar panels, according to some embodiments. In this example, panel instances 3902, 3904, 3906, 3908, and 3910 are identified. The example shows instances of the panels (e.g., the instances 3902 and 3904) that have different orientations.

[0194] Figure 40 shows an example image processing system 4000, according to some embodiments. The system 4000 includes a plurality of cameras, including a camera 4002 for coarse positioning, a camera 4004 for capturing images when panels are picked, and a camera 4006 for capturing images when panels are placed. The camera 4002 includes a narrow field-of-view lens, and the cameras 4004 and 4006 each include a wide field-of-view lens. The camera 4002 may be used to identify a trailer location and initial robot positions. In some embodiments, the cameras 4004 and 4006 may be the same camera. In some embodiments, the camera 4002 may also be used for locating clamps and center structures during solar panel installation. The cameras 4002, 4004, and 4006 are coupled to respective image sensors 4008, 4010, and 4012 (e.g., an AR0820 sensor). In some embodiments, the image sensors are optimized for low-light and/or high-dynamic-range performance. In some embodiments, the system 4000 includes a high-speed digital video interface (e.g., FPD-Link) and Ethernet for connecting the cameras to one or more GPUs (e.g., a GPU 4014 that is suitable for edge AI processing, such as Nvidia XT, and a GPU that is suitable for image processing applications, such as Nvidia AGX Xavier™). The GPU 4016 implements the example image processing pipeline 3500 described above and is connected to a robot controller 4018 using Ethernet. The GPU 4014 may be removed in some systems, and the output from the sensors may be directly connected to the GPU 4016, according to some embodiments.

[0195] Some embodiments continue to capture training images while installing solar panels. Figure 41 shows a trailer system 4100 with a coarse camera 4102 that may be used for capturing training images, according to some embodiments.

[0196] Figures 42A and 42B show histograms 4200 and 4202 of pose error norms for the neural network when coarse position is used and when coarse position is not used, respectively, according to some implementations. As shown, with coarse position, neural network error (the difference between the real position of the corners of a solar panel and estimates from the neural network) is substantially reduced (from close to 5 inches down to about 0.7 inches, in some instances).

[0197] Figures 43A and 43B show examples 4300 and 4302 for coarse positions (e.g., positions 4304, 4306, 4308, and 4310) of solar panels using a Mask R-CNN model, according to some embodiments. Mask R-CNN is a Convolutional Neural Network (CNN) used for image segmentation and instance segmentation. This deep neural network detects objects in an image and generates a high-quality segmentation mask for each instance. Mask R-CNN is based on a region-based Convolutional Neural Network. Image segmentation is the process of partitioning a digital image into multiple segments or sets of pixels corresponding to image objects. This segmentation is used to locate objects and boundaries (lines, curves, etc.). Mask R-CNN can be used for semantic segmentation and instance segmentation. Semantic segmentation classifies each pixel into a fixed set of categories without differentiating object instances. In other words, semantic segmentation deals with the identification/classification of similar objects as a single class from the pixel level. All objects are classified as a single entity (solar panel). Semantic segmentation is sometimes called background segmentation because it separates the subjects of the image (e.g., solar panels, wires) from the background. On the other hand, instance segmentation (sometimes called instance recognition) deals with the correct detection of all objects in an image while also precisely segmenting each instance. In that sense, instance segmentation combines object detection, object localization, and object classification, and helps distinguish instances of each object in an image. In addition to having two outputs for each candidate object, including a class label and a bounding-box offset, Mask R-CNN has a third branch that outputs an object mask. This mask output helps with extraction of a finer spatial layout of an object. Apart from being simpler to train and more performant and efficient when compared to other models, Mask R-CNN is particularly suited for solar panel identification because of the neural network's ability to perform both semantic segmentation and instance segmentation. Moreover, the mask branch adds only a small computational overhead, enabling fast solar panel detection and rapid experimentation. Mask R-CNN can be used for image segmentation, identifying objects in the image and creating a mask within the boundaries of each object.
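
As a hedged illustration of using Mask R-CNN for instance segmentation as described above, the following sketch runs an off-the-shelf torchvision Mask R-CNN; the COCO weights, file name, and score threshold are placeholders, and a deployed system would use a model fine-tuned on solar panel images.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pre-trained COCO weights used only as a stand-in for a panel-trained model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("panels.jpg").convert("RGB"))  # hypothetical image

with torch.no_grad():
    out = model([image])[0]  # boxes, labels, scores, masks for each instance

keep = out["scores"] > 0.7                            # confidence filter
masks = (out["masks"][keep, 0] > 0.5).cpu().numpy()   # N x H x W boolean masks
boxes = out["boxes"][keep].cpu().numpy()              # N x 4 bounding boxes
```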

[0198] Figure 44 shows a system 4400 for solar panel installation, according to some embodiments. The system 4400 includes a main enclosure 4404, a battery enclosure 4402, an upper robot End-of-Arm Tooling (EOAT) 4406, a lower robot EOAT 4408, and a cradle 4410, on a trailer 4412, for holding solar panels 4414.

[0199] Figure 45A shows a vision system 4502 mounted on the trailer and used to estimate the pose of the structure 4500, and Figure 45B shows an enlarged view of the vision system 4502, according to some embodiments. Various embodiments may have the vision system mounted on different parts of the ground vehicle, on the robotic arm, or on the end of arm tooling.

[0200] Figure 46A shows a vision system 4602 for module pick 4600, and Figure 46B shows an enlarged view of the vision system 4602, which includes a high-resolution camera with laser line generation, according to some embodiments.

[0201] Figure 47A shows a system 4700 for distance measurement at module angle (i.e., when facing a module) between position 4702 (an enlarged view of which is shown in Figure 47B) and position 4704 (an enlarged view of which is shown in Figure 47C), according to some embodiments.

[0202] Figure 48A shows a system 4800 for laser line generation for detecting tube and clamp position, according to some embodiments. Figure 48B shows an enlarged view of the laser line generation system 4802, and Figure 48C shows a view 4804 of laser line generation (a horizontal line detects a clamp, and a vertical line detects a tube), according to some embodiments.

[0203] Figure 49A shows a vision system 4900 for estimating tube and clamp position and locating the nut on the clamp, according to some embodiments. Figure 49A also shows a socket wrench 4902 used to tighten the nut. Figure 49B shows an enlarged view of the vision system. As shown in Figure 49B, the camera uses the laser lines described above to locate the tube and clamp, and uses a flash ring light to locate the nut on the clamp. The lasers provide an accurate estimation of tube and clamp position. The flash ring light is used to locate the nut on the clamp, shown in Figure 49C. This nut, when tightened, compresses the clamp to keep the panels in place.
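
One possible way to detect the horizontal (clamp) and vertical (tube) laser lines described above is sketched below; the red-channel threshold and Hough parameters are illustrative assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

def detect_laser_lines(bgr):
    """Return one roughly horizontal segment (clamp line) and one roughly
    vertical segment (tube line) found in the laser-illuminated image."""
    red = cv2.subtract(bgr[:, :, 2], bgr[:, :, 1])        # red minus green
    _, laser = cv2.threshold(red, 60, 255, cv2.THRESH_BINARY)
    segments = cv2.HoughLinesP(laser, 1, np.pi / 180, threshold=50,
                               minLineLength=80, maxLineGap=10)
    clamp_line, tube_line = None, None
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            if abs(y2 - y1) < abs(x2 - x1):               # mostly horizontal
                clamp_line = (x1, y1, x2, y2)
            else:                                         # mostly vertical
                tube_line = (x1, y1, x2, y2)
    return clamp_line, tube_line
```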

Example Solar Panel Installation Using Artificial Intelligence

[0204] Figure 50A shows a flowchart of a method 5000 for autonomous solar installation, according to some embodiments. The method includes obtaining (5002) an image of an in-progress solar installation. The image includes an image of one or more solar panels and one or more torque tubes. In some embodiments, obtaining the image includes using one or more filters for avoiding direct sun glare when detecting End-of-Arm Tooling (EOAT). In some embodiments, obtaining the image includes using a high-resolution camera with laser line generation for identifying the one or more torque tubes and/or a clamp position. In some embodiments, the image includes an image of a clamp and/or a center structure for the in-progress solar installation. In some embodiments, a plurality of images is acquired using a wide-angle fish-eye lens to create a composite HDR (High Dynamic Range) image inside the camera hardware. The images are sent through a Robot Operating System (ROS), which is a high-level software framework for integration of robots and servos, using OpenCV (an image processing framework) modules to rectify the images (e.g., change from fish-eye distortion to a flat image). Then a region and a bit depth are selected and used to collapse the HDR image into a standard 8-bit image, thereby effectively cropping the region and bit depth to prepare it as input for a trained neural network. At the point of acquisition, a robot pose may be stored (using ROS) to create a transform of the camera result relative to a trailer (a trailer system used for solar panel installation). This may include a robot location and a camera location to identify where the image is in 3D space.
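
A minimal sketch of the fish-eye rectification and HDR-to-8-bit collapse described above is shown below, using OpenCV's fisheye module; the calibration values, region of interest, and radiance window are placeholders that would come from camera calibration and the selected crop.

```python
import cv2
import numpy as np

# Placeholder fish-eye intrinsics and distortion from calibration (assumed).
K = np.array([[600.0, 0.0, 960.0],
              [0.0, 600.0, 540.0],
              [0.0, 0.0, 1.0]])
D = np.array([0.05, -0.01, 0.001, 0.0]).reshape(4, 1)

def rectify_fisheye(img, K, D):
    """Undo fish-eye distortion so the image looks like a flat (pinhole) view."""
    h, w = img.shape[:2]
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, (w, h), np.eye(3), balance=0.0)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
    return cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

def collapse_hdr_to_8bit(hdr, roi, low, high):
    """Crop a region of interest and map the chosen radiance window [low, high]
    to an 8-bit range, preparing the input for the trained neural network."""
    x, y, w, h = roi
    crop = hdr[y:y + h, x:x + w].astype(np.float32)
    crop = np.clip((crop - low) / (high - low), 0.0, 1.0)
    return (crop * 255).astype(np.uint8)
```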

[0205] The method also includes detecting (5004) solar panel segments by inputting the image to a trained neural network that is trained to detect solar panels. Neural networks may be implemented using software and/or hardware (sometimes called neural network hardware) using conventional CPUs, GPUs, ASICs, and/or FPGAs. In some embodiments, the trained neural network comprises (i) a model for semantic segmentation for identifying a solar panel segment, and (ii) a model for instance segmentation for identifying a plurality of solar panels. In some embodiments, the trained neural network uses a Mask R-CNN framework for segmentation. The trained neural network detects solar panel segments based on features extracted from an image of an in-progress solar installation. In some embodiments, the obtained image is input to the neural network through ROS (e.g., the input image goes from the OpenCV module to a neural network module). Example techniques for training the neural network are described below in reference to Figure 55B, according to some embodiments. In some embodiments, the neural network performs image segmentation to identify a panel (or panels) without identifying location(s) of the panel(s). In some embodiments, there is one model that performs both functions (semantic segmentation and instance segmentation). Some embodiments use two instances of the same model to optimize throughput. In such cases, the camera takes two images, and one image goes through each instance. Running two models allows processing twice as many images in the same time.
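
The two-model-instance throughput arrangement described above might be sketched as follows, assuming a PyTorch-based segmentation model; the model, weights, and threading scheme are illustrative assumptions and not taken from the disclosure.

```python
from concurrent.futures import ThreadPoolExecutor
import torch
import torchvision

def load_model():
    # Placeholder weights; a deployed system would load a panel-trained model.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()
    return model

models = [load_model(), load_model()]   # two instances of the same model

@torch.no_grad()
def infer(model, image_tensor):
    # image_tensor: float CHW tensor with values in [0, 1]
    return model([image_tensor])[0]

def process_pair(img_a, img_b):
    """Route each captured image to one model instance; running both in
    parallel processes two frames in roughly the time of one."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        fut_a = pool.submit(infer, models[0], img_a)
        fut_b = pool.submit(infer, models[1], img_b)
        return fut_a.result(), fut_b.result()
```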

[0206] The method also includes estimating (5006) panel poses for the one or more solar panels, based on the solar panel segments, using a computer vision pipeline. In some embodiments, the computer vision pipeline includes one or more computer vision algorithms for post-processing: a Hough transform, filtering and segmentation of Hough lines, finding horizontal and/or vertical Hough line intersections, and panel pose estimation using predetermined 3D panel geometry and corner locations. In some embodiments, the computer vision pipeline locates the clamps and/or the center structures to estimate the panel poses. In some embodiments, the computer vision pipeline locates the one or more torque tubes and/or the clamp position to estimate the panel poses. In some embodiments, the computer vision pipeline locates the nut. After locating the nut, the socket wrench mounted on a smaller robotic arm may engage with the nut and tighten it to secure the panel in place. Before this step, the clamps may be loose and panels may fall off due to wind.
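
A simplified sketch of the Hough-based corner finding in this post-processing step is shown below, assuming a panel segmentation mask from the neural network is available; the Canny and Hough thresholds are illustrative.

```python
import cv2
import numpy as np

def panel_corners_from_mask(mask):
    """Find panel corners as intersections of roughly horizontal and roughly
    vertical Hough lines fitted to the boundary of a segmentation mask."""
    edges = cv2.Canny(mask.astype(np.uint8) * 255, 50, 150)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 80)
    if lines is None:
        return []
    horiz, vert = [], []
    for rho, theta in lines[:, 0]:
        # A horizontal line has a near-vertical normal (theta near 90 degrees).
        (horiz if abs(np.sin(theta)) > 0.7 else vert).append((rho, theta))
    corners = []
    for r1, t1 in horiz:
        for r2, t2 in vert:
            A = np.array([[np.cos(t1), np.sin(t1)],
                          [np.cos(t2), np.sin(t2)]])
            b = np.array([r1, r2])
            if abs(np.linalg.det(A)) > 1e-6:
                corners.append(np.linalg.solve(A, b))  # (x, y) intersection
    return corners
```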

[0207] In some embodiments, estimating the panel poses is performed using conventional machine vision hardware for locating where the panel(s) are in 3-D space. In some embodiments, this is a rough identification of round edges and is not intended to be very precise. A Hough transform may be used subsequently to determine precise locations of edges, which is followed by extrapolation of the edge lines of panels, determination of where the panels cross, and identification of a panel corner. The panel corners are published to identify where the panel is with respect to the robot. For example, based on a panel geometry in 3-D, the panel's pose is calculated based on the location of corners of the panel in the image.

[0208] In some embodiments, for estimating the panel poses, the computer vision pipeline uses a PnP (Perspective-n-Point) solver with camera intrinsic parameters (i.e., the solver accounts for the camera's own distortion and parallax). The extrinsic parameters then capture the camera's position relative to the robot, using the robotic arm and EOAT pose at the moment of image capture. The robot pose may be captured continuously with a time stamp. That time stamp may then be used to match the robot pose to the camera acquisition time stamp. In some embodiments, the computer vision pipeline uses a known pose of the robotic arm and end of arm tool (where the camera sits) at the time of image capture to calculate a position of one or more corners of a panel.
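
The PnP-based pose estimation described above could be sketched as follows with OpenCV's solvePnP; the panel dimensions, camera matrix, and distortion coefficients are placeholders that would come from the panel geometry and camera calibration, not values from the disclosure.

```python
import cv2
import numpy as np

# Illustrative panel corner geometry in the panel's own frame (millimetres).
PANEL_3D = np.array([[0.0, 0.0, 0.0],
                     [2278.0, 0.0, 0.0],
                     [2278.0, 1134.0, 0.0],
                     [0.0, 1134.0, 0.0]])

def estimate_panel_pose(image_corners, K, dist):
    """image_corners: 4x2 pixel locations of the detected panel corners;
    K, dist: camera intrinsics and distortion from calibration."""
    ok, rvec, tvec = cv2.solvePnP(
        PANEL_3D, np.asarray(image_corners, dtype=np.float64), K, dist)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)     # rotation of the panel in the camera frame
    return R, tvec                 # tvec: translation in the camera frame

# The camera-frame pose can then be chained with the robot/EOAT pose recorded
# at the matching time stamp to express the panel pose in the robot base frame.
```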

[0209] The method also includes generating (5008) control signals, based on the estimated panel poses, for operating a robotic controller for installing the one or more solar panels. In some embodiments, after the panel is found, its location is projected along the tube to seek clamp pixels and identify the clamp location (e.g., how far away the clamp is, and how close it is to the clamp puller). Some embodiments use clamp positions to verify that the clamps are within an allowable window required by the clamp puller on the EOAT. Some embodiments use the center structures to determine the sequence, i.e., whether to place one or two panels, to avoid collisions with the fan gear. Some embodiments use the panel position to make sure that the trailer is in a valid position relative to the tube so that the robot is within reach of the work to be performed. Some embodiments use the pose from the leading panel to then guide the lower robot in its fine tube acquisition, which drives the positions of the upper and lower robot for the panel place and the nut drive. In some embodiments, the fine tube acquisition described above uses a horizontal and a vertical laser to create a profilometer system that finds the tube and the clamp positions. This refines the working pose from the coarse tube estimate, reducing an error of 10-20 mm to less than plus or minus 5 mm. At the first panel, the coarse tube error is within 5 mm, but as this is projected out, the errors grow and the fine tube acquisition is used to constrain the error to under plus or minus 5 mm.
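
As a loosely hedged illustration of the clamp-window check and coarse-to-fine refinement described above, a short sketch follows; the window size, units, and variable names are placeholders and are not taken from the disclosure.

```python
def clamp_within_window(clamp_x_mm, expected_x_mm, window_mm=20.0):
    """Verify that a detected clamp lies within the allowable window
    required by the clamp puller on the EOAT (window size is illustrative)."""
    return abs(clamp_x_mm - expected_x_mm) <= window_mm

def refine_tube_pose(coarse_pose_mm, laser_offset_mm):
    """Fine tube acquisition: correct the coarse pose (roughly 10-20 mm error)
    with the laser profilometer measurement so that the residual error stays
    within roughly plus or minus 5 mm."""
    return coarse_pose_mm + laser_offset_mm
```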

[0210] Figure 50B shows a flowchart of a method 5010 of training a neural network for autonomous solar installation, according to some embodiments. The method includes obtaining (5012) a plurality of images of solar panel installations under varying lighting conditions, annotating (5014) the plurality of images to identify solar panel images (human-annotated images may be used instead of or in addition to automatically annotated images), and training (5016) one or more image segmentation models using the solar panel images to detect solar panels in poor lighting conditions. In some embodiments, the neural network is trained manually using various images, such as images with different backgrounds (e.g., grass, dirt), different quantities of panels, and several images of clamps and panels under various weather conditions (e.g., sunny conditions, rainy conditions). Within the images, lines are drawn to indicate which pixels represent a panel, clamps, tubes, and center structures. These images and their masks are used to create a series of pseudo images (sometimes referred to as image augmentation) that the neural network then uses in the training process. The pseudo images are the input images with distortions to angles, so that the network can be trained several times using the same input image. For example, 300-1000 real (input) images may be used for training, and for each real image, 10-20 pseudo images may be created.
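
The image augmentation (pseudo image generation) described above might look like the following sketch, which applies the same random rotation and shift to an image and its annotation mask; the rotation range, shift range, and count are illustrative.

```python
import cv2
import numpy as np

def make_pseudo_images(image, mask, count=15, max_angle_deg=10.0, seed=0):
    """Generate augmented (pseudo) image/mask pairs by applying an identical
    random rotation and shift to both the image and its annotation mask."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    pairs = []
    for _ in range(count):
        angle = rng.uniform(-max_angle_deg, max_angle_deg)
        tx, ty = rng.uniform(-0.05, 0.05, size=2) * (w, h)
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        M[:, 2] += (tx, ty)
        aug_img = cv2.warpAffine(image, M, (w, h), flags=cv2.INTER_LINEAR)
        aug_mask = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
        pairs.append((aug_img, aug_mask))
    return pairs
```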

[0211] Embodiments of the present invention have been described above with the aid of functional building blocks illustrating the implementation of specified functions and relationships thereof. The boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries can be defined so long as the specified functions and relationships thereof are appropriately performed.

[0212] It will be apparent to those skilled in the art that various modifications and variations can be made in the system for installing a solar panel of the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention cover the modifications and variations of this invention provided they come within the scope of the appended claims and their equivalents. It is to be understood that the phraseology or terminology herein is for the purpose of description and not of limitation, such that the terminology or phraseology of the present specification is to be interpreted by the skilled artisan in light of the teachings and guidance.

[0213] The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.