Title:
SYSTEMS AND TECHNIQUES FOR WORKPIECE MODIFICATION
Document Type and Number:
WIPO Patent Application WO/2024/064281
Kind Code:
A1
Abstract:
The present disclosure provides processes and systems for modifying a workpiece. A process includes capturing at least one image of the workpiece, processing the at least one image, obtaining a nominal toolpath, measuring a surface of the workpiece to obtain a workpiece surface measurement, generating an updated toolpath, generating a planned path for the workpiece-modifying equipment, scanning a modification to the workpiece performed by a workpiece-modifying equipment, determining modifier parameters, and modifying the workpiece by the workpiece-modifying equipment according to the planned path and the modifier parameters. The modification may be to dispense a material on the workpiece. A system includes image capture equipment, online sensors, a robot motion controller, an offline registration unit, a nominal toolpath, an online localization unit, a path planner unit, a modifier including workpiece-modifying equipment, and a modification equipment controller.

Inventors:
ZHU ZHIJIE (US)
MIELKE ERICH A (US)
Application Number:
PCT/US2023/033377
Publication Date:
March 28, 2024
Filing Date:
September 21, 2023
Assignee:
3M INNOVATIVE PROPERTIES COMPANY (US)
International Classes:
B25J9/16; B05B13/00; B05C5/00; B25J11/00; G06T7/00; G06V20/00
Domestic Patent References:
WO2020174394A1, 2020-09-03
WO2020174397A1, 2020-09-03
WO2021074744A1, 2021-04-22
WO2021124081A1, 2021-06-24
Foreign References:
US20220048194A1, 2022-02-17
US20220193709A1, 2022-06-23
DE102005051533B4, 2015-10-22
US4568816A, 1986-02-04
Attorney, Agent or Firm:
SRINIVASAN, Sriram et al. (US)
Claims:
What Is Claimed Is:

1. A process for modifying a workpiece, the process comprising: capturing at least one image of the workpiece, using an image capture equipment; processing the at least one image, using an image processor, to determine at least one of a position or an orientation of the workpiece; obtaining a nominal toolpath; measuring, using an online sensor A, a surface of the workpiece to obtain a workpiece surface measurement; generating, using a waypoint modifier, an updated toolpath based on at least the nominal toolpath and the position and/or the orientation of the workpiece; generating, using a path planner, a planned path for the workpiece-modifying equipment, based on at least the updated toolpath and the workpiece surface measurement; scanning, using an online sensor B, a modification to the workpiece performed by a workpiece-modifying equipment; determining modifier parameters, based on at least one modifier characteristic; and modifying the workpiece by the workpiece-modifying equipment according to the planned path and the modifier parameters.

2. The process of claim 1, wherein the modifying the workpiece by the workpiece-modifying equipment comprises communicating a control signal to a modification equipment controller that causes the workpiece-modifying equipment to modify the workpiece.

3. The process of claim 1 or claim 2, wherein the at least one image of the workpiece further comprises an image of an end effector of the workpiece-modifying equipment or at least one other part of the robot arm.

4. A process for dispensing a material on a workpiece, the process comprising: capturing at least one image of the workpiece, using an image capture equipment; processing the at least one image, using an image processor, to determine at least one of a position or an orientation of the workpiece; obtaining a nominal toolpath; measuring, using an online sensor A, a surface of the workpiece to obtain a workpiece surface measurement; generating, using a waypoint modifier, an updated toolpath based on at least the nominal toolpath and the position and/or the orientation of the workpiece; generating, using a path planner, a planned path for the dispenser, based on at least the updated toolpath and the workpiece surface measurement; scanning, using an online sensor B, a bead of material on the workpiece dispensed by the dispenser; determining dispensing parameters, based on a bead profile; and communicating a control signal to a dispenser controller that causes the dispenser to dispense the material on the workpiece according to the planned path and the dispensing parameters.

5. The process of claim 4, wherein the at least one image of the workpiece further comprises an image of an end effector of the workpiece-modifying equipment or at least one other part of the robot arm.

6. The process of claim 5, further comprising varying at least one processing parameter, detecting at least one effect on a shape of the bead of material by online sensor B, and selecting at least one new process parameter.

7. The process of claim 6, wherein varying at least one processing parameter comprises varying a speed of movement of the dispenser, a distance of the dispenser from the workpiece, and/or a volumetric flow rate of the material from the dispenser.

8. The process of any of claims 4 to 7, wherein the material comprises an adhesive, a sealant, a paint, or a thermally conductive material.
9. The process of any of claims 4 to 8, wherein the dispenser comprises a nozzle having a Y-axis and the planning of a path for the dispenser comprises aligning the nozzle Y-axis with an estimated normal direction of the workpiece surface.

10. The process of claim 9, further comprising imparting non-perpendicularity to the nozzle by rotating the nozzle to maintain an arbitrary angle between the nozzle Y-axis and the estimated normal direction of the workpiece surface.

11. The process of claim 9 or claim 10, wherein the nozzle is rotated such that the online sensor A is directed at an updated waypoint from a most recent time step.

12. The process of any of claims 1 to 11, wherein at least one of the online sensor A or the online sensor B comprises a laser profilometer.

13. The process of any of claims 1 to 12, wherein the workpiece surface comprises at least one feature selected from the group consisting of an edge, a rib, a seam, a corner, and a fiducial marker, and the updated toolpath incorporates detection of the at least one feature on the workpiece surface.

14. The process of any of claims 1 to 13, wherein the nominal toolpath comprises a predefined toolpath for modifying the workpiece that is defined on a surface model of the workpiece.

15. A system for modifying a workpiece, the system comprising: an image capture equipment configured to capture at least one image of the workpiece; an online sensor A configured to measure a surface of the workpiece; and an online sensor B configured to scan a modification to the workpiece; a controller comprising: a robot motion controller configured to control a movement of a robot arm; an offline registration unit configured to determine a list of waypoints based on at least one of the following sources of data: the at least one image of the workpiece captured by the image capture equipment, a digital model of the workpiece, and a nominal toolpath; an online localization unit configured to determine an updated list of waypoints based on a stream of measurements of the surface of the workpiece measured by the online sensor A and a stream of joint positions of the robot arm; a path planner unit configured to plan a path based on the list of waypoints and the updated list of waypoints; a modification equipment controller configured to control a workpiece-modifying equipment according to the planned path and at least one modification parameter; and a modifier comprising the workpiece-modifying equipment.

16. The system of claim 15, wherein the at least one image of the workpiece further comprises an image of an end effector of the workpiece-modifying equipment or at least one other part of the robot arm.

17. The system of claim 15, wherein the list of waypoints is determined further based on at least one of a set of joint positions of the robot arm or a digital model of the workpiece-modifying equipment or the robot.

18. The system of any of claims 15 to 17, further comprising a data store communicably coupled to at least each of the controller and the modifier, the data store comprising: a nominal path unit; a surface model unit; a modifier characteristics unit; a modification parameters unit; a modification model unit configured to determine modification parameters and output the modification parameters to the modification equipment controller; a material characteristics unit; a workpiece characteristics unit; an image capture equipment calibration parameters unit; and a robot model unit.
19. The system of claim 18, wherein the data store is communicably coupled with at least one of the offline registration unit, the online localization unit, the path planner unit, the modification equipment controller, the robot motion controller unit, the image capture equipment unit, the online sensor A, the online sensor B, or the modification model.

20. The system of claim 18 or claim 19, wherein the data store further comprises an information database.

21. The system of claim 20, wherein the data store further comprises a predictive model configured to determine a modification model based on at least data from the information database.

22. The system of any of claims 15 to 21, wherein the controller further comprises a graphical user interface (GUI) generator that is configured to generate a graphical user interface for a display.

23. The system of any of claims 15 to 22, wherein the offline registration unit comprises: a nominal toolpath retriever configured to retrieve a nominal toolpath; an image retriever configured to retrieve the image of the workpiece; an image processor configured to process the image of the workpiece; and a waypoint modifier configured to generate an updated toolpath based on the processed image of the workpiece, the nominal toolpath, and either a set of joint positions of the workpiece-modifying equipment or an image of an end effector of the workpiece-modifying equipment.

24. The system of any of claims 15 to 23, wherein the path planner unit comprises: a waypoint retriever configured to retrieve a set of waypoints; a path retriever configured to retrieve an actual path of the robot arm; a path analyzer configured to compare the actual path to the set of waypoints; and a waypoint modifier configured to determine a new set of waypoints based on the comparison from the path analyzer.

25. The system of any of claims 15 to 24, wherein the data store further comprises a machine learning unit configured to determine process parameters for modification of the workpiece.

26. The system of any of claims 15 to 25, wherein the workpiece-modifying equipment comprises a dispenser and the modification equipment controller is a dispenser controller.

27. The system of claim 26, wherein the modifier characteristics unit comprises a bead profile unit, the modification parameters unit comprises a dispensing parameters unit, and the modification model unit comprises a dispensing model unit.
Description:
SYSTEMS AND TECHNIQUES FOR WORKPIECE MODIFICATION

BACKGROUND

Automation capabilities widely exist for robotic modification of workpieces, e.g., application of dispensed adhesives.

SUMMARY OF THE DISCLOSURE

This disclosure recognizes a general need to modify workpieces more accurately.

In a first aspect, a process for modifying a workpiece is provided. The process comprises capturing an image of the workpiece, using an image capture equipment; processing the image, using an image processor, to determine at least one of a position or an orientation of the workpiece; obtaining a nominal toolpath; measuring, using an online sensor A, a surface of the workpiece to obtain a workpiece surface measurement; and generating, using a waypoint modifier, an updated toolpath based on at least the nominal toolpath and the position and/or the orientation of the workpiece. The process further comprises generating, using a path planner, a planned path for the workpiece-modifying equipment, based on at least the updated toolpath and the workpiece surface measurement; scanning, using an online sensor B, a modification to the workpiece performed by a workpiece-modifying equipment; determining modifier parameters, based on at least one modifier characteristic; and modifying the workpiece by the workpiece-modifying equipment according to the planned path and the modifier parameters.

In a second aspect, another process for modifying a workpiece is provided. The process comprises capturing an image of the workpiece, using an image capture equipment; processing the image, using an image processor, to determine at least one of a position or an orientation of the workpiece; obtaining a nominal toolpath; measuring, using an online sensor A, a surface of the workpiece to obtain a workpiece surface measurement; and generating, using a waypoint modifier, an updated toolpath based on at least the nominal toolpath and the position and/or the orientation of the workpiece. The process further comprises generating, using a path planner, a planned path for the dispenser, based on at least the updated toolpath and the workpiece surface measurement; scanning, using an online sensor B, a bead of material on the workpiece dispensed by the dispenser; determining dispensing parameters, based on a bead profile; and communicating a control signal to a dispenser controller that causes the dispenser to dispense the material on the workpiece according to the planned path and the dispensing parameters.

In a third aspect, a system is provided. The system comprises an image capture equipment configured to capture an image of the workpiece; an online sensor A configured to measure a surface of the workpiece; and an online sensor B configured to scan a modification to the workpiece.
The system further comprises a controller comprising: a robot motion controller configured to control a movement of a robot arm; an offline registration unit configured to determine a list of waypoints based on at least the image of the workpiece captured by the image capture equipment, a digital model of the workpiece, and a nominal toolpath; an online localization unit configured to determine an updated list of waypoints based on a stream of measurements of the surface of the workpiece measured by the online sensor A and a stream of joint poses of the robot arm; a path planner unit configured to plan a path based on the list of waypoints and the updated list of waypoints; and a modification equipment controller configured to control a workpiece-modifying equipment according to the planned path and at least one modification parameter. The system additionally comprises a modifier comprising the workpiece-modifying equipment.

The above summary of the present disclosure is not intended to describe each disclosed embodiment or every implementation of the present disclosure. The description that follows more particularly exemplifies illustrative embodiments. In several places throughout the application, guidance is provided through lists of examples, which examples may be used in various combinations. In each instance, the recited list serves only as a representative group and should not be interpreted as an exclusive list. Thus, the scope of the present disclosure should not be limited to the specific illustrative structures described herein, but rather extends at least to the structures described by the language of the claims, and the equivalents of those structures. Any of the elements that are positively recited in this specification as alternatives may be explicitly included in the claims or excluded from the claims, in any combination as desired. Although various theories and possible mechanisms may have been discussed herein, in no event should such discussions serve to limit the claimable subject matter.

BRIEF DESCRIPTION OF FIGURES

FIG.1 illustrates a system for modifying a workpiece in which example embodiments can be implemented.
FIG.1A illustrates an offline registration.
FIG.1B illustrates a path planner.
FIG.2 illustrates another system for modifying a workpiece in which example embodiments can be implemented.
FIG.2A illustrates another offline registration.
FIG.2B illustrates another path planner.
FIG.3 illustrates a schematic of a portion of a modifying system in which example embodiments can be implemented.
FIG.4 illustrates a schematic of coordinate systems for a portion of a modifying system in which example embodiments can be implemented.
FIG.5 illustrates a method for modifying a workpiece in which example embodiments can be implemented.
FIG.6 illustrates an example system for modifying a workpiece in accordance with embodiments herein.
FIGS.7A–E illustrate schematics of feature correspondences between an image and a computer-aided design (CAD) file in accordance with embodiments herein.
FIG.8 is a graph of mean pixel error versus iteration in accordance with an embodiment described herein.
FIGS.9A–B illustrate schematics of a laser line with which example embodiments can be implemented.
FIG.10A illustrates a schematic of a format of a toolpath in accordance with an embodiment described herein.
FIG.10B illustrates a schematic of an adjusted waypoint location based on a surface profile scan in accordance with an embodiment described herein.
FIGS.11A–C illustrate XYZ components of a robot TCP trajectory in accordance with an embodiment described herein.
FIG.11D illustrates a 3D plot of the robot TCP trajectory in accordance with an embodiment described herein.
FIG.12A illustrates a schematic of online orientation specifying an axial direction of a nozzle in which example embodiments can be implemented.
FIG.12B illustrates a schematic of online orientation specifying rotation around the nozzle axis in which example embodiments can be implemented.
FIG.13 illustrates a schematic of an inline surface normal estimation in accordance with an embodiment described herein.
FIGS.14A–B illustrate schematics of orientation adjustment with non-perpendicularity with which example embodiments can be implemented.
FIG.15 illustrates computed surface normal vectors along a robot toolpath of an embodiment described herein.
FIGS.16A–B illustrate schematics showing an observability issue when the TCP follows the tangent direction of the toolpath.
FIG.17A illustrates a schematic of a lack of in-plane rotation around a nozzle axis.
FIG.17B illustrates a schematic of in-plane rotation around a nozzle axis for optimal field of view along the toolpath.
FIG.18 illustrates a schematic showing an adjustment of waypoint location based on the midpoint of rib features within the surface profile scan in accordance with an embodiment described herein.
FIG.19 is a photograph of a test fixture having a curvilinear surface and parallel ribs in accordance with an embodiment described herein.
FIG.20 includes graphs of recorded robot trajectory when rib feature tracking was applied in accordance with an embodiment described herein.
FIG.21 illustrates a schematic showing adjustment of a waypoint location based on the edge point within a surface profile scan in accordance with an embodiment described herein.
FIG.22 is a photograph of a test fixture having a curved edge in accordance with an embodiment described herein.
FIGS.23A–C include graphs of recorded robot trajectory for edge feature tracking of XYZ components in accordance with an embodiment described herein.
FIG.23D includes a graph of recorded robot trajectory for edge feature tracking that is a 3D plot of TCP trajectory in accordance with an embodiment described herein.
FIG.24 illustrates a schematic of a method of modifying process parameters with which example embodiments can be implemented.
FIG.25 illustrates a workpiece modification system in an example network architecture.
FIGS.26–28 illustrate example computing devices that can be used in embodiments herein.
FIG.29 illustrates an example system of this disclosure.
FIG.30 illustrates a non-limiting example of a closed-loop robotic dispensing testbed of this disclosure.
FIG.31 is a block diagram illustrating an example closed-loop robotic dispensing according to aspects of this disclosure.
FIG.32 is a schematic showing an example format of a toolpath for the closed-loop robotic dispensing techniques of this disclosure.
FIGS.33 illustrate aspects of bead location detection according to aspects of this disclosure.
FIGS.34 illustrate bead geometry estimation on two scan profile samples.
FIG.35 is a graph illustrating a log of time consumption for each computation cycle during a closed-loop dispensing test run.
FIG.36 is a schematic showing an example of how variation of waypoint position and tool velocity can affect time parameterization along a toolpath.
FIGS.37 illustrate schematics showing the proposed time re-parametrization strategy.
FIGS.38 illustrate effectiveness aspects of utilizing the path streaming technique of this disclosure with time re-parametrization for closed-loop dispensing with variable velocity.
FIGS.39 illustrate results of bead shape compensation with a naïve control law.
FIGS.40 illustrate one or more potential issues with the naïve control law for bead shape compensation.
FIGS.41 illustrate a transient state of bead width and tool velocity along the dispense path.
FIGS.42 illustrate schematics of steady state checkers.
FIG.43 illustrates a tool velocity profile in the upper plot and a bead width profile in the lower plot along a straight-line dispense path in a test run with bead shape control.
FIGS.44 illustrate experimental results with desired bead widths of 10 mm (FIG.44A), 8 mm (FIG.44B), and 5 mm (FIG.44C).
FIGS.45 illustrate experimental results with control gains of 0.2 (FIG.45A), 0.5 (FIG.45B), and 0.8 (FIG.45C).
FIGS.46 illustrate experimental results with wait times at the beginning of the dispense process of 2 seconds (FIG.46A) and 3 seconds (FIG.46B).
FIG.47 shows experimental results of dispensing on a relatively flatter substrate (FIG.47A) and a relatively more curved substrate (FIG.47B) with online part shape and bead shape compensation.

DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS

The present disclosure relates to methods and systems for modifying a workpiece, for instance using closed-loop feedback control of robot trajectory based on sensory feedback of surface geometry. In some cases, the methods and/or systems are used to achieve a higher level of process autonomy and higher dispensing quality for dispensing a material (e.g., an adhesive, a sealant, a paint, a thermally conductive material, etc.) on parts having complex geometry. A "workpiece" is meant to include any object being worked on with a tool or machine, and the term may be used interchangeably with "part" herein.

For certain tasks, highly precise registration of the location of a workpiece is needed for successful modification of the workpiece. It has been discovered that automatic part registration can be used to more precisely determine part location relative to a robotic workpiece modifier. Another issue is that local variations in a part's geometry can affect the ability to precisely modify the workpiece. It has also been discovered that it is possible to adjust a toolpath of a robotic workpiece modifier based on part surface sensing and a closed-loop feedback system. Depending on how the workpiece is to be modified, it may further be necessary to make adjustments based on variations in the modification. For example, if a material (e.g., an adhesive, a sealant, a paint, a thermally conductive material, etc.) is to be deposited on a part, material characteristics can vary, such as rate of deposition, viscosity, etc. Additional sensor(s) and information can be employed and incorporated into the closed-loop feedback system to adjust the modification parameters as needed.

FIG.1 illustrates a system for modifying a workpiece in which example embodiments can be implemented. The system 100 includes main components of a controller 110, an inspection system 120, and a modifier 130. System 100 is illustrated in FIG.1 as in communication with a data store 140. However, it is expressly contemplated that, in some embodiments, data store 140 may be local to, or integrated into, system 100.
Similarly, system 100 is illustrated as projecting to a display 10. However, it is expressly contemplated that the system may be integrated into a processor of a device that includes display 10. The system 100 may be implemented by one or more suitable computing devices in communication with each of these main components.

The controller 110 comprises an offline registration unit 111 that receives as inputs at least an image of a workpiece taken by image capture equipment 121, a digital model of the workpiece (e.g., workpiece model 142 from data store 140), a predefined modification toolpath (e.g., nominal path 141) on the workpiece model, and a tool center point (TCP) pose from a robot motion controller 116 (e.g., controlling an articulated arm of the robot). In another example, the controller 110 also processes an image of a workpiece that includes a part of the robot (e.g., an end effector, such as a workpiece modification tool), and a digital model of at least a portion of the robot 153. The offline registration unit 111 outputs a spatially transformed toolpath (in the format of a list of waypoints) based on the estimated position and orientation of the workpiece in the robot coordinate system. These waypoints may be sent to a path planner unit 113. In some cases, the waypoints may be sent to an optional predictive model 148 and/or an optional information database 149 for future use/reference.

For example, referring to FIG.1A, there are optionally several units within the offline registration unit 111. A nominal toolpath retriever 111a retrieves a nominal toolpath 141 (e.g., from the data store 140). An image retriever 111b retrieves the image from the image capture equipment 121. An image processor 111c processes the image. Processing may include at least part registration (i.e., determining position and/or orientation) based on the taken images, feature detection (e.g., of features such as an edge, a rib, a seam, a corner, a fiducial marker, etc.) both in the image and on the surface model, and 3D pose estimation based on corresponding features. A waypoint modifier 111d generates an updated toolpath based on the (e.g., processed) image of the workpiece and the nominal toolpath, plus one of a set of joint positions of a robot arm (e.g., from which at least one TCP pose can be determined) or an image of an end effector of the workpiece-modifying equipment 131.

The controller 110 further comprises an online localization unit 112 that receives as inputs at least workpiece surface profiles streamed in real-time from an online sensor A 122 and TCP poses streamed in real-time from the motion controller of the articulated robot arm. The online localization unit 112 outputs the updated waypoint locations that trace the actual workpiece surface with a desired clearance from the workpiece-modifying equipment 131.

The controller 110 also comprises a path planner unit 113 that receives as inputs at least the list of waypoint locations from the online localization unit 112, and outputs a smooth robot trajectory (e.g., minimum-jerk) for a modification equipment controller 114 to execute. "Jerk" is defined as the 3rd derivative of the joint position trajectory, which is minimized throughout the path to achieve smoothness.

The controller 110 additionally comprises a modification equipment controller 114 that applies modification parameters 144 exported from a modification model 145 and controls the on/off state of the workpiece-modifying equipment 131.
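For point-to-point motion, the minimum-jerk criterion described above has a well-known closed-form solution: a quintic polynomial whose boundary conditions zero out velocity and acceleration at both endpoints. The sketch below illustrates only that general principle; it is not the path planner unit 113 itself, and all names and values are illustrative assumptions.

```python
import numpy as np

def min_jerk_position(q0, q1, T, t):
    """Classic minimum-jerk (quintic) profile from q0 to q1 over duration T.

    Velocity and acceleration are zero at both endpoints, and the jerk
    (3rd derivative of the position trajectory) is minimized overall.
    """
    s = np.clip(t / T, 0.0, 1.0)               # normalized time in [0, 1]
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk blend function
    return q0 + (q1 - q0) * blend

# Hypothetical example: one joint moving from 0.0 rad to 1.2 rad in 2 s,
# sampled densely, as a motion controller might consume the trajectory.
t = np.linspace(0.0, 2.0, 501)
q = min_jerk_position(0.0, 1.2, 2.0, t)
```

A full planner would apply such a profile per joint (or blend through many waypoints), but the zero-endpoint-velocity and zero-endpoint-acceleration boundary conditions are what produce the smoothness property described above.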
Referring to FIG.1B, there are optionally several units within the path planner unit 113. A waypoint retriever 113a retrieves a current set of waypoints (e.g., from the data store 140). A path retriever 113b retrieves an actual path of the robot arm across the workpiece surface from the online sensor A 122. A path analyzer 113c compares the actual path with the waypoints. Based on the comparison, a waypoint modifier 113d calculates a new set of waypoints. In some cases, a robot motion controller 116 may receive input from the path planner 113 and send instructions to the workpiece-modifying equipment 131 to move the workpiece-modifying equipment 131 along the updated toolpath, e.g., that is provided by the waypoint modifier 113d.

The controller 110 optionally also comprises a graphical user interface (GUI) generator 115, which may be configured to send information to a display 10. The GUI generator 115 may generate a graphical user interface for display on a display component 10 based on some or all of the information gathered or generated by the controller 110. Suitable displays include, for instance and without limitation, a computer screen, a smart phone, or some other user device. Other units 117 may further be included in the controller 110.

The controller 110 is described as having the functionality of receiving and sending communicable information to and from other devices. This may be done through an application program interface, for example, such that the controller 110 can receive and communicate with any units and/or models within each of the inspection system 120, the modifier 130, and the data store 140.

The inspection system 120 comprises image capture equipment 121 that captures an image of the workpiece and outputs the image to the offline registration unit 111. Any suitable equipment may be employed that captures an image, for instance and without limitation, a red, green, and blue (RGB) camera, a black and white (B&W) camera, or a three-dimensional (3D) image sensor. The inspection system 120 also comprises an online sensor A 122 that may be attached to a surface of the workpiece-modifying equipment 131 and that performs online scanning of a surface of a workpiece ahead of the workpiece-modifying equipment 131. The inspection system 120 further comprises an online sensor B 123 that may be attached to a different (e.g., opposite) surface of the workpiece-modifying equipment 131 and that performs online scanning of a modification to the workpiece performed by the workpiece-modifying equipment 131.

Each of the online sensor A 122 and the online sensor B 123 senses in situ and obtains data in (e.g., near) real-time conditions. Any suitable scanning equipment may be employed for each of the online sensors A and B, for instance and without limitation, equipment independently selected from a laser profilometer, an area snapshot sensor, a triangulation-based sensor, a time-of-flight sensor, a laser point sensor, an optical coherence tomography sensor, a confocal sensor, and a dynamic vision sensor. In some embodiments, a laser profilometer is preferred as an online sensor because it scans faster and more simply than point-cloud sensors, which provide 2.5D data rather than the 2D profile data of the laser profilometer. Other units 124 may further be included in the inspection system 120.

The modifier 130 comprises a workpiece-modifying equipment 131.
Any suitable equipment that is configured to modify a workpiece may be employed in the system, for instance and without limitation, a dispenser, a sander, a polisher, a cutter, a drill, a sprayer, a welder, etc. In some embodiments, a dispenser is employed, as discussed in detail below. For example, the modifier 130 may be a robotic adhesive dispensing unit with a robot arm having a dispenser (e.g., 131). The current status may be a TCP pose received from the robot motion controller 116 of the controller 110.

The data store 140 is configured to communicate with the controller 110, the inspection system 120, and the modifier 130. The data store 140 may be local to the controller 110 or may be accessible through a cloud-based network. Similarly, while the controller 110 is illustrated in FIG.1 as local to the system 100, it is expressly contemplated that the controller 110 may be remote from the system 100 and may receive signals, and send commands, using a wireless or cloud-based network.

The data store 140 comprises a nominal path unit 141 that contains a predefined toolpath for modifying the workpiece. The toolpath is defined on a surface model 142, which may be a computer-aided design (CAD) model, a depth image, a point cloud, or other model, and which is also included in the data store 140 or otherwise retrievable. The data store 140 additionally comprises a modification parameters unit 144 that contains information about measurable factors of the workpiece-modifying equipment (e.g., speed, distance from the workpiece surface, angle from the workpiece surface, temperature, rate of deposition/spraying, etc.). The data store 140 further comprises a modifier characteristics unit 143 that contains information about properties, sensed by the online sensor B 123, of whatever material is used to modify the workpiece (e.g., size, shape, location, etc.). Similarly, the data store 140 comprises a material characteristics unit 146 that contains information about physical characteristics of whatever material is used to modify the workpiece (e.g., chemical composition, state of matter, temperature, viscosity, Mohs hardness, sharpness, adhesiveness, color, drill bit size, etc.). The data store 140 additionally comprises a workpiece characteristics unit 147 that contains information about properties of the workpiece (e.g., size, shape, material composition, etc.).

The data store 140 optionally further comprises an information database 149 that contains any additional relevant information for access by any of the units in the data store 140 or in the controller 110. The data store 140 optionally also comprises a predictive model 148 that receives as input at least historic information from prior modifications involving the same or similar workpieces and/or modifiers. The model 148 outputs a modification model to the modification equipment controller 114 to assist in rapidly responding to discrepancies between planned paths/modifications and actual paths/modifications. The data store 140 optionally also comprises a machine learning unit 151 that is configured to forecast process parameters for modification of a workpiece, such as by providing inputs to a modification model 145.

The data store 140 also comprises a modification model 145 that receives as inputs at least modifier characteristics 143 streamed in situ from the online sensor B 123. Optionally, the modification model also receives as an input template/desired modifier characteristics defined by the user.
The modification model 145 may also receive information from one or more of the material characteristics unit 146, the workpiece characteristics unit 147, the predictive model 148, the information database 149, or other units 154. The modification model 145 outputs modification parameters 144 to the modification equipment controller 114 to provide closed-loop feedback for prompt adjustment of the modification profile, in order to correct the errors between the actual modifier characteristics and the template modifier characteristics.

The data store 140 further comprises an image capture calibration parameters unit 152 that contains information that the offline registration unit 111 uses to calibrate the image capture equipment 121 prior to capturing any images. Optionally, the data store 140 additionally comprises a robot model unit 153 that contains a digital model of at least a portion of the robot, for instance a robot arm and/or an end effector of the robot that is configured to modify a workpiece (e.g., the workpiece-modifying equipment). In some cases, the robot model unit 153 contains a digital model of the entire robot. Other units 154 may further be included in the data store 140.

FIG.2 illustrates another system for modifying a workpiece in which example embodiments can be implemented. More particularly, FIG.2 illustrates an exemplary system in which a bead of adhesive is deposited on a workpiece. The system includes main components of a controller 210, an inspection system 220, a dispenser 230, and a data store 240. The system may be implemented by one or more suitable computing devices in communication with each of these main components.

The controller 210 comprises an offline registration unit 211 that receives as inputs at least an image of a workpiece taken by image capture equipment 221, a digital model of the workpiece (e.g., CAD model 242), a predefined modification toolpath (e.g., nominal path) on the workpiece model, and a TCP pose from a robot motion controller 216 (e.g., controlling an articulated arm of the robot). In another example, the controller 210 also processes an image of a workpiece that includes part of the robot (e.g., a nozzle of the dispenser), and a digital model of at least a portion of the robot 253. The offline registration unit 211 outputs a spatially transformed toolpath (in the format of a list of waypoints) based on the estimated position and orientation of the workpiece in the robot coordinate system. These waypoints may be sent to a path planner unit 213. In some cases, the waypoints may be sent to an optional predictive model 248 and/or an optional information database 249 for future use/reference.

For example, referring to FIG.2A, there are optionally several units within the offline registration unit 211. A nominal toolpath retriever 211a retrieves a nominal toolpath 241 (e.g., from the data store 240). An image retriever 211b retrieves the image from the image capture equipment 221. An image processor 211c processes the image. Processing may include at least part registration (i.e., determining position and/or orientation) based on the taken images, feature detection (e.g., of features such as an edge, a rib, a seam, a corner, a fiducial marker, etc.) both in the image and on the surface model, and 3D pose estimation based on corresponding features.
A waypoint modifier 211d generates an updated toolpath based on the (e.g., processed) image of the workpiece and the nominal toolpath, plus one of a set of joint positions of a robot arm (e.g., from which at least one TCP pose can be determined) or an image of an end effector (e.g., nozzle) of the dispenser 230.

The controller 210 further comprises an online localization unit 212 that receives as inputs at least workpiece surface profiles streamed in real-time from an online sensor A 222, TCP poses streamed in real-time from the motion controller of the articulated robot arm, and the waypoint list from the offline registration unit 211. The online localization unit 212 outputs the updated waypoint locations that trace the actual workpiece surface with a desired clearance from the workpiece-modifying equipment 231.

The controller 210 also comprises a path planner unit 213 that receives as inputs at least the list of waypoint locations from the online localization unit 212, and outputs a smooth robot trajectory (e.g., jerk-free) to a buffer for a dispenser controller 214 to execute.

Referring to FIG.2B, there are optionally several units within the path planner unit 213. A waypoint retriever 213a retrieves a current set of waypoints (e.g., from the data store 240). A path retriever 213b retrieves an actual path of the robot arm across the workpiece surface from the online sensor A 222. A path analyzer 213c compares the actual path with the waypoints. Based on the comparison, a waypoint modifier 213d calculates a new set of waypoints.

The controller 210 additionally comprises a dispenser controller 214 that applies dispensing parameters 244 exported from a dispensing model 245 and controls the on/off state of the dispenser 230. The controller 210 optionally also comprises a graphical user interface (GUI) generator 215, which may be configured to send information to a display 20. The GUI generator 215 may generate a graphical user interface for display on the display 20 based on some or all of the information gathered or generated by the controller 210. Suitable displays include, for instance and without limitation, a computer screen, a smart phone, or some other user device. Other units 217 may further be included in the controller 210.

The controller 210 is described as having the functionality of receiving and sending communicable information to and from other devices. This may be done through an application program interface, for example, such that the controller 210 can receive and communicate with any units and/or models within each of the inspection system 220, the dispenser 230, and the data store 240.

The inspection system 220 comprises image capture equipment 221 that captures an image of the workpiece and outputs the image to the offline registration unit 211. Any suitable equipment may be employed that captures an image, for instance and without limitation, an RGB camera, a B&W camera, or a 3D image sensor. The inspection system 220 also comprises an online sensor A 222 that may be attached to a surface of the workpiece-modifying equipment 231 and that performs online scanning of a surface of a workpiece ahead of the workpiece-modifying equipment 231. The inspection system 220 further comprises an online sensor B 223 that may be attached to a different (e.g., opposite) surface of the dispenser 230 and that performs online scanning of a bead of adhesive dispensed on the workpiece by the dispenser 230.
Each of the online sensor A 222 and the online sensor B 223 senses in situ and obtains data in (e.g., near) real-time conditions. Any suitable scanning equipment may be employed for each of the online sensors A and B, for instance and without limitation, equipment independently selected from a laser profilometer, an area snapshot sensor, a triangulation-based sensor, a time-of-flight sensor, a laser point sensor, an optical coherence tomography sensor, a confocal sensor, and a dynamic vision sensor. In some embodiments, a laser profilometer is preferred as an online sensor because it scans faster and more simply than point-cloud sensors, which provide 2.5D data rather than the 2D data of the laser profilometer or laser point sensor. Other units 224 may further be included in the inspection system 220.

The dispenser 230 comprises any suitable dispenser of an adhesive material (e.g., including an extruder and a nozzle from which a bead of adhesive is deposited), for instance as described in more detail in at least FIGS.3, 12, and 14. For instance, one suitable dispenser is described in detail in PCT Publication No. WO 2020/174394 (Napierala et al.), incorporated herein by reference in its entirety.

The data store 240 is configured to communicate with the controller 210, the inspection system 220, and the dispenser 230. The data store 240 may be local to the controller 210 or may be accessible through a cloud-based network. Similarly, while the controller 210 is illustrated in FIG.2 as local to the system 200, it is expressly contemplated that the controller 210 may be remote from the system 200 and may receive signals, and send commands, using a wireless or cloud-based network.

The data store 240 comprises a nominal path unit 241 that contains a predefined toolpath for modifying the workpiece. The toolpath is defined on a CAD model 242, which is also included in the data store 240 or otherwise retrievable. The data store 240 additionally comprises a dispensing parameters unit 244 that contains information about measurable factors of the dispenser (e.g., speed, distance from the workpiece surface, angle from the workpiece surface, temperature, rate of deposition, etc.). The data store 240 further comprises a bead profile unit 243 that contains information about adhesive bead properties that can be sensed by the online sensor B 223 (e.g., size, shape, location, etc.). Similarly, the data store 240 comprises a material characteristics unit 246 that contains information about physical characteristics of the adhesive material (e.g., chemical composition, state of matter, temperature, viscosity, adhesiveness, color, etc.). The data store 240 additionally comprises a workpiece characteristics unit 247 that contains information about properties of the workpiece (e.g., size, shape, material composition, etc.).

The data store 240 optionally further comprises an information database 249 that contains any additional relevant information for access by any of the units in the data store 240 or in the controller 210. The data store 240 optionally also comprises a predictive model 248 that receives as input at least historic information from prior modifications involving the same or similar workpieces and/or adhesives. The predictive model 248 outputs a modification model to the dispenser controller 214 to assist in rapidly responding to discrepancies between planned paths/modifications and actual paths/modifications.
The data store 240 optionally also comprises a machine learning unit 251 that is configured to forecast process parameters for dispensing on a workpiece, such as by providing inputs to a dispensing model 245. The data store 240 also comprises a dispensing model 245 that receives as inputs at least the bead profile 243 streamed in situ from the online sensor B 223. Optionally, the dispensing model also receives as an input template/desired modifier characteristics defined by the user. The dispensing model 245 may also receive information from one or more of the material characteristics unit 246, the workpiece characteristics unit 247, the predictive model 248, the information database 249, or other units 255. The dispensing model 245 outputs dispensing parameters 244 to the dispenser controller 214 to provide closed-loop feedback for prompt adjustment of the adhesive bead profile, in order to correct the errors between the actual adhesive bead characteristics and the template adhesive bead characteristics.

The data store 240 further comprises an image capture calibration parameters unit 252 that contains information that the offline registration unit 211 uses to calibrate the image capture equipment 221 prior to capturing any images. Optionally, the data store 240 additionally comprises a robot model unit 253 that contains a digital model of at least a portion of the robot, for instance a robot arm and/or an end effector of the robot that is configured to modify a workpiece (e.g., the dispenser). In some cases, the robot model unit 253 contains a digital model of the entire robot. Other units 254 may further be included in the data store 240.

Modification of dispensing process parameters

As an example, and not by limitation, in one embodiment the material modifier is an adhesive dispenser dispensing adhesive on a surface. Adhesives have different properties based on ambient conditions, making it difficult to know exactly what speed to move a dispenser (i.e., the speed of movement of the dispenser through space), what force to apply to the material to achieve a desired volumetric flow rate, what temperature to heat one or more components to, etc. In the scenario where the adhesive is a two-part (or more) mixture, each component presents these concerns. While some products may have a high tolerance for variation, others, such as airplane components, require precise adhesive application to ensure proper function. Therefore, it is important to have a feedback system that can, in situ, characterize adhesive flow and adjust parameters to achieve the desired adhesive dispensing profile.

Related to the system of FIG.2 described above, FIG.24 illustrates a schematic of a method of modifying process (e.g., dispensing) parameters with which example embodiments can be implemented. FIG.24 includes incorporating information of at least one of equipment (e.g., robot arm) velocity 2410, distance of the dispenser from a workpiece 2420, or volumetric flow rate 2430, for varying process parameters (2440). The method further includes detecting the effect(s) on bead shape (2450) as a result of varying the process parameters (2440) and selecting new process parameters (2460). In some cases, the method implements at least one repetition of a loop between varying process parameters (2440) and detecting the effect(s) on bead shape (2450), e.g., by using online sensor B.
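A minimal sketch of one plausible realization of the loop in FIG.24 is shown below, assuming a simple proportional control law acting on tool velocity only (the disclosure contemplates varying velocity, standoff distance, and volumetric flow rate); the width-estimation rule, gains, and limits are illustrative assumptions, not the claimed dispensing model.

```python
import numpy as np

def bead_width_from_profile(xs, zs, z_surface, height_thresh=0.5):
    """Estimate bead width (mm) from one laser profile scan (online sensor B).

    xs, zs: lateral positions and heights (mm) along the scan line.
    Points more than height_thresh above the substrate count as bead.
    """
    on_bead = zs > (z_surface + height_thresh)
    if not np.any(on_bead):
        return 0.0
    return float(xs[on_bead].max() - xs[on_bead].min())

def update_velocity(v, width_meas, width_target, gain=0.5, v_min=5.0, v_max=80.0):
    """Proportional correction: at a fixed flow rate, a too-wide bead means the
    tool should speed up, and a too-narrow bead means it should slow down."""
    v_new = v * (1.0 + gain * (width_meas - width_target) / width_target)
    return float(np.clip(v_new, v_min, v_max))

# One control cycle: scan the bead behind the nozzle, then adjust tool speed.
xs = np.linspace(-10.0, 10.0, 200)
zs = 2.0 * np.exp(-(xs / 4.0) ** 2)        # synthetic bead cross-section
w = bead_width_from_profile(xs, zs, z_surface=0.0)
v = update_velocity(v=20.0, width_meas=w, width_target=8.0)
```

Repeating this scan-then-adjust cycle corresponds to the loop between varying process parameters (2440) and detecting the effect on bead shape (2450) described above.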
In certain embodiments of the present disclosure, the bead is a bead of adhesive (e.g., pressure sensitive adhesive, structural adhesive, etc.), although other materials are expressly contemplated to be dispensed in the form of a bead on a workpiece. Machine learning may be employed to assist in varying the process parameters (2440), selecting new process parameters (2460), or both. Machine learning is described below in more detail.

Offline registration of part location

Referring to FIG.3, a schematic is provided of a portion of a modifying system in which example embodiments can be implemented. More particularly, FIG.3 includes an illustration of a portion of a system in use. The system includes a dispenser 330, an online sensor A 322, an online sensor B 323, and image capture equipment 321. The system is shown as a snapshot in time during a process of dispensing a bead of adhesive 334 onto a major surface 362 of a workpiece 360, through a nozzle 332 of the dispenser 330. The arrow shows the direction D that the dispenser 330 is traveling with respect to the workpiece 360. The online sensor A 322 directs a signal 325 at a major surface 362 of the workpiece 360, then receives a return signal 326 that provides information on the actual profile of the major surface 362 of the workpiece 360, allowing adjustment of the dispensing characteristics, if needed, before the dispenser arrives at the location the online sensor A 322 has sensed. The online sensor B 323 directs a signal 327 at the bead of adhesive 334 that has been dispensed onto the major surface 362 of the workpiece 360, then receives a return signal 328 that provides information on the actual profile of the bead of adhesive 334, which is used in determining whether adjustments are needed to the dispensing parameters.

Referring to FIG.4, a schematic of a robotic dispensing system is shown, which consists of a 6-axis robotic arm, a dispensing tool mounted on the tool flange of the robot, and a camera mounted on the dispensing tool. Let {A}, {B}, and {D} denote the coordinate systems of the robot base, the image capture equipment, and the dispensing tool, respectively. Let {E} denote the coordinate system fixed to the workpiece (user coordinate system). For representation of coordinate transformations, the 3D translation and rotation from coordinate system 1 to coordinate system 2 is expressed in compact form as a transformation matrix $T_1^2 \in \mathbb{R}^{4 \times 4}$:

$$T_1^2 = \begin{bmatrix} C_1^2 & r_1^2 \\ \mathbf{0}^T & 1 \end{bmatrix}$$

where $C_1^2 \in \mathbb{R}^{3 \times 3}$ denotes the rotation from coordinate system 1 to 2, and $r_1^2 \in \mathbb{R}^{3 \times 1}$ denotes the origin of coordinate system 1 expressed in coordinate system 2. $T_D^A$ and $T_B^D$ can be calibrated using standard hand-eye procedures. $T_E^B$ is the unknown to be estimated using the vision-based localization method, which will be described below. The pose of the workpiece expressed in the base coordinate system, $T_E^A$ (required by the toolpath planning algorithm), can then be computed using the chain rule:

$$T_E^A = T_D^A \, T_B^D \, T_E^B$$

Image capture equipment parameters (intrinsic and extrinsic matrices, distortion coefficients) and the 3D model of the workpiece are assumed known prior to the workpiece localization process. Image capture equipment parameters can be acquired via standard calibration procedures. The 3D model of the workpiece can be acquired from the original CAD design file or via 3D scanning of a sample workpiece.
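As a concrete illustration of the chain rule above, homogeneous transforms compose by matrix multiplication. The NumPy sketch below uses identity rotations and arbitrary origins purely as placeholders; in practice $T_D^A$ comes from the robot controller, $T_B^D$ from hand-eye calibration, and $T_E^B$ from the vision-based localization.

```python
import numpy as np

def make_T(C, r):
    """Build a 4x4 homogeneous transform from a 3x3 rotation C and origin r."""
    T = np.eye(4)
    T[:3, :3] = C
    T[:3, 3] = r
    return T

# Placeholder values only (identity rotations, arbitrary origins, meters):
T_AD = make_T(np.eye(3), [0.4, 0.0, 0.6])   # tool {D} expressed in base {A}
T_DB = make_T(np.eye(3), [0.0, 0.05, 0.1])  # camera {B} expressed in tool {D}
T_BE = make_T(np.eye(3), [0.2, 0.1, 0.5])   # workpiece {E} expressed in camera {B}

# Chain rule: pose of the workpiece {E} in the robot base frame {A}.
T_AE = T_AD @ T_DB @ T_BE
```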
In addition, one or more joint positions (e.g., poses) of the workpiece-modifying equipment (e.g., robot) relative to the robot coordinate system can be streamed from the robot controller upon request. For instance, referring to the robot 400 in FIG.4, a body section 422 and several arm sections 424, 426, and 428 may be present, connected via joints 442, 444, and 446 (e.g., the body section 422 and the arm section 424 are connected through the joint 442). Manipulating the joint 442 forms an angle α1 between the body section 422 and the arm section 424; thus the angle α1 varies with the particular relative positions of the body section 422 and the arm section 424. Similarly, the arm section 424 and the arm section 426 are connected through the joint 444. Manipulating the joint 444 forms a variable angle α2 between the arm section 424 and the arm section 426. Likewise, the arm section 426 and the arm section 428 are connected through the joint 446. Manipulating the joint 446 forms a variable angle α3 between the arm section 426 and the arm section 428. By obtaining streamed joint positions from the robot controller, it is possible to use at least one joint angle between adjacent arm sections of the robot to determine the TCP poses.

FIG.5 illustrates a process 50 for modifying a workpiece in accordance with one or more aspects of this disclosure. While it will be appreciated that a number of systems may be configured to perform process 50 in accordance with this disclosure, process 50 is described as being performed by the systems illustrated in FIGS.1 and 2 for ease of discussion. FIG.5 provides an exemplary schematic of how various system components may suitably interact and/or be implemented to carry out such a method, although other interactions are expressly contemplated.

Process 50 may begin with image capture equipment 121 or 221 capturing images (500). The captured images may include one or more of RGB images 501, B&W images 502, 3D images 503, or other images 504. In turn, system 100 or 200 may process the captured images offline (510). For example, system 100 or 200 may perform post-processing operations, comprising at least part registration (i.e., determining position and/or orientation) based on the taken images. The process also optionally includes feature detection (e.g., of features such as an edge, a rib, a seam, a corner, a fiducial marker, etc.) both in the image and on the surface model, and 3D pose estimation based on corresponding features. Additional post-processing operations may also be performed, such as smoothing, filtering, compression, fisheye correction, etc. Image processing optionally incorporates data from the surface model 511 of the workpiece, e.g., a CAD model, a 3D rendering based on images of an identical part, and/or a previously taken laser scan of the same part. In some cases, the post-processing further includes searching for feature correspondences between the captured image (e.g., 501-504) and the surface model 511.

Also, path planner 113 or 213 generates a toolpath (520). The toolpath may be a spatially transformed toolpath in the format of a list of waypoints, based on the estimated position and orientation of the workpiece from at least joint positions (e.g., which can be used to determine TCP poses) or a digital model of a robot 521, the nominal path 522, the captured images 501-504, and/or processing of the images offline. Additionally, an online sensor A 122 or 222 measures workpiece surfaces (530). Process 50 further generates an updated toolpath (540), which incorporates at least measurements of the workpiece surfaces and streamed TCP poses or joint angles 541.
An online sensor B 123 or 223 provides the updated toolpath and, optionally, other information; based on at least this data, process 50 plans a path (550). Process 50 typically incorporates desired modifier characteristics 561 when outputting modifier parameters (560). Additionally, (at least) modification equipment control and a robot controller 575 cooperatively modify the workpiece (570). The modification equipment control may include, for instance and without limitation, at least one of a dispenser control unit 571, a welding equipment control unit 572, a paint repair equipment control unit 573, or some other control unit 574. The actions involved in the process may be carried out in various orders, and some may be repeated numerous times to provide in situ adjustments of the toolpath, workpiece modification, etc.

For instance, FIG.6 illustrates an example system for modifying a workpiece, in which some possible interactions between certain components are depicted. A workpiece 602 is depicted as interacting with each of image capture equipment 604, online sensor B 606, modification equipment control unit 608, robot control unit 610, and online sensor A 612. The image capture equipment 604 can interact with offline registration unit 614. Offline registration unit 614 can further interact with an online localization unit 616 and/or the robot control unit 610. The online localization unit 616 can also interact with the robot control unit 610 and/or a path planner unit 618. The path planner unit can also interact with the robot control unit 610. The robot control unit 610 can interact with each of a modification model 620, the workpiece 602, the modification equipment control unit 608, the path planner 618, the online localization unit 616, and/or the offline registration unit 614. The modification model 620 can interact with each of the online sensor B 606, a machine learning unit 622, the modification equipment control unit 608, and/or the robot control unit 610. Based on the sensed location of a surface of the workpiece 602 and on desired modification parameters, each of the path planner unit 618 and the modification equipment control unit 608 can provide input to the robot control unit 610 regarding specific toolpath and modification characteristics when the system is in use to modify the workpiece 602.

A machine learning unit 622 may be configured to forecast process parameters for modification of a workpiece, such as by providing inputs to the modification model 620. A non-exhaustive list of machine learning techniques that may be used on data obtained from systems or methods of the present disclosure includes: support vector machines (SVM), logistic regression, Gaussian processes, decision trees, random forests, bagging, neural networks, Deep Neural Networks (DNN), linear discriminants, Bayesian models, k-Nearest Neighbors (k-NN), and the gradient boosting algorithm (GBA). However, it is expressly contemplated that other suitable machine learning techniques may be used.

Referring to FIGS.7A–E, given the image capture equipment parameters as well as $n$ one-to-one correspondences between a set of 2D image points $[u_i \ v_i]^T$, $i = 1, \ldots, n$, and a set of 3D points in a 3D model (e.g., a CAD model) $[x_i \ y_i \ z_i]^T$, $i = 1, \ldots, n$,
the 6D pose (3D translation and 3D rotation) of the target in the image capture system can be estimated based on a Perspective-n-Point (PnP) algorithm. The minimum number of 2D-3D correspondences for PnP without ambiguity in the solution is four, but more correspondences are preferred in real applications, so that optimization-based PnP methods can be performed for robust localization when noise is present in the system. Computing the 2D-3D correspondences can be a major challenge in this process. Rather than 2D textures or corner features, which can be found on everyday objects such as labeled bottles and cardboard boxes, edge features (e.g., straight or curved) were selected for identification of 2D-3D correspondences, due to the abundance of edges on all categories of workpieces. Other image features could also or alternatively be used (e.g., Harris corners, ORB, SIFT, BRIEF, SURF, QR tags, etc.). In a 2D image, edge features are sharp changes in pixel intensities. They can be extracted using a Canny edge detector, for instance, which is widely adopted as the default edge detector in image processing, or another suitable model. To achieve automatic edge detection and avoid manual tuning of the thresholds, a parameter-free Canny detector was implemented that determines the lower and upper thresholds based on the median of the image. FIG.7A illustrates edge features 710 of a target (e.g., a part) obtained from image capture equipment. To find 3D edge features in a CAD model (e.g., “3D model edges”) that correspond to the 2D Canny edges in the image, the edges from the CAD model need to be projected to a 2D image (e.g., “2D model edges”) for subsequent search of correspondences in image space. The model edges can be the result of two effects: (1) edge features based on curvature change in the 3D model (denoted as “edges” here), and (2) edge features based on depth change in the 3D model (denoted as “contours” here). FIG.7B illustrates edge features 720 of a target (e.g., a part) obtained from a CAD model, and FIG.7C illustrates contours 730 of a target obtained from the CAD model. One method to identify these two types of edges in a 3D model is described as follows:

Edges: The edges from curvature can be extracted by first computing the angles between the normal directions of each pair of adjacent mesh surfaces in the CAD model (e.g., in the form of a polygon mesh), and then selecting the element edges with angles above a pre-determined threshold (e.g., 90°). Next, given an initial guess of the camera pose, back-face culling is implemented to remove the invisible edges, as they do not correspond to any 2D image edge features.

Contours: The contours are formed due to rapid depth change in the model geometry measured in the image capture equipment coordinate system. The resulting contours in the image plane appear as the boundaries between the workpiece and the background. To reconstruct the contour edges based on the CAD model and an initial guess of the image capture equipment pose, a digital twin of the image capture equipment is implemented to render the CAD model. In the rendered image, the CAD model is masked in black and the background is masked in white, so that the contour edges can be robustly extracted using a Canny edge detector. The 3D coordinates of the extracted 2D contours can then be retrieved based on the depth data from the renderer.
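The parameter-free Canny detector mentioned above is commonly implemented with a median-based thresholding recipe. The sketch below (in Python with OpenCV) illustrates the idea; the sigma value and the image file name are illustrative assumptions, and the exact thresholding scheme used in the implementation described here may differ.

import cv2
import numpy as np

def auto_canny(image, sigma=0.33):
    # Derive the lower and upper Canny thresholds from the median pixel
    # intensity, so no manual tuning is required (a widely used recipe,
    # shown here as one plausible realization, not the exact one used).
    v = float(np.median(image))
    lower = int(max(0, (1.0 - sigma) * v))
    upper = int(min(255, (1.0 + sigma) * v))
    return cv2.Canny(image, lower, upper)

# "workpiece.png" is a hypothetical grayscale image of the part
image = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)
if image is not None:
    edges = auto_canny(image)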
Various software packages may be utilized for 3D edge extraction and contour detection, respectively. In some examples, these software packages may include application programming interfaces (APIs) and/or other combinations of modules/subroutines. In some non-limiting examples, such as in cases in which PyVista and/or Pyrender are used, these software packages may incorporate open-source and/or free-and-open-source software libraries. Multiple formats of CAD model are supported, including STL and OBJ. The overall method for workpiece localization based on 2D-3D correspondences includes the following actions: In a user interface, the user selects a few 2D keypoints in the image that correspond to a set of pre-determined 3D keypoints in the CAD model. A schematic image of the CAD model with keypoint labels is also presented to the user to guide manual selection of the 2D keypoints in the image. A pose is estimated based on the 2D-3D correspondences specified by the user. This pose is utilized as the initial guess of the image capture equipment pose in the following workpiece localization steps (i.e., pose refinement based on the initial guess). New 2D-3D correspondences are then computed based on the current image capture equipment pose. The 2D model edges can be either edges from curvatures or edges from contours. To find correspondences between the 3D model edges and the 2D Canny edges from the camera image, points are first sampled along the 2D model edges with constant spacing and then projected to the image capture equipment image. Second, for each sampled point, the corresponding point on the Canny edges can be calculated via a 1D search along the direction orthogonal to the 2D model edges. FIG.7D illustrates a schematic of a difference between edge features 720 and contours 730 of a target from a CAD model and edge features 710 of a target determined from the image capture equipment. The pose estimation is updated using the PnP-RANSAC method or another suitable method. The computing and the updating are repeated until the pose estimation result converges (e.g., with error below a predetermined threshold). FIG.7E illustrates a schematic of the 1D searching method to find 2D-3D correspondences in the image. The top, angular, line 740 denotes the 2D model edges. The dots 750 denote the sampled points along the 2D model edges. The bottom, curved, line 760 denotes the Canny edge. The algorithm was tested using an iPhone 8 (back) camera with 12MP resolution, f/1.8 aperture, and autofocus as the image capture equipment. A printed chessboard was used for calibration of the intrinsic matrix and distortion parameters of the camera. The workpiece localization algorithm was first tested on a 3D-printed block with a wavy top surface. Eight keypoints at the corners of the block were labeled in the CAD model. For user initialization, a guiding image with the labeled keypoints was shown to the user to aid manual selection of the keypoints in the camera image. Note that not all the keypoints are required to be identified by the user for a successful initialization. The user can pick the keypoints with the best confidence and skip those that are not visible or hard to identify (e.g., the 8th keypoint was skipped in this example). To study the capability of the proposed method to compensate for large positioning errors in user initialization, an artificial positioning error with uniform distribution in the range of (-2, 2) mm was added to the workpiece location calculated based on user initialization. Edge features from the CAD model were detected based on edges from contours and edges from curvatures.
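The refinement loop described above might be sketched as follows with OpenCV. This is a simplified, non-authoritative rendering: the sampled 3D model-edge points, their projected 2D tangent directions, the camera intrinsics, and the search half-width are assumed to be available, and the structure is an illustration rather than the actual implementation of this disclosure.

import cv2
import numpy as np

def refine_pose(pts3d, tangents2d, canny, K, dist, rvec, tvec,
                iters=10, search=15):
    # pts3d: Nx3 points sampled along the 3D model edges (float32).
    # tangents2d: Nx2 unit tangents of the projected 2D model edges
    # (assumed precomputed from the current pose estimate).
    # canny: binary Canny edge image; K, dist: camera intrinsics.
    for _ in range(iters):
        proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
        proj = proj.reshape(-1, 2)
        obj, img = [], []
        for i, (p, t) in enumerate(zip(proj, tangents2d)):
            n = np.array([-t[1], t[0]])              # direction orthogonal to the model edge
            for s in sorted(range(-search, search + 1), key=abs):
                q = np.round(p + s * n).astype(int)  # 1D search, nearest pixel first
                if (0 <= q[1] < canny.shape[0] and 0 <= q[0] < canny.shape[1]
                        and canny[q[1], q[0]]):
                    obj.append(pts3d[i])
                    img.append(q.astype(np.float32))
                    break
        if len(obj) < 4:                             # PnP needs at least four points
            break
        ok, rvec, tvec, _ = cv2.solvePnPRansac(
            np.asarray(obj, np.float32), np.asarray(img, np.float32),
            K, dist, rvec=rvec, tvec=tvec, useExtrinsicGuess=True)
        if not ok:
            break
    return rvec, tvec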
Given the initial guess of the pose of the workpiece, correspondences between the model edges and the Canny edges were then calculated via bi-directional 1D search in the camera image. Ten iterations of pose refinement were conducted on the testing image of the wavy block. The 3D edges from the CAD model were projected to the camera image based on the pose of the workpiece from the initial guess and from the final iteration. Localization error in the image space was characterized by computing the Euclidean distance (L2 norm) from the sampled points along the 2D model edges to their corresponding points along the Canny edges. The mean pixel error of all the sampled points in each iteration is plotted in FIG.8. For the wavy block, the methods based on edges and contours both successfully corrected the localization error from user initialization, with similar convergence speed. The residual errors could be due to geometric differences between the CAD model and the 3D-printed part, as well as the calibration error of the camera focal length.

Online toolpath control based on part surface geometry

Methods according to at least certain embodiments of the present disclosure are advantageously capable of adjusting a robot toolpath so that a workpiece-modifying equipment (e.g., a dispenser nozzle tip) precisely tracks a part surface geometry at a desired (e.g., dispensing) angle with respect to the surface normal and at a desired gap distance from the part surface. The knowns are:
(1) The nominal toolpath that is defined on the CAD model of the part and expressed in the robot coordinate system (based on the part registration process as described above).
(2) The two-dimensional surface profiles streamed from an online sensor A (e.g., a laser profilometer) mounted on the robot.
(3) The pose of the online sensor A relative to the TCP coordinate system, which is fixed to the robot.
(4) The TCP pose of the workpiece-modifying equipment relative to the robot coordinate system, which can be streamed from the robot controller upon request. This, in combination with (2) and (3), can provide the location of the part surface profile relative to the robot coordinate system.
The unknowns are the target workpiece-modifying equipment poses that maintain the desired angle and gap distance relative to the part surface as perceived by the online sensor A in situ (e.g., in near real-time). The difference between these target workpiece-modifying equipment poses and those from the nominal toolpath necessitates online toolpath adjustment. FIG.9A is a top view schematic of a laser line 910 from a laser profilometer that could be used in embodiments of closed-loop part registration. The width d of the laser line may vary, for instance 20 to 100 mm projected onto a part surface from a distance of 100 to 300 mm (although other widths and distances may be suitable). A suitable laser sensor lookahead distance L can be defined by the distance from the TCP to the laser projection plane (e.g., 20 to 100 mm). The laser sensor may be placed high above the TCP tip so that the dimensions of the laser sensor do not affect the reachable radius of the TCP tip. Referring to FIG.9B, a perspective schematic is provided of an assembly including a laser profilometer 920 attached to a robotic arm 930, directing a laser line 910 ahead of a TCP of a workpiece-modifying equipment 940. An example use of the closed-loop part registration can also include dispensing. Such an example is described herein. A toolpath buffer stores a list of unfinished waypoints within a lookahead distance. At each time step, a waypoint is extracted from the toolpath buffer and sent to a robot controller via a Real-Time Data Exchange (RTDE) interface. Within the robot controller, each target waypoint in Cartesian space is transformed to joint positions via an inverse kinematic model, which is then fed to a proportional feedback controller for precise robot motion control. The actual joint positions from the encoders are fed back to the RTDE interface after being transformed back to Cartesian space (i.e., TCP pose) via a forward kinematic model. In the meantime, the laser profilometer streams the part surface profile to a scan data buffer. Once the current TCP pose becomes available from the RTDE interface, the latest surface profile is extracted from the scan data buffer and transformed to the TCP coordinate system. This is done within the online localization unit, which outputs the transformed surface profile to the path planner unit.
The path planner unit then computes the updated waypoints based on the nominal toolpath and the surface profile (to be described in more detail below). The updated waypoints can either be sent to an online path planner for path smoothing or be directly added to the toolpath buffer for future execution by the robot controller. The aforementioned robot controller framework was implemented in Python 3.6 and ran on a Linux virtual machine on a 64-bit Windows 10 laptop (HP ZBook). The specifications of the hardware are as follows: Intel(R) Core(TM) i7-10850 CPU @ 2.79 GHz, 16 GB memory. The robot toolpath is discretized into a sequence of waypoints. Referring to FIG.10A, each waypoint w_i is defined as a tuple consisting of a TCP pose p_i and at least one of tool velocity, tool acceleration, timestamp, or some combination of these. If a timestamp is specified, it takes precedence over tool velocity and acceleration. In this example, timestamps were specified. The motion command for each waypoint is sent to the robot via the RTDE library function, which can take as arguments joint positions, timestamps, velocities, accelerations, lookahead time, and gain. The joint positions can be computed from the Cartesian expression p_i using an inverse kinematic model. Lookahead is a variable that sets the lookahead time in order to smoothen the trajectory. Lookahead may be set to 0.03 s, 0.04 s, 0.05 s, 0.06 s, 0.07 s, 0.08 s, 0.09 s, 0.10 s, 0.11 s, 0.12 s, 0.13 s, 0.14 s, 0.15 s, 0.16 s, 0.17 s, 0.18 s, 0.19 s, or 0.20 s, e.g., optionally being within a range from 0.03 to 0.2 s. Gain is the proportional gain for following the target position. If timestamps are specified, these take precedence over velocities and accelerations; otherwise, the velocities and accelerations are used to compute timestamps. An exemplary method to adjust waypoint locations along the nominal path based on scanner data is visualized in FIG.10B. First, the nominal path 1010 defined in the robot base coordinate system and the scan profile 1020 defined in the scanner coordinate system are all transferred to the TCP coordinate system. Second, the intersection of the nominal path 1010 with the laser projection plane 1030 is computed and becomes the adjusted waypoint. A sequence of the adjusted waypoints then forms the actual toolpath 1040 that is conformal to the part surface (in addition, a dispenser nozzle gap offset value could be added in the final motion command). For example, TCP poses were recorded while a robot traversed the toolpath of a wrinkled paper surface (as a sample irregular part) with waypoint adjustment based on the inline scan data. Referring to FIGS.11A-C, the XYZ components, respectively, of the robot TCP trajectory are plotted, including both the nominal 1110 and actual 1120 paths in each of FIGS.11B-D. It is noted that the nominal and actual paths overlap so extensively in FIG.11A that they are not readily distinguishable from each other. FIG.11D shows a 3D plot of the robot TCP trajectory. The robot followed the nominal path 1110 in the XY plane, as plotted in FIGS.11A-B. The major adjustment took place in the Z direction, where the Z component of the adjusted toolpath followed the irregular shape of the wrinkled paper surface, as plotted in FIG.11C. As a result, the executed toolpath with closed-loop control deviated significantly from the nominal path in order to track the actual part surface.
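The waypoint adjustment of FIG.10B can be sketched as follows, assuming the nominal waypoint and the scan profile have already been transformed into the TCP coordinate system and that the waypoint lies on the laser projection plane; the gap offset is a hypothetical standoff value.

import numpy as np

def adjust_waypoint_z(waypoint_xyz, profile_x, profile_z, gap=2.0):
    # Interpolate the scanned surface height under the waypoint's X location
    # and replace the nominal Z with the measured surface height plus a
    # nozzle gap offset (mm), yielding a waypoint conformal to the surface.
    x, y, _ = waypoint_xyz
    z_surface = np.interp(x, profile_x, profile_z)
    return np.array([x, y, z_surface + gap])

# Example: a flat nominal waypoint adjusted over a wavy scanned profile
profile_x = np.linspace(-40.0, 40.0, 161)
profile_z = 2.0 * np.sin(profile_x / 10.0)
print(adjust_waypoint_z(np.array([5.0, 0.0, 0.0]), profile_x, profile_z))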
Besides adjusting a position of an end effector (e.g., the end-of-arm component that interacts with a surface, such as a dispenser or a sander, etc.) based on the part surface profile, adjustment of the end effector orientation (TCP pose) may also be needed, especially when the actual surface normal of the part deviates significantly from that of the nominal model (i.e., CAD model). Here, the TCP coordinate system is defined such that the Y-axis is aligned with the nozzle axis and the X-axis is aligned with the marching direction of the nozzle (e.g., the same direction as the lookahead direction of the laser scanner). An axis-angle expression is used to represent the 3D orientation adjustment: rotation of an angle around the nozzle axis. Thus, online orientation adjustment can be divided into two steps. Referring to FIG.12A, the direction of the Y-axis (e.g., the axial direction of the nozzle) is first specified based on the surface normal of the part. Referring to FIG.12B, second, the rotation angle around the Y-axis (e.g., rotation in the XZ plane) is specified. These principles also apply to any other type of workpiece-modifying equipment (e.g., a sander, a polisher, a cutter, a drill, a sprayer, a welder, etc.).

Orientation adjustment based on surface normal

The surface normal of a part at a specific dispensing location may be estimated based on multiple slices of laser scans acquired ahead of and behind the dispensing location along the toolpath. The surface profiles from multiple laser scans form a point cloud that approximates the part surface geometry around the nozzle location. A polynomial fit (e.g., a plane fit or a paraboloid fit) can be applied to the point cloud to solve for the surface normal analytically. Here, a plane was fit to the point cloud as an example, which can provide a good estimation of the local surface shape when the sampled region is small. Referring to FIG.13, an exemplary schematic of an estimated surface normal of a curved part surface is shown as determined according to the above process. The locally fitted plane 1300 is depicted including three slices of the scan profile 1320, three trajectory waypoints 1330, and the axial direction of the nozzle 1340. Based on the computed surface normal at each waypoint, the nozzle axis (Y-axis) can be set to be always aligned with the estimated normal direction of the part surface. Non-perpendicularity can also be imposed on the orientation adjustment, for instance, by first setting the nozzle axis to be the surface normal direction, and then rotating the nozzle to maintain an arbitrary angle between the nozzle Y-axis and the estimated normal direction of the workpiece surface (e.g., rotating the nozzle around the marching/lookahead direction (X-axis) by an angle α, as depicted in the schematic illustration of FIG.14A). This could be useful in scenarios such as avoiding collision of the nozzle 1440 with the part 1450 when dispensing a bead 1460 in a narrow space, for instance as depicted in the schematic illustration of FIG.14B. Another useful implementation of non-perpendicularity may be for controlling the width of a dispensed bead 1460 via variable jet printing angles of a nozzle 1440, for example as depicted in the schematic illustration of FIG.14C. These principles also apply to any other type of workpiece-modifying equipment (e.g., a sander, a polisher, a cutter, a drill, a sprayer, a welder, etc.).
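Returning to the plane-fit normal estimation described above, a minimal sketch is shown below: the normal of the least-squares plane through a small point cloud is the right singular vector associated with the smallest singular value of the centered points. The sample points are illustrative.

import numpy as np

def surface_normal(points):
    # Fit a plane to the point cloud (slices of laser scans around the
    # nozzle) in the least-squares sense; the plane normal is the singular
    # vector with the smallest singular value of the centered points.
    centered = points - points.mean(axis=0)
    normal = np.linalg.svd(centered)[2][-1]
    return normal if normal[2] >= 0 else -normal  # consistent orientation

pts = np.array([[0.0, 0.0, 0.00], [1.0, 0.0, 0.10], [0.0, 1.0, 0.05],
                [1.0, 1.0, 0.12], [0.5, 0.5, 0.07]])
print(surface_normal(pts))  # approximately the +Z direction for this patch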
Surface normal estimation based on the point cloud could possibly be subject to errors caused by sensory noise from the online sensor (e.g., laser profilometer). Additionally, small features such as small step changes on the part surface could lead to non-smooth transitions of the surface normal along the robot toolpath. Given such issues with surface normal estimation, the robot may exhibit jerky moves when following the surface normal, resulting in reduced modification (e.g., dispensing) quality. Two approaches are proposed to address these issues. First, thresholding may be applied to the change of angle between two subsequent surface normal vectors along the toolpath (e.g., at t_(n-1) and t_n). If this angle is larger than a predetermined value, the surface normal estimated for time stamp t_n will not be updated; the surface normal at t_(n-1) will be assigned to t_n instead. Second, outliers in the point cloud may be identified and rejected automatically before applying plane fitting for surface normal estimation. Random sample consensus (RANSAC) is one suitable example method to determine an outlier-free set within the point cloud in an iterative way (see, e.g., Fischler, Martin A., and Robert C. Bolles. “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography.” Communications of the ACM 24.6 (1981): 381-395). For instance, RANSAC may be performed by following the actions below:
(1) Three points are randomly selected from the point cloud, which form a set of hypothetical inliers.
(2) A plane (Ax + By + Cz + D = 0) is fitted to the set of hypothetical inliers.
(3) Distances from all other points in the point cloud to the fitted plane are computed, which represent plane fitting errors.
(4) Points with distance errors smaller than a predetermined threshold are classified as part of the consensus set.
(5) If the size of the consensus set is large enough (e.g., more than 90 percent of the total number of points in the point cloud), a refined plane fitting is conducted on the consensus set (linear least squares estimation). Otherwise, go to (1) until reaching the maximum iteration number set by the user.
Although RANSAC is based on a stochastic process, it can achieve (e.g., near) real-time performance with a small number of iterations. It has been shown that the probability p of finding an outlier-free set of points can be computed based on the following equation:

p = 1 - (1 - (1 - e)^s)^t

wherein e denotes the probability that a point is an outlier, s denotes the minimum number of points needed to fit a model (i.e., to form a set of hypothetical inliers), and t denotes the total number of iterations. Given that s = 3 for the plane fitting scenario and assuming that 20% of the points in the point cloud are outliers (e = 0.2), one can solve for the number of iterations t that results in an outlier-free set with probability p = 0.99:

t = log(1 - p) / log(1 - (1 - e)^s) = 6.42

This means that, with a probability of 99%, at most seven iterations are needed to produce a surface normal estimation without the disturbance of outliers.
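The iteration count above can be reproduced directly from the stated equation, as in the short check below (values as given in the text).

import math

def ransac_iterations(p=0.99, e=0.2, s=3):
    # t = log(1 - p) / log(1 - (1 - e)^s): iterations needed so that, with
    # probability p, at least one random sample of s points is outlier-free
    # when a fraction e of the points are outliers.
    return math.log(1.0 - p) / math.log(1.0 - (1.0 - e) ** s)

print(ransac_iterations())  # ~6.42, i.e., at most seven iterations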
Surface normal tracking combined with online Z-tracking was tested on wrinkled papers, and the nominal path was defined in the XY plane of the robot coordinate system. The time parametrization of the nominal path was set based on a tool speed of 20 mm/s. Robot orientation along the nominal path was set to be constant, with the nozzle axis aligned with the Z-axis of the robot coordinate system, and the lookahead direction at each waypoint was set to be tangent to the toolpath. The parameter settings of the surface normal estimation algorithm were as follows: subsampling of laser points in each scan slice was applied to reduce the size of the final point cloud for surface normal estimation. Specifically, one of every four points was sampled around the adjusted waypoint from the Z-tracking algorithm. Referring to FIG.15, a graph is provided of computed surface normal vectors plotted along the robot toolpath.

Orientation adjustment for optimal field of view

After the direction of the nozzle axis is specified, an arbitrary angle of rotation around that axis could be assigned to the nozzle without affecting the dispensing quality, because of its centrosymmetric geometry. However, when considering a laser scanner assembly which projects a laser line in the lookahead direction, as depicted in FIG.16A, the rotation angle around the nozzle axis can affect the coverage of the scanning region. Typically, this rotation angle is specified such that the lookahead direction aligns with the tangential direction of the toolpath. Referring now to FIG.16B, this default setting could result in observability issues when the robot traverses a curved toolpath with a small turning radius. Let L denote the distance from the laser projection plane to the TCP, and w denote half of the width of the projected laser line. The critical turning radius r_c is defined as:

r_c = (L^2 + w^2) / (2w)

When the turning radius r is smaller than the critical radius r_c, the toolpath does not fall into the scanning region of the laser line and thus is not observable to the laser sensor. Consequently, online adjustment of waypoints along the toolpath (e.g., for Z-tracking) based on laser sensor feedback is not possible. Only when r is larger than or equal to r_c is the toolpath observable by the laser for potential closed-loop adjustment. For instance, in the tested dispensing system, r_c = 68 mm (L = 66 mm, w = 50 mm when the scanning height is 240 mm). However, in scenarios where dispensing around tight corners is required, this limitation of the turning radius for closed-loop adaptive dispensing could result in unpredictable bead quality at the corner regions that are not observable by the online sensor.
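The critical radius expression is consistent with simple circle geometry: for a circular toolpath of radius r, the lateral deviation at the lookahead distance L is r - sqrt(r^2 - L^2), and setting this deviation equal to the half-width w gives r_c = (L^2 + w^2) / (2w). The short check below reproduces the reported value.

def critical_turning_radius(L, w):
    # Smallest turning radius for which the toolpath still intersects the
    # projected laser line at lookahead distance L and half-width w.
    return (L ** 2 + w ** 2) / (2.0 * w)

print(critical_turning_radius(66.0, 50.0))  # ~68.6 mm, matching the ~68 mm reported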
To guarantee that the waypoint to be updated at the front of the lookahead horizon is always observable, the rotation angle around the nozzle axis may be set such that the online sensor is always pointing at the newly updated waypoint from the previous time step, for instance as depicted in the schematic top view illustrations of in-plane rotation around the nozzle axis along the toolpath in FIGS.17A-B. Without in-plane rotation, the waypoint to be updated 1710 within the lookahead horizon 1720 is not visible to the laser sensor when the robot follows the tangent direction of the toolpath. The next waypoint to follow 1730 and the front laser line 1740 are also indicated in FIGS.17A-B. FIG.17B further indicates the most recently updated waypoint 1750. With in-plane rotation, the robot orientation is adjusted such that the waypoint to be updated always falls inside the field of view of the laser sensor. Specifically, when the newly added slice of surface profile at the front laser line is added, the adjusted waypoint for the TCP within that scan slice becomes the most recently updated waypoint. After the adjusted nozzle axis is computed (e.g., based on surface normal estimation) for the next waypoint, the new lookahead direction is computed via the so-called in-plane rotation (a rotation within a plane perpendicular to the nozzle axis, so that the lookahead direction points from the next waypoint to the most recently updated waypoint).

Tracking rib features

On various parts (e.g., automotive parts such as spoiler assemblies), rib features having a small elevation from the part surface are present as markers that indicate locations for adhesive bead deposition or ultrasonic welding. The ability to track these rib features can enable precise and repeatable deposition of adhesives at desired locations for optimal bonding performance. One suitable method to adjust waypoint locations along the nominal path based on rib locations is depicted in FIG.18. First, the nominal path 1810 defined in the robot base coordinate system and the scan profile 1820 defined in the scanner coordinate system are all transferred to the TCP coordinate system. Second, the intersection of the nominal path 1810 with the laser projection plane 1830 is computed by finding the first waypoint on the nominal toolpath that is behind the laser projection plane. This will be the waypoint to be updated based on the actual rib locations. Third, rib peaks are detected; if two rib peaks are detected from the profile, the midpoint on the surface profile between the two peaks is computed, and this midpoint becomes the position of the adjusted waypoint. If no rib feature is detected, the strategy for Z-tracking can be executed instead. The time stamp from the original waypoint is then assigned to the adjusted waypoint. A sequence of the adjusted waypoints forms the actual toolpath 1840 that is conformal to the part surface and also follows the midpoint of the parallel ribs. This detection method was tested on a curvilinear surface with rib features. The rib feature tracking function, combined with the orientation adjustment method described above, was tested on a fixture with parallel ribs on a curvilinear surface. The 3D nominal path was defined as a piecewise linear trajectory passing through the space between the parallel ribs but not precisely following the midline. The time parametrization of the nominal path was set based on a tool speed of 20 mm/s. Robot orientation along the nominal path was set to be constant, with the nozzle axis aligned with the Z-axis of the robot coordinate system. FIG.19 shows a photograph of the test fixture 1910 with curvilinear surface 1920 and parallel ribs 1930, with the directions of the XYZ axes of the robot base coordinate system labeled. The TCP poses were recorded while the robot traversed the nominal toolpath with rib feature tracking, as plotted in FIGS.20A-E, including both the nominal 2010 and actual 2020 paths in each of FIGS.20B-D. It is noted that the nominal and actual paths overlap so extensively in FIG.20A that they are not readily distinguishable from each other. The toolpath was adjusted in each of the X, Y, and Z directions, so that the resulting trajectory precisely followed the midline of the parallel ribs.
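A minimal sketch of the rib-midpoint computation on a single scan slice follows, using SciPy peak detection; the rib prominence threshold and the synthetic two-rib profile are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks

def rib_midpoint(profile_x, profile_z, rib_height=0.5):
    # Detect rib peaks in the scan slice and return the surface point midway
    # between the outermost pair; rib_height is a hypothetical prominence
    # threshold (mm) for what counts as a rib.
    peaks, _ = find_peaks(profile_z, prominence=rib_height)
    if len(peaks) < 2:
        return None                       # no rib pair: fall back to Z-tracking
    x_mid = 0.5 * (profile_x[peaks[0]] + profile_x[peaks[-1]])
    z_mid = np.interp(x_mid, profile_x, profile_z)
    return x_mid, z_mid

# Synthetic slice with two parallel ribs at x = -8 and x = +8
x = np.linspace(-20.0, 20.0, 401)
z = np.exp(-(x + 8.0) ** 2) + np.exp(-(x - 8.0) ** 2)
print(rib_midpoint(x, z))  # midpoint near x = 0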
Tracking edge features

Another example of surface feature tracking is edge detection. The goal is to detect the edge of a part and to track the waypoints with desired offsets from the detected edge. For instance, one potential application in the automotive industry is to apply a structural adhesive along the edge of a metallic sheet for subsequent bonding with another panel to form a door panel assembly. The metallic sheet may have a variable edge location due to manufacturing tolerances. Without a closed-loop dispensing system that is capable of performing inline tracking of edge features, the dispensed adhesive bead cannot precisely follow the edge line on multiple panels with variable shape, thus resulting in inconsistent bonding quality. Such a method to adjust waypoint locations along the nominal path based on edge location is depicted in FIG.21. First, the nominal path 2110 defined in the robot base coordinate system and the scan profile 2120 defined in the scanner coordinate system are all transferred to the TCP coordinate system. Second, the intersection of the nominal path with the laser projection plane 2130 is computed by finding the first waypoint on the nominal toolpath 2110 that is behind the laser projection plane. This will be the waypoint to be updated based on the scan profile. Third, the edge point is detected in the scan profile, and the adjusted waypoint is computed such that it maintains a predetermined offset from the edge. The time stamp from the original waypoint is then assigned to the adjusted waypoint. A sequence of the adjusted waypoints forms the actual toolpath 2140 that is conformal to the part surface and also follows the contour of a curved edge. The edge feature tracking function, combined with the orientation adjustment method described above, was tested on a fixture with a 3D curved edge. The 3D nominal path was defined as a piecewise linear trajectory on one side of the curved edge. The goal was to adjust the nominal toolpath to maintain a constant offset of 10 mm from the actual edge. The time parametrization of the nominal path was set based on a tool speed of 20 mm/s. Robot orientation along the nominal path was set to be constant, with the nozzle axis aligned with the Z-axis of the robot coordinate system, as depicted in FIG.22, which is a photograph of the test fixture 2210 having a curved edge 2220 and with the directions of the XYZ axes of the robot base coordinate system labeled. The TCP poses were recorded while the robot traversed the toolpath with waypoint adjustment based on the inline scan data. The robot followed the actual edge contour in the X, Y, and Z directions, as shown in FIGS.23A-C, including both the nominal 2310 and actual 2320 paths in each of FIGS.23B-D. It is noted that the nominal and actual paths overlap so extensively in FIG.23A that they are not readily distinguishable from each other. Because the general trend of the edge line follows the X direction, the major adjustment induced by the inline feature tracking was most obvious in the Y and Z directions. As a result, the adjusted toolpath based on edge tracking deviated significantly from the nominal path in order to maintain the 10 mm offset from the curved edge, as plotted in FIG.23D. FIG.25 illustrates a workpiece modification system architecture. Architecture 2500 illustrates one embodiment of an implementation of a workpiece modification system 2510. As an example, architecture 2500 can provide computation, software, data access, and storage services that do not require end-user knowledge of the physical location or configuration of the system that delivers the services.
In various embodiments, remote servers can deliver the services over a wide area network, such as the internet, using appropriate protocols. For instance, remote servers can deliver applications over a wide area network, and they can be accessed through a web browser or any other computing component. Software or components shown or described in FIGS.1-24, as well as the corresponding data, can be stored on servers at a remote location. The computing resources in a remote server environment can be consolidated at a remote data center location or they can be dispersed. Remote server infrastructures can deliver services through shared data centers, even though they appear as a single point of access for the user. Thus, the components and functions described herein can be provided from a remote server at a remote location using a remote server architecture. Alternatively, they can be provided by a conventional server, installed on client devices directly, or in other ways. In the example shown in FIG.25, some items are similar to those shown in earlier figures. FIG.25 specifically shows that a controller 2510 can be located at a remote server location 2502. Therefore, a computing device 2520 accesses the controller 2510 through the remote server location 2502. A user 2550 can use the computing device 2520 to access user interfaces 2522 as well. For example, a user 2550 may be a user wanting to check on the progress of modification of a workpiece while sitting in a parking lot, interacting with an application on the user interface 2522 of their smartphone 2520, laptop 2520, or other computing device 2520, e.g., an augmented reality (AR) device such as AR glasses. FIG.25 shows that it is also contemplated that some elements of systems described herein are disposed at a remote server location 2502 while others are not. By way of example, each of a data store 2530, an inspection system 2560, and the modifier 2570 can be disposed at a location separate from the location 2502 and accessed through the remote server at location 2502. Regardless of where it is located, the data store 2530 can be accessed directly by a computing device 2520, through a network (either a wide area network or a local area network), hosted at a remote site by a service, provided as a service, or accessed by a connection service that resides in a remote location. Also, the data can be stored in substantially any location and intermittently accessed by, or forwarded to, interested parties. For instance, physical carriers can be used instead of, or in addition to, electromagnetic wave carriers. This may allow a user 2550 to interact with the controller 2510 through their computing device 2520. It will also be noted that the elements of systems described herein, or portions of them, can be disposed on a wide variety of different devices. Some of those devices include servers, desktop computers, laptop computers, embedded computers, industrial controllers, tablet computers, or other mobile devices, such as palmtop computers, cell phones, smart phones, multimedia players, personal digital assistants, etc. FIGS.26-28 illustrate example devices that can be used in the embodiments shown in the previous figures. FIG.26 illustrates an example mobile device that can be used in the embodiments shown in the previous figures.
FIG.26 is a simplified block diagram of one illustrative example of a handheld or mobile computing device that can be used as either a worker's device or a supervisor/safety officer device, for example, in which the present system (or parts of it) can be deployed. For instance, a mobile device can be deployed in the operator compartment of a computing device for use in generating, processing, or displaying the data. FIG.26 provides a general block diagram of the components of a mobile cellular device 2616 that can run some of the components shown and described herein. The mobile cellular device 2616 interacts with them, or runs some and interacts with some. In the device 2616, a communications link 2613 is provided that allows the handheld device to communicate with other computing devices and, under some embodiments, provides a channel for receiving information automatically, such as by scanning. Examples of the communications link 2613 include allowing communication through one or more communication protocols, such as wireless services used to provide cellular access to a network, as well as protocols that provide local wireless connections to networks. In other examples, applications can be received on a removable Secure Digital (SD) card that is connected to an interface 2615. The interface 2615 and communication links 2613 communicate with a processor 2617 along a bus 2619 that is also connected to a memory 2621 and input/output (I/O) components 2623, as well as a clock 2625 and a location system 2627. The I/O components 2623, in one embodiment, are provided to facilitate input and output operations, and the device 2616 can include input components such as buttons, touch sensors, optical sensors, microphones, touch screens, proximity sensors, accelerometers, and orientation sensors, and output components such as a display device, a speaker, and/or a printer port. Other I/O components 2623 can be used as well. The clock 2625 illustratively comprises a real-time clock component that outputs a time and date. It can also provide timing functions for the processor 2617. Illustratively, the location system 2627 includes a component that outputs a current geographical location of the device 2616. This can include, for instance, a global positioning system (GPS) receiver, a LORAN system, a dead reckoning system, a cellular triangulation system, or other positioning system. It can also include, for example, mapping software or navigation software that generates desired maps, navigation routes, and other geographic functions. The memory 2621 stores an operating system 2629, network settings 2631, applications 2633, application configuration settings 2635, a data store 2637, communication drivers 2639, and communication configuration settings 2641. The memory 2621 can include all types of tangible volatile and non-volatile computer-readable memory devices. It can also include computer storage media (described below). The memory 2621 stores computer readable instructions that, when executed by the processor 2617, cause the processor to perform computer-implemented steps or functions according to the instructions. The processor 2617 can be activated by other components to facilitate their functionality as well. It is expressly contemplated that, while a physical memory store 2621 is illustrated as part of the device, cloud computing options, where some data and/or processing is done using a remote service, are available.
While shown as a single unit for ease of illustration, it will be appreciated that the processor 2617, in various examples in line with the aspects of this disclosure, may include one or more processors, including one or more microprocessors, CPUs, GPUs, DSPs, application-specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), processing circuitry (e.g., fixed function circuitry, programmable circuitry, or any combination of fixed function circuitry and programmable circuitry), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term "processor" or "processing circuitry" may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this disclosure. FIG.27 shows that the device can also be a smart phone 2771. The smart phone 2771 has a touch sensitive display 2773 that displays icons or tiles or other user input mechanisms 2775. The mechanisms 2775 can be used by a user to run applications, make calls, perform data transfer operations, etc. In general, the smart phone 2771 is built on a mobile operating system and offers more advanced computing capability and connectivity than a feature phone. Note that other forms of the devices are possible. While FIG.27 illustrates an embodiment where a device 2700 is a smart phone 2771, it is expressly contemplated that a display may be presented on another computing device. FIG.28 is one example of a computing environment in which elements of systems and methods described herein, or parts of them (for example), can be deployed. With reference to FIG.28, an example system for implementing some embodiments includes a general-purpose computing device in the form of a computer 2810. Components of the computer 2810 may include, but are not limited to, a processing unit 2820 (which can comprise a processor), a system memory 2830, and a system bus 2821 that couples various system components, including the system memory, to the processing unit 2820. The system bus 2821 may be any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, and a local bus using any of a variety of bus architectures. Memory and programs described with respect to systems and methods described herein can be deployed in corresponding portions of FIG.28. The computer 2810 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by the computer 2810 and includes both volatile/nonvolatile media and removable/non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media and communication media. Computer storage media is different from, and does not include, a modulated data signal or carrier wave. It includes hardware storage media including both volatile/nonvolatile and removable/non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 2810. Communication media may embody computer readable instructions, data structures, program modules, or other data in a transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. The system memory 2830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 2831 and random-access memory (RAM) 2832. A basic input/output system 2833 (BIOS), containing the basic routines that help to transfer information between elements within the computer 2810, such as during start-up, is typically stored in ROM 2831. RAM 2832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by the processing unit 2820. By way of example, and not limitation, FIG.28 illustrates an operating system 2834, application programs 2835, other program modules 2836, and program data 2837. The computer 2810 may also include other removable/non-removable and volatile/nonvolatile computer storage media. By way of example only, FIG.28 illustrates a hard disk drive 2841 that reads from or writes to non-removable, nonvolatile magnetic media, a nonvolatile magnetic disk 2852, an optical disk drive 2855, and a nonvolatile optical disk 2856. The hard disk drive 2841 is typically connected to the system bus 2821 through a non-removable memory interface, such as interface 2840, and the optical disk drive 2855 is typically connected to the system bus 2821 by a removable memory interface, such as interface 2850. Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc. The drives and their associated computer storage media discussed above and illustrated in FIG.28 provide storage of computer readable instructions, data structures, program modules, and other data for the computer 2810. In FIG.28, for example, a hard disk drive 2841 is illustrated as storing operating system 2844, application programs 2845, other program modules 2846, and program data 2847. Note that these components can either be the same as or different from operating system 2834, application programs 2835, other program modules 2836, and program data 2837. A user may enter commands and information into the computer 2810 through input devices such as a keyboard 2862, a microphone 2863, and a pointing device 2861, such as a mouse, trackball, or touch pad. Other input devices (not shown) may include a joystick, game pad, satellite receiver, scanner, or the like.
These and other input devices are often connected to the processing unit 2820 through a user input interface 2860 that is coupled to the system bus, but they may be connected by other interface and bus structures. A visual display 2891 or other type of display device is also connected to the system bus 2821 via an interface, such as a video interface 2890. In addition to the monitor, computers may also include other peripheral output devices, such as speakers 2897 and a printer 2896, which may be connected through an output peripheral interface 2895. The computer 2810 is operated in a networked environment using logical connections, such as over a local area network (LAN) 2871 or a wide area network (WAN) 2873, to one or more remote computers, such as a remote computer 2880. When used in a LAN networking environment, the computer 2810 is connected to the LAN 2871 through a network interface or adapter 2870. When used in a WAN networking environment, the computer 2810 typically includes a modem 2872 or other means for establishing communications over the WAN 2873, such as the Internet. In a networked environment, program modules may be stored in a remote memory storage device. FIG.28 illustrates, for example, that remote application programs 2885 can reside on a remote computer 2880.

Velocity-based Closed-loop Control of Bead Shape

Aspects of this disclosure are directed to velocity-based closed-loop control of bead shape. Automation capabilities widely exist for the application of liquid adhesives and hot-bonded thermoplastics. Adhesives development has yielded dispensable pressure sensitive adhesives (PSA) and structural adhesives. The robotically dispensed adhesive (RDA) platforms described above better address user needs for complex bonding applications by providing automated dispensing solutions. Material flowrate inconsistencies have been observed during this process, due to factors such as batch-to-batch variation of the adhesive material and back pressure at the beginning of the dispensing process. These material flowrate inconsistencies may lead to inconsistent bead geometry, which tends to negatively impact yield rate and bonding performance. Systems of this disclosure address and mitigate several potential problems. As one example, systems of this disclosure may address the need for online bead sensing for cycle-to-cycle dispensing process control. During cycle-to-cycle dispensing control, bead shape data collected from past process cycles are used to generate control commands for the current cycle. This helps to compensate for system and environmental uncertainties/disturbances that are consistent or change slowly over multiple process cycles, such as batch-to-batch variation of the adhesive material and fluctuation of humidity/temperature on the production line. The bead shape data for cycle-to-cycle process control can be acquired with offline or online sensing systems. Offline sensing either requires two separate runs in a cycle for dispensing and scanning, respectively (a dispense-then-scan process), using a single robot arm, or requires two robot arms working in parallel for dispensing and bead sensing. Alternatively, online bead sensing with coordinated robot motion requires only a single run per cycle using a single robot arm (dispense-while-scan), with the potential benefit of reduced cycle time and system complexity compared to offline sensing.
Another example of the potential problems that the systems of this disclosure address is the need for online bead sensing and a velocity-based controller operable to provide in-cycle process control to improve bead quality and yield rate. During in-cycle process control, the systems of this disclosure may collect bead shape data in a real-time fashion to generate control commands for the current cycle. Compared to cycle-to-cycle process control, in-cycle control provides a faster response to a detected bead defect. In-cycle control compensates not only for slow-varying disturbances from the system and environment, but also for incidental disturbances that result in cycle-to-cycle variation in material flowrate. Working jointly with a cycle-to-cycle process controller, the in-cycle controller of this disclosure may further improve bead quality on each sample, and may reduce the number of defective samples, thereby resulting in a higher yield rate for end users. The online bead sensing system for in-cycle control may capture bead parameters with low latency and high temporal resolution, coupled with an advanced detection algorithm to robustly extract bead parameters such as bead shape and location. Given the detected bead shape error, one control strategy is to adjust the material flowrate via changing dispensing variables such as extruder motor speed (e.g., for screw-driven dispensers), pump pressure (e.g., for pressure-driven dispensers), or temperature (e.g., for hot-melt adhesives). There are several drawbacks of this approach for in-cycle control. One is the high complexity of the time-varying multiphysics model representing the dispensing system, which makes it difficult to compute the proper control inputs based on a desired material flowrate. Another is the possibility of missing the control time window due to the low responsiveness of dispensing variables: for hot-melt adhesive dispensing, after a new extruder motor speed or heater temperature set point is commanded, there is normally a significant time delay before the desired amount of flowrate change is reflected at the nozzle tip, due to factors such as the viscoelasticity of the adhesive material that is fed through a long pathway (e.g., a hose) and the long heating/cooling time needed to reach the target material temperature. As an alternative control strategy, the velocity-based bead shape controller of this disclosure may provide more rapid compensation for bead shape error without the need for direct control of material flowrate. Compared to dispensing variables such as pressure and temperature, robot tool velocity can be adjusted more rapidly in response to over- or under-extrusion detected by the bead sensing system. The controllers of this disclosure are also designed and configured to handle the sensor noise and latency that are inevitable in the online sensing system, in order to deliver smooth and stable robot motion with variable tool velocity. Systems and techniques of this disclosure are directed to material dispensing with online bead sensing and closed-loop feedback control of tool velocity to improve the precision of the dispensing process. FIG.29 illustrates an example system of this disclosure. An example of this disclosure incorporates one or more of:
1. a system with a dispensing robot mounted with a bead sensor (examples of which are described below);
2. estimation processes for bead/seam parameters based on laser profilometer data (examples of which are described below):
a. bead/seam location estimation techniques based on rising/falling edge detection (examples of which are described below);
b. techniques for bead geometric parameter estimation (e.g., width, thickness, section area) based on bead profile segmentation (examples of which are described below);
3. techniques for robot tool velocity control to compensate for detected bead shape error (examples of which are described below):
a. techniques for path streaming with variable tool velocity and consistent time step duration based on time-reparameterization (examples of which are described below); and
b. techniques that determine a tool velocity set point, aided by noise filtering and conditional checking of time-series data of bead parameters and tool velocity (examples of which are described below).
Aspects of this disclosure use feedback data from a bead sensor to adjust the dispense velocity in order to compensate for bead shape error. For instance, various techniques of this disclosure employ tool velocity as the independent variable to correct detected bead shape error, using a robust velocity controller that takes into account online bead parameter estimation, noise handling, and steady-state checking. Robotic dispensing technology, such as that related to advanced robotics and/or RDA platforms, is described in International Patent Applications with Publication Numbers WO2020/174394, WO2020/174397, WO2021/074744, and WO2021/124081, the entire disclosure of each of which is incorporated herein by reference. The velocity-based bead control aspects of this disclosure represent improvements in the areas of closed-loop robotic dispensing apparatuses and techniques for compensation of part shape and bead shape variation. FIG.30 illustrates a non-limiting example of a closed-loop robotic dispensing testbed of this disclosure. The system of FIG.30 includes a six-degree-of-freedom (6-DoF) robot arm (such as a model UR10 available from Universal Robots), a hot-melt adhesive dispenser mounted on the robot arm, a first laser profilometer (part scanner) mounted on one side of the dispenser for part surface scanning, and a second laser profilometer (bead scanner) mounted on the other side of the dispenser for bead shape sensing. Both scanners are laser profilometers in the particular example of FIG.30, while it will be appreciated that other types of scanners are compatible with the systems of this disclosure. FIG.31 is a block diagram illustrating an example of closed-loop robotic dispensing according to aspects of this disclosure. The "bead scanner" and "velocity adjustment" blocks of FIG.31 pertain to the velocity-based bead shape control aspects of this disclosure. In one example, the controller program of FIG.31 was implemented in Python 3 and can run on a personal computer (PC) equipped with either a Linux®-based or Windows®-based operating system (OS). In this example, the PC-based controller was connected to a UR CB3 controller via a TCP/IP connection using an Ethernet® cable. In this example, various actions of streaming robot motion commands and reading the current robot pose were communicated via a Real-Time Data Exchange (RTDE) interface at a frequency of 125 Hz (maximum). The PC-based controller was also connected to the part scanner and bead scanner via UDP using Ethernet® cables. 2D laser profiles were streamed at a frequency of up to 4 kHz from the scanners to a scan data buffer on the PC-based controller using the scanCONTROL software development kit (SDK) available from Micro-Epsilon.
In the example of FIG.31, within the UR robot controller, each target waypoint in Cartesian space is transformed to joint positions via an inverse kinematic model of the robot, which is then fed to a proportional feedback controller for motion control. The actual joint positions from the encoders are fed back to the RTDE interface after being transformed to the tool center point (TCP) pose via a forward kinematic model of the robot. In the example of FIG.31, within the PC-based controller, a nominal path defining projected adhesive bead locations on the workpiece/part is first generated using an offline path planner, which includes part registration methods and a CAD-to-path process (where "CAD" stands for computer-aided design). The user-defined nominal path is then discretized into sequences of waypoints for robot execution. At each computational step (e.g., with cycle times of 10 ms-20 ms) during the closed-loop dispensing process, the current robot TCP is first read from the robot controller. Next, the most recent laser profiles from the scan data buffer of both scanners are transformed to the robot TCP coordinate system, and are used to determine the adjustment for waypoint pose and velocity. Next, the updated waypoints are added to the toolpath buffer for future execution by the UR motion controller. FIG.32 is a schematic showing an example format of a toolpath for the closed-loop robotic dispensing techniques of this disclosure. Each waypoint w_i along the robot toolpath is defined as a tuple consisting of a time step duration Δt_i and a TCP pose p_i. In some examples of experiments conducted with respect to the techniques of this disclosure, tool velocity is not defined explicitly in this waypoint expression (direct velocity control is not provided by the UR CB3 controller), but it can be controlled implicitly by varying the value of the time step duration Δt_i. In the example of FIG.32, the motion command for each waypoint is sent to the robot via the RTDE command servoJ(q, Δt, lookahead, gain). q is a vector of the joint positions, which can be computed from the Cartesian expression p_i using an inverse kinematic model. Δt represents the blocking time to move to the next pose, which is the same as Δt_i. Lookahead is a variable used to set the lookahead time in order to smoothen the trajectory (ranging from 0.03 s to 0.2 s). Gain represents the proportional gain to track the target position (ranging from 100 to 2000). According to aspects of the bead sensing techniques of this disclosure, the 2D surface profile captured by the bead scanner can be expressed as points in an XZ plane. In this example, the Z-axis defines the direction of laser projection. The bead scanner used for a particular experiment of this disclosure has a measuring range of [190, 290] (mm) along the Z-axis and [-72, 72] (mm) along the X-axis. In each computation cycle, the bead sensing algorithm, at runtime, takes a 2D profile consisting of 640 points as the raw data input, and extracts useful bead parameters via bead location detection (described below in greater detail) and bead geometry estimation (described below in greater detail). These parameters are saved and used as sensory inputs for the velocity-based bead shape controller that is described below in greater detail. FIGS.33A-B illustrate aspects of bead location detection according to aspects of this disclosure.
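A sketch of this streaming scheme is shown below using the open-source ur_rtde Python bindings (an assumption; the implementation described in this disclosure may use a different RTDE wrapper). Tool velocity is controlled implicitly by varying the blocking time Δt_i sent with each waypoint; the robot IP address and the default lookahead/gain values are placeholders.

import rtde_control  # ur_rtde Python bindings, assumed available

rtde_c = rtde_control.RTDEControlInterface("192.168.0.10")  # hypothetical robot IP

def stream_waypoints(waypoints, lookahead=0.1, gain=300):
    # Each waypoint is a (dt_i, pose_i) tuple; a shorter dt_i over the same
    # spatial step yields a higher tool velocity, so velocity is commanded
    # implicitly through time reparameterization.
    for dt, pose in waypoints:
        q = rtde_c.getInverseKinematics(pose)    # Cartesian pose -> joint positions
        rtde_c.servoJ(q, 0.0, 0.0, dt, lookahead, gain)  # velocity/accel args unused by servoJ
    rtde_c.servoStop()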
According to aspects of bead sensing techniques of this disclosure, a 2D surface profile captured by the bead scanner can be expressed as points in an XZ plane. In this example, the Z-axis defines the direction of laser projection. The bead scanner used for a particular experiment of this disclosure has a measuring range of [190, 290] (mm) along the Z-axis and [-72, 72] (mm) along the X-axis. In each computation cycle, the bead sensing algorithm, at runtime, takes a 2D profile consisting of 640 points as the raw data input, and extracts useful bead parameters via bead location detection (described below in greater detail) and bead geometry estimation (described below in greater detail). These parameters are saved and used as sensory inputs for the velocity-based bead shape controller that is described below in greater detail.

FIGS.33 illustrate aspects of bead location detection according to aspects of this disclosure. FIG.33A shows schematics for detection of the rising and falling edges (indicated by arrows) within a bead or seam profile F(x) via detection of zero-crossing locations in the 2nd-order derivative F''(x) and thresholding in the 1st-order derivative F'(x). FIG.33B shows an example graphical user interface (GUI) for configuration of the search window for bead location detection (shown in a box).

The profile of a bead on the substrate can be characterized as a rising edge followed by a falling edge along the positive X-direction. The shape of a seam is the opposite of the bead, with a falling edge followed by a rising edge along the positive X-direction (FIG.33A). Given this similarity, the bead location detection algorithm based on rising/falling edge detection can also be utilized for seam detection in applications such as seam tracking during robotic welding.

An example bead detection technique of this disclosure is described below. According to this particular example, first, the systems of this disclosure may receive a user input defining a refined search window in the profile where the bead is most likely to appear. For instance, the systems of this disclosure may implement a GUI that enables the user to provide an input defining the center of the search window with the reference index of the selected center point in the profile (e.g., close to a nozzle location). The search range can then be defined by distances along the X-axis and Z-axis in millimeters (as shown in FIG.33B). A refined search window may mitigate or potentially even eliminate ambiguity in bead detection when multiple rising and falling edges are present on the part surface in the neighborhood of the bead. Next, first-order and second-order Gaussian filters are applied to the surface profile within the user-defined search window, producing new profiles of the first derivative (gradient) and second derivative (curvature) of the original profile shown in FIG.33A. The level of smoothing to remove noise is controlled by the standard deviation parameter of the Gaussian filter, which can be received via the GUI in the form of user input (e.g., as shown in FIG.33B). Pseudo code for bead location detection functionality is presented below:

Bead Location Detection
Inputs:
  Surface profile within user-defined search window P = [[x_1, z_1], …, [x_N, z_N]]
Apply 1st-order Gaussian filter to P to get “gradient” values G
Apply 2nd-order Gaussian filter to P to get “curvature” values C
Find all zero-crossing locations in C
If number of zero-crossing locations is nonzero:
  Find maximum and minimum values in G, denoted by G_max and G_min
  If the absolute value of either G_max or G_min is too small:
    Return; bead location not detected because of weak edge signal
  Else:
    Define upper gradient threshold G_u = αG_max (α is a constant between 0 and 1)
    Define lower gradient threshold G_l = αG_min
    Mark current state as searching for rising edge
    For each zero-crossing location (in increasing order of x):
      Compute gradient value at current zero-crossing location G_i
      If currently searching for rising edge and G_i > G_u:
        Record rising edge location
        Mark current state as searching for falling edge
      If currently searching for falling edge and G_i < G_l:
        Record falling edge location
    Compute bead location as the midpoint between the rising and falling edges
    Return locations of the bead as well as the rising and falling edges
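A minimal Python sketch of the bead location detection pseudocode above is shown below, assuming NumPy and SciPy. The Gaussian standard deviation sigma, the weak-edge threshold min_edge, and α = 0.5 are illustrative values only, not values mandated by this disclosure.

import numpy as np
from scipy.ndimage import gaussian_filter1d

def detect_bead(x, z, sigma=3.0, alpha=0.5, min_edge=1e-6):
    """Sketch of Bead Location Detection above. x, z: profile points inside
    the user-defined search window. Returns (x_bead, x_rise, x_fall), or
    None when no bead is detected."""
    g = gaussian_filter1d(z, sigma, order=1)      # 1st-order filter: gradient
    c = gaussian_filter1d(z, sigma, order=2)      # 2nd-order filter: curvature
    zc = np.where(np.signbit(c[:-1]) != np.signbit(c[1:]))[0]  # zero crossings in curvature
    if zc.size == 0:
        return None
    g_max, g_min = g.max(), g.min()
    if abs(g_max) < min_edge or abs(g_min) < min_edge:
        return None                               # weak edge signal
    g_u, g_l = alpha * g_max, alpha * g_min       # upper/lower gradient thresholds
    rise = fall = None
    for i in zc:                                  # increasing order of x
        if rise is None and g[i] > g_u:
            rise = i                              # rising edge found; now seek falling edge
        elif rise is not None and g[i] < g_l:
            fall = i
            break
    if rise is None or fall is None:
        return None
    return 0.5 * (x[rise] + x[fall]), x[rise], x[fall]   # bead = midpoint of edges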
Examples of bead geometry estimation techniques of this disclosure are described below. FIGS.34 illustrate bead geometry estimation on two scan profile samples. FIG.34A is a scan profile with a rounded bead on a tilted and curved substrate. FIG.34B illustrates a scan profile of a flat bead on a flat substrate with discontinuous and erroneous point data at the bead boundaries (e.g., points below the substrate plane). In both of FIGS.34A and 34B, a user-defined search window is indicated using a box with a dashed-line boundary. The center of the search window is indicated with a circle with a solid-lined boundary, the bead location is indicated with a cross, rising and falling edges are indicated with adjacent circles, and the fitted substrate plane is indicated with a labeled line.

The goal of the bead geometry estimation process is to extract bead shape characteristics that define the quality of the deposited bead, such as width, maximum and/or mean thickness, and section area. The rising and falling edges from the previous bead location detection step are only approximate estimates of the bead boundaries, for the following reasons. As one example, the detected bead boundaries (rising/falling edges) are zero-curvature locations, which could be off from the true bead boundaries by millimeters (as shown in FIG.34B). As a second example, portions of the scan profile that are close to the bead boundaries sometimes suffer from sparse and noisy data points caused by height discontinuity and reflection between the bead and the substrate surface (as shown in FIG.34B). As such, these edge locations cannot be used to estimate bead shape characteristics such as width with reliably high precision and consistency.

The bead geometry estimation techniques of this disclosure provide a technical solution to the technical problems set forth above by refining the location of the bead boundaries, segmenting the substrate profile from the bead profile. Assuming the bead is positioned on a portion of the substrate that is relatively “flat” with low curvature (which may be the case for a plurality or potentially a majority of applications), the bead geometry estimation techniques of this disclosure can fit a straight line to the “flat” portion of the substrate profile using random sample consensus (RANSAC) to remove noise and outliers, followed by linear regression to find an optimal line fit. All of the points above this fitted substrate line can be treated as part of the bead, and can be used to compute bead width, thickness, and/or section area. Pseudo code for an example implementation of this technique is presented below:

Bead Geometry Estimation
Inputs:
  Surface profile within user-defined search window P = [[x_1, z_1], …, [x_N, z_N]]
  Rising and falling edge locations from Bead Location Detection
  Number of points from either side of the bead profile to be used for the linear fit: N_fit (defined by the user as a percentage of the total number of points in the bead profile)
Take N_fit points from P to the left of the bead rising edge to form P_left
Take N_fit points from P to the right of the bead falling edge to form P_right
Stack P_left and P_right to form the surface profile P_fit for substrate line fitting
Apply RANSAC-based linear fit to P_fit to get outlier-free profile P_fit'
Apply linear regression to P_fit'
Transform profile P to P' so that the fitted line lies on the x-axis
Extract points with positive z values from P' to form a new bead profile P'_bead
Compute bead section area A via trapezoidal integration of P'_bead
Compute bead width w using the left and right boundary points in P'_bead
Compute maximum thickness t_max using the maximum Z value in P'_bead
Compute mean thickness t_mean = A/w
Return [A, w, t_max, t_mean]
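The following Python sketch mirrors the bead geometry estimation pseudocode above, assuming NumPy and scikit-learn for the RANSAC and linear-regression steps. For simplicity it removes the fitted substrate line by vertical residual rather than by a rigid transform, which is a reasonable approximation for mildly tilted substrates; the function and variable names are illustrative.

import numpy as np
from sklearn.linear_model import LinearRegression, RANSACRegressor

def estimate_bead_geometry(P, i_rise, i_fall, n_fit):
    """Sketch of Bead Geometry Estimation above. P: (N, 2) array of [x, z]
    points in the search window; i_rise/i_fall: edge indices from bead
    location detection; n_fit: substrate points taken from each side."""
    P_fit = np.vstack([P[max(i_rise - n_fit, 0):i_rise],      # left of rising edge
                       P[i_fall + 1:i_fall + 1 + n_fit]])     # right of falling edge
    X, z = P_fit[:, :1], P_fit[:, 1]
    ransac = RANSACRegressor(LinearRegression()).fit(X, z)    # drop outliers
    line = LinearRegression().fit(X[ransac.inlier_mask_], z[ransac.inlier_mask_])
    # Remove the fitted substrate line (vertical residual; an exact rigid
    # transform matters only for steeply tilted substrates)
    z_rel = P[:, 1] - line.predict(P[:, :1])
    bead = P[z_rel > 0.0]
    if bead.size == 0:
        return None
    z_bead = z_rel[z_rel > 0.0]
    A = np.trapz(z_bead, bead[:, 0])            # section area via trapezoidal integration
    w = bead[:, 0].max() - bead[:, 0].min()     # width from left/right boundary points
    return A, w, z_bead.max(), A / w            # [A, w, t_max, t_mean]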
Aspects of bead shape control techniques of this disclosure are described below. Some of the bead shape control techniques of this disclosure relate to path streaming with velocity control. The path streaming method needs to be compatible with adjustable tool speed for the purpose of online bead shape control. To improve stability and motion smoothness, robot arm manufacturers often recommend streaming path commands at a higher frequency. As an example, for path streaming to a UR 10 robot arm via the RTDE interface, the current recommendation is to stream waypoint commands at a constant rate of 125 Hz. Each waypoint can be represented by a time interval Δt (travel time from the previous waypoint to the current waypoint), a position p = [x, y, z], and an orientation o = [α, β, γ] (Euler angles). By setting Δt to a constant value (e.g., 8ms), a smooth path can be achieved with the following path streaming method for online part shape compensation (e.g., in the example of FIG.31):

Path Streaming (constant time interval)
Inputs:
  Nominal path (N waypoints) M = [[p_0, o_0, Δt_0], …, [p_{N−1}, o_{N−1}, Δt_{N−1}]]
Initialize waypoint buffer Q as an empty queue (first in, first out)
For each of the first n waypoints [p_i, o_i, Δt_i] from M:
  Add [p_i, o_i, Δt_i] to Q
While waypoint buffer Q is not empty:
  Start a timer
  Pop the “head” waypoint from Q as current waypoint [p_k, o_k, Δt_k]
  Send current waypoint [p_k, o_k, Δt_k] to robot controller for execution
  If nominal path is not completely swept by the part scanner:
    Find the waypoint from the nominal path to be adjusted [p_j, o_j, Δt_j]
    Compute adjusted waypoint position p̃_j based on scanner feedback
    Compute adjusted waypoint orientation õ_j based on scanner feedback
    Add the adjusted waypoint [p̃_j, õ_j, Δt_j] to the tail of Q
  If the timer hasn't reached time Δt_k:
    Wait until timer reaches Δt_k

The executed path has a velocity profile that is different from that of the nominal path. For instance, the nominal tool velocity for the jth waypoint can be estimated by v_j = ||p_j − p_{j−1}|| / Δt_j, while the executed tool velocity can be estimated by ṽ_j = ||p̃_j − p̃_{j−1}|| / Δt_j. This deviation from nominal tool velocity is relatively small when the adjusted waypoint p̃_j is close to the nominal waypoint p_j. However, for applications where bead quality is sensitive to such deviation of tool velocity from nominal values, the time interval Δt can be adjusted online to maintain the nominal tool velocity:

Path Streaming (maintaining nominal tool velocity)
Inputs:
  Nominal path (N waypoints) M = [[p_0, o_0, Δt_0], …, [p_{N−1}, o_{N−1}, Δt_{N−1}]]
Initialize waypoint buffer Q as an empty queue (first in, first out)
For each of the first n waypoints [p_i, o_i, Δt_i] from M:
  Add [p_i, o_i, Δt_i] to Q
While waypoint buffer Q is not empty:
  Start a timer
  Pop the “head” waypoint from Q as current waypoint [p_k, o_k, Δt_k]
  Send current waypoint [p_k, o_k, Δt_k] to robot controller for execution
  If nominal path is not completely swept by the part scanner:
    Find the waypoint from the nominal path to be adjusted [p_j, o_j, Δt_j]
    Compute nominal tool velocity v_j = ||p_j − p_{j−1}|| / Δt_j
    Compute adjusted waypoint position p̃_j based on scanner feedback
    Compute adjusted waypoint orientation õ_j based on scanner feedback
    Compute adjusted time interval Δt̃_j = ||p̃_j − p̃_{j−1}|| / v_j
    Add the adjusted waypoint [p̃_j, õ_j, Δt̃_j] to the tail of Q
  If the timer hasn't reached time Δt_k:
    Wait until timer reaches Δt_k
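A short Python sketch of the velocity-preserving time interval computation from the pseudocode above is shown below (the function and argument names are illustrative only):

import numpy as np

def adjusted_time_interval(p_adj, p_adj_prev, p_nom, p_nom_prev, dt_nom):
    """Sketch of the velocity-preserving step above: keep the nominal tool
    speed v_j = ||p_j - p_(j-1)|| / Δt_j by rescaling the time interval to
    the adjusted waypoint spacing."""
    v_nom = np.linalg.norm(np.asarray(p_nom) - np.asarray(p_nom_prev)) / dt_nom
    return float(np.linalg.norm(np.asarray(p_adj) - np.asarray(p_adj_prev)) / v_nom)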
As a trade-off related to maintaining nominal tool velocity, the time interval becomes inconsistent along an adjusted toolpath due to changes in waypoint positions for part shape compensation: Δt̃_j = ||p̃_j − p̃_{j−1}|| / v_j. Potential issues with such an updating rate are as follows:

• A time interval that is too small could reach the lower bound of the time interval for message exchange with the robot controller. For instance, streaming of a waypoint to a UR 10 robot arm via the RTDE interface can be no faster than 125 Hz (with a minimum 8ms time interval).
• A time interval that is too small could reach the lower bound of the required processing time to complete each computation cycle. FIG.35 is a graph illustrating a log of time consumption for each computation cycle during a closed-loop dispensing test run. For instance, according to the log of time consumption of each computation cycle during a closed-loop dispensing test run (as shown in FIG.35), the cycle time allocated for processing of sensory data and computation of control commands should be no lower than 14ms.
• A time interval that is too large could lead to jagged movement of the robot due to coarse temporal discretization of pose commands.

FIG.36 is a schematic showing an example of how variation of waypoint position ||p|| (the uppermost horizontal axis of each of the three illustrated boxes) and tool velocity v (the bottommost horizontal axis of each of the three illustrated boxes) can affect time parameterization t along a toolpath (the middle horizontal axis of each of the three illustrated boxes). In the upper box, the time parametrization along the nominal path is configured to be evenly spaced, with a constant updating time interval of 20ms. In the middle box, when the position of each waypoint is adjusted for part shape compensation, variation of the time interval is introduced along the resulting adaptive path. In the lower box, when tool velocity adjustment is introduced in the process, the level of variation in the time interval is further increased, with the maximum time interval being 75ms and the minimum time interval being 7ms.

When adaptive tool velocity control is applied using feedback data from the bead scanner, issues with inconsistent time intervals become more acute (as shown in FIG.36). In addition to the change in waypoint position for part shape compensation, the adjustment of tool velocity Δv_j (relative to the nominal tool velocity) for bead shape compensation also contributes to the variation of the time interval, with the following relationship: Δt̃_j = ||p̃_j − p̃_{j−1}|| / (v_j + Δv_j). FIGS.37 illustrate a time re-parametrization strategy.
In the example of FIG.37A, based on the current time interval (40ms) being larger than the desired time interval (20ms), a new waypoint (the circular point that is not shaded in) is interpolated with a time interval of 20ms. In the example of FIG.37B, based on the current time interval (10ms) being smaller than the desired time interval (20ms), the current waypoint is skipped.

To address the potential issues related to an inconsistent updating rate along the adjusted toolpath, techniques of this disclosure provide a time re-parametrization for each command execution cycle. The strategy for time re-parametrization to achieve a relatively consistent time interval Δt_c (with tolerable range [Δt_min, Δt_max]) is as follows: (1) if Δt_k is smaller than Δt_max and larger than Δt_min, execute the current waypoint command [p_k, o_k, Δt_k]; (2) if Δt_k is larger than Δt_max, interpolate new waypoints with constant time interval Δt_c between the previously commanded waypoint [p_{k−1}, o_{k−1}, Δt_{k−1}] and the current waypoint [p_k, o_k, Δt_k] (FIG.37A); and (3) if Δt_k is smaller than Δt_min, skip the current waypoint [p_k, o_k, Δt_k] for execution (FIG.37B).

The modified path streaming method with time re-parametrization is as follows:

Path Streaming (with time re-parametrization)
Inputs:
  Nominal path (N waypoints) M = [[p_0, o_0, Δt_0], …, [p_{N−1}, o_{N−1}, Δt_{N−1}]]
  Desired constant time interval: Δt_c
  Tolerable range of time interval: [Δt_min, Δt_max]
Initialize waypoint buffer Q as an empty queue (first in, first out)
For each of the first n waypoints [p_i, o_i, Δt_i] from M:
  Add [p_i, o_i, Δt_i] to Q
Set flag F for readiness to extract a new waypoint from Q
While waypoint buffer Q is not empty:
  Start a timer
  If flag F indicates readiness to extract a new waypoint from Q:
    Pop the “head” waypoint from Q as current waypoint [p_k, o_k, Δt_k]
  Else:
    Use [p_k, o_k, Δt_k] extracted from the previous cycle as current waypoint
  If Δt_min ≤ Δt_k ≤ Δt_max:
    Send [p_k, o_k, Δt_k] to robot controller for execution
    Set flag F for readiness to extract a new waypoint from Q
  Else if Δt_k > Δt_max:
    Interpolate one waypoint as [p_k', o_k', Δt_c]
    Clear flag F for readiness to extract a new waypoint from Q
    Send [p_k', o_k', Δt_c] to robot controller for execution
  Else (Δt_k < Δt_min):
    Set flag F for readiness to extract a new waypoint from Q
    Continue (skip to the end of the current cycle)
  If nominal path is not completely swept by the part scanner:
    Find the waypoint from the nominal path to be adjusted [p_j, o_j, Δt_j]
    Compute nominal tool velocity v_j = ||p_j − p_{j−1}|| / Δt_j
    Compute adjusted waypoint position p̃_j based on scanner feedback
    Compute adjusted waypoint orientation õ_j based on scanner feedback
    Bead shape controller generates adjusted tool velocity ṽ_j
    Compute adjusted time interval Δt̃_j = ||p̃_j − p̃_{j−1}|| / ṽ_j
    Add the adjusted waypoint [p̃_j, õ_j, Δt̃_j] to the tail of Q
  If the timer hasn't reached time Δt_k:
    Wait until timer reaches Δt_k
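Below is a hedged Python sketch of the three-case re-parametrization strategy above. The linear interpolation of position and Euler angles is a simplification (a quaternion slerp would be more robust for large orientation changes), and the waypoint representation is assumed for illustration, not prescribed by this disclosure.

def reparameterize(wp_prev, wp_cur, dt_c, dt_min, dt_max):
    """Sketch of the three-case strategy above. Waypoints are (p, o, dt)
    tuples with NumPy arrays p (position) and o (Euler angles). Returns the
    list of waypoints to send to the robot this cycle (possibly empty)."""
    p0, o0, _ = wp_prev
    p1, o1, dt = wp_cur
    if dt_min <= dt <= dt_max:
        return [wp_cur]                      # case (1): execute as-is
    if dt > dt_max:                          # case (2): interpolate at constant dt_c
        n = int(round(dt / dt_c))
        return [(p0 + (p1 - p0) * k / n,     # linear interpolation of position
                 o0 + (o1 - o0) * k / n,     # naive Euler-angle interpolation
                 dt_c) for k in range(1, n + 1)]
    return []                                # case (3): dt < dt_min, skip waypoint

# Example: dt = 40 ms with dt_c = 20 ms yields two 20 ms waypoints,
# matching the interpolation case of FIG.37A.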
FIGS.38 illustrate effectiveness aspects of utilizing the path streaming technique of this disclosure with time re-parametrization for closed-loop dispensing with variable velocity. An additional benefit of time re-parametrization is that the part surface can be sampled with higher temporal resolution compared to the case without time re-parametrization. The conformal toolpath planned based on this higher-fidelity surface mesh may yield improved bead quality with more precise gap height control. The experimental results for closed-loop dispensing using path streaming techniques without time re-parameterization are shown in FIG.38A. The experimental results for closed-loop dispensing using path streaming techniques with time re-parameterization are shown in FIG.38B. The plots in the 1st row show the velocity profiles of the nominal path (“nominal”) and the adjusted path (“planned”) for bead shape compensation. The plots in the 2nd row show the time parametrization along the adjusted toolpath, with the desired time interval being 20ms. The plots in the 3rd row show the sampled point cloud from the part scanner. The adaptive toolpaths for part shape compensation are indicated by the left-to-right (and ascending) trajectories.

FIGS.39 illustrate results of bead shape compensation with a naïve control law. FIG.39A shows bead width profiles for three runs with good control quality (top plot, bead width converging to the target width), bad control quality (middle plot, bead width diverging from the target width), and intermediate control quality (bottom plot, bead width oscillating around the target width). FIG.39B shows tool velocity profiles of a nominal path (“nominal”), an adjusted/commanded path (“planned”), and an executed path (“executed,” smoothed using a moving average filter with a window size of 20).

Given tool velocity as the only independent variable for bead shape control, one of the bead geometric parameters estimated by the bead sensing algorithm can be selected as the dependent variable to be controlled along the toolpath. Because bead thickness can be effectively controlled via gap height tracking using the part scanner, the bead width is treated as the dependent variable in this example. The control law can be applied to other bead parameters such as section area without loss of generality. The control law of this disclosure is based on the following constant volume assumption: the volume of material deposited within a constant time period Δt can be approximated as constant through two adjacent time steps when Δt is close to zero. This assumption is valid when the material flowrate changes smoothly or remains constant over time. Let w and w_d denote the measured bead width from the current time step and the desired bead width for the next time step, respectively. Let v and v_d denote the measured tool velocity from the current time step and the desired tool velocity for the next time step, respectively. The constant volume assumption gives the following: w · v · Δt = w_d · v_d · Δt. The desired tool velocity for the next time step can then be computed using the following: v_d = v · w / w_d.

One naïve approach for bead shape control is to reduce the difference between the current tool velocity and the desired tool velocity. Let K denote the tunable gain for tool velocity adjustment. The tool velocity command v_c for the next time step can then be computed using the following: v_c = v + K · (v_d − v). Profiles of bead width along three dispense paths with the naïve control law described above are shown in the example of FIG.39A. In this particular example, the controller did not deliver repeatable quality for bead shape control, with the bead width sometimes diverging from or oscillating around the desired width.
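Expressed in Python, the naïve control law above reduces to a one-step proportional update (a sketch; the variable names follow the text):

def naive_velocity_command(w, w_d, v, K):
    """Constant-volume assumption w*v = w_d*v_d gives the desired velocity;
    a proportional step with gain K moves the command toward it."""
    v_d = v * w / w_d         # desired tool velocity
    return v + K * (v_d - v)  # tool velocity command for the next time step

# Example: w = 12 mm measured, w_d = 10 mm desired, v = 20 mm/s, K = 0.8
# gives v_d = 24 mm/s and a command of 23.2 mm/s (speed up to thin the bead).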
The tool velocity profiles from a test run expose an issue with the controller (as in the example of FIG.39B), namely, that the adjusted tool velocity command computed by the control law is overly oscillatory and cannot effectively correct bead shape errors. Additionally, even if these oscillatory velocity commands could potentially compensate for bead shape errors to an extent, the actual velocity profile of the executed toolpath could vary from the commanded values due to the dynamic limits of the robot arm. The causes of the oscillatory behavior of the computed velocity commands are threefold:

(1) Delayed measurement of bead width w. To avoid blockage of the light path by the extruder body, the laser line from the bead scanner is projected to a location with an offset distance from the nozzle (as shown in FIG.40B). Control commands computed based on this delayed measurement could lead to instability issues (oscillatory motion) and overshoots in the bead width response (as shown in FIG.40A).

(2) High noise level in the bead width measurement w (as shown in FIG.39A) and the tool velocity measurement v (as shown in FIG.40C). These noise values are transferred to the controller without any noise compensation, thereby resulting in an oscillatory velocity command v_c.

(3) Long duration of the transient state for bead width w and tool velocity v (as shown in FIG.41). Bead width is in a transient state after a change of tool velocity or when fluctuation of flowrate is present within the extrusion system (as shown in FIG.41A). Tool velocity is in a transient state after a tool velocity set point is commanded and before the set point is reached (as shown in FIG.41B). The duration of this transient state depends on the gain settings of the robot motion controller and the acceleration/deceleration limit(s) of the robot. The constant volume assumption could be invalid when using these transient-state variables to compute control commands, resulting in errors in bead width tracking.

FIGS.40 illustrate one or more potential issues with the naïve control law for bead shape compensation. FIG.40A is a schematic (top view) of a nozzle (circle with solid-line border) traversing a straight-line path. The tracking error of bead width is caused by the delayed sensing of the bead scanner (with a vertical line indicating the laser line). In each plot shown in FIGS.40, the deposited bead is indicated by the filled-in (or shaded-in) region, and the desired bead width is indicated by two dashed lines. The laser line is indicated by a vertical line intersecting the deposited bead and the desired bead. FIG.40B shows the delayed sensing distance for the bead scanner. FIG.40C shows the tool velocity profile read from the robot controller.

FIGS.41 illustrate the transient state of bead width and tool velocity along the dispense path. FIG.41A is a plot of the bead width profile along a dispense path, with the transient state indicated by a box with dashed boundary lines (and labeled “transient state”), and the steady state indicated by a box with dashed boundary lines (and labeled “steady state”). FIG.41B is a plot of the tool velocity profile along a dispense path, with the commanded velocity indicated by the “planned” curve and the measured/executed velocity indicated by the “executed_raw” curve. The transient state of the executed velocity is indicated by boxes with dashed boundary lines (labeled “transient state”).

Aspects of the control law with signal filtering and steady state checking are described below.
To address the various potential issues described herein, techniques of this disclosure incorporate an advanced bead shape control method with signal filtering and steady state checking of the measured bead width w and tool velocity v. According to these techniques, a sliding average is used for signal filtering of the bead width and tool velocity measurements. Upon arrival/receipt of a new measurement from the sensor during each computation cycle, the average of a sequence of measurements from a time window in the past is computed as the filter output. The implementation of the algorithm can be described with the following pseudo code:

Sliding Average
Inputs:
  Latest measurement (bead width or tool velocity): x_t
  Filter window size: N
  Sliding average from the previous time step (t > 0): x̄_{t−1}
  Sum of squared deviations from the previous time step (t > 0): S_{t−1}
  Popped value from the sliding window (t ≥ N): x_{t−N}
If t = 0:
  x̄_t = x_t
  S_t = 0
If 0 < t < N:
  x̄_t = x̄_{t−1} + (x_t − x̄_{t−1}) / (t + 1)
  S_t = S_{t−1} + (x_t − x̄_{t−1}) · (x_t − x̄_t)
If t ≥ N:
  x̄_t = x̄_{t−1} + (x_t − x_{t−N}) / N
  S_t = S_{t−1} + (x_t − x_{t−N}) · (x_t − x̄_t + x_{t−N} − x̄_{t−1})
Return current sliding average x̄_t and variance σ_t² = S_t / n, where n is the number of samples currently in the window

Algorithm Sliding_average (shown in the pseudocode block above) acts as a low-pass filter to remove the high-frequency noise present in the measured bead width and tool velocity. The incremental way of updating the sliding average x̄_t and sliding variance σ_t² improves computational efficiency for optimal real-time performance.
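A self-contained Python sketch of algorithm Sliding_average is given below, using a deque as the sliding window and the incremental mean/variance updates from the pseudocode (a Welford-style update while the window fills, and the pop/push update once it is full). The class and method names are illustrative.

from collections import deque

class SlidingAverage:
    """Sketch of algorithm Sliding_average above: incremental sliding-window
    mean and variance, acting as a low-pass filter for bead width or tool
    velocity measurements."""
    def __init__(self, n):
        self.n = n
        self.window = deque()
        self.mean = 0.0
        self.s = 0.0            # running sum of squared deviations

    def update(self, x):
        if len(self.window) < self.n:           # window not yet full: Welford update
            self.window.append(x)
            delta = x - self.mean
            self.mean += delta / len(self.window)
            self.s += delta * (x - self.mean)
        else:                                   # window full: pop oldest, push newest
            x_old = self.window.popleft()
            self.window.append(x)
            mean_old = self.mean
            self.mean += (x - x_old) / self.n
            self.s += (x - x_old) * (x - self.mean + x_old - mean_old)
        return self.mean, self.s / len(self.window)   # (sliding average, variance)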
At each computation cycle, the mean and standard deviation of bead width and tool velocity output by algorithm Sliding_average are fed to two steady state checkers, for bead width and tool velocity, respectively. The returned binary values from both steady state checkers indicate readiness to compute a new tool velocity command v_c based on the filtered bead width w̄ and tool velocity v̄.

FIGS.42 illustrate schematics of the steady state checkers. FIG.42A illustrates steady state checking for the bead width measurement. The “return false” and “return true” regions indicate time steps where False and True values are returned by the algorithm, respectively. FIG.42B illustrates steady state checking for the tool velocity measurement. The “return false” regions indicate time steps with zero steady state distance d (return False), and also indicate time steps with increasing steady state distance d (return False). The “return true” region indicates time steps where the steady state distance d is larger than the threshold value d_thres (return True). FIG.42C is a schematic showing the minimum steady state distance d_thres equal to the delayed sensing distance of the bead scanner.

The algorithm for checking bead steady state can be described with the following pseudo code (relating to FIG.42A):

Bead Steady State Checker
Inputs:
  Sliding average of bead width: w̄
  Standard deviation of bead width: σ_w
  Desired bead width: w_d
  Maximum allowable standard deviation: σ_w_thres
  Dead zone for bead width tracking error: w_deadzone
If σ_w < σ_w_thres:
  Bead width is in steady state.
  If |w̄ − w_d| > w_deadzone:
    Return True (ready for new control command)
  Else:
    Return False (no need for new control command)
Else:
  Return False (not ready for new control command)

Conditions of tool velocity steady state are checked in the following algorithm (relating to FIG.42B):

Velocity Steady State Checker
Inputs:
  Sliding average of tool velocity: v̄
  Standard deviation of tool velocity: σ_v
  Commanded tool velocity from previous time step: v_c
  Maximum allowable velocity tracking error: e_thres
  Maximum allowable standard deviation: σ_v_thres
  Traveled distance within steady state: d (initialized as zero)
  Minimum steady state distance: d_thres
If σ_v < σ_v_thres and |v̄ − v_c| < e_thres:
  Tool velocity is in steady state.
  Increment d by the distance traveled in this time step
Else:
  Reset d to 0
If d > d_thres:
  Return True (ready for new control command)
Else:
  Return False (not ready for new control command)

In algorithm Velocity_steady_state_checker presented above, the minimum steady state distance d_thres can be set equal to or larger than the delayed sensing distance of the bead scanner (relating to FIG.42C), in order to synchronize tool velocity with the delayed bead width measurement. The overall control law with signal filtering and steady state checking can be described by the following pseudo code:

Bead Shape Controller
Inputs:
  Desired bead width: w_d
  Control gain: K
For each newly measured bead width w_t and newly measured tool velocity v_t:
  Apply Sliding_average to w_t to get w̄, σ_w
  Apply Sliding_average to v_t to get v̄, σ_v
  If Velocity_steady_state_checker returns True:
    If Bead_steady_state_checker returns True:
      Compute desired tool velocity v_d = v̄ · w̄ / w_d
      Compute new velocity command: v_c = v̄ + K · (v_d − v̄)
      Return v_c
    Else:
      Return v_c from previous time step
  Else:
    Return v_c from previous time step
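The following Python sketch combines the two steady state checkers and the overall control law above into a single controller class, reusing the SlidingAverage sketch presented earlier. Only σ_w_thres = 0.2mm, w_deadzone = 0.5mm, d_thres = 20mm, and the 50/20-sample filter windows are taken from the test run described below; the remaining default values and the initialization of v_c are assumptions for illustration.

class BeadShapeController:
    """Sketch of the overall control law above: sliding-average filtering of
    bead width and tool velocity plus steady state checking before each new
    velocity command. Threshold defaults are illustrative placeholders."""
    def __init__(self, w_d, K, sigma_w_thres=0.2, w_deadzone=0.5,
                 sigma_v_thres=1.0, e_thres=1.0, d_thres=20.0):
        self.w_d, self.K = w_d, K
        self.sigma_w_thres, self.w_deadzone = sigma_w_thres, w_deadzone
        self.sigma_v_thres, self.e_thres, self.d_thres = sigma_v_thres, e_thres, d_thres
        self.w_filt = SlidingAverage(50)   # window sizes from the test run below
        self.v_filt = SlidingAverage(20)
        self.d = 0.0                       # distance traveled within velocity steady state
        self.v_c = None                    # last commanded velocity

    def step(self, w_meas, v_meas, dt):
        w_bar, var_w = self.w_filt.update(w_meas)
        v_bar, var_v = self.v_filt.update(v_meas)
        if self.v_c is None:
            self.v_c = v_bar               # assumed initialization on first cycle
        # Velocity steady state checker: low noise and small tracking error
        if var_v ** 0.5 < self.sigma_v_thres and abs(v_bar - self.v_c) < self.e_thres:
            self.d += v_bar * dt           # accumulate steady state distance
        else:
            self.d = 0.0
        ready_v = self.d > self.d_thres
        # Bead steady state checker: low noise, and error outside the dead zone
        ready_w = (var_w ** 0.5 < self.sigma_w_thres
                   and abs(w_bar - self.w_d) > self.w_deadzone)
        if ready_v and ready_w:
            v_d = v_bar * w_bar / self.w_d          # constant-volume target
            self.v_c = v_bar + self.K * (v_d - v_bar)
        return self.v_c                    # otherwise: previous command is kept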
FIG.43 illustrates a tool velocity profile (upper plot) and a bead width profile (lower plot) along a straight-line dispense path in a test run with bead shape control. In the upper plot of tool velocity vs. time, the “nominal” curve indicates the nominal velocity, the “planned” curve indicates the commanded/adjusted velocity, the “executed_raw” curve indicates the raw velocity measurement, and the “executed_smooth” curve indicates the filtered velocity measurement. The “steady state” regions indicate time steps where the steady state distance d is incremented. In the lower plot of bead width vs. time, the “desired width” line indicates the desired width, while the “sliding average of bead width” curve above the “desired width” line indicates the smoothed bead width measurement, and the “sliding std of bead width” curve below the “desired width” line indicates the standard deviation (“std”) of the bead width measurement. The “Deadzone” region indicates the dead zone of bead width tracking error, while the “oscillation threshold” region indicates the range of allowable standard deviation that indicates a steady state of bead width. The thick vertical lines indicate time steps for bead width condition checking when the steady state distance d reaches the target threshold. The example of FIG.43 shows tool velocity and bead width data recorded from a test run with bead shape control. The filter window sizes for the sliding averages of bead width and tool velocity were set to 50 and 20 samples, respectively, in this instance. The control gain K was set to 0.8.

From t=4s to t=~7s, the standard deviation of bead width σ_w was larger than the threshold value σ_w_thres (0.2mm) due to a rapid change in bead width from 12mm to 8mm, and so the steady state condition of bead width was not met. When t=~6s, the steady state condition was met for tool velocity, and so the steady state distance d started to accumulate. When t=~7s, the steady state conditions of both bead width and tool velocity were met, with the steady state distance d already larger than the threshold value d_thres (20mm). This indicated readiness for a new tool velocity command, which was then computed (~30mm/s) based on Bead_shape_controller, resulting in a step change in the commanded velocity profile. The steady state of tool velocity was interrupted right away by the ramp-up stage of the robot tool velocity, until the steady state condition was met again at t=~8s to restart the accumulation of d. This adjustment of tool velocity was repeated until the sliding average of bead width was close enough to the desired bead width (with an error less than w_deadzone = 0.5mm).

FIGS.44 illustrate experimental results with a desired bead width of 10mm (FIG.44A), 8mm (FIG.44B), and 5mm (FIG.44C). The control gain is 0.8. The wait time at the beginning of the dispense process is 2 seconds (compensating for delayed extrusion of the hot melt adhesive). As such, FIGS.44 illustrate experimental results demonstrating the effectiveness of the bead shape controller in achieving different desired bead widths.

FIGS.45 illustrate experimental results with control gains of 0.2 (FIG.45A), 0.5 (FIG.45B), and 0.8 (FIG.45C). The desired bead width is 5mm. The wait time at the beginning of the dispense process is 2 seconds (compensating for delayed extrusion of the hot melt adhesive). As such, FIGS.45 illustrate experimental results demonstrating how the control gain affects tool velocity commands and the resulting bead shape.

FIGS.46 illustrate experimental results with the wait time at the beginning of the dispense process being 2 seconds (FIG.46A) and 3 seconds (FIG.46B). The desired bead width is 5mm. The control gain is 0.8. The dispensing experiments that produced the results shown in FIGS.46 were conducted under different wait time settings at the beginning of the toolpath to demonstrate the effectiveness of the bead shape controller under different initial conditions.

FIG.47 shows experimental results of dispensing on a relatively flatter substrate (FIG.47A) and a relatively more curved substrate (FIG.47B) with online part shape and bead shape compensation. The control gain is 0.8. The desired bead width is 5mm. The wait time at the beginning of the dispense process is 2 seconds (compensating for delayed extrusion of the hot melt adhesive). The dispensing experiments that produced the results shown in FIGS.47 were conducted on a flat substrate and a curved substrate to demonstrate simultaneous part shape compensation and bead shape compensation.

In the present detailed description of the example embodiments, reference is made to the accompanying drawings, which illustrate specific embodiments in which the invention may be practiced. The illustrated embodiments are not intended to be exhaustive of all embodiments according to the invention. It is to be understood that other embodiments may be utilized, and structural or logical changes may be made without departing from the scope of the present invention. The following detailed description, therefore, is not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
Unless otherwise indicated, all numbers expressing feature sizes, amounts, and physical properties used in the specification and claims are to be understood as being modified in all instances by the term “about” or “approximately” or “substantially.” Accordingly, unless indicated to the contrary, the numerical parameters set forth in the foregoing specification and attached claims are approximations that can vary depending upon the desired properties sought to be obtained by those skilled in the art utilizing the teachings disclosed herein. As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” encompass embodiments having plural referents, unless the content clearly dictates otherwise. As used in this specification and the appended claims, the term “or” is generally employed in its sense including “and/or” unless the content clearly dictates otherwise. It is to be recognized that depending on the example, certain acts or events of any of the methods described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the method). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially. The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, CPUs, GPUs, DSPs, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), processing circuitry (e.g., fixed function circuitry, programmable circuitry, or any combination of fixed function circuitry and programmable circuitry), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this disclosure. Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components or integrated within common or separate hardware or software components. The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable storage medium may cause a programmable processor, or other processor, to perform the method, e.g., when the instructions are executed.
Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer readable media.