**FEATURE DRIVEN NEXT VIEW PLANNING OF 3-DIMENSIONAL SURFACES**

ALLADKANI, Fadi (Worcester, Massachusetts, US)

CALLI, Berk (Worcester, Massachusetts, US)

MORTON, Roger (Warrington WA5 7NS, GB)


**G05B19/4099**;

**G05B19/19**;

**G06T19/20**;

**G06T7/30**;

**G06T7/70**;

**B25J11/00**;

**B25J9/16**

CLAIMS

What is claimed is:

1. A method for reconstructing a curve on a 3-dimensional object, comprising: generating a point cloud based on a visual image of the 3-dimensional object having a visual pattern denoted by features captured in the visual image; determining, based on an extrapolation of the features, a viewpoint for obtaining a successive visual image having the features; computing a segment indicative of the visual pattern based on the point cloud; and iteratively generating a successive point cloud from a successive visual image gathered based on the viewpoint and accumulating successive segments indicative of the reconstructed curve.

2. The method of claim 1 wherein the point cloud includes a set of points, further comprising: sorting the set of points in the point cloud; and determining the viewpoint based on the sorted set of points.

3. The method of claim 1 further comprising accumulating the successive point clouds until a termination of the visual pattern, the accumulated point clouds representing a subset of a surface of the 3-dimensional object.

4. The method of claim 1 wherein the visual pattern denotes a branchless, elongated line having two extremities, and depicting a direction.

5. The method of claim 1 wherein the point cloud includes a depiction of voxels in a 3-dimensional space indicative of a traversal of the visual pattern across a surface of the 3-D object.

6. The method of claim 1 further comprising: generating a sequential ordering of the set of points in each respective point cloud, the ordering based on a correspondence of a color depicted by each point in the set of points matching a color of the visual pattern on the 3-D object; determining a direction defined by the ordered points; and determining a viewpoint corresponding to the determined direction.
7. The method of claim 2 further comprising: determining a plurality of directions defined by the ordered points; selecting a direction towards a base extremity of the visual pattern and a terminal extremity of the visual pattern; and selecting the viewpoint based on which direction is closer to the terminal extremity.

8. The method of claim 1 further comprising: determining a normal orientation to the computed segment; and determining the viewpoint for a pose orthogonal to the segment.

9. The method of claim 1 further comprising: generating an octree representation of the point cloud based on voxels; determining a plurality of frontier regions defined by voxels indicative of available successive viewpoints; and computing the successive viewpoints based on a best viewpoint quality from among the plurality of frontier regions.

10. The method of claim 9 further comprising, for each frontier region of the plurality of frontier regions, computing a probability that the successive viewpoint defines the best viewpoint quality.

11. The method of claim 9 further comprising: clustering the plurality of frontier regions into a number of frontier clusters less than the number of the plurality of frontier regions; and selecting the successive viewpoint based on the frontier cluster corresponding to the least distance from the viewpoint.

12. The method of claim 1 further comprising aggregating the accumulated segments for defining a path along the 3-D object corresponding to the visual pattern, the accumulated segments depicting less area than a total area of the 3-D object.

13.
A method of directing a cutting head along a scrap item, comprising: traversing a prescribed path based on non-exhaustive analysis of a local area of features along the prescribed path; evaluating a successive cutting position based on a likelihood of a direction that the prescribed path proceeds in; and iteratively traversing the prescribed path based on a probabilistic determination of cells defining a search space including the prescribed path.

14. The method of claim 13 further comprising identifying the prescribed path based on an optical surface characteristic, the prescribed path defining a cutting operation directed towards a target object.

15. The method of claim 13 wherein the prescribed path is defined by contrasting optical features and recognition of the optical features.

16. The method of claim 15 wherein the optical features are defined by a surface-applied pigmentation.

17. The method of claim 13 wherein evaluating further comprises: generating a probabilistic occupancy grid indicative of a search space in the local area; and scoring each cell in the grid based on a probability of path progression.

18. The method of claim 17 wherein generating the probabilistic occupancy grid further comprises gathering a sampling of surface features and corresponding cutting patterns, the cutting position evaluated based on the surface features and the likelihood of the direction computed based on the cutting patterns.

19. The method of claim 13 further comprising: forming the prescribed path by marking a line on the scrap item having varied optical features; scanning the features based on an optical recognition to define a 3-dimensional (3-D) recognition of the prescribed path; and directing a cutting apparatus based on the 3-D recognition.

20.
A computer program embodying program code on a non-transitory computer readable storage medium that, when executed by a processor, performs steps for reconstructing a curve on a 3-dimensional (3-D) object, the method comprising: generating a point cloud based on a visual image of the 3-D object having a visual pattern denoted by features captured in the visual image; determining, based on an extrapolation of the features, a viewpoint for obtaining a successive visual image having the features; computing a segment indicative of the visual pattern based on the point cloud; and iteratively generating a successive point cloud from a successive visual image gathered based on the viewpoint and accumulating successive segments indicative of the reconstructed curve.

Let ^{c}C_{k} be the cloud obtained at step k with respect to the camera frame c. The camera's pose at step k is ^{b}T_{k} in the base frame b. These acquired clouds ^{c}C_{k} are concatenated to iteratively expand the cumulative knowledge of the drawing. Accordingly, let ^{b}C_{total,k} be the cloud containing all the RGB-D information obtained thus far, i.e., the cumulative cloud after step k expressed in the base frame. This can be expressed recursively for k ≥ 1 as follows:

^{b}C_{total,k} = ^{b}C_{total,k−1} ∪ ^{b}T_{k} ^{c}C_{k}

The transformations ^{b}T_{k} map all clouds acquired at different steps k to the base frame for concatenation. The expression ^{b}T_{k} ^{c}C_{k} maps all of the cloud's member points from the camera frame at step k to the base frame. The loop's next step is to segment the drawing from the current cumulative cloud ^{b}C_{total,k}. Let ϕ_{ξ}(·) be the filtering function defined by its parameter ξ, which determines its filtering behavior. In our setup, ϕ filters by color such that ξ is an admissible range of colors. Let ^{b}D_{total,k} be the cloud representing the drawing such that ϕ_{ξ} : ^{b}C_{total,k} → ^{b}D_{total,k}. With these first four subtasks defined, we summarize their inputs, outputs, and interaction in the pseudocode below. Until now, we have defined the acquisition and processing of point clouds at a particular viewpoint pose ^{b}T_{k}. The initial viewpoint at k = 0, provided by the mobile search (see Fig.2), is assumed given. Beyond this, we must obtain the subsequent viewpoints to gradually explore and reconstruct the drawing. For this, we invoke the next view planner 160. Conceptually, the NVP approach performs higher-level reasoning on the raw point clouds obtained from the ProcessClouds(·) procedure. Specifically, the NVP generates candidate viewpoints using information from the cumulative feature cloud ^{b}D_{total,k}, the surface reconstruction ^{b}C_{total,k}, and the latest transformed cloud ^{b}C_{k}. The pseudocode below conceptually sketches the exploration and reconstruction routine. The loop's stopping condition is determined and checked by the viewpoint planner, terminating the exploration at some eventual step k_{stop}. After termination, the cumulative cloud ^{b}C_{total,kstop} should contain the entirety of the desired feature. The final and fully reconstructed drawing's point cloud ^{b}D_{total,kstop} is outputted for cutting trajectory generation. The NextViewpoint(·) service can be fulfilled by each of the next view approaches in Tables I-III below. Irrespective of choice, the viewpoint planner controls these two decisions during exploration: 1) Termination: determine if the drawing is fully explored, and accordingly either proceed searching or terminate. 2) Viewpoint Generation: provide and select candidate camera poses to continue the robot's search. For scrap cutting, the NVP approach is subject to performance constraints. An inefficient planner slows down the exploration routine, which would worsen cutting productivity. The planning time is affected jointly by the number of steps and by the step duration. This often comes with a tradeoff, as planners that finish with fewer steps tend to spend more time per view, while planners which compute steps rapidly tend to iterate more. This trade-off is presented further below in the approaches of Tables I-III. The NVP approach determines poses to visit sequentially, throughout the exploration task, to reconstruct the drawing on the object surface. The motion planner attempts to plan a feasible trajectory towards these poses and executes the first viable one. This motion then executes while avoiding collisions. Fig.3B is a flowchart of next best view reconstruction based on the drawing of Figs.1-3A. After selection of a scrap object 101 and application of a visually distinct drawing, the planner 160 generates a point cloud 154 based on a visual image 152 of the 3-D object 101 having a visual pattern or drawing 110 denoted by features captured in the visual image, as depicted at step 302. The planner 160 determines, based on an extrapolation of the features, a viewpoint for obtaining a successive visual image having the features, as shown at step 304. The features are the color pigmentation of the drawing that differs from the base object 101.
Image pixels of the drawing 110 exhibit this feature information. The reconstructor 162 computes a segment 158 indicative of the visual pattern based on the point cloud 154, as depicted at step 306. Using the imaged features 156, the planner 160 determines a plurality of directions defined by the ordered points, at step 308, and selects a direction towards a base extremity of the visual pattern and a terminal extremity of the visual pattern, as depicted at step 310. The drawing 110 is an elongated feature that has a clear dimension along its length, differentiable from the narrower width. This indicates two directions in a given point cloud, one towards the origin, or base, and the other towards the end terminus. The planner 160 selects the viewpoint based on which direction is closer to the terminal extremity, i.e., "facing" the progression towards the end, as opposed to an already traversed region of the object 101, as disclosed at step 312. The planner 160 iteratively generates a successive point cloud 154 from a successive visual image 152 gathered based on the respective viewpoint 151 and accumulates successive segments 158 indicative of the reconstructed curve, as shown at step 314. Successive point clouds 154-(n+1) are accumulated until a termination of the visual pattern, where the accumulated point clouds represent a subset of a surface of the 3-dimensional object 101, as depicted at step 316. In general, the drawing 110 is a visual pattern denoting a branchless, elongated line having two extremities and depicting a direction, as shown at step 318; there is a clearly delineated path by following the line without ambiguous forks or branches. A check is performed, at step 320, to determine if a drawing extremity has been reached. Typically this means that the point cloud does not indicate an ongoing direction of the drawing, as when the end of the scrap portion for cutting has been attained.
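The direction selection of steps 308-312 may be illustrated by the following sketch, which evaluates both endpoint directions of the ordered points and keeps the one leading away from the already-traversed region. The function name and the centroid-distance criterion are assumptions made for illustration only; the specification does not prescribe this exact test.

```python
import numpy as np

def pick_exploration_direction(ordered_pts, explored_centroid, step=0.05):
    """Evaluate the two candidate directions (base and terminal) of a
    curve given as ordered Nx3 points, and return the endpoint and unit
    direction whose short forward extrapolation lies farthest from the
    centroid of the already-explored region (hypothetical criterion)."""
    candidates = []
    for end, nbr in ((ordered_pts[-1], ordered_pts[-2]),
                     (ordered_pts[0], ordered_pts[1])):
        d = end - nbr
        d = d / np.linalg.norm(d)
        probe = end + step * d              # extrapolate a short step
        candidates.append((np.linalg.norm(probe - explored_centroid), end, d))
    _, end, d = max(candidates, key=lambda c: c[0])
    return end, d
```

For points ordered along a straight stroke, this selects the terminal endpoint "facing" the unexplored progression rather than the base already behind the camera.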
As will be discussed further below, an occluded line, as when the drawing wraps a corner or traverses a partially intact member, can be recovered. Iteration for determining successive viewpoints, point clouds and segments otherwise continues. Upon completion, the accumulated segments define a path along the 3-D object corresponding to the visual pattern (drawing 110), such that the accumulated segments 158 depict less area than a total area of the 3-D object, as depicted at step 322. Aggregating the accumulated segments defines a path along the 3-D object corresponding to the visual pattern that can be pursued by a cutting torch or other robotic actuator. Efficiency is assured since the accumulated segments depict less area than a total area of the 3-D object, as only the drawing, and not the entire 3-D object, needs to be considered. Figs.4A-4C show alternate approaches to drawing reconstruction through the next best view approaches described herein. In Figs.4A-4C, feature-driven NVP approaches including extrapolation, constrained NBV, and a guided NBV approach are described. Each NBV approach:

• Obtains point clouds 154 of the object that is marked with the cutting path.

• Segments the points that belong to the cutting path.

• Processes the object point cloud and the cutting path point cloud to calculate the next viewpoint.

• Moves the robot to the next viewpoint 151, and repeats this process until a termination condition is reached.

The approaches are generally distinguished by performance tradeoffs in speed and robustness at finding an occluded portion/segment. Fig.4A generalizes the procedure for point cloud fitting and curve extrapolation. In contrast to conventional approaches, Fig.4A reconstructs an unknown drawing on an unknown surface, given a known feature (here, the color red). As such, this process is used as a baseline against which the remaining two are compared.
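The four-step loop shared by each NBV approach can be sketched as the following skeleton, in which every callable is a placeholder for the corresponding subsystem (camera capture, feature segmentation, view planning, motion execution); none of these names is part of the disclosed system.

```python
def explore(capture, segment, next_view, move, done, max_steps=100):
    """Skeleton of the shared NBV loop: obtain a cloud, segment the
    cutting-path points, plan the next viewpoint, move, and repeat
    until a termination condition is reached. All arguments are
    hypothetical callables standing in for the real subsystems."""
    total_cloud, drawing = [], []
    for _ in range(max_steps):
        cloud = capture()                 # obtain a point cloud
        total_cloud.extend(cloud)         # accumulate scene knowledge
        drawing.extend(segment(cloud))    # keep cutting-path points
        if done(drawing):                 # termination condition
            break
        move(next_view(total_cloud, drawing))
    return drawing
```

Exercising the skeleton with stub callables shows the control flow: the loop segments until the termination predicate fires, moving the robot once per completed (non-final) iteration.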
The second approach of Fig.4B relies on a probabilistic occupancy voxel grid and a quality metric (based on information gain) to explore and rank candidate viewpoints, thereby selecting the one which maximizes quality. This planner searches the grid in a rather exhaustive manner. In contrast, a third approach in Fig.4C exploits greedy-like optimizations for faster searching. The search for the next viewpoint is optimized to seek the most likely path of the drawing. Referring to Fig.4A, the point cloud 154 includes a set of points, and the planner 160 sorts the set of points in the point cloud and determines the viewpoint based on the sorted set of points. This planner 160 treats the feature NVP task like a path exploration problem, iteratively exploring the branches of an unknown path until all endpoints are found. The point clouds are converted to more useful and more structured representations using fitting methods. The next view is obtained by extrapolating those fits. Exploring along a path requires a sense of its direction. However, the data obtained from the stereocamera, which is then processed in the procedure ProcessClouds(·), remains in the format of point clouds. Essentially, this is an unordered list of 3-D colored points where the direction of the drawing cannot be directly inferred. In this form, viewpoint planning on the desired feature is not straightforward. This planner solves this representation problem by mapping the point cloud to a more usable structure: the cloud data is used to construct or fit spatial curves parametrized in 1-D, reducing the cloud to a curve. This 1-D ordering of 3-D points yields a sense of direction along the curve, which provides the planner 160 with a way to determine the next viewpoint 151 to continue exploring the drawing.
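One plausible realization of the color filter ϕ_{ξ} and of the 1-D ordering is an RGB tolerance test followed by greedy nearest-neighbor chaining, sketched below. The function names, the tolerance value, and the chaining heuristic are illustrative assumptions, not the disclosed fitting method.

```python
import numpy as np

def filter_by_color(points, colors, target, tol=30):
    """phi_xi: keep points whose RGB color lies within `tol` of the
    target drawing color (an assumed admissible color range)."""
    mask = np.all(np.abs(colors.astype(int) - target) <= tol, axis=1)
    return points[mask]

def order_along_curve(points, start_idx=0):
    """Greedy nearest-neighbor chaining: impose a 1-D ordering on an
    unordered Nx3 cloud so that a direction along the curve can be
    inferred from consecutive points."""
    remaining = list(range(len(points)))
    order = [remaining.pop(start_idx)]
    while remaining:
        last = points[order[-1]]
        nxt = min(remaining, key=lambda i: np.linalg.norm(points[i] - last))
        remaining.remove(nxt)
        order.append(nxt)
    return points[order]
```

Chaining a scrambled set of collinear points recovers them in traversal order, which is exactly the structure the extrapolation step needs.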
Since the curve represents one continuous and known portion of the drawing, the drawing's unknown regions come after the endpoint of this curve. This planner assumes that the entire drawing has two extremities, meaning it is branchless. Thus, after the initial viewpoint, there are two scenarios for curve endpoint selection. If the initial viewpoint happened to already contain one extremity of the drawing, then the planner explores in the direction of the second curve endpoint. On the other hand, if the initial view contains an intermediate portion of the drawing, then the planner has two candidate directions to explore. The curve endpoint closest to the camera is chosen to reduce movement time between unexplored endpoints. With a chosen curve endpoint, the planner extrapolates to the next viewpoint at each step at a configurable distance δ_{distance}. When δ_{distance} = 0, the extrapolated endpoint itself becomes the next viewpoint. Exploring one curve endpoint at a time is slower, but plans towards the drawing extremity more conservatively. Alternatively, δ_{distance} > 0 places the next viewpoint beyond the curve endpoint. This speeds up exploration but may cause some overshoot. Accordingly, keeping the extrapolation distance small is preferable. Extrapolation only determines the position ^{b}d_{k+1} of the next viewpoint. To fully provide the next pose ^{b}T_{k+1}, the planner must also determine the next orientation ^{b}R_{k+1} of the camera at that position ^{b}d_{k+1}. The planner sets the orientation to be orthogonal to the surface at the viewpoint position ^{b}d_{k+1}. This orientation is obtained by estimating the vector normal to the surface surrounding the viewpoint position ^{b}d_{k+1}. This region is contained in the cumulative cloud ^{b}C_{total,k}. The viewpoint 151 can be selected by determining a normal orientation to the computed segment, which effectively defines a camera 150 orientation by determining the viewpoint for a pose orthogonal to the segment. Orienting the camera orthogonally helps the robot obtain a more accurate capture of the surface and thus avoid leaving gaps in the drawing point cloud during exploration, which would otherwise require backtracking and cost additional time. The orientation about the normal does not affect output significantly, and is relaxed for motion planning. The planner continues searching by iteratively obtaining new clouds, fitting a curve on the desired feature, and extrapolating towards the next viewpoint. The planner considers a drawing extremity to be found when the size difference between two consecutive reconstructions ^{b}D_{total,k} falls below a threshold δ_{size}. After both extremities are found, the search terminates and returns the reconstruction ^{b}D_{total,kstop}. This iterative extrapolation procedure is outlined in Table I, where the result is the feature's reconstruction.

TABLE I : Extrapolated next view planner (E-NVP)

The approach of TABLE I and Fig.4A may be summarized as generating a sequential ordering of the set of points in each respective point cloud, such that the ordering is based on a correspondence of a color depicted by each point in the set of points matching a color of the visual pattern on the 3-D object, and determining a direction defined by the ordered points. The planner 160 then determines a viewpoint corresponding to the determined direction. Referring to Fig.4B, a constrained next-view planning approach is depicted, and shown in TABLE II below. This approach restructures the NVP task into a search problem on a voxel grid 402. In brief, this grid is constructed from the point clouds 154 and updated at each measurement. A frontier region is computed to constrain and generate the candidate search space, within which each candidate is scored with a viewpoint quality metric. For each frontier region of the plurality of frontier regions, a probability is computed that the successive viewpoint defines the best viewpoint quality. The best candidate is selected as the next view. This repeats until a termination criterion is met. The approach of Fig.4B voxelizes the obtained processed clouds into an octree grid of occupancy probabilities. The point cloud 154 includes a depiction of voxels in a 3-dimensional space indicative of a traversal of the visual pattern across a surface of the 3-D object. The frontier region represents the forward progress of the reconstructed segments towards a distal terminus of the drawing. The planner 160 may generate an octree representation of the point cloud based on the voxels, and determine a plurality of frontier regions defined by voxels indicative of available successive viewpoints.
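Returning to the E-NVP steps above, the endpoint extrapolation by δ_{distance} and the surface-normal orientation estimate may be sketched as follows. This is illustrative only: the plane-fit normal via SVD is one standard estimator, and the specification does not prescribe this exact implementation.

```python
import numpy as np

def next_viewpoint_position(curve_pts, delta=0.0):
    """E-NVP-style extrapolation: step `delta` past the chosen curve
    endpoint along the local tangent; delta = 0 returns the endpoint
    itself, matching the conservative setting in the text."""
    tangent = curve_pts[-1] - curve_pts[-2]
    tangent = tangent / np.linalg.norm(tangent)
    return curve_pts[-1] + delta * tangent

def estimate_normal(neighborhood):
    """Estimate the surface normal around the viewpoint position as the
    right-singular vector of the centered neighborhood with the
    smallest singular value (a least-squares plane fit)."""
    centered = neighborhood - neighborhood.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)
    return vt[-1]
```

For a patch of points lying in the z = 0 plane, the estimator returns a vector along ±z, which the planner would use to point the camera orthogonally at the surface.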
Successive viewpoints are then computed based on a best viewpoint quality from among the plurality of frontier regions. The grid distinguishes between occupied, unoccupied, and unknown cells based on their occupancy probabilities: respectively more than, less than, and equal to 0.5. A suitable data structure is used for representing an octree structure as a probabilistic voxel occupancy grid. This grid allows efficient storage and querying of cell probabilities. Let G_{k} be the voxel grid generated at step k. The local scene's cumulative cloud ^{b}C_{total,k} is voxelized to map the currently available knowledge, while the reconstructed drawing's cumulative cloud ^{b}D_{total,k} is used to identify those voxels belonging to the drawing. Since the drawing 110 is the region of interest, we determine its frontier on the grid. Here we define a frontier cell to have at least one unknown neighbor and at least one other neighbor belonging to the drawing's region. The frontier at step k is denoted by F_{k} and represents the boundary of current knowledge about the drawing, used to determine high-vantage locations for generating viewpoint search spaces. This constrains the search at step k for the next view to a search space S_{k}, where candidate viewpoints are scored and ranked. The space is constructed by generating regions from geometric primitives (e.g., cubes, spheres) around each frontier cell, and then concatenating them such that S_{k} = ⋃_{f ∈ F_{k}} S(f), where S(·) generates a search space primitive for a single cell.

TABLE II : Constrained Next Best View

Fig.5 shows examples of the drawing on differing surface categories of mapped 3-D objects. Referring to Figs.1 and 6, test shapes 51 are shown with a corresponding drawing 110, demonstrating variations and occlusions in the object 101. Sample objects 52 portray actual renderings of the drawing 110 on an object, showing how occlusions and surface conditions can affect point cloud information based on the contrasting features in the drawing. In other words, turning corners and surface inconsistencies degrade the quality of the observed features. Fig.6 shows camera orientation for directing a pose for a successive viewpoint in drawing reconstruction. Referring to Fig.6, the camera's viewpoint s_{k} can be expressed in the base frame as

^{b }sk = (

^{b }x,

^{b }y,

^{b }z, αx , αy , αz ), where (

^{b }x,

^{b }y,

^{b }z) is the position in the grid, and (α

_{x }, α

_{y }, α

_{z }) is the orientation obtained as anticlockwise rotations about the respective axes. It can also be expressed in the camera frame as: where is the position in the grid, and (α

_{x }, α

_{y }, α

_{z }) is the orientation obtained as anticlockwise rotations about the respective axes. It can also be expressed in the camera frame as: where is the position with respect to the camera’s frame, and (α

_{R }, α

_{P }, α

_{Y }) is the orientation specified with respect to the local roll-pitch-yaw frame: about the camera’s body. The transformations

^{b }s

_{k }=

^{b }T

_{k }

^{c }s

_{k }relate these expressions. All candidates in Sk are scored using a viewpoint quality metric Q(·). The next view is set to the best-scoring candidate: The quality of a candidate viewpoint at step k is given by: This is a convex combination of a gain term and a cost term, expressed as a proportion of their search space totals. The parameter λ ∈ [0, 1) determines, for a particular view- point, the relative weight between its gain and cost terms: expressed as fractions of the candidate population totals—that is, the aggregate values for every s ∈ S

_{k }. Here, gain(s

_{k }) quantifies the relevant information gain obtained from choosing sk as the next viewpoint. The function cost (sk ) offsets this gain and is the Euclidean distance between the current viewpoint and the candidate sk . This penalizes the quality of a viewpoint on distance from the current position. Therefore, a smaller value for λ prioritizes the highest-ranking viewpoint candidates which are also near the current position. Hence, the parameter λ directly affects the distance traveled within a measurement step and thus the exploration time. However, λ does not affect the NBV coverage performance in terms of feature reconstruction. In practice, for highly constrained spaces, it is preferable that the robot explores in short bursts, thus favoring a reduced value for λ. Conversely, for highly spacious conditions, the distance constraints can be relaxed, allowing the robot to move to viewpoints further away, which in turn favors an increased value for λ. In our evaluations, the parameter is set to λ=

^{½ }to give equal weight to the gain and cost terms, as the evaluation’s focus is to assess the feature reconstruction capability. With that said, the parameter λ can be readjusted to application- specific and case-specific needs, if necessary. The gain(·) function is obtained by summing over the grid, Here, h(g | sk ) is the information entropy decrease which measures absolute information gain at a cell g after placing the next viewpoint at sk . The feature probability pϕ(g) estimates the feature membership of the cell g, i.e., the chances of belonging to the drawing. The visibility probability pv (g | sk ) estimates the chance that the cell g is visible from s

_{k }. The probabilities p

_{ϕ }(g) and p

_{v }(g | s

_{k }) respectively penalize distance from the desired feature (the drawing) and occlusion. Referring to Fig.7, the absolute information gained (about the occupancy state at a cell g 180 by moving from the viewpoint s k − 1 to the viewpoint sk corresponds to the decrease in information entropy about its occupancy state. The entropy decrease h(g | s

^{k }) is obtained through the entropy function H(·) used in information theory, as follows: The binary occupancy random variable og models whether the cell g is unoccupied (o

_{g }= 0) or occupied (o

_{g }= 1). The information entropy H(o

_{g }| s

_{k }) quantifies the uncertainty of the binary random variable og denoting the occupancy of cell g. When this information entropy decreases across two consecutive measurements, i.e., when h(g | sk ) > 0, then we have “ lost some uncertainty” or “ gained information” about the occupancy state of the grid cell g, after the k

^{th }measurement from cell s

_{k }. The feature probability is modeled with exponential decay: The feature indicator random variable ϕg models whether the cell g belongs to the drawing (ϕ

_{g }= 1) or otherwise (ϕ

_{g }= 0). The function dist

_{F }k (·) returns the shortest Euclidean distance from g to the nearest frontier cell in Fk . The parameter αϕ > 0 is used to tune the exponential decay profile. Maximizing only the absolute information gain between two consecutive viewpoints would lead to seeking novel information about the state of the grid regardless of its relevance to the desired feature. Therefore, new information gained about each cell must also be scaled by the cell’s probability of belonging to the desired feature. Referring to Figs.8, the feature probability approximates the probability that a grid cell g 182 belongs to the desired feature. The frontier cells in this case represent the cells that have been determined to be part of the desired feature. This feature probability p

_{ϕ }is approximated using the assumption that cells near the desired feature have a higher probability of belonging to the feature. This assumption is modeled using an exponential decay profile applied onto the distance to the nearest frontier cell. The visibility probability at cell g from candidate s

_{k }is: The binary visibility random variable vg,sk models whether the cell g is visible (vg,sk = 1) or otherwise (v

_{g,s }k = 0) from the candidate s

_{k }. An unobstructed view to the cell g from a candidate sk must be must preferred. Raycasting is performed from sk to g, where R

_{k }contains all cells traversed by the ray. The probabilities that the ray cells are unoccluded P[o

_{r }= 0] are multiplied to yield p

_{v }(g | s

_{k }), i.e., the probability that

^{the cell g is visible from the candidate viewpoint }s

^{k . }Fig.9 shows occupancy probability of intermediate cells as representing part of the feature drawing. Referring to Figs.7-10, the visibility probability of a cell g 184 from the candidate viewpoint sk is computed as the probability pv that all their intermediate cells are occupied. During the search, all grid positions (

^{b }x,

^{b }y,

^{b }z) within S

_{k }are attempted for scoring. For each candidate position, the orientation is searched by varying the angles α

_{P }and α

_{Y }. The C-NBV planner relaxes the angle α

_{R }to not overly constrain the motion planner, since the normal direction about the lens has a comparably lesser effect on the information gain in the acquired image. After determining the next viewpoint s

Given the optimal viewpoint, the camera’s grid position (x, y, z) and orientations αP and αY are set accordingly. The angle αR is determined by the motion planner, and the camera is moved to its new pose. With the quality metric Q(·) fully defined, a next viewpoint can be obtained, and the procedure elaborated above repeats until a stop condition is met: the routine stops once the optimal viewpoint quality Q falls below a certain threshold δQ, or when the number of frontier cells reaches zero, whichever condition is met first. This procedure is outlined in TABLE II.

The approach of TABLE II reformulates conventional next view planning in several ways. We modify the frontier definition to focus on the desired feature. We also generate constrained search spaces around specific frontier cells to vastly decrease the search’s time and memory costs. To determine the NBV’s orientation, we constrain its range by pointing the camera at the frontier cells.

A further enhancement to next view planning is a Guided Next Best View (G-NBV) approach, shown in TABLE III and Fig. 4C. This planner implements specialized modifications to radically increase performance: it not only constrains the search, but also explicitly guides it along the feature, using feature information to perform greedy optimizations. Unlike the E-NVP planner of TABLE II, the approach of TABLE III does not assume that it explores a path. However, by guiding its search along the feature, it retains the performance advantages of path exploration, while its probabilistic NBV formulation remains more robust against unknown space, occlusions, and adversarial geometries. In a sense, this combines the performance advantages of the path exploration paradigm with the robustness characteristics of the probabilistic NBV formulation. This approach repeats the formulation of the C-NBV approach until after the frontier Fk is determined.

Despite the large performance improvements afforded by the already reduced frontier, the search space can still become quite large. Consider the scenario of a large object with a drawing traversing long parts of its surface. This produces a long frontier with little overlap between the individual search spaces S(·), causing the concatenated search space Sk to grow significantly. To address this, the G-NBV planner reduces the frontier Fk into a single frontier cell f̂k. A plurality of frontier regions may be clustered into a number of frontier clusters less than the number of the plurality of frontier regions, and the successive viewpoint is selected based on the frontier cluster corresponding to the least distance from the viewpoint. For this, the frontier cells are clustered using a suitable voxel clustering method. We use a connectedness-based clustering method, whereby any neighboring frontier cells are lumped into the same cluster. The frontier is then composed of several clusters, of which the nearest one (to the current pose) is chosen. The centroid f̂k of this nearest cluster is computed and then used as the single-voxel frontier for generating the search space.

TABLE III: Guided Next Best View (G-NBV) approach

This offers numerous performance advantages. First, by trimming the other clusters, the planner explores any number of branches, one at a time, avoiding long and exhaustive searches. Second, since there is only one frontier cell, the entire search space consists of a single geometric primitive around the cell, e.g., a sphere around the centroid. This is a reasonable step, as frontier cells tend to surround the current reconstructed drawing’s endpoints; thus, searching around this centroid implicitly guides the planner along the drawing. A third advantage lies in determining orientations for the candidate viewpoints. This is normally expensive, as it drastically increases the search space, since for every position there are several candidate camera orientations. We eliminate this problem entirely by constraining the orientation from the candidate cell s to the frontier centroid f̂k, thus predetermining the values for the angles αP and αY. This reduces G-NBV’s search to only the grid positions, since αP and αY are predetermined and αR is delegated to the motion planner. The search space is thus reduced not only cell-wise, i.e., searching only around f̂k rather than around the entire frontier, but also pose-wise, i.e., searching only grid positions while constraining the camera orientation. This constraint is justified by the localized information on the drawing: in effect, high viewpoint qualities concentrate in the unknown space around the current reconstructed drawing’s endpoints. Fixing the orientation towards the centroid eliminates the exploration of alternatives where the gain is often marginal or negative. By exploiting the drawing’s structure, the planner is able to rapidly select the next viewpoint. By using any of the three aforementioned approaches, we obtain the fully-reconstructed cloud of the drawing.
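The predetermination of αP and αY described above can be sketched as follows. This is an illustrative sketch under an assumed angle convention (yaw αY about the vertical axis, pitch αP measured from the horizontal plane); the disclosure does not fix a particular convention.

```python
# Sketch of constraining the camera orientation: for a candidate grid
# position s, the angles alpha_P (pitch) and alpha_Y (yaw) are
# predetermined by aiming the optical axis at the frontier centroid f̂k,
# so the NBV search runs over grid positions only.
# (Assumed convention: yaw about z, pitch from the x-y plane.)
import math

def constrained_orientation(s, f_hat):
    """Return (alpha_P, alpha_Y) aiming the camera from cell s toward f_hat."""
    dx, dy, dz = (f - c for f, c in zip(f_hat, s))
    alpha_Y = math.atan2(dy, dx)                  # yaw within the x-y plane
    alpha_P = math.atan2(dz, math.hypot(dx, dy))  # pitch above/below horizontal
    return alpha_P, alpha_Y
```

With the orientation fixed this way, each candidate viewpoint is fully specified by its grid position alone, and αR remains free for the motion planner.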
This cloud is used to generate a suitable cutting path from the generated segments (along the drawing) that can be used as a reference for cutting control. We accomplish this task of converting unstructured point clouds to ordered paths, as in the mapping (2), by using suitable curve fitting methods. This is the same class of methods used in the Extrapolated NVP’s curve fitting step. In effect, any of the corresponding approaches of TABLES I-III is suitable for obtaining a cutting path from the fully-reconstructed drawing.

Those skilled in the art should readily appreciate that the programs and methods defined herein are deliverable to a user processing and rendering device in many forms, including but not limited to a) information permanently stored on non-writeable storage media such as ROM devices, b) information alterably stored on writeable non-transitory storage media such as solid state drives (SSDs), flash drives, floppy disks, magnetic tapes, CDs, RAM devices, and other magnetic and optical media, or c) information conveyed to a computer through communication media, as in an electronic network such as the Internet or telephone modem lines. The operations and methods may be implemented in a software executable object or as a set of encoded instructions for execution by a processor responsive to the instructions, including virtual machines and hypervisor-controlled execution environments. Alternatively, the operations and methods disclosed herein may be embodied in whole or in part using hardware components, such as Application Specific Integrated Circuits (ASICs), Field Programmable Gate Arrays (FPGAs), state machines, controllers or other hardware components or devices, or a combination of hardware, software, and firmware components.
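The conversion of the unstructured drawing cloud into an ordered cutting path, described earlier in this passage, can be sketched as follows. The greedy nearest-neighbor ordering used here is an illustrative assumption; the disclosure only requires a suitable curve fitting method, and a fitting or smoothing step could follow the ordering.

```python
# Sketch of ordering an unstructured drawing cloud into a cutting path:
# starting from one endpoint, repeatedly step to the nearest unvisited
# point. (Greedy chaining is an assumed stand-in for the curve fitting
# methods referenced in the disclosure.)
def order_path(points, start_index=0):
    """Greedily chain 3-D points into an ordered traversal."""
    remaining = list(points)
    path = [remaining.pop(start_index)]
    while remaining:
        last = path[-1]
        i = min(range(len(remaining)),
                key=lambda j: sum((a - b) ** 2
                                  for a, b in zip(remaining[j], last)))
        path.append(remaining.pop(i))
    return path
```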
While the system and methods defined herein have been particularly shown and described with references to embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
