


Title:
USING AUGMENTED REALITY FOR CONTROLLING INTELLIGENT DEVICES
Document Type and Number:
WIPO Patent Application WO/2019/046559
Kind Code:
A1
Abstract:
A method for controlling an intelligent device in a first environment may include displaying a simulation model of the intelligent device on a display of an Augmented Reality (AR) device in a second environment. The method may receive one or more user interactions over the simulation model of the intelligent device, where the interactions indicate a user desired operation of the intelligent device. The method may generate programming instructions based on the one or more user interactions, transmit the programming instructions to the intelligent device, and cause the intelligent device to perform the user desired operation in the first environment. In some examples, the method described herein may be implemented in a system for controlling a robotic arm. The system may receive one or more user interactions to allow a user to define a desired operation of the robotic arm via natural user interactions.

Inventors:
ZOU XINLI (US)
KLASSEN MAXIMILIAN (US)
ZHOU XIAODONG (US)
PAN JIANTAO (US)
Application Number:
PCT/US2018/048811
Publication Date:
March 07, 2019
Filing Date:
August 30, 2018
Assignee:
LINKEDWYZ (US)
International Classes:
B25J9/00; B25J9/16; B25J9/22; B25J13/00
Foreign References:
US20160257000A1 (2016-09-08)
US20150005785A1 (2015-01-01)
US20150224650A1 (2015-08-13)
US20100045701A1 (2010-02-25)
US7714895B2 (2010-05-11)
Attorney, Agent or Firm:
ENG, Kimton et al. (US)
Claims:
CLAIMS

We claim:

1. A system for controlling a robotic arm in a first environment, the system comprising:

a user interaction unit installable on an Augmented Reality (AR) device, the user interaction unit is configured to:

display a simulation model of the robotic arm on a display of the AR device in a second environment; and

receive one or more user interactions over the simulation model of the robotic arm, the one or more interactions indicate a user desired operation of the robotic arm; and

a device control system configured to generate programming instructions based on the one or more user interactions and transmit the programming instructions to the robotic arm, wherein the programming instructions cause the robotic arm to perform the user desired operation in the first environment.

2. The system of claim 1, wherein the one or more user interactions include a user selection of one or more spatial position tracking points, each tracking point corresponds to an actuatable part of the robotic arm and includes one or more device parameters consisting of a movement, a speed, a velocity, a torque and a duration of a pause in time.

3. The system of claim 2, wherein the user interaction unit is further configured to display one or more device parameters on the display of the AR device while receiving the one or more user interactions, wherein the one or more device parameters correspond to each of the track points being selected.

4. The system of claim 3 further comprising a robotic simulation system configured to:

receive from the user interaction unit the one or more user interactions over the simulation model of the robotic arm;

generate a trajectory of movement of the robotic arm based on the one or more user interactions, wherein the user interaction unit is further configured to display the trajectory on the display of the AR device.

5. The system of claim 4, wherein:

the one or more user interactions also include an order of execution of the one or more spatial position tracking points; and

the robotic simulation system is configured to generate the trajectory based on the ordered execution of the one or more spatial position tracking points.

6. The system of claim 4, wherein:

the robotic simulation system is further configured to:

determine whether the trajectory will cause a collision with one or more obstacles in the first environment;

upon determining that the trajectory will cause a collision with the one or more obstacles in the first environment, receive one or more additional user interactions over the simulation model of the robotic arm, wherein the one or more additional user interactions indicate a modified user desired operation of the robotic arm so that the collision is avoided.

7. The system of claim 1 further comprising a device calibration unit configured to generate an alignment between a first coordinate system in the first environment and a second coordinate system in the second environment, wherein the device control system is configured to use the alignment to generate the programming instructions based on the one or more user interactions.

8. The system of claim 7 further comprising a coordinate alignment unit configured to determine a relative position of the robotic arm in the first environment to the AR device.

9. The system of claim 8, wherein the coordinate alignment unit is configured to use one or more visual markers placed in a proximity of the robotic arm in the first environment to determine the relative position of the robotic arm to the AR device.

10. The system of claim 8, wherein the coordinate alignment unit is configured to use one or more barcodes placed on a portion of the robotic arm, each barcode has a size and includes data that indicates the size.

11. A method for controlling an intelligent device in a first environment, the method comprising:

displaying a simulation model of the intelligent device on a display of an Augmented Reality (AR) device in a second environment;

receiving one or more user interactions over the simulation model of the intelligent device, the one or more interactions indicate a user desired operation of the intelligent device;

generating programming instructions based on the one or more user interactions;

transmitting the programming instructions to the intelligent device; and

causing the intelligent device to perform the user desired operation in the first environment.

12. The method of claim 11, wherein the one or more user interactions include a user selection of one or more spatial position tracking points, each tracking point corresponds to an actuatable part of the intelligent device and includes one or more device parameters consisting of a movement, a speed, a velocity, a torque and a duration of a pause in time.

13. The method of claim 12 further comprising displaying one or more device parameters on the display of the AR device while receiving the one or more user interactions, wherein the one or more device parameters correspond to each of the track points being selected.

14. The method of claim 13 further comprising:

generating a trajectory of movement of the intelligent device based on the one or more user interactions over the simulation model of the intelligent device; and

displaying the trajectory on the display of the AR device.

15. The method of claim 14, wherein:

the one or more user interactions also include an order of execution of the one or more spatial position tracking points; and

generating the trajectory comprises generating the trajectory based on the ordered execution of the one or more spatial position tracking points.

16. The method of claim 14 further comprising:

determining whether the trajectory will cause a collision with one or more obstacles in the first environment;

upon determining that the trajectory will cause a collision with the one or more obstacles in the first environment, receiving one or more additional user interactions over the simulation model of the intelligent device, wherein the one or more additional user interactions indicate a modified user desired operation of the intelligent device so that the collision is avoided.

17. The method of claim 11 further comprising:

generating an alignment between a first coordinate system in the first environment and a second coordinate system in the second environment; and

using the alignment to generate the programming instructions based on the one or more user interactions.

18. The method of claim 17 further comprising determining a relative position of the intelligent device in the first environment to the AR device.

19. The method of claim 18 further comprising:

using one or more visual markers placed in a proximity of the robotic arm in the first environment to determine the relative position of the intelligent device to the AR device.

20. The method of claim 18 further comprising using one or more barcodes placed on a portion of the intelligent device to determine the relative position of the intelligent device in the first environment to the AR device, wherein each barcode has a size and includes data that indicates the size.

Description:
USING AUGMENTED REALITY FOR CONTROLLING INTELLIGENT DEVICES

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the filing benefit of U.S. Provisional Application No. 62/551,917, filed August 30, 2017. This application is incorporated by reference herein in its entirety and for all purposes.

BACKGROUND

[0002] This patent document relates generally to controlling intelligent devices in an augmented reality (AR) environment. For example, methods and systems for controlling a robotic arm using an AR device are disclosed.

[0003] A robotic arm is a typical type of intelligent device, capable of performing various tasks. Today, there is no good solution for a user to interact with a robotic arm. Even simple tasks require programming skills, which limits the applications in which a robotic arm can be utilized. For example, instructing a robotic arm to perform a simple task, such as picking up an object, not only requires profound skills in programming the robotic arm, it also requires accurate positioning of the object in the digital coordinate system of the robotic arm. These requirements are difficult to meet for an ordinary user. Part of the underlying problem is the complicated three-dimensional (3D) geometric task description associated with controlling a robotic arm. For example, the control of a robotic arm is often described in computer instructions, which require users to be trained for months or years to master.

SUMMARY

[0004] In one aspect of the disclosure, a system for controlling a robotic arm in a first environment includes a user interaction unit installable on an Augmented Reality (AR) device and a device control system. The user interaction unit is configured to display a simulation model of the robotic arm on a display of the AR device in a second environment, and receive one or more user interactions over the simulation model of the robotic arm, where the one or more interactions indicate a user desired operation of the robotic arm. The device control system is configured to generate robotic arm programming instructions based on the one or more user interactions and transmit the robotic arm programming instructions to the robotic arm, where the robotic arm programming instructions cause the robotic arm to perform the user desired operation.

[0005] Additionally and/or alternatively, the one or more user interactions include a user selection of one or more spatial position tracking points, each tracking point corresponding to an actuatable part of the robotic arm and including one or more device parameters consisting of a movement, a speed, a velocity, a torque and a duration of a pause in time. The user interaction unit may further display one or more device parameters on the display of the AR device while receiving the one or more user interactions, where the one or more device parameters correspond to each of the track points being selected.

[0006] Additionally and/or alternatively, the system also includes a robotic arm simulation system configured to receive from the user interaction unit the one or more user interactions over the simulation model of the robotic arm and generate a trajectory of movement of the robotic arm based on the one or more user interactions. The user interaction unit is further configured to display the trajectory on the display of the AR device.

[0007] Additionally and/or alternatively, the system also includes a device calibration unit configured to generate an alignment between a first coordinate system in the first environment and a second coordinate system in the second environment, and the device control system is configured to use the alignment to generate the robotic arm programming instructions based on the one or more user interactions.

[0008] In one aspect of the disclosure, a method for controlling an intelligent device in a first environment includes displaying a simulation model of the intelligent device on a display of an AR device in a second environment. The method also includes receiving one or more user interactions over the simulation model of the intelligent device, where the one or more interactions indicate a user desired operation of the intelligent device. The method further includes generating programming instructions based on the one or more user interactions, transmitting the programming instructions to the intelligent device, and causing the intelligent device to perform the user desired operation in the first environment. In some examples, the intelligent device may include a robotic arm.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The present solution will be described with reference to the following figures, in which like numerals represent like items throughout the figures.

[0010] FIGs. 1A and 1B illustrate an example robotic arm and AR device in accordance with various examples described herein.

[0011] FIG. 2 illustrates a diagram of a device control system that includes various components in accordance with various examples described herein.

[0012] FIG. 3 illustrates an example of a simulated robotic arm overlaid on the physical robotic arm in accordance with some examples described herein.

[0013] FIG. 4 illustrates an example of a process of robotic arm planning in accordance with various examples described herein.

[0014] FIG. 5 illustrates an example of a process of controlling a robotic arm in accordance with various examples described herein.

DETAILED DESCRIPTION

[0015] As used in this document, the singular forms "a", "an", and "the" include plural references unless the context clearly dictates otherwise. As used in this document, the term "comprising" means "including, but not limited to." Unless defined otherwise, all technical and scientific terms used in this document have the same meanings as commonly understood by one of ordinary skill in the art.

[0016] An intelligent device may generally include any type of equipment, instrument, or machinery that includes its own computing and network communication capabilities. Although various methods and systems are described with examples of a robotic arm, the scope of this patent disclosure is not limited to robotic arms. Examples of an intelligent device may also include a network-connected excavator, a paver, or smart home devices, etc.

[0017] FIGs. 1A and 1B illustrate an example robotic system, such as a robotic arm, and an AR device. In some examples, in FIG. 1A, an intelligent device 120 may include one or more movable parts, such as 124(1, ..., n), 126(1, ..., n), and a control system 134 configured to control the one or more movable parts 124(1, ..., n), 126(1, ..., n). For example, intelligent device 120 may include a robotic arm system that includes multiple movable arms 124(1), 124(2), ... 124(n), which may be joined via one or more rotatable joints, e.g., 126(1), 126(2), ... 126(n). Each of the rotatable joints may be actuated by an actuator which causes the one or more movable arms to move according to a desired path. The actuators associated with the rotatable joints may be controlled by the control system 134. Control system 134 may include a processing device and a non-transitory computer readable medium that contains programming instructions configured to cause the processing device to cause the actuators to actuate the associated rotatable joints. Control system 134 may also be configured to communicate with an external device to receive programming instructions for controlling various parts of the robotic arm 120.

[0018] In some examples, robotic arm 120 may also have a base 128 which hosts the control system 134. Alternatively, control system 134 may be installed in any part of the robot, such as one of the robotic arms. In controlling a robotic arm, the control system needs to know the destination position of the robotic arm, such as the position of the object that the user instructs the robotic arm to grab. This destination position is relative to the coordinate system 122 of the robotic arm. In other words, when a user controls the robot, the user needs to pass the destination position in the coordinate system of the robotic arm to the control system. Alternatively, and/or additionally, a user may also provide the parameters of the various actuators that control the movement of the robotic arm to reach its destination position.

[0019] With further reference to FIG. 1A, an AR device 100 may be used to control the robotic arm 120. In some examples, the AR device may include one or more image sensors 106, one or more lenses 112 and one or more displays 104. The image sensor(s) 106 may be configured to capture the physical environment as the user of the AR device 100 sees through the lens(es) 112. For example, image sensor 106 may include two or more sensors to achieve stereo vision. In some examples, display 104 may also include two displays to display 3D objects. In some examples, display 104 may include a 3D display. In some examples, AR device 100 may include a head-mounted headset 102, with each display mounted to a respective lens 112 of the head-mounted headset 102. Alternatively, AR device 100 may include other wearable devices. In a non-limiting example, when a user wears AR device 100 and aims towards the robotic system 120 through the lenses of AR device 100, the image sensor 106 may capture the robotic system 120 in the coordinate system 110 of the AR device and display the captured robotic system on the display 104 in 3D. An example of an AR device may include the HoloLens from Microsoft. In other examples, an AR device may not be a wearable device. For example, an AR device may have a lens and a display configured to display what is being seen through the lens.

[0020] FIG. 1B illustrates a snapshot of a scene as seen on the display by a user wearing a head-mounted AR device 100. As shown, the AR device 100 may be configured to display a physical robotic system 170. In a non-limiting example, the physical robotic system may be captured by the image sensor 106 (FIG. 1A) and rendered on the display of the AR device. In another non-limiting example, the lens of the AR device may allow light to pass through so that the user can directly see the physical robotic system through the lens.

[0021] In some examples, AR device 100 may include a control system 108. Alternatively, the control system may be installed on other devices, e.g., robotic system 120, or in the cloud. In some examples, AR device 100 and the robotic system 120 may communicate over a communication link 140. For example, each of AR device 100 and robotic system 120 may be configured to communicate with other devices using any suitable communication protocols, e.g., Wi-Fi, Bluetooth, infrared, near-field communication (NFC) or any other wireless or wired communication protocols.

[0022] In some examples, AR device 100 may be configured to communicate with robotic system 120 and align the digital coordinate system of the AR device 100 to the digital coordinate system of the robotic system 120. AR device 100 may be configured to allow a user who wears the AR device 100 to define the desired movements of the robotic arms. Consequently, AR device 100 may generate digital position information in the AR device indicating the desired movement of the robotic arms, and transmit the digital position information to the robotic system to cause the robotic system to operate according to the digital position information. Additionally, in some examples, the control system 108 may be configured to perform a calibration process, for example, by using one or more markers 130(1, ..., n), 132 to align the digital coordinate system of the robotic system to that of the AR device.

[0023] With further reference to FIG. 1A, control system 108 may also be configured to receive desired movements of one or more robotic arms via user interactions. For example, with reference to FIG. 1B, the control system may cause the AR device to display a simulated intelligent device 170a on the display of the AR device. In some examples, the simulated intelligent device 170a is a simulated robotic system of the physical robotic system 170 that can be virtually rendered on the display of the AR device. For example, the system may render the virtual simulated robotic system 170a at the same position as the physical robotic system. The system may also render one or more tracking points 160(1)...160(5), and trajectory 161 on the display of the AR device. These digital virtual contents could hold their positions relative to the coordinate system of the robotic system (or the physical environment) based on the position-tracking capability of the AR device. The system may also allow the user to define the desired movements of the robotic arms with the simulated robotic arm. For example, the AR device may receive user interactions that indicate the desired position of a particular robotic arm or the desired movements of one or more robotic arms. In a non-limiting example, the AR device may receive gestures from the user interactions. For example, the gestures may be the user's hand or finger gestures or eye gazes captured by a camera of the AR device. In another example, the gestures may be the user's motion captured by one or more motion controllers. In another example, the gestures may include special gesture data captured from a special input device, e.g., digital gloves. Additionally, and/or alternatively, the AR device may be configured to display trajectories of the robotic arms on the display of the AR device, where the trajectories are generated based on the desired movements. This provides feedback to the user as to how the robotic arms will react according to the user-defined movements and allows the user to make necessary adjustments to the desired movements.

[0024] FIG. 2 illustrates a diagram of a control system 200, which may be implemented as control system 108 in FIGs. 1A and 1B, and which includes various components in accordance with various examples described herein. In some examples, control system 200 may include a user interaction unit 206, a camera calibration unit 208, a device calibration unit 210, a coordinate alignment unit 212, a robotic simulation system 214, a device control system 216, a 3D spatial renderer 218, and/or a processing device 220. Various component units in control system 200 are further described below. For example, camera calibration unit 208 may be configured to calibrate the image sensors 202, such as calibrating the camera distortion. Existing calibration systems may be available. For example, the camera calibration system may cause the image sensor(s) to scan a standard chessboard to make sure that the captured images from the image sensors match the actual chessboard without significant bias. This step may be implemented using, for example, OpenCV camera calibration.
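
By way of illustration only, the chessboard-based camera calibration mentioned in paragraph [0024] could be sketched with OpenCV's standard calibration API as below. The board dimensions, square size, and image location are assumptions for the sketch, not part of the disclosure.

```python
import glob
import cv2
import numpy as np

# Illustrative sketch of the chessboard calibration step in paragraph [0024].
BOARD_SIZE = (9, 6)          # inner corners per row/column (assumed)
SQUARE_SIZE_MM = 25.0        # physical square size (assumed)

# 3D coordinates of the chessboard corners in the board's own frame.
objp = np.zeros((BOARD_SIZE[0] * BOARD_SIZE[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD_SIZE[0], 0:BOARD_SIZE[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE_MM

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # assumed image location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD_SIZE)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# Camera matrix and distortion coefficients used to undistort captured frames.
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection error:", ret)
```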

[0025] In some examples, device calibration unit 210 may be configured to align the coordinate system of the camera space of the 2D image sensor to the 3D space coordinate system of the AR device. For example, device calibration unit 210 may be configured to calibrate the coordinate system 122 in FIG. 1A with the coordinate system 110 in FIG. 1A. In some examples, device calibration unit 210 may determine the correct relative position and rotation of the camera in the digital space of the device, and use this relative position and rotation to align the camera to the digital space of the intelligent device, such as the robotic arm.

[0026] In some examples, the calibration unit may use optical markers placed in the vicinity of the robotic arm. For example, the optical markers may be placed in the proximity of the robotic system 120. As shown in FIG. 1A, one or more optical markers, e.g., 130(1), 130(2), 130(3), 130(4), are placed on a platform 150 relative to the robotic system 120. The optical markers may be of any suitable shape and color that would allow a camera to distinguish them from the environment background. For example, an optical marker may be a black square, a black circle, or a 2D barcode. The relative positions among the optical markers are fixed and known. Additionally, the sizes of the optical markers are also fixed and known. Returning to FIG. 2, device calibration unit 210 may be configured to receive one or more images of the robotic system and the optical markers that are captured by the image sensor(s) 202, and use the position of the robot, e.g., the base (128 in FIG. 1A), relative to the optical markers to determine the relationship between the coordinate system of the robotic system (e.g., 122 in FIG. 1A) and the coordinate system of the AR device (e.g., 110 in FIG. 1A).

[0027] In a non-limiting example, with reference to FIG. 1A, the base of the robotic system 128 may be placed between the optical markers 130(2) and 130(3). Because the relative positions of 130(2) and 130(3) in the coordinate system 122 may be known (given), and the relative positions of 130(2) and 130(3) in the coordinate system 110 may also be determined from the captured image, the coordinate systems 122 and 110 may be aligned.

[0028] Additionally, and/or alternatively, the optical marker may include a barcode that includes information about the size of the marker. For example, the optical marker may include a 2D barcode, such as a QR code, where the 2D barcode has a physical size (measured by the edges) and also includes data that contains information about the physical size of the barcode in the coordinate system (e.g., 122 in FIG. 1A). In a non-limiting example, a 2D barcode 132 is placed on the robotic arm 124(6). Returning to FIG. 2, device calibration unit 210 may be configured to receive an image of the 2D barcode (e.g., 132 in FIG. 1A) that is captured by the image sensor (e.g., 106 in FIG. 1A), identify the 2D barcode from the captured image and decode data contained in the 2D barcode. Various types of 2D barcode may be used, such as the QR code disclosed by U.S. Patent No. 5,726,435 titled "Optically readable two-dimensional code and method and apparatus using the same." Device calibration unit 210 may further determine the physical size of the barcode embedded in the decoded data. For example, the physical size of the marker may be encoded in the 2D format: some of the binary bits may be used to represent the unit (mm, cm, dm, m, 10 m), and other bits may be used to describe the dimension of the marker. In a non-limiting example, a 5 cm QR code or a 10 cm QR code may be used. In other non-limiting examples, a square 2D barcode with different sizes, or a non-square code, such as rectangular or circular 2D markers with different dimensions or radius units and sizes, may also be used. In some examples, various bit lengths may be used to embed size information in a barcode. For example, 8 bits may be used to indicate the physical size of the barcode. In some examples, 4 bits may be used to indicate the length and 4 bits may be used to indicate the width of the barcode. It is appreciated that other variations may be possible.
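
The size-encoding scheme of paragraph [0028] (a few bits for the unit, the rest for the dimensions) might be sketched as follows. The specific 3/8/8-bit layout is an assumption chosen for illustration; the disclosure does not fix an exact bit layout.

```python
# Illustrative sketch of the marker-size payload described in paragraph [0028]:
# a few bits select the unit (mm, cm, dm, m, 10 m) and the remaining bits hold
# the marker dimensions. The 3/8/8-bit layout below is an assumption.
UNITS_MM = {0: 1.0, 1: 10.0, 2: 100.0, 3: 1000.0, 4: 10000.0}  # mm, cm, dm, m, 10 m

def encode_marker_size(unit_code: int, length: int, width: int) -> int:
    """Pack unit and dimensions into a single integer payload."""
    return (unit_code << 16) | (length << 8) | width

def decode_marker_size(payload: int) -> tuple[float, float]:
    """Return (length_mm, width_mm) of the physical marker."""
    unit_code = (payload >> 16) & 0x07
    length = (payload >> 8) & 0xFF
    width = payload & 0xFF
    scale = UNITS_MM[unit_code]
    return length * scale, width * scale

# Example: a square 5 cm QR code -> unit "cm" (code 1), 5 x 5.
payload = encode_marker_size(1, 5, 5)
print(decode_marker_size(payload))   # (50.0, 50.0) millimetres
```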

[0029] With further reference to FIG. 2, device calibration unit 210 may be configured to determine the size of the barcode in the coordinate system of the AR device (e.g., 110 in FIG. 1A) and compare that with the actual size of the barcode decoded from the captured image. The device calibration unit 210 may use the comparison to determine the relative distance of the camera to the optical marker, and use the relative distance between the camera and the optical marker to align the coordinate system of the intelligent device (e.g., 122 in FIG. 1A) and the coordinate system of the AR device (e.g., 110 in FIG. 1A).
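
One way the size comparison in paragraph [0029] could be realized is the pinhole-camera relation, distance = focal length × physical size / apparent size. The sketch below assumes that relation and uses illustrative numbers; it is not stated in the disclosure.

```python
# Illustrative sketch of paragraph [0029]: estimate the camera-to-marker
# distance by comparing the marker's known physical size with its apparent
# size in the image, using the pinhole-camera relation
#   distance = focal_length_px * physical_size / apparent_size_px.
def estimate_marker_distance(physical_size_mm: float,
                             apparent_size_px: float,
                             focal_length_px: float) -> float:
    """Return the approximate camera-to-marker distance in millimetres."""
    return focal_length_px * physical_size_mm / apparent_size_px

# Example: a 50 mm QR code that appears 120 px wide with a 900 px focal length.
print(estimate_marker_distance(50.0, 120.0, 900.0))   # ~375 mm
```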

[0030] With further reference to FIG. 2, coordinate alignment unit 212 may be configured to determine the relative position of the physical intelligent device, such as the robotic system (e.g., 120 in FIG. 1A), to the AR device (e.g., 100 in FIG. 1A). Coordinate alignment unit 212 may further be configured to generate a transformation between the 3D coordinate system of the AR device (e.g., 110 in FIG. 1A) and the digital 3D coordinate system of the intelligent device (e.g., 122 in FIG. 1A) by using visual alignment markers. In some examples, the relative position detected based on an optical marker may contain measurement errors, and multiple markers may be used to increase the calibration accuracy. In a non-limiting example, multiple markers may be laid out in a specific pattern to effectively reduce the error. For example, as shown in FIG. 1A, four markers 130(1), 130(2), 130(3), 130(4) may be placed on four corners of a square, with the base of the robotic system centered on the square.

[0031] In a non-limiting example, the device calibration unit 210 may receive images captured from the 2D camera of the AR device, where the images contain the visual markers. The calibration unit 210 may locate the position of each marker from the captured images. Coordinate alignment unit 212 may be configured to perform a geometric mapping of the marker to locate the relative position of the marker based on the device calibration and map the marker to the 3D space coordinate system (e.g., 122 in FIG. 1A) of the robotic arm.
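
As an illustration of the geometric mapping described in paragraphs [0030] and [0031], the pose of the robot frame relative to the AR camera could be estimated from several detected markers with OpenCV's solvePnP. The marker coordinates, pixel detections, and intrinsics below are assumed values; the actual alignment procedure is described in the text above, not prescribed by this sketch.

```python
import cv2
import numpy as np

# Illustrative sketch of paragraphs [0030]-[0031]: estimate the pose of the
# robot coordinate system (e.g., 122 in FIG. 1A) relative to the AR device's
# camera from optical markers whose robot-frame positions are known.
marker_points_robot = np.array([      # four markers on the platform, in mm (assumed)
    [-200.0, -200.0, 0.0],
    [ 200.0, -200.0, 0.0],
    [ 200.0,  200.0, 0.0],
    [-200.0,  200.0, 0.0]], dtype=np.float64)

marker_points_image = np.array([      # detected marker centres, in pixels (assumed)
    [310.0, 420.0],
    [610.0, 415.0],
    [620.0, 150.0],
    [305.0, 140.0]], dtype=np.float64)

camera_matrix = np.array([[900.0, 0.0, 480.0],
                          [0.0, 900.0, 270.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# Solve for the transform that maps robot-frame points into the camera frame.
ok, rvec, tvec = cv2.solvePnP(marker_points_robot, marker_points_image,
                              camera_matrix, dist_coeffs)
rotation, _ = cv2.Rodrigues(rvec)

def robot_to_camera(point_robot: np.ndarray) -> np.ndarray:
    """Map a point expressed in the robot frame into the AR camera frame."""
    return rotation @ point_robot + tvec.ravel()

print(robot_to_camera(np.array([0.0, 0.0, 0.0])))   # robot base in the camera frame
```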

[0032] With further reference to FIG. 2, coordinate alignment unit 212 may be configured to additionally perform a dynamic robotic alignment procedure to achieve an alignment with higher accuracy. For example, an additional optical marker may be placed on a robotic arm, such as an end-effector of a robotic arm. Coordinate alignment unit 212 may send commands to control the robotic arm to move one or more joints to predefined positions. A user may be prompted to move the wearable AR device to follow the position of the marker as the robotic arm is moving to the specified spatial position. Coordinate alignment unit 212 may be configured to generate position data of the robotic arm based on the predefined positions. In a non-limiting example, the robotic arm may be controlled to move to the left-most reachable position, and then to the right-most reachable position. Coordinate alignment unit 212 may be configured to determine the middle point between the left-most and right-most positions of the optical marker that are captured by the image sensor(s) as the center of the robotic arm. This dynamic alignment process could be repeated multiple times to refine the result to a desired accuracy.
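
A minimal sketch of the midpoint computation in the dynamic alignment of paragraph [0032] follows; the observation values and function name are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the dynamic alignment in paragraph [0032]: the arm is
# commanded to its left-most and right-most reachable positions, the marker on
# the end-effector is observed at each extreme in the AR frame, and the
# midpoint is taken as the arm's centre. Repeating refines the estimate.
def refine_arm_center(leftmost_obs: list[np.ndarray],
                      rightmost_obs: list[np.ndarray]) -> np.ndarray:
    """Average marker observations at each extreme and return their midpoint."""
    left = np.mean(leftmost_obs, axis=0)
    right = np.mean(rightmost_obs, axis=0)
    return (left + right) / 2.0

# Example with assumed observations (metres, AR-device frame).
left_obs = [np.array([-0.42, 0.10, 1.20]), np.array([-0.41, 0.11, 1.19])]
right_obs = [np.array([0.44, 0.09, 1.21]), np.array([0.43, 0.10, 1.22])]
print(refine_arm_center(left_obs, right_obs))   # estimated centre of the arm
```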

[0033] With further reference to FIG. 2, robotic simulation system 214 may be configured to provide a simulation model of a robotic system (e.g., 120 in FIG. 1A). The simulation of the robotic system may include a digitally simulated robotic system for displaying in the AR device (e.g., 100 in FIG. 1A). In creating the simulated robot, the AR device may simulate the structure of the robotic arm, the joint behaviors and also the path-generating algorithms. The simulation system may be capable of parsing and processing the robotic programming instructions and simulating the result based on the instructions. In such a case, the behaviors of the robotic arm could be predicted and verified with the simulated robotic system.

[0034] In some examples, the simulated robotic system may include information on the possible moves, positions, and maneuvers of the target robotic arms in the digital device. To achieve that, a digitally simulated robotic arm, for example, is created and displayed in the AR device. This simulated robotic arm matches the physical dimensions of the actual robotic arm to be controlled, and also has similar actuator parameters. In some examples, robotic simulation system 214 may receive user interactions with gestures, digital gloves or other kinds of motion controllers that allow a user to manipulate the simulated robotic arm in the AR device to define a specified movement of the physical robotic arm, or to control the poses of the robotic arm directly. The specified movement may include a specified position of one or more robotic arms in the AR coordinates, e.g., 110 in FIG. 1A. Robotic simulation system 214 may convert the specified position in the AR space to the position in the physical space of the robotic arm based on the alignment between the coordinates of the robotic arm, e.g., 122 in FIG. 1A, and the coordinates of the AR device, e.g., 110 in FIG. 1A.

[0035] Additionally, the digitally simulated robotic arm may be capable of demonstrating the desired movement of the physical robotic arm based on robotic programming instructions without actually powering on the robotic arm. For example, the robotic simulation system may be configured to receive a user-defined destination position and desired path, and based on the destination position and desired path, determine the required movements of one or more robotic arms to reach the destination position. In some examples, these determined movements may include actuations of the robotic arm joints and movements computed via inverse kinematic algorithms. Additionally, the robotic simulation system may be configured to calculate a trajectory of the robotic arm and have the path of the movement rendered in the AR device based on the movements which are determined via inverse kinematic algorithms. Because the simulated robotic arm may facilitate the simulation of the control of movements, the behaviors of the physical robotic arm may be predicted and verified on the user's AR device without requiring the physical robotic arm to be powered on.
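
The disclosure does not specify a particular inverse kinematic algorithm; as a minimal illustration of the IK step in paragraph [0035], the following is the textbook analytic solution for a planar two-link arm with assumed link lengths, not the simulation system's own implementation.

```python
import math

# Illustrative sketch of the inverse-kinematics step in paragraph [0035]:
# given a user-defined destination for the end-effector, compute joint angles
# that reach it. Planar two-link arm, assumed link lengths.
L1, L2 = 0.40, 0.30   # link lengths in metres (assumed)

def two_link_ik(x: float, y: float) -> tuple[float, float]:
    """Return (shoulder, elbow) joint angles in radians reaching point (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if abs(cos_elbow) > 1.0:
        raise ValueError("target is out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(L2 * math.sin(elbow),
                                             L1 + L2 * math.cos(elbow))
    return shoulder, elbow

# Example: joint angles needed to place the end-effector at (0.5, 0.2).
print(two_link_ik(0.5, 0.2))
```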

[0036] With further reference to FIG. 2, 3D spatial renderer 218 may be configured to render the digitally simulated robotic arm on the display of the AR device, such as 104 in FIG. 1A. The rendering of the simulated robotic arm on the display will allow a user to observe and make adjustments to the robotic arms by defining the movement of the robotic arm. The rendering may also allow the user to see a predicted trajectory of the robotic arm as the result of the defined movement. This provides user feedback on the controlling of the robotic arm and allows the user to make necessary changes to the movement of the robotic arm based on the behavior prediction.

[0037] Robotic simulation system 214 and 3D spatial renderer 218 facilitate the rendering of the various positions and the trajectory of the movements of the robotic arm as viewable to an operator in the AR device, allowing the operator to observe the trajectory in the 3D physical space around the robot, and also observe the 3D path from any angle. In such a way, the operator could determine, without powering on the robotic arm hardware, whether the robotic arm will reach the target or may hit obstacles unexpectedly, and change the movement of the simulated robot, as needed, before any accident could happen.

[0038] FIG. 3 further illustrates an example of a simulated robotic arm overlaid on the physical robotic arm according to some examples described herein. In FIG. 3, both the simulated robotic arms 320 and the physical robotic arms 300 are displayed in the AR device. With the simulated robotic arms rendered on the AR device, the operator may actually see the robotic programming instructions being executed without turning on the physical robotic arm. Via the user interaction unit (206 in FIG. 2), a user may interact with the simulated robotic arms to control the movements of the physical robotic arms.

[0039] In some examples, such as shown in FIG. 3, the simulated robotic arms may be shown as holograms in the 3D space and overlaid on the physical robotic arms captured by the image sensor(s), such as 106 in FIG. 1A. The user interaction unit may receive user interactions that allow a user to move a robotic arm of the simulated robotic arm to a desired position relative to a target object, e.g., 308, 310, which are captured by the image sensor(s) of the AR device.

[0040] In a non-limiting example, the user interaction unit (e.g., 206 in FIG. 2) may capture user inputs, such as a gesture via the image sensor or controller inputs with spatial position information. For example, a user may move the user's finger to a starting position, e.g., point 328 on the robotic arm 324 of the simulated robotic arm which the user desires to move, and may make a gesture (e.g., a pinch) to indicate a start of a move from that position. The user may subsequently move the finger to a destination position, e.g., the handle bar 330 of object 308, and pause at the destination position to indicate an end of the move. The end of the move may also be indicated by a user gesture, e.g., a second pinch. In response, the 3D spatial renderer (e.g., 218 in FIG. 2) may move the robotic arm 324 away from the original position towards the destination position 330. The user-defined movement of the robotic arm 324 may cause the simulated robotic arm to move to reach the destination position. As described in this patent disclosure, the robotic simulation system (e.g., 214 in FIG. 2) may receive the user-defined movement of the robotic arm. The user-defined movement may already be converted into robotic programming instructions that can be understood by the robotic simulation system. Alternatively, the user-defined movement may be converted by the user interaction unit (e.g., 206 in FIG. 2) into the robotic programming instructions to be received by the robotic simulation system. Further, as described in the present patent document, the 3D spatial renderer (e.g., 218 in FIG. 2) may display a trajectory of the movement of the simulated robotic arm on the display of the AR device based on the robotic programming instructions.
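
The pinch-to-start, pinch-to-end interaction of paragraph [0040] could be modeled with a small gesture recorder as sketched below. The event names and the Move record are assumptions; a real AR runtime would supply its own gesture events.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the interaction in paragraph [0040]: a first pinch at
# a point on the simulated arm starts a move, and a second pinch (or a pause)
# at the destination ends it.
@dataclass
class Move:
    start: tuple[float, float, float]
    end: tuple[float, float, float] | None = None

@dataclass
class PinchMoveRecorder:
    pending: Move | None = None
    moves: list[Move] = field(default_factory=list)

    def on_pinch(self, position: tuple[float, float, float]) -> None:
        """First pinch starts a move; second pinch finishes it."""
        if self.pending is None:
            self.pending = Move(start=position)
        else:
            self.pending.end = position
            self.moves.append(self.pending)
            self.pending = None

# Example: pinch at a point on arm 324, then pinch at the handle bar 330.
recorder = PinchMoveRecorder()
recorder.on_pinch((0.10, 0.35, 0.80))   # start of the move
recorder.on_pinch((0.55, 0.20, 0.60))   # destination
print(recorder.moves)
```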

[0041] With further reference to FIG. 3, the user interaction unit (e.g., 206 in FIG. 2) may be configured to receive multiple tracking points (e.g., 160(1)...160(5) in FIG. 1B) from the user to facilitate a user-defined path of movement of the simulated robotic arm. In some examples, the user may manipulate multiple spatial position tracking points on the hologram of the simulated robotic arms. These spatial position tracking points may be manipulated via the user interaction unit (e.g., 206 in FIG. 2) to describe the 3D behaviors of the simulated robotic arms in 3D space. In some examples, these position tracking points are holograms which hold the spatial position and rotation in the 3D space, and each tracking point may include a position, a rotation, a velocity, a pause duration and/or a command to the end effector. The end effector commands associated with these points may include opening or closing of the claws, checking the end effector sensors, etc.
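
As a sketch of the tracking-point record described in paragraph [0041], the following data structure holds a position, rotation, velocity, pause duration, and end-effector command; the field names, types, and example values are assumptions for illustration.

```python
from dataclasses import dataclass, field
from enum import Enum

# Illustrative sketch of the spatial position tracking points in paragraph [0041].
class EndEffectorCommand(Enum):
    NONE = 0
    OPEN_CLAW = 1
    CLOSE_CLAW = 2
    CHECK_SENSORS = 3

@dataclass
class TrackingPoint:
    position: tuple[float, float, float]        # in the robot coordinate system
    rotation: tuple[float, float, float]        # e.g. roll/pitch/yaw in radians
    velocity: float = 0.1                       # metres per second
    pause_s: float = 0.0                        # duration of a pause in time
    command: EndEffectorCommand = EndEffectorCommand.NONE

@dataclass
class UserDefinedPath:
    # Order of execution is the list order, as chosen via the user interactions.
    points: list[TrackingPoint] = field(default_factory=list)

# Example: move above an object, pause, then close the claw to grab it.
path = UserDefinedPath(points=[
    TrackingPoint((0.4, 0.1, 0.3), (0.0, 0.0, 0.0), velocity=0.2),
    TrackingPoint((0.4, 0.1, 0.1), (0.0, 0.0, 0.0), pause_s=0.5,
                  command=EndEffectorCommand.CLOSE_CLAW),
])
print(len(path.points))
```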

[0042] In some examples, the spatial position tracking points may be manipulated and adjusted via the user interaction unit of the AR device, such as by controllers, menus, gestures or voice commands, etc. For example, a user looking through the lens of a head-mounted AR device, such as Microsoft's HoloLens, may move a finger to a desired spatial position tracking point and select it with a gesture, such as a pinch. Upon the spatial position tracking point being selected, the user interaction unit may display a drop-down menu on the display of the AR device to allow the user to select a movement (e.g., rotation or shift) and velocity. In some examples, the user may use a controller connected to the AR device to define and manipulate the spatial position tracking points. In some examples, the user interaction unit may also include a voice command unit configured to receive and recognize user voice commands.

[0043] In some examples, the user interaction unit may be configured to receive the parameters for each spatial position tracking point and transmit these parameters to the robotic simulation system (e.g., 214 in FIG. 2). Additionally, the user interaction unit may allow the user to define the order of execution for the spatial position tracking points. For example, the user may point to each of the spatial position tracking points and select an order of execution. Alternatively, the user may select each of the spatial position tracking points in the order in which they are to be executed. The user interaction unit may send the order of execution for multiple spatial position tracking points to the robotic simulation system (e.g., 214 in FIG. 2) to cause the simulated robotic arms to generate a trajectory (e.g., 161 in FIG. 1B) that includes the moving path sequence to move through all of the position tracking points. Consequently, the robotic arm movement trajectory will be changed once the position and rotation of the tracking points are changed. The "playback" of the multiple spatial position tracking points in the robotic simulation system provides an intuitive way to control a complicated path of the robotic arm.

[0044] Additionally, and/or alternatively, the robotic simulation system may also be configured to provide additional information that can be displayed on the display of the AR device. For example, the robotic simulation system may provide the electric current of each running actuator, the real-time torque of each motor, the motor wear of each joint, and/or the overall machine loads, etc., to be overlaid on the robotic arms. These data may be spatial, dynamic and real time. For example, with this information overlaid on top of the robotic arm on the display of the AR device, the operator could see the current loads and power consumption, and then optimize the robotic arm path to lower the power consumption to achieve a higher efficiency of the robotic arm usage, which may result in at least an extension of the life span of the robotic arm. For example, if a portion of the trajectory path is red, which indicates that the power consumption has exceeded a threshold while following this path, the user may make some adjustment to the path or target positions (e.g., raise or lower the target position) so that the path does not show red or the path shows green, which means that the power consumption of the new path is below the threshold.
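
The red/green trajectory feedback of paragraph [0044] might be sketched as below; the power model (sum of |torque × angular velocity|) and the threshold value are assumptions, not values from the disclosure.

```python
# Illustrative sketch of paragraph [0044]: colour each segment of the planned
# trajectory red when its estimated power consumption exceeds a threshold, and
# green otherwise, so the operator can adjust the path.
POWER_THRESHOLD_W = 150.0   # assumed threshold in watts

def estimate_segment_power(torques_nm: list[float],
                           joint_speeds_rad_s: list[float]) -> float:
    """Rough mechanical power estimate: sum of |torque * angular velocity|."""
    return sum(abs(t * w) for t, w in zip(torques_nm, joint_speeds_rad_s))

def colour_trajectory(segments: list[tuple[list[float], list[float]]]) -> list[str]:
    """Return 'red' or 'green' per segment for overlay on the AR display."""
    colours = []
    for torques, speeds in segments:
        power = estimate_segment_power(torques, speeds)
        colours.append("red" if power > POWER_THRESHOLD_W else "green")
    return colours

# Example: two segments; the second exceeds the threshold and shows red.
segments = [([20.0, 15.0], [1.0, 2.0]),      # 20 + 30 = 50 W  -> green
            ([60.0, 80.0], [1.5, 1.2])]      # 90 + 96 = 186 W -> red
print(colour_trajectory(segments))
```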

[0045] Returning to FIG. 2, device control system 216 may be configured to serialize executable robotic programming instructions for controlling the robotic arm (e.g., 120 in FIG. 1A) based on the multiple spatial position tracking points and the order of execution of these points defined by the user. Device control system 216 may transmit the robotic programming instructions to the robotic control system 222 of the robotic arm to control the physical robotic arm. Upon execution of the robotic programming instructions on the physical robot, the physical robotic arm may perform the same movement as provided by the robotic simulation system.
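
The serialization and transmission step of paragraph [0045] could look like the sketch below. JSON over a TCP socket, the host/port, and the instruction schema are all assumptions for illustration; the disclosure only states that instructions are serialized and transmitted (e.g., wirelessly) to the robotic control system.

```python
import json
import socket

# Illustrative sketch of paragraph [0045]: serialise the ordered tracking
# points into programming instructions and send them to the robot's control
# system (e.g., 222 in FIG. 2).
def serialize_instructions(points: list[dict]) -> bytes:
    """Turn ordered tracking points into a byte payload for the robot."""
    program = {"version": 1, "instructions": [
        {"op": "move_to", **point} for point in points]}
    return json.dumps(program).encode("utf-8")

def transmit(payload: bytes, host: str = "192.168.1.50", port: int = 9000) -> None:
    """Send the serialised program to an assumed robot-controller endpoint."""
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(payload)

ordered_points = [
    {"position": [0.4, 0.1, 0.3], "velocity": 0.2, "pause_s": 0.0},
    {"position": [0.4, 0.1, 0.1], "velocity": 0.1, "pause_s": 0.5},
]
payload = serialize_instructions(ordered_points)
# transmit(payload)   # uncomment once a robot controller is listening
print(payload)
```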

[0046] Returning to FIG. 3, the control system (e.g., 200 in FIG. 2) may be configured to generate a 3D environment to be overlaid on the display of the AR device. For example, the system may use one or more sensors (e.g., image or laser) to perform a 3D scan to detect 3D features of the physical environment with depth-sensing ability and generate a 3D mesh to represent the physical environment. For example, the system may use 3D scanning to determine the geometry, position, and size of objects around the robot, or unexpected obstacles that may be in the way of a robotic arm movement.

[0047] In some examples, the robotic simulation system (e.g., 214 in FIG. 2) may be configured to detect whether a user-defined movement of a robotic arm will cause a collision of the robotic arm with an object in the physical environment based on the 3D geometry information of the physical environment. If a potential collision is detected, the robotic simulation system may use a collision avoidance algorithm to generate an alternative trajectory to get around the obstacles. Alternatively, and/or additionally, the robotic simulation system may also display the obstacle mesh on the display of the AR device so that the user may make adjustments to the planned path by moving the spatial position tracking points.

[0048] FIG. 4 further illustrates an example of a process for robotic planning. The process may be implemented in the various systems described in FIGs. 1-3. In some examples, a process 400 may include receiving all spatial position tracking points at 402, generating joint positions for the current tracking point (which may be initialized as the first tracking point) at 404 and generating joint positions for the next tracking point at 406, and causing each joint to move for a time duration (e.g., a Δt) at 408. In generating the joint positions at 404 and 406, the process may use an inverse kinematics (IK) algorithm to determine the joint positions associated with the moving of a tracking point. In moving each joint at 408, the movement may be based on the device parameters selected by the user, such as the velocity and speed. After each move at 408, the process may include detecting collision at 410 to determine whether a movement may cause a potential collision with an obstacle in the physical environment at 412. If a collision is detected, the process may include adding a new tracking point to avoid the obstacle at 414 and continuing at 402. If a collision is not detected, the process may proceed with adding the current tracking position to the trajectory at 416. The process may further include determining whether the last tracking point is reached at 418. If the last tracking point is reached, the process may end at 420. If the last tracking point is not reached, the process may update the current tracking point as the next tracking point and continue at 404.
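
A minimal sketch of the planning loop of FIG. 4 (process 400) follows. The IK, collision, and avoidance routines are stubs standing in for the simulation system's own algorithms, and the point and clearance values are assumptions; the loop structure mirrors the numbered steps described above.

```python
import math
from dataclasses import dataclass

# Illustrative sketch of process 400 in FIG. 4: walk the ordered tracking
# points, compute joint positions via IK, check each segment for a potential
# collision, insert an avoidance point if needed, and otherwise record the
# position in the trajectory.
@dataclass
class Point3:
    x: float
    y: float
    z: float

def dist(a: Point3, b: Point3) -> float:
    return math.dist((a.x, a.y, a.z), (b.x, b.y, b.z))

def midpoint(a: Point3, b: Point3) -> Point3:
    return Point3((a.x + b.x) / 2, (a.y + b.y) / 2, (a.z + b.z) / 2)

def inverse_kinematics(target: Point3) -> list[float]:
    """Stub IK: a real implementation returns joint angles reaching `target`."""
    return [target.x, target.y, target.z]          # placeholder joint vector

def segment_collides(a: Point3, b: Point3, obstacles: list[Point3],
                     clearance: float = 0.10) -> bool:
    """Stub collision test: does the segment pass too close to an obstacle?"""
    mid = midpoint(a, b)
    return any(dist(mid, o) < clearance for o in obstacles)

def avoidance_point(a: Point3, b: Point3) -> Point3:
    """Stub: lift the midpoint of the colliding segment to go over the obstacle."""
    mid = midpoint(a, b)
    return Point3(mid.x, mid.y, mid.z + 0.3)

def plan_trajectory(tracking_points: list[Point3],
                    obstacles: list[Point3]) -> list[list[float]]:
    points = list(tracking_points)               # 402: receive all tracking points
    trajectory: list[list[float]] = []
    i = 0
    while i < len(points) - 1:                   # 418: stop at the last point
        a, b = points[i], points[i + 1]          # 404/406: current and next point
        if segment_collides(a, b, obstacles):    # 410/412: potential collision?
            points.insert(i + 1, avoidance_point(a, b))   # 414: add a new point
            continue                             # re-plan from the updated list
        trajectory.append(inverse_kinematics(a)) # 408/416: move, record position
        i += 1
    trajectory.append(inverse_kinematics(points[-1]))
    return trajectory

path = plan_trajectory([Point3(0.0, 0.0, 0.2), Point3(0.4, 0.0, 0.2)],
                       obstacles=[Point3(0.2, 0.0, 0.2)])
print(len(path), "trajectory points")
```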

[0049] FIG. 5 illustrates an example of a process for controlling a robot, which process may be implemented in various embodiments described in FIGs. 1-4. In some examples, a process 500 may include aligning the coordinate system of the AR device to the coordinate system of the robotic arm at 504. This process may be implemented by the coordinate alignment unit 212 in FIG. 2. In aligning the coordinates at 504, process 500 may optionally include refining the alignment of the coordinates at 506 by allowing the user to operate the robotic arm according to a predefined movement, and comparing the position of the robotic arm to the predefined movement. Optionally, process 500 may include calibrating one or more cameras of the AR device at 502.

[0050] With further reference to FIG. 5, process 500 may further include displaying a simulated robotic arm in the AR device at 508. This process may be implemented, for example, in user interaction unit 206 of FIG. 2. The process may include obtaining the simulation model of the robotic arm at 510. As an example, robotic simulation system 214 in FIG. 2 may be used to obtain such a simulation model of the robotic arm. Process 500 may further include receiving user interaction in the AR device at 514. For example, user interaction unit 206 in FIG. 2 may implement such a process to display the simulated robotic arm on the display of the AR device and also receive user commands. During user interactions, process 500 may display device control parameters at 516, such as motor and torque control, to assist the user in defining operations of the robotic arm. Optionally, process 500 may also include mapping the 3D physical environment to the AR device at 512 and displaying the physical environment (e.g., a mesh model) on the display of the AR device.

[0051] Process 500 may further include determining an operation plan of the robotic arm at 518 via user interaction. For example, user interaction unit 206 and robotic simulation system 214 in FIG. 2 may be configured to display a simulated robotic arm to the user, receive a user-defined operation plan (e.g., movement) of the robotic arm, and display a trajectory of the simulated robotic arm according to the defined operation. Process 500 may further include transmitting the operation plan to the robotic arm to cause the robotic arm to operate according to the user-defined operation. The processes in FIGs. 4 and 5 may be implemented in the various systems described in FIGs. 1-3. For example, a process may function as a user interface component of the system that facilitates user interfaces to allow a user to control the intelligent device.

[0052] Various embodiments in FIGs. 1-5 may be implemented to perform a variety of tasks, such as remote robotic planning (remote robotics). In some examples, the control system may provide all of the 3D information of the robotic arm and the surrounding mesh so that an operator could move the tracking points to plan the robotic arm movement from a remote location without requiring the operator to be on site where the actual robotic arm is located. Alternatively, and/or additionally, the operator may work on site, and use a remote assistant to help remotely. The remote assistant may see what the onsite operator sees and may also control the tracking points and make changes to the plan to provide assistance. The robotic programming instructions generated by the control system (e.g., device control system 216 in FIG. 2) may be transmitted wirelessly to the actual robotic arm for controlling the actual robotic arm.

[0053] In robotic planning, the system may further be configured to limit the movement of the robotic arm. For example, on a construction site, the robotic arm may preferably be movable only within a safe zone. The system may be configured to use visual indicators to generate a safety bounding box on the display of the AR device. In a non-limiting example, the safety bounding box may be a spatial area within which the robotic arm may safely operate. The system may create holographic areas to display the safety bounding box in the AR device. In some examples, the system may be configured to also keep track of the operator's position through the AR device and create the safety bounding box based on the real-time position of the operator. This will create a better robot-human collaboration environment and keep the operators safe.
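
A minimal sketch of the safety bounding box check described in paragraph [0053] follows; the axis-aligned box, the operator clearance distance, and the example coordinates are assumptions for illustration.

```python
from dataclasses import dataclass

# Illustrative sketch of the safety bounding box in paragraph [0053]: an
# axis-aligned zone within which the arm may operate, with planned positions
# also required to keep a clearance from the tracked operator position.
@dataclass
class SafetyBox:
    min_corner: tuple[float, float, float]
    max_corner: tuple[float, float, float]

def inside(box: SafetyBox, p: tuple[float, float, float]) -> bool:
    return all(lo <= v <= hi for lo, v, hi in zip(box.min_corner, p, box.max_corner))

def position_allowed(box: SafetyBox,
                     arm_position: tuple[float, float, float],
                     operator_position: tuple[float, float, float],
                     clearance: float = 0.5) -> bool:
    """Allow a planned arm position only if it is in the box and clear of the operator."""
    too_close = sum((a - o) ** 2
                    for a, o in zip(arm_position, operator_position)) < clearance ** 2
    return inside(box, arm_position) and not too_close

box = SafetyBox((-1.0, -1.0, 0.0), (1.0, 1.0, 1.5))
print(position_allowed(box, (0.4, 0.1, 0.3), operator_position=(1.2, 0.0, 0.0)))  # True
print(position_allowed(box, (0.9, 0.1, 0.3), operator_position=(1.2, 0.0, 0.0)))  # False (too close)
```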

[0054] Various embodiments described in FIGs. 1-5 provide advantages in controlling an intelligent device, such as a robotic arm. For example, the described methods and systems may allow a user to control any suitable intelligent device in an augmented reality (AR) environment with hand gestures, eye gaze and other natural interactions. The present disclosure may facilitate locating the exact position of a robotic arm and accurately aligning the coordinate system of the AR device to that of the robotic arm. This makes it feasible to plan the path of the physical robotic arm in the AR device without requiring the user to power on the robotic arm. Robotic programming instructions for a planned operation may be transmitted to the robotic arm for controlling the robotic arm. High precision of the robotic arm may be achieved since all of the positions are generated from digital simulation. Further, the robotic planning may be done with the aid of an AR device without requiring the user to have programming skills to program the robotic arm.

[0055] Other advantages of the described system also include the capability to allow a user to pause in time, simulate a planned operation (by displaying a trajectory in the AR device) and tune a particular operation according to the trajectory of the robotic arm. The capability of pausing a movement (during the movement) would not be possible when working with an actual robotic arm, because the part of a robotic arm that is moving cannot be stopped in time due to the momentum of the movement or mechanical limitations of the robotic arm. The disclosed system and method are advantageous in cases in which the action is very time sensitive or in which multiple robots are involved and need to collaborate based on timing. Other advantages include the overlay of the 3D mesh, which makes it easy for operators to plan for a complicated environment and avoid collisions with obstacles.

[0056] The described features, advantages and characteristics of the present solution may be combined in any suitable manner in one or more embodiments. It is appreciated, in light of the description herein, that the present solution can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present solution. It should therefore be understood that the present solution is not limited to the particular embodiments described herein, but is intended to include all changes, modifications, and all combinations of various embodiments that are within the scope and spirit of the invention as defined in the claims.