


Title:
ROBOT CONTROL SYSTEM, MOTION DATA CREATION APPARATUS AND ITS CREATING METHOD
Document Type and Number:
WIPO Patent Application WO/2011/067621
Kind Code:
A1
Abstract:
To plan a trajectory of joint angles reaching the target value by providing only the final target value of the task. A robot control system in accordance with the present invention includes a motion data creation apparatus and the robot. When the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

Inventors:
DALIBARD SEBASTIEN (FR)
NAKHAEI SARVEDANI ALIREZA (FR)
LAUMOND JEAN-PAUL (FR)
MIYAGAWA TORU (BE)
HASHIMOTO KUNIMATSU (JP)
Application Number:
PCT/IB2009/007954
Publication Date:
June 09, 2011
Filing Date:
December 04, 2009
Assignee:
TOYOTA MOTOR CO LTD (JP)
CENTRE NAT RECH SCIENT (FR)
DALIBARD SEBASTIEN (FR)
NAKHAEI SARVEDANI ALIREZA (FR)
LAUMOND JEAN-PAUL (FR)
MIYAGAWA TORU (BE)
HASHIMOTO KUNIMATSU (JP)
International Classes:
B25J9/16; B62D57/032
Foreign References:
JP2008036761A (2008-02-21)
JP2009214268A (2009-09-24)
Other References:
YOSHIDA E ET AL: "Motion autonomy for humanoids: experiments on HRP-2 No. 14", COMPUTER ANIMATION AND VIRTUAL WORLDS JOHN WILEY & SONS LTD. UK, vol. 20, no. 5-6, September 2009 (2009-09-01), pages 511 - 522, XP002605537, ISSN: 1546-4261
KUFFNER J J JR ET AL: "Dynamically-stable motion planning for humanoid robots", AUTONOMOUS ROBOTS KLUWER ACADEMIC PUBLISHERS NETHERLANDS, vol. 12, no. 1, January 2002 (2002-01-01), pages 105 - 118, XP002605538, ISSN: 0929-5593
EIICHI YOSHIDA ET AL: "Planning 3-D Collision-Free Dynamic Robotic Motion Through Iterative Reshaping", IEEE TRANSACTIONS ON ROBOTICS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US LNKD- DOI:10.1109/TRO.2008.2002312, vol. 24, no. 5, 1 October 2008 (2008-10-01), pages 1186 - 1198, XP011234183, ISSN: 1552-3098
EIICHI YOSHIDA ET AL: "Smooth Collision Avoidance: Practical Issues in Dynamic Humanoid Motion", INTELLIGENT ROBOTS AND SYSTEMS, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 October 2006 (2006-10-01), pages 827 - 832, XP031006173, ISBN: 978-1-4244-0258-8
FERRE E ET AL: "An iterative diffusion algorithm for part disassembly", ROBOTICS AND AUTOMATION, 2004. PROCEEDINGS. ICRA '04. 2004 IEEE INTERN ATIONAL CONFERENCE ON NEW ORLEANS, LA, USA APRIL 26-MAY 1, 2004, PISCATAWAY, NJ, USA,IEEE, US LNKD- DOI:10.1109/ROBOT.2004.1307547, vol. 3, 26 April 2004 (2004-04-26), pages 3149 - 3154, XP010768589, ISBN: 978-0-7803-8232-9
BOUYARMANE K ET AL: "Potential field guide for humanoid multicontacts acyclic motion planning", ROBOTICS AND AUTOMATION, 2009. ICRA '09. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 12 May 2009 (2009-05-12), pages 1165 - 1170, XP031509587, ISBN: 978-1-4244-2788-8
VAHRENKAMP N ET AL: "Humanoid motion planning for dual-arm manipulation and re-grasping tasks", INTELLIGENT ROBOTS AND SYSTEMS, 2009. IROS 2009. IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 10 October 2009 (2009-10-10), pages 2464 - 2470, XP031580983, ISBN: 978-1-4244-3803-7
J.J. KUFFNER; S. KAGAMI; K. NISHIWAKI; M. INABA; H. INOUE: "Dynamically-stable motion planning for humanoid robots", AUTONOMOUS ROBOTS, vol. 12, 2002, XP002605538
DMITRY BERENSON; SIDDHARTHA S. SRINIVASA; DAVE FERGUSON; ALVARO COLLET; JAMES J. KUFFNER: "Manipulation Planning with Workspace Goal Regions", ICRA, 2009
MARC TOUSSAINT; MICHAEL GIENGER; CHRISTIAN GOERICK: "Optimization of sequential attractor-based movement for compact behaviour generation", HUMANOIDS, 2007
HIROKI SANADA; EIICHI YOSHIDA; KAZUHITO YOKOI: "Passing Under Obstacles with Humanoid Robots.", IROS, 2007
NIKOLAUS VAHRENKAMP; DMITRY BERENSON; TAMIM ASFOUR; JAMES KUFFNER; RUDIGER DILLMANN: "Humanoid Motion Planning for Dual-Arm Manipulation and Re-Grasping Tasks", IROS, 2009
Attorney, Agent or Firm:
INTES, Didier (158 rue de l'Université, Paris Cedex 07, FR)
Claims:
CLAIMS

[Claim 1]

A robot control system comprising:

a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and

the robot that moves according to a joint angle created by the motion data creation apparatus;

wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

[Claim 2]

The robot control system according to Claim 1, wherein the prioritized task is set for a plurality of portions or a center of mass of the robot.

[Claim 3]

The robot control system according to Claim 1 or 2, wherein the motion data creation apparatus is configured to perform the whole body cooperation calculation such that the difference between the target value of a constraint point specified by the prioritized task and the current value converges by generating a third node from a first node extracted in a random manner or in a predetermined manner and a second node that is a node on the trajectory data and closest to the first node, and using a constraint point obtained from a joint angle corresponding to the third node as an initial value.

[Claim 4]

The robot control system according to any one of Claims 1 to 3, wherein the motion data creation apparatus is configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 5]

The robot control system according to any one of Claims 1 to 4, wherein

the motion data creation apparatus further comprises an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data.

[Claim 6]

A robot control system comprising:

a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and

the robot that moves according to a joint angle created by the motion data creation apparatus;

wherein the motion data creation apparatus calculates, in a configuration space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

[Claim 7]

The robot control system according to Claim 6, wherein the prioritized task is set for a plurality of portions or a center of mass of the robot.

[Claim 8]

The robot control system according to Claim 6 or 7, wherein the motion data creation apparatus is configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 9]

The robot control system according to any one of Claims 6 to 8, wherein

the motion data creation apparatus comprises an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and

the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.

[Claim 10]

A motion data creation apparatus that creates a joint angle to cause a robot to move, wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[Claim 11]

The motion data creation apparatus according to Claim 10, wherein the motion data creation apparatus is configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 12]

The motion data creation apparatus according to Claim 10 or 11, wherein

the motion data creation apparatus further comprises an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data.

[Claim 13]

A motion data creation apparatus that creates a joint angle to cause a robot to move, wherein the motion data creation apparatus calculates, in a configuration space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[Claim 14]

The motion data creation apparatus according to Claim 13, wherein the motion data creation apparatus is configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 15]

The motion data creation apparatus according to Claim 13 or 14, wherein

the motion data creation apparatus comprises an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and

the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.

[Claim 16]

A motion data creation method for creating a joint angle to cause a robot to move, wherein when trajectory data of a joint angle reaching from an initial posture of the robot to a target posture is searched for in a configuration space by using a search technique based on sampling, a joint angle satisfying a constraint is calculated by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[Claim 17]

The motion data creation method according to Claim 16, wherein optimization processing of the trajectory data is further performed by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 18]

The motion data creation method according to Claim 16 or 17, wherein whether or not the robot in a posture obtained from a calculated joint angle collides with an object within an environment in the configuration space is checked by referring to data including shape/configuration data of the object within the environment and the robot and geometrical configuration data of the robot, and when they collide, the calculated joint angle is not added newly to the trajectory data.

[Claim 19]

A motion data creation method for creating a joint angle to cause a robot to move, wherein a joint angle satisfying a constraint is calculated in a configuration space by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[Claim 20]

The motion data creation method according to Claim 19, wherein optimization processing of the calculated joint angle is further performed by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[Claim 21]

The motion data creation method according to Claim 19 or 20, wherein whether or not the robot in a posture obtained from a calculated joint angle collides with an object within an environment in the configuration space is checked by referring to data including shape/configuration data of the object within the environment and the robot and geometrical configuration data of the robot, and when they collide, the calculated joint angle is not adopted.

Description:
DESCRIPTION

Title of Invention

ROBOT CONTROL SYSTEM, MOTION DATA CREATION APPARATUS AND ITS CREATING METHOD

Technical Field

[0001]

The present invention relates to a robot control system that makes, for example, a motion plan of a robot to which a plurality of tasks are provided, a motion data creation apparatus and its creating method.

Background Art

[0002]

To realize a useful humanoid robot for environments such as a home and a hospital, the robot has to be able to operate autonomously. In order to autonomously create posture or motion of a robot, the created posture or motion must be collision free with objects, except for the phase of physical interaction.

[0003]

As a technology to optimize the motion creation of a robot and to optimize the generated motion, technologies disclosed in Non-patent documents 1 to 5 have been known. Further, Non-patent document 6 and Patent documents 1 and 2 disclose technologies using task control with priorities. Citation List

Patent Literature

[0004]

PTL 1: Japanese Unexamined Patent Application Publication No. 2008-36761

PTL 2: Japanese Unexamined Patent Application Publication No. 2009-214268

Non Patent Literature

[0005]

NPL 1: Marcelo Kallmann, Amaury Aubel, Tolga Abaci and Daniel Thalmann, "Planning Collision-Free Reaching Motions for Interactive Object Manipulation and Grasping", EUROGRAPHICS 2003.

NPL 2: J.J. Kuffner, S. Kagami, K. Nishiwaki, M. Inaba, and H. Inoue, "Dynamically-stable motion planning for humanoid robots", Autonomous Robots (special issue on Humanoid Robotics), 12, 2002.

NPL 3: Dmitry Berenson, Siddhartha S. Srinivasa, Dave Ferguson, Alvaro Collet, James J. Kuffner, "Manipulation Planning with Workspace Goal Regions", ICRA 2009.

NPL 4: Marc Toussaint, Michael Gienger and Christian Goerick, "Optimization of sequential attractor-based movement for compact behaviour generation", HUMANOIDS 2007.

NPL 5: Hiroki Sanada, Eiichi Yoshida and Kazuhito Yokoi, "Passing Under Obstacles with Humanoid Robots", IROS 2007.

NPL 6: Nikolaus Vahrenkamp, Dmitry Berenson, Tamim Asfour, James Kuffner and Rudiger Dillmann, "Humanoid Motion Planning for Dual-Arm Manipulation and Re-Grasping Tasks", IROS 2009.

Summary of Invention

Technical Problem

[0006]

However, in the related art, in the case of a motion to grasp an object, for example, values for all the joints in the final posture that the user wants the robot to achieve must be provided. That is, although portions other than the finger tips can take various postures while still realizing the final position/posture of the finger tips, the related art requires values for all the joints, strictly specified by the user. It is then calculated how the robot arrives at the provided final posture from the current posture. Further, it is not ensured that all the tasks are satisfied during the process of calculating the final posture.

[0007]

Therefore, a technology capable of creating motion by providing only the final position/posture of the finger tips, instead of values for all the joints in the target posture, has been desired. This means that the robot is constrained only in the position/posture of the finger tips, while the other joints can take any position/posture without restraint. The underlying idea is that providing only the target position/posture of the finger tips is sufficient to generate the motion.

[0008]

Accordingly, an exemplary object of the present invention is to provide a robot control system capable of planning a trajectory of joint angles reaching the target final position/posture by providing only the final target value of the task (for example, the final position and posture of the finger tips), a motion data creation apparatus, and its creating method.

Solution to Problem

[0009]

A robot control system in accordance with the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

[0010]

In this way, it is possible to plan a trajectory of a joint angle reaching the target final position/posture by providing only the final target value of the task.
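The whole body cooperation calculation can be pictured as a Newton-Raphson root-finding loop on the constraint-point error. The sketch below is a minimal illustration under stated assumptions: the function names (`solve_whole_body_ik`, `forward_kinematics`, `jacobian`), the pseudo-inverse update, and the tolerance are all hypothetical, since the text specifies the method only as "a Newton-Raphson method or a feedback method".

```python
import numpy as np

def solve_whole_body_ik(q_init, forward_kinematics, jacobian, x_target,
                        tol=1e-6, max_iter=100):
    """Newton-Raphson style loop: drive the difference between the target
    value of the constraint point and its current value toward zero.

    q_init is the initial value (e.g. the joint angle about to be added
    to the trajectory data); returns a joint angle satisfying the
    constraint, or None if the iteration does not converge.
    """
    q = np.asarray(q_init, dtype=float)
    for _ in range(max_iter):
        error = x_target - forward_kinematics(q)  # target value - current value
        if np.linalg.norm(error) < tol:
            return q                              # converged
        J = jacobian(q)                           # task Jacobian at q
        q = q + np.linalg.pinv(J) @ error         # pseudo-inverse Newton step
    return None
```

With a redundant robot the pseudo-inverse picks one of many solutions; a prioritized-task solver would instead project lower-priority corrections into the null space of the higher-priority tasks.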

[0011]

Further, the prioritized task may be set for a plurality of portions or the center of mass of the robot.

[0012]

Further, the motion data creation apparatus may be configured to perform the whole body cooperation calculation such that the difference between the target value of a constraint point specified by the prioritized task and the current value converges by generating a third node from a first node extracted in a random manner or in a predetermined manner and a second node that is a node on the trajectory data and closest to the first node, and using a constraint point obtained from a joint angle corresponding to the third node as an initial value.
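The first/second/third node procedure above is the extend step of a sampling-based planner such as RRT. A hedged sketch follows, where `tree` stands in for the trajectory data and `project` for the whole body cooperation calculation; both names and the fixed step size are assumptions.

```python
import numpy as np

def extend_tree(tree, sample_node, step_size, project):
    """One extension step of the sampling-based search.

    tree: list of joint-angle vectors already on the trajectory data
    sample_node: the 'first node', drawn at random or by a fixed rule
    project(q): whole body cooperation calculation mapping q to a joint
                angle satisfying the constraint (returns None on failure)
    """
    # 'second node': the node on the trajectory data closest to the sample
    nearest = min(tree, key=lambda q: np.linalg.norm(q - sample_node))
    # 'third node': a small step from the nearest node toward the sample
    direction = sample_node - nearest
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return None
    candidate = nearest + step_size * direction / norm
    # use the candidate's constraint point as the initial value
    q_new = project(candidate)
    if q_new is not None:
        tree.append(q_new)
    return q_new
```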

[0013]

Further, the motion data creation apparatus may be configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges. In this way, an optimized posture can be realized for each point on the trajectory.
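The optimization step might look as follows: divide the trajectory into evenly spaced points and refine each divided posture toward the provided ideal posture. Linear interpolation between neighboring postures is an assumption for illustration; the text does not specify how the division is carried out.

```python
import numpy as np

def optimize_trajectory(path, num_points, refine):
    """Divide the generated trajectory data and refine each divided posture.

    path: list of joint-angle vectors (the generated trajectory data)
    refine(q): whole body cooperation calculation pulling posture q toward
               the provided ideal posture; returns the refined posture
    """
    path = np.asarray(path, dtype=float)
    ts = np.linspace(0.0, len(path) - 1, num_points)  # division points
    refined = []
    for t in ts:
        lo, hi = int(np.floor(t)), int(np.ceil(t))
        frac = t - lo
        posture = (1.0 - frac) * path[lo] + frac * path[hi]  # divided posture
        refined.append(refine(posture))  # used as the initial value
    return refined
```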

[0014]

Further, the motion data creation apparatus may further include an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot. With that, the motion data creation apparatus may be configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data. In this way, a trajectory avoiding a collision with an object can be obtained.
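The collision gate can be sketched as a small wrapper: compute the posture from the candidate joint angle, consult the stored shape data, and only then extend the trajectory. All names below are hypothetical stand-ins for the object data storage unit and the collision checker.

```python
def try_add_node(q, trajectory, forward_model, collides):
    """Add a calculated joint angle only if the posture is collision free.

    forward_model(q): posture (body geometry) obtained from joint angle q
    collides(posture): checks the posture against the stored
                       shape/configuration data of the environment
    """
    posture = forward_model(q)
    if collides(posture):
        return False          # in collision: do not add to the trajectory data
    trajectory.append(q)      # collision free: extend the trajectory
    return True
```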

[0015]

A robot control system in accordance with another aspect of the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

[0016]

Further, the prioritized task may be set for a plurality of portions or the center of mass of the robot.

[0017]

Further, the motion data creation apparatus may be configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.

[0018]

Further, the motion data creation apparatus may further include an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot. With that, the motion data creation apparatus may be configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.

[0019]

A motion data creation apparatus in accordance with the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[0020]

Further, a motion data creation apparatus in accordance with another aspect of the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[0021]

Further, a motion data creation method in accordance with the present invention is a motion data creation method for creating a joint angle to cause a robot to move, wherein when trajectory data of a joint angle reaching from an initial posture of the robot to a target posture is searched for in a joint space by using a search technique based on sampling, a joint angle satisfying a constraint is calculated by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

[0022]

Further, a motion data creation method in accordance with another aspect of the present invention is a motion data creation method that creates a joint angle to cause a robot to move, wherein a joint angle satisfying a constraint is calculated in a joint space by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.

Advantageous Effects of Invention

[0023]

In accordance with the present invention, a robot control system capable of planning a trajectory of joint angles reaching the target final position/posture by providing only the final target value of the task (for example, the final position and posture of the finger tips), a motion data creation apparatus, and its creating method can be provided.

Brief Description of Drawings

[0024]

[Fig. 1]

Fig. 1 is a configuration diagram of a robot control system in accordance with a first exemplary embodiment;

[Fig. 2]

Fig. 2 is a diagram of a model of a robot in accordance with a first exemplary embodiment;

[Fig. 3]

Fig. 3 is a diagram for explaining a relation between manipulable space and redundant space;

[Fig. 4]

Fig. 4 is a functional configuration diagram of a whole body cooperation IK calculation module in accordance with a first exemplary embodiment;

[Fig. 5]

Fig. 5 is a process flowchart for a whole body cooperation IK calculation module in accordance with a first exemplary embodiment;

[Fig. 6]

Fig. 6 is an overall flowchart of motion data creation processing in accordance with a first exemplary embodiment;

[Fig. 7]

Fig. 7 is a process flowchart for a motion/posture generation unit in accordance with a first exemplary embodiment;

[Fig. 8]

Fig. 8 is a functional block diagram including feedback calculation in accordance with a first exemplary embodiment;

[Fig. 9]

Fig. 9 is a process flowchart for an optimization unit in accordance with a first exemplary embodiment;

[Fig. 10]

Fig. 10 is a functional block diagram showing a configuration of feedback calculation in accordance with a first exemplary embodiment;

[Fig. 11]

Fig. 11 is a flowchart of motion trajectory generation processing by a motion/posture generation unit in accordance with a first exemplary embodiment;

[Fig. 12]

Fig. 12 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment;

[Fig. 13]

Fig. 13 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment;

[Fig. 14]

Fig. 14 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment; and

[Fig. 15]

Fig. 15 is a flowchart for motion trajectory optimization by an optimization unit in accordance with a first exemplary embodiment.

Description of Embodiments

[0025]

First Embodiment

Hereinafter, the embodiments of the present invention will be described with reference to the drawings. Fig. 1 shows a configuration diagram of a robot control system according to the first embodiment. As shown in Fig. 1, a control system 1 includes an interface unit 10, a motion data creation unit 20, a robot control unit 60, and a robot 100. Each component is realized by hardware or software included in a computer. In the present embodiment, the motion data creation unit 20 is mainly realized by software, and the robot control unit 60 and the robot 100 are mainly realized by hardware.

[0026]

The interface unit 10 is a user interface or a data interface that receives command values for the robot 100. In the present embodiment, a plurality of tasks are provided as the command values. Note that, in the following description, the term "task" refers to a control target used in creating motion data. For example, a task may be set for a portion of the robot 100 whose motion is to be controlled, or for the position of the center of mass of the robot 100. A task is provided as coordinate values of up to six dimensions indicating the position and the posture of the portion which is the control target.

[0027]

The robot control unit 60 calculates a torque control value to drive each joint using a target joint angle generated by the motion data creation unit 20 and a current joint angle measured by an encoder (not shown). The robot control unit 60 includes a motor driver (not shown) and a controller thereof (not shown), and drives each joint by operating an actuator in accordance with the calculated torque control value. Accordingly, the motion in accordance with the motion data generated by the motion data creation unit 20 is realized.

[0028]

The robot 100 executes operations based on the motion data generated by the motion data creation unit 20. In the present embodiment, it is assumed that the robot 100 is a legged mobile robot. Further, the robot 100 includes at least one arm in order to achieve the tasks. Further, as an example, the robot 100 operates under restrictions such as keeping its balance and restrictions imposed by a plurality of tasks.

[0029]

Fig. 2 is a model diagram showing the robot 100 with joints and links connecting the joints. The robot 100 includes two leg links LR and LL (right and left) and two arm links AR and AL (right and left). The robot 100 is supplied with the motion data generated by the motion data creation unit 20 so as to be able to walk with the leg links LR and LL and execute operations such as carrying an object with the arm links AR and AL. For example, when the robot 100 walks, the position of the center of mass of the robot 100 is specified and tasks are set for a swing leg and a stance leg. Further, a task is set for the fingertips when the robot carries out work with its hands, for example.

[0030]

The robot 100 is formed of two leg links LR and LL, two arm links AR and AL, and a body trunk BD to which these leg links and arm links are connected.

[0031]

A neck joint that supports a head 101 includes a joint of roll direction 102, a joint of pitch direction 103, and a joint of yaw direction 104. The right arm link AR includes a shoulder joint of pitch direction 105, a shoulder joint of roll direction 106, an upper arm joint of yaw direction 107, an elbow joint of pitch direction 108, and a wrist joint of yaw direction 109. A hand 141R, which functions as an end-effector for holding and carrying an object, is provided at the end of the right arm link AR. Note that the mechanism of the hand 141R may be determined by the shape or the type of the object to be held, and the hand may have, for example, a multi-degree-of-freedom, multi-joint configuration having a plurality of fingers.

[0032]

The left arm link AL has a configuration similar to that of the right arm link AR. More specifically, the left arm link AL includes five joints 110 to 114, and includes a hand 141L at its end.

[0033]

The right leg link LR includes a waist joint of yaw direction 117, a waist joint of pitch direction 118, a waist joint of roll direction 119, a knee joint of pitch direction 120, an ankle joint of pitch direction 121, and an ankle joint of roll direction 122.

A sole 130 that contacts with a floor surface is provided in the lower part of the ankle joint 122.

[0034]

The left leg link LL has a configuration similar to that of the right leg link LR. More specifically, the left leg link LL includes six joints 123 to 128, and includes a sole 131 at its end.

[0035]

The body trunk 143 includes a joint of roll direction 115 and a joint of pitch direction 116.

[0036]

The motion data creation unit 20 includes a motion/posture generation unit 30, an object data store unit 40, and an optimization unit 50. The motion data creation unit 20 also includes a module that performs whole body cooperation IK calculation, which will be described later (a whole body cooperation IK control unit 70), and the motion/posture generation unit 30 and the optimization unit 50 call the whole body cooperation IK module.

[0037]

The motion/posture generation unit 30 generates a posture and a motion with which the robot 100 does not collide with objects in the environment, using the tasks input from the interface unit 10 and the information stored in the object data store unit 40. The object data store unit 40 stores the shape/texture data regarding the robot 100 and the objects in the environment, and the geometrical data of the robot 100.

[0038]

The motion/posture generation unit 30 has a characteristic feature in that, for the generation of the collision-free posture, it combines a method of sampling configurations randomly or in a predefined manner with inverse kinematics calculation under balance and task restrictions (local path planning - whole body cooperation IK calculation). The whole body cooperation IK calculation means the inverse kinematics calculation with prioritized task restrictions for a system with redundant degrees of freedom, using the null space of the Jacobian matrix. The whole body cooperation IK calculation outputs configurations that satisfy balance and/or task restrictions. Note that the details of the whole body cooperation IK will be described later. The whole body cooperation IK is a technique that has been developed over several years and has now been applied to humanoid robots.

[0039]

Further, the motion/posture generation unit 30 has a characteristic feature in that, for the generation of the collision-free motion, it combines a configuration search algorithm based on random or predefined sampling with the inverse kinematics calculation under balance and/or task restrictions (local path planning - whole body cooperation IK calculation). Further, the motion/posture generation unit 30 is able to combine the above method with two-dimensional path planning for further applications. Note that, as the sampling-based search algorithm, a method that is widely employed for path or motion planning of robots may be employed, such as RRT or RRT-Connect (http://msl.cs.uiuc.edu/rrt/).

[0040]

The optimization unit 50 has a characteristic feature in that it combines the whole body cooperation IK calculation with a mathematical optimization method for the optimization of the collision-free posture. As the mathematical optimization method, a steepest descent method, a gradient descent method, or the like can be employed. Further, the optimization unit 50 may perform optimization of the motion trajectory (joint angle trajectory) for further applications.

[0041]

Hereinafter, for a better understanding of the present invention, the prioritized task control is described first, and thereafter the whole body cooperation IK calculation will be described. First, examples, properties, and characteristics of the prioritized task control will be described in brief, and then a method of solving the task control expression when there are a plurality of tasks will be described.

[0042]

The examples of the prioritized task control when there is one task include a case of specifying the position of the center of mass and setting the tasks for the swing leg and the stance leg while the robot is walking, and a case of setting the task for the fingertips while the robot is operating with its hands.

[0043]

One of the properties of the prioritized task control is that the degree of freedom of the control increases as the redundant degrees of freedom (the number of joints of the robot) increase. The redundant degrees of freedom mean that, for example, when the movement of an arm having redundant degrees of freedom is considered on a two-dimensional plane, there are a plurality of possible arm trajectories even when the position and the posture of the final joint are fixed. Other properties of the prioritized task control are that there is a Jacobian matrix for each task, and that the tasks can be prioritized with the least squares solution. Further, it is also known that the calculation load increases with the number of joints and with the number of tasks.

[0044]

Further properties of the prioritized task control are that priority orders can be assigned to the tasks and that a task can be set for an arbitrary portion. In the prioritized task control, all the tasks are satisfied when there is sufficient redundancy, while the tasks having higher priority orders are satisfied when the redundancy is insufficient. Any point (center of mass, fingertips, or the like) can be controlled as long as its Jacobian matrix can be obtained.

[0045]

Now, the relation between a manipulable space and a redundant space will be described. As shown in Fig. 3, the space of the n-dimensional joint angle θ and the space of the m-dimensional task r are considered (where n > m). The space which can be spanned by the joint angular velocity (denoted by θ with a dot in Fig. 3, meaning the derivative of θ) based on the current joint angle θ is defined as the manipulable space, and the joint angular space which does not affect the task in the task space is defined as the redundant space (null space). Then, the following relation is established for the number of joints:

(number of joints) = (dimension of manipulable space) + (dimension of redundant space)

[0046]

Further, the least squares solution will be described in brief. First, regarding the relation between vector y and vector x, the following expression (1) is established.

y = Ax, x ∈ R^n, y ∈ R^m, n > m ... (1)

The expression (1) is replaced with the optimization problem as shown below.

||y − Ax|| = 0 → min ||y − Ax||

Then, the solution can be obtained from the following expression (2).

x = A^# y + (I_n − A^# A)z, A^# = A^T(AA^T)^{-1} ... (2)

The expression (2) gives a solution even when z is set to an arbitrary value.

[0047]

The solution of the expression (2) that satisfies min ||x|| is obtained by the following expression (3).

x = A^# y ... (3)
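The least squares machinery of expressions (1) to (3) can be checked numerically. The following is a minimal sketch (not part of the patent's implementation) using NumPy, assuming a redundant system with n = 3 unknowns and m = 2 equations; the matrix A and vector z are arbitrary illustrative values.

```python
import numpy as np

# Minimum-norm solution of y = A x for a redundant system (n > m),
# following expressions (2) and (3): x = A# y with A# = A^T (A A^T)^-1.
def min_norm_solution(A, y):
    A_pinv = A.T @ np.linalg.inv(A @ A.T)   # right pseudo-inverse A#
    return A_pinv @ y

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])             # m = 2 equations, n = 3 unknowns
y = np.array([1.0, 2.0])

x = min_norm_solution(A, y)
assert np.allclose(A @ x, y)                # y = A x holds exactly

# A null-space term (I - A# A) z leaves y = A x unchanged (expression (2)).
A_pinv = A.T @ np.linalg.inv(A @ A.T)
z = np.array([1.0, 0.0, 0.0])
x2 = A_pinv @ y + (np.eye(3) - A_pinv @ A) @ z
assert np.allclose(A @ x2, y)
assert np.linalg.norm(x) <= np.linalg.norm(x2) + 1e-9   # expression (3): minimum norm
```

Any choice of z changes x2 only inside the null space of A, so the task equation still holds; expression (3) picks the member of that solution family with minimum norm.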

[0048]

Next, the above technique is applied to a method of obtaining the joint angle using the Jacobian matrix when there is one task to be set. First, the Jacobian matrix J, the task (constraint), and the joint angle have the relation shown in the following expression (4). In the expression, ẋ (x with a dot in the upper part) indicates the task velocity and is a vector of r elements, J(θ) is the Jacobian matrix of size r x c, and θ̇ (θ with a dot in the upper part) indicates the joint angular velocity and is a vector of c elements.

ẋ = J(θ)θ̇ ... (4)

[0049]

Then, the task (constraint) is input into the following expression (5), obtained by rearranging the above expression (4), so as to obtain the joint angle. Note that J^# indicates the pseudo-inverse of the Jacobian matrix J, which is a matrix of size c x r. Further, the joint angle is calculated by integrating the joint angular velocity obtained by the following expression (5).

θ̇ = J^# ẋ ... (5)
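Expressions (4) and (5) amount to resolved-rate inverse kinematics: compute the joint angular velocity from the task velocity through the pseudo-inverse, then integrate. As a hedged sketch, the following uses a hypothetical 2-link planar arm (not the robot 100) with unit link lengths and a simple feedback task velocity.

```python
import numpy as np

# Single-task differential IK (expressions (4)-(5)): thetadot = J# xdot,
# integrated over time to obtain the joint angles reaching a hand target.
def fk(theta):
    t1, t2 = theta
    return np.array([np.cos(t1) + np.cos(t1 + t2),
                     np.sin(t1) + np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-np.sin(t1) - np.sin(t1 + t2), -np.sin(t1 + t2)],
                     [ np.cos(t1) + np.cos(t1 + t2),  np.cos(t1 + t2)]])

theta = np.array([0.3, 0.8])
target = np.array([1.2, 0.8])
dt = 0.05
for _ in range(200):
    x = fk(theta)
    xdot = target - x                                    # task velocity toward the target
    thetadot = np.linalg.pinv(jacobian(theta)) @ xdot    # expression (5)
    theta = theta + thetadot * dt                        # integrate the joint angular velocity

assert np.linalg.norm(fk(theta) - target) < 1e-3
```

The feedback form of the task velocity (target minus current position) makes the integration converge to the target instead of drifting, which is the same idea the later feedback-method sections use.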

[0050]

When there are a plurality of tasks, the redundant space is employed in the prioritized task control so as to satisfy the plurality of constraints. In short, when there is redundancy, other restrictions can be satisfied while the constraints are satisfied. In order to bring the joint angle θ closer to the target joint angle θ_ref by using this redundancy, k is defined as shown in the following expression (6), where g is a feedback gain.

k = g(θ_ref − θ) ... (6)

[0051]

Note that k may also be specified by using a potential method expressed by the following expression (7), where P is a potential function obtained by formulating the restrictions that should be satisfied.

k = −∂P/∂θ ... (7)

[0052]

Next, the method of solving the prioritized task control will be described.

Using the least squares solution stated above, the first task is shown by the following expression (8).

ẋ_1 = J_1 θ̇ ... (8)

Then, the second task is shown by the following expression (9), where N_1 = I − J_1^# J_1 is the null-space projector of the first task; the second line solves k_1 as the least squares solution.

θ̇ = J_1^# ẋ_1 + N_1 k_1
k_1 = (J_2 N_1)^# (ẋ_2 − J_2 J_1^# ẋ_1) ... (9)

[0053]

Summing up the solution method stated above, the priority level of each task, the Jacobian matrix, and the task position are input as shown below. Note that the robot-base coordinate, which is the standard of each task x_1, ..., x_n, is indicated by p_cur.

priority level 1: J_1, x_1
priority level 2: J_2, x_2
...
priority level n: J_n, x_n

Then, as shown in the following expression (10), the Jacobian matrix integrated according to the priority level of each task is obtained.

Ĵ_i = J_i N_{i-1} ... (10)

In the expression (10), N_i is defined by the following expression (11).

N_i = N_{i-1}(I − Ĵ_i^# Ĵ_i), N_0 = I ... (11)

[0054]

Next, the method of solving the task control expression when there are a plurality of tasks will be described.

First, the joint angular velocity for each task, considering the priority levels, is obtained with the following expression (12).

θ̇_i = θ̇_{i-1} + Ĵ_i^# (ẋ_i − J_i θ̇_{i-1}), θ̇_0 = 0 ... (12)

Then, the joint angular velocity obtained in the expression (12) is substituted into the following expression (13), so as to obtain the joint angular velocity of the whole robot.

θ̇ = θ̇_n + N_n k ... (13)

Note that, in the expression (13), k is a derivative value (joint angular velocity) toward the joint angles of a reference robot posture. For example, when θ_ref is the posture angle vector that is desired to be kept, the current posture angle vector is θ_cur, and g_k (0 < g_k < 1) is an arbitrary gain, k can be determined from the following expression (14).

k = g_k(θ_ref − θ_cur) ... (14)
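Under the standard null-space projection formulation that expressions (9) and (12) to (14) describe, a two-task prioritized velocity solver can be sketched as follows; the Jacobians, task velocities, and gain are illustrative values, not taken from the patent.

```python
import numpy as np

# Two-task prioritized joint-velocity resolution (cf. expressions (9), (12), (13)):
# task 1 is met exactly, task 2 is met inside the null space of task 1, and the
# leftover redundancy tracks a reference posture via k = gk * (theta_ref - theta_cur).
def prioritized_velocity(J1, x1dot, J2, x2dot, k):
    n = J1.shape[1]
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(n) - J1p @ J1                  # null-space projector of task 1
    qd1 = J1p @ x1dot                          # satisfies task 1
    J2h = J2 @ N1                              # task-2 Jacobian restricted to that null space
    J2hp = np.linalg.pinv(J2h)
    qd2 = qd1 + J2hp @ (x2dot - J2 @ qd1)      # least-squares correction for task 2
    N2 = N1 @ (np.eye(n) - J2hp @ J2h)         # remaining null space (cf. expression (11))
    return qd2 + N2 @ k                        # posture term, expressions (13), (14)

J1 = np.array([[1.0, 0.0, 0.0, 0.0],
               [0.0, 1.0, 0.0, 0.0]])          # task 1 uses joints 1-2
J2 = np.array([[0.0, 0.0, 1.0, 1.0]])          # task 2 uses joints 3-4
x1dot = np.array([0.5, -0.2])
x2dot = np.array([0.3])

theta_cur = np.array([0.1, 0.2, 0.3, 0.4])
theta_ref = np.zeros(4)
k = 0.5 * (theta_ref - theta_cur)              # expression (14) with gk = 0.5

qd = prioritized_velocity(J1, x1dot, J2, x2dot, k)
assert np.allclose(J1 @ qd, x1dot)             # priority 1 satisfied exactly
assert np.allclose(J2 @ qd, x2dot)             # priority 2 satisfied in the null space
```

Because both correction terms live in the null space of the higher-priority Jacobian, a lower-priority task can never disturb a higher-priority one, which is the defining property of the prioritized task control.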

[0055]

Now, the whole body cooperation IK calculation will be described in detail with reference to Figs. 4 and 5. Fig. 4 is a diagram showing the functional configuration of the whole body cooperation IK calculation module. Fig. 5 is a diagram showing the process flow of the whole body cooperation IK calculation module. In the whole body cooperation IK calculation module, the posture that satisfies the set tasks (constraints) is obtained in the joint space (CS: Configuration Space) from the provided initial joint angles by a Newton-Raphson method or a feedback method. In the present embodiment, the case of employing the feedback method in the whole body cooperation IK calculation is described as an example.

[0056]

Referring first to Fig. 4, the prioritized task control using the feedback method will be described. Fig. 4 is a function configuration diagram of the whole body cooperation IK calculation module. The whole body cooperation IK control unit 70 is supplied with a robot-base coordinate p_cur, a current joint angle θ_cur of the robot 100, an arbitrary parameter k, and a plurality of prioritized tasks (task 1 to task n), and performs feedback calculation based on the difference between the current value and the target value of the constraint points (center of mass, fingertips, and the like) defined by the tasks, so as to output the joint angle θ_out that satisfies the constraints.

In summary, the initial position is indicated by the robot-base coordinate and the initial posture is indicated by the current joint angle of the robot 100, and these values are updated by the feedback calculation so as to satisfy the constraint points. Note that each task is provided as coordinate values of up to six dimensions including the position and the posture, and the velocity, which is the derivative of these values, is input. Further, in the present embodiment, the position of the center of mass is input in the same way as the other tasks (constraints) for simplicity of illustration. However, it may be input as the Jacobian matrix reference coordinate, not as a task.

[0057]

The joint angular velocity calculation unit 71 calculates the joint angular velocity (denoted by θ with a dot in the upper part in the drawings), which is the derivative of the joint angle. The joint angular velocity calculation unit 71 substitutes the Jacobian matrix of each task, the arbitrary parameter k, and the velocity input of each task into the control expression for a plurality of tasks, so as to calculate the joint angular velocity. The joint angular velocity calculated by the joint angular velocity calculation unit 71 is integrated by an integrator 72 so as to obtain the joint angle. The joint angle that is obtained is input to a real machine (the robot control unit 60 and the robot 100).

[0058]

The Jacobian matrix calculation unit 74 provides a unit velocity to each joint, so as to calculate the Jacobian matrix for each task. Note that a known method may be employed as the method of calculating the Jacobian matrix, and its detailed description is omitted here.
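Providing a unit velocity (or, numerically, a small displacement) to each joint in turn corresponds to a finite-difference Jacobian on the forward kinematics model. A minimal sketch, with a hypothetical 3-joint planar arm with unit links standing in for the robot model:

```python
import numpy as np

# Numerical Jacobian obtained by perturbing each joint in turn on the forward
# kinematics model, as the Jacobian matrix calculation unit 74 does.
def fk(theta):
    c = np.cumsum(theta)                       # absolute link angles
    return np.array([np.sum(np.cos(c)), np.sum(np.sin(c))])

def numerical_jacobian(fk, theta, eps=1e-6):
    x0 = fk(theta)
    J = np.zeros((x0.size, theta.size))
    for i in range(theta.size):
        d = np.zeros_like(theta)
        d[i] = eps                             # perturb joint i only
        J[:, i] = (fk(theta + d) - x0) / eps
    return J

theta = np.array([0.2, 0.4, -0.1])
J = numerical_jacobian(fk, theta)

# Analytic Jacobian of the same planar arm, for comparison.
c = np.cumsum(theta)
Ja = np.array([[-np.sum(np.sin(c[i:])) for i in range(3)],
               [ np.sum(np.cos(c[i:])) for i in range(3)]])
assert np.allclose(J, Ja, atol=1e-4)
```

The same routine works for any constraint point (hand, center of mass, and so on) as long as a forward kinematics function for that point is available, which is why an arbitrary portion can be controlled.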

[0059]

A robot forward kinematics model 73 performs forward kinematics calculation based on the robot geometrical configuration, so as to calculate the position and the posture of each task from the joint angle obtained by the integrator 72.

[0060]

In order to operate the robot 100, only the target joint angle θ_out needs to be provided. However, if information parameters of the robot 100 such as the joint angle θ_out and each task position x_1, x_2, ..., x_n are also output, these information parameters can be used when more complicated control needs to be performed, for example, compliance control using a force sensor.

[0061]

Referring next to Fig. 5, the processing flow of the whole body cooperation IK calculation module will be described.

First, the whole body cooperation IK control unit 70 uses a forward kinematics model (numerical model) of the robot 100 to provide a unit velocity to each joint of the numerical model, thereby calculating the Jacobian matrix (S101). Next, the whole body cooperation IK control unit 70 acquires and determines input data such as the velocity of each task and the arbitrary parameter (S102). Then, the whole body cooperation IK control unit 70 calculates the equation corresponding to the prioritized task control with redundant degrees of freedom, and calculates the joint angular velocity (S103). Then, the whole body cooperation IK control unit 70 integrates the calculated joint angular velocity, thereby calculating the joint angles of the robot (S104). Note that the position of the center of mass of the robot can also be calculated from the joint angular velocity of the robot by using the numerical model.

[0062]

Referring to Fig. 6, the entire processing of creating motion data by the motion data creation apparatus 20 will be described. The motion data creation processing includes processing of determining a goal posture (S201, S202) and processing of obtaining a trajectory from an initial posture to the goal posture (S203, S204).

[0063]

The motion/posture generation unit 30 of the motion data creation apparatus 20 generates the goal posture which satisfies restrictions such as the position of the center of mass and the positions of the fingertips of the robot 100 and which is free from collision with any object (S201). The optimization unit 50 of the motion data creation apparatus 20 optimizes the generated goal posture (S202). That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.

[0064]

The motion/posture generation unit 30 generates a motion trajectory (trajectory of the joint angle) of the robot 100 which is a trajectory from the initial posture to the goal posture and which is free from collision with any object, while satisfying the restrictions (S203) . The optimization unit 50 optimizes the generated motion trajectory (S204) . That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.

[0065]

Note that, in this embodiment, though Steps S201 to S204 are described as a series of processes, each process may be combined with processing of another motion data creation apparatus. For example, the goal posture generation processing may be carried out using another conventional motion data creation apparatus, and the obtained goal posture may be combined with the trajectory generation processing (S203) or the trajectory optimization processing (S204) according to this embodiment.

[0066]

The details of each processing shown in Fig. 6 will be described below.

Figs. 7 and 8 are diagrams illustrating the goal posture generation processing performed by the motion/posture generation unit 30. Fig. 7 is a diagram illustrating the processing flow performed by the motion/posture generation unit 30.

[0067]

In Fig. 7, the motion/posture generation unit 30 randomly generates a joint angle for each axis of the robot in the joint space (CS) by using the following function (S301). Let the randomly generated joint angle be θ_rand. Herein, θ_rand is a joint angle vector, because values for all the joints are generated.

θ_rand = Rand(CS)

[0068]

The motion/posture generation unit 30 provides the generated joint angles to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with those joint angles, and checks whether the robot 100 in the obtained posture is free from collision with any object within the environment (S302). Herein, collision checking is carried out using the following function.

CollisionCheck(θ_rand)

[0069]

As a result of the collision checking, when the robot 100 collides with any object within the environment, the process returns to S301 to randomly generate the joint angles of the robot 100 again. Note that checking for collision with any object within the environment may be performed using well-known techniques. For instance, the robot 100 and the objects within the environment are approximated by simple shapes such as rectangular solids or spheres, and these approximated models are stored in the object data store unit 40. Then, by referring to the stored information, it can be checked whether the robot 100 collides with any object within the environment.
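As a hedged illustration of the sphere-approximation approach mentioned above, CollisionCheck can be reduced to pairwise sphere-overlap tests; the sphere lists below are invented stand-ins for the data held in the object data store unit 40.

```python
import numpy as np

# CollisionCheck reduced to pairwise sphere-overlap tests between the
# sphere-approximated robot and sphere-approximated obstacles.
def spheres_collide(c1, r1, c2, r2):
    return np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < (r1 + r2)

def collision_check(robot_spheres, obstacle_spheres):
    """Return True if any robot sphere intersects any obstacle sphere."""
    return any(spheres_collide(c, r, oc, o_r)
               for c, r in robot_spheres
               for oc, o_r in obstacle_spheres)

robot = [((0.0, 0.0, 1.0), 0.3),            # upper-body sphere (center, radius)
         ((0.0, 0.0, 0.5), 0.3)]            # lower-body sphere
near_obstacle = [((0.25, 0.0, 0.5), 0.2)]   # overlaps the lower-body sphere
far_obstacle = [((2.0, 2.0, 0.0), 0.2)]

assert collision_check(robot, near_obstacle)
assert not collision_check(robot, far_obstacle)
```

Sphere pairs make the test a single distance comparison, which is why such coarse approximations are common inside sampling loops that must run the check thousands of times.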

[0070]

The motion/posture generation unit 30 performs local path planning and calculates the posture that satisfies the configured tasks (constraints) (S303). In the local path planning, the posture that satisfies the constraints is obtained by a Newton-Raphson method using the prioritized task control, or by a feedback method, with the generated random posture set as an initial value. An example using the feedback method is described herein.

[0071]

Fig. 8 is a functional block diagram including the feedback operation for use in the local path planning.

(1) The motion/posture generation unit 30 sets the current posture angle θ_cur to the initial value of the posture angle (θ_cur ← θ_rand).

(2) The motion/posture generation unit 30 obtains the current robot-base coordinate p_cur and the positions x_1, ..., x_n of the constraint points at the current posture angle θ_cur, by the robot forward kinematics model.

[0072]

(3) The motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70, at each constraint point, from the difference between the target value (θ_ref, x_1^ref, ..., x_n^ref) of each constraint point (fingertips, center of mass, or the like) determined by the tasks and the current value (θ_cur, x_1^cur, ..., x_n^cur). Note that θ_ref in the target values represents an ideal posture and is set in advance by a user, together with the target tasks x_1^ref, ..., x_n^ref. The target tasks x_1^ref, ..., x_n^ref are satisfied in order of priority, and a joint angle closest to θ_ref is selected. Specifically, the value is selected so that the square norm of the difference between the target joint angle θ_ref and the current joint angle θ_cur becomes minimum. The gain 75 is an arbitrary adjustment parameter. Thus, the amount of input (the velocity input of each task) to the whole body cooperation IK control unit 70 is calculated as shown in the following expression (15).

ẋ_i = gain × (x_i^ref − x_i^cur), i = 1, ..., n ... (15)

[0073]

(4) The motion/posture generation unit 30 then obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70, as shown in the following expression (16).

[0074]

(5) The motion/posture generation unit 30 then confirms whether the difference between the target value and the current value is within a predetermined threshold at all the constraint points. When the difference is within the predetermined threshold, the processing of the local path planning is ended. Otherwise, θ_cur is updated with θ_out (θ_cur ← θ_out), and the process returns to the processing (1).
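The steps (1) to (5) above can be sketched as a single feedback loop. The model below is a hypothetical 2-link planar arm with one positional constraint point, so the whole body cooperation IK step reduces to a pseudo-inverse update; the gain and thresholds are illustrative, not the patent's values.

```python
import numpy as np

# Local-path-planning loop (steps (1)-(5)): start from a random posture, feed back
# gain * (target - current) for the constraint point through an IK step, and repeat
# until the constraint error falls below a threshold.
def fk(theta):
    return np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + np.sin(theta[0] + theta[1])])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-np.sin(t1) - np.sin(t1 + t2), -np.sin(t1 + t2)],
                     [ np.cos(t1) + np.cos(t1 + t2),  np.cos(t1 + t2)]])

def local_path_planning(theta_rand, target, gain=0.2, tol=1e-4, max_iter=500):
    theta = np.array(theta_rand, dtype=float)          # (1) initial value
    for _ in range(max_iter):
        x = fk(theta)                                  # (2) current constraint point
        if np.linalg.norm(target - x) < tol:           # (5) convergence check
            return theta
        xdot = gain * (target - x)                     # (3) velocity input, cf. expression (15)
        theta = theta + np.linalg.pinv(jacobian(theta)) @ xdot   # (4) IK update
    return None                                        # did not converge

target = np.array([0.8, 1.1])
theta_out = local_path_planning([0.1, 0.5], target)
assert theta_out is not None
assert np.linalg.norm(fk(theta_out) - target) < 1e-4
```

On the actual robot the step (4) is the prioritized whole body cooperation IK, so several constraint points with priority levels replace the single pseudo-inverse used here.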

[0075]

Referring back to Fig. 7, the description will be continued. The motion/posture generation unit 30 provides the joint angles calculated in the local path planning to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with those joint angles, and checks whether the robot 100 in the obtained posture collides with any object within the environment (S304). As a result of the collision checking, when the robot collides with any object, the process returns to S301 to randomly generate the joint angles of the robot 100 again. Otherwise, the joint angles obtained in the local path planning are output, and the goal posture generation processing is ended. Thus, it is possible to generate a goal posture which satisfies restrictions such as the position of the center of mass and the positions of the fingertips of the robot 100 and which is free from collision with any object.

[0076]

Figs. 9 and 10 are diagrams illustrating goal posture optimization processing performed by the optimization unit 50. Fig. 9 is a diagram illustrating the processing flow performed by the optimization unit 50, and Fig. 10 is a functional block diagram showing the feedback method for use in whole body cooperation IK calculation.

[0077]

In Fig. 9, the optimization unit 50 inputs the current joint angle θ_cur in the joint space (CS) (S401). Herein, the goal posture obtained by the motion/posture generation unit 30 is input as θ_cur.

[0078]

The optimization unit 50 calculates the difference between an ideal posture and the current posture as shown in the following expression (17) (S402).

k = g_k(θ_ref − θ_cur) ... (17)

Note that θ_ref represents the ideal posture, and g_k represents an arbitrarily determined gain (0 < g_k < 1). The ideal posture can be set by the user in advance, and any posture that looks natural to people may be set as the ideal posture. In the case of an upright posture, for example, a posture in which the shoulders are held straight is ideal.

[0079]

The optimization unit 50 obtains joint angles closest to the ideal posture from the initial posture while satisfying the constraints, by the feedback method using the prioritized task control .

Fig. 10 is a functional block diagram including the feedback operation for use in the optimization processing. Herein, the amount of input with respect to the constraint point of each task is set to zero as shown in the following expression (18), since the posture input in Step S401 already satisfies the constraints.

ẋ_1, ẋ_2, ..., ẋ_n = 0 ... (18)

[0080]

The optimization unit 50 calculates the joint angles of the whole body of the robot 100 by using the whole body cooperation IK calculation module, as shown in the following expression (19) .

[0081]

The optimization unit 50 provides the joint angles, which are calculated using the whole body cooperation IK module, to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture collides with any object within the environment (S404) . As a result of the collision checking, when the robot collides with any object, the joint angles previously calculated in S403 are output, and the processing is ended (S405) .

[0082]

As a result of the collision checking, when the robot is free from collision with any object, it is checked whether the output value of the joint angular velocity is within a predetermined threshold (S405). When the output value is within the predetermined threshold (that is, when the calculation has almost converged), the processing is ended. Otherwise, θ_cur is updated with θ_out (θ_cur ← θ_out), and the process returns to Step S402. Thus, the input current posture can be optimized.
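The optimization loop of Fig. 9 can be sketched as null-space descent toward the ideal posture: with the task velocity inputs set to zero (expression (18)), the update of expression (17) acts only inside the null space of the constraint Jacobian, so the constraints stay satisfied. The 3-joint planar arm and the small correction term that re-projects onto the constraint are illustrative additions, not the patent's implementation.

```python
import numpy as np

# Null-space posture optimization: the posture term k = gk * (theta_ref - theta_cur)
# is projected into the null space of the constraint Jacobian, preserving the
# constraint point (here the hand position of a hypothetical 3-joint planar arm).
def fk3(theta):
    c = np.cumsum(theta)
    return np.array([np.sum(np.cos(c)), np.sum(np.sin(c))])

def J3(theta):
    c = np.cumsum(theta)
    return np.array([[-np.sum(np.sin(c[i:])) for i in range(3)],
                     [ np.sum(np.cos(c[i:])) for i in range(3)]])

def optimize_posture(theta, theta_ref, gk=0.1, tol=1e-5, max_iter=1000):
    x0 = fk3(theta)                                   # constraint value to preserve
    for _ in range(max_iter):
        J = J3(theta)
        Jp = np.linalg.pinv(J)
        N = np.eye(theta.size) - Jp @ J               # constraint null-space projector
        # illustrative correction term + null-space step toward the ideal posture
        thetadot = Jp @ (x0 - fk3(theta)) + N @ (gk * (theta_ref - theta))
        if np.linalg.norm(thetadot) < tol:            # convergence check (cf. S405)
            break
        theta = theta + thetadot
    return theta

theta0 = np.array([0.9, -0.5, 0.7])
theta_opt = optimize_posture(theta0, theta_ref=np.zeros(3))

assert np.linalg.norm(fk3(theta_opt) - fk3(theta0)) < 1e-2   # constraint preserved
assert np.linalg.norm(theta_opt) < np.linalg.norm(theta0)    # posture moved toward ideal
```

This is the sense in which the optimization unit finds the joint angles closest to the target joint angle while satisfying the restrictions: the descent direction is the projection of the posture error onto the constraint manifold.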

[0083]

Fig. 11 is a diagram illustrating the flow of motion trajectory generation processing performed by the motion/posture generation unit 30.

In Fig. 11, the motion/posture generation unit 30 generates N goal postures in the joint space (CS) (S501, S506). Note that, as a goal posture generation method, the technique illustrated in Fig. 7 may be used. Alternatively, other conventional techniques may be employed to generate the goal postures. Further, in random search techniques such as RRT (http://msl.cs.uiuc.edu/rrt/), setting a relatively large number of goal postures is favorable for the search. Accordingly, N goal postures are generated herein.
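A minimal RRT-style search in a two-dimensional joint space, sketching how configurations are sampled, the nearest tree node is extended by a fixed step, and colliding intermediate nodes are rejected (cf. S503 to S505). The disk obstacle, step size, and goal bias are illustrative choices, not the patent's parameters.

```python
import numpy as np

# Minimal RRT: sample a configuration (with a small goal bias), extend the nearest
# tree node by a fixed step toward it, reject colliding nodes, and backtrack a path
# once a node lands within goal_tol of the goal.
def rrt(start, goal, collides, step=0.2, goal_tol=0.25, max_iter=2000, seed=0):
    rng = np.random.default_rng(seed)
    nodes = [np.array(start, dtype=float)]
    parents = {0: None}
    for _ in range(max_iter):
        sample = goal if rng.random() < 0.1 else rng.uniform(-np.pi, np.pi, 2)
        i = min(range(len(nodes)), key=lambda j: np.linalg.norm(nodes[j] - sample))
        d = sample - nodes[i]
        new = nodes[i] + step * d / (np.linalg.norm(d) + 1e-12)
        if collides(new):
            continue                         # reject colliding intermediate nodes
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if np.linalg.norm(new - goal) < goal_tol:
            path, j = [], len(nodes) - 1     # backtrack to recover the trajectory
            while j is not None:
                path.append(nodes[j])
                j = parents[j]
            return path[::-1]
    return None

collides = lambda q: np.linalg.norm(np.asarray(q)) < 0.4   # disk obstacle at the origin
path = rrt(start=(-1.5, 0.0), goal=np.array([1.5, 0.0]), collides=collides)
assert path is not None
assert all(not collides(q) for q in path)
```

In the embodiment, each extension additionally passes through the local path planning, so every intermediate node is projected back onto the constraint manifold (balance, fingertip tasks) before it is added to the tree.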

[0084]

The motion/posture generation unit 30 randomly generates a joint angle for each axis of the robot 100 (S503). Let the randomly generated joint angle be θ_rand. Herein, θ_rand is a joint angle vector, because values for all the joints are generated.

[0085]

The motion/posture generation unit 30 checks whether the robot 100 is free from collision with any object within the environment (S504). Herein, the collision checking is performed using the following function. In order to perform the collision checking more accurately, p_cur is herein provided as an argument of the function. Although the robot-base coordinate (robot standing position) p_cur may also be searched randomly, the description will be given assuming that the robot-base coordinate is provided by the user. When a collision occurs, the process returns to S503 to randomly generate the joint angles of the robot 100 again.

CollisionCheck(p_cur, θ_rand)

[0086]

The motion/posture generation unit 30 performs local path planning (S505). Herein, the feedback method using the prioritized task control in the local path planning will be described by way of example.

(1) The motion/posture generation unit 30 sets the current posture angle θ_cur to the initial value (θ_cur ← θ_rand).

(2) The motion/posture generation unit 30 obtains the current robot-base coordinate p_cur and the positions x_1, ..., x_n of the constraint points at the current posture angle θ_cur, by the robot forward kinematics model.

[0087]

(3) The motion/posture generation unit 30 sets the target values (constraints) obtained in the processing (2).

Hereinafter, examples of the target values set in a certain state of the operation will be described in operation examples.

Operation Example 1: A case where the right hand reaches an arbitrary goal position from the initial posture in which the robot 100 holds nothing in its hands.

(i) A case where the difference between the reaching goal position and the current hand position is within a predetermined threshold (in pursuit of the goal).

Task 1: The position of the center of mass (the user designates a three-dimensional position) as shown in the following expression (20).

[p_cogx, p_cogy, p_cogz]^T ... (20)

Task 2: The position and posture of the right hand as shown in the following expression (21) (the six-dimensional reaching goal (position and posture)).

(ii) A case where the difference between the reaching goal position and the current hand position is larger than the predetermined threshold (create intermediate nodes in the RRT).

Task 1: The position of the center of mass (user designates a three-dimensional position) .

Task 2 : (none) .
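The threshold test in Operation Example 1 that decides which tasks are active can be sketched as follows. The task names and the 0.05 m threshold are illustrative assumptions:

```python
import math

REACH_THRESHOLD = 0.05  # hypothetical threshold (m) for "in pursuit of the goal"

def select_tasks(goal_pos, hand_pos):
    """Return the prioritized task list for Operation Example 1.

    Task 1 (the center-of-mass position) is always active; Task 2 (the
    six-dimensional right-hand goal) is enabled only once the current hand
    position is within the threshold of the reaching goal position."""
    diff = math.dist(goal_pos, hand_pos)
    tasks = ["com_position"]             # Task 1: 3-D center-of-mass target
    if diff <= REACH_THRESHOLD:
        tasks.append("right_hand_pose")  # Task 2: 6-D position and posture
    return tasks
```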

[0088]

Operation Example 2: A case where the robot 100 places a glass at a certain goal position from the initial posture in which the robot 100 holds the glass of water.

(i) A case where the goal position and the current hand (or glass) position are within an arbitrary threshold (in pursuit of the goal).

Task 1: The position of the center of mass (user designates a three-dimensional position) as shown in the following expression (22).

(p_cogx, p_cogy, p_cogz)^T    (22)

Task 2: The right hand (six-dimensional reaching goal position) as shown in the following expression (23).

(ii) A case where the difference between the goal position and the current hand position is larger than the predetermined threshold (create intermediate nodes in the RRT).

Task 1: The position of the center of mass (user designates a three-dimensional position).

Task 2: The position and posture of the right hand (set as horizontal constraints; only two dimensions of the three-dimensional posture are constrained. In order to keep the glass horizontal, the target value is set to "0" in those two dimensions, as shown in the following expression (24)).

[ 0089 ]

(4) The motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70, at each constraint point, from the difference between the target value and the current value of each constraint point (fingertips, center of mass, or the like).

(5) The motion/posture generation unit 30 obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70.

(6) The motion/posture generation unit 30 confirms whether the difference between the target value and the current value is within the predetermined threshold, at all the constraint points. When the difference is within the predetermined threshold, the processing of the local path planning is ended. Otherwise, θ_out is set as the new θ_cur (θ_cur ← θ_out), and the process returns to the processing (1).
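The steps (1) to (6) above form a feedback loop that can be sketched as follows, using a toy one-dimensional model with a single constraint point. Here forward_kinematics, whole_body_ik, and the gain are hypothetical stand-ins for the robot forward kinematics model and the whole body cooperation IK control unit 70:

```python
def forward_kinematics(theta):
    """(2) Current constraint-point positions at posture theta (toy model:
    one constraint point whose position is 2*theta)."""
    return [2.0 * theta]

def whole_body_ik(theta, errors, gain=0.5):
    """(4)-(5) Map constraint-point errors to an updated joint angle.
    The division by 2.0 plays the role of the inverse of the toy Jacobian."""
    return theta + gain * sum(errors) / 2.0

def local_path_planning(theta_rand, targets, tol=1e-6, max_iter=1000):
    theta_cur = theta_rand                                # (1) theta_cur <- theta_rand
    for _ in range(max_iter):
        positions = forward_kinematics(theta_cur)         # (2) current values
        errors = [t - x for t, x in zip(targets, positions)]  # (3)-(4) differences
        if all(abs(e) < tol for e in errors):             # (6) all within threshold
            return theta_cur
        theta_cur = whole_body_ik(theta_cur, errors)      # (5) theta_cur <- theta_out
    raise RuntimeError("local path planning did not converge")
```

With the target position 1.0, the loop converges to the joint angle 0.5, since the toy constraint point sits at twice the joint angle.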

[ 0090 ]

The motion/posture generation unit 30 checks whether the joint angles obtained in the local path planning are free from collision with any object (S506). Herein, the collision checking is performed using the following function. Note that when a collision occurs, the process returns to S503 to randomly generate the joint angles of the robot 100 again.

CollisionCheck(P_cur, θ_out)

[0091]

When the obtained joint angles are free from collision with any object, the motion/posture generation unit 30 determines whether the target value of each task (each constraint) has reached the final target value (S507). When the target value has reached the final target value, it is determined that a path from the initial posture has been found, and the processing is ended.

[0092]

In the case where the target value has not reached the final target value, the target value is still an intermediate target value. Accordingly, nodes and edges are added to a tree constructed by the RRT method (S508), and the process returns to S503 to continue searching. Thus, merely by providing the final target value (e.g., the final position and posture of the fingertips) of each task, a trajectory to the final target position and posture can be generated while avoiding the collision with any object.

[0093]

Referring to Figs. 12 to 14, a description is given of a specific example of the motion trajectory generation processing using the RRT, which is performed by the motion/posture generation unit 30. A most basic example of the RRT method is herein described. In addition to the basic RRT method, many other techniques such as a method that uses RRT-Connect which forms branches from both the start and the goal, and a method that prepares a plurality of goal postures in advance can be employed.

[0094]

As shown in Fig. 12, the motion/posture generation unit 30 searches the joint space (CS) for a path from a start node to a goal node.

In the figure, a node currently generated by the motion/posture generation unit 30 is assumed to be a node 81. Note that a node N is created using information about the robot base coordinate, the joint angles, and the tasks (constraints), and is expressed by the following expression (25). A tree constructed by the RRT method is an assembly of nodes which are linked by edges from a start node and the edges which are formed between the nodes.
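A node of expression (25) can be sketched as a small record holding the base coordinate, the joint angles, and the tasks; the field names below are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """Sketch of a node N per expression (25): robot base coordinate,
    joint angles, and the tasks (constraints) active at this node."""
    base_coord: tuple           # robot base coordinate P
    joint_angles: tuple         # whole-body joint angle vector theta_out
    tasks: tuple = ()           # prioritized tasks (constraints)
    parent: "Node | None" = None  # edge back toward the start node

# The tree is the assembly of nodes reachable from the start node via edges.
start = Node(base_coord=(0.0, 0.0), joint_angles=(0.0, 0.0, 0.0))
child = Node(base_coord=(0.0, 0.0), joint_angles=(0.1, 0.0, 0.0), parent=start)
```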

[0095]

The motion/posture generation unit 30 links the currently generated node 81 and the point (node) which is nearest to the node 81 among the nodes that belong to the tree, with an edge. When the distance "d" between the nodes exceeds a predetermined threshold set in advance, one or more nodes are created in the middle of the edge leading to the node. Various techniques can be employed as a method for obtaining the distance "d". For instance, as the simplest method, the square norm of the difference between the joint angles θ_out at the nodes may be employed, as shown in the following expression (26).

d = ||θ_out(N1) - θ_out(N2)||^2    (26)
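The distance of expression (26), the squared norm of the joint-angle difference, and the nearest-node search that uses it, can be sketched as:

```python
def distance(theta_a, theta_b):
    """Squared norm of the joint-angle difference, per expression (26)."""
    return sum((a - b) ** 2 for a, b in zip(theta_a, theta_b))

def nearest(tree_thetas, theta_new):
    """Joint angles of the tree node nearest to theta_new under this metric."""
    return min(tree_thetas, key=lambda t: distance(t, theta_new))
```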

[0096]

As shown in Fig. 13, when a node 85 is created in the middle of the edge from the node 83 (k-th node) to the node 84 ((k+1)th node), the motion/posture generation unit 30 uses information about the previous and next nodes (nodes 83 and 84). Details are given below.

(i) The motion/posture generation unit 30 obtains a joint angle at the created node 85 as shown in the following expression (27).

(ii) The motion/posture generation unit 30 executes the local path planning with the obtained joint angle set as an initial input.

(iii) The motion/posture generation unit 30 checks if a collision with any object occurs, and when no collision occurs, the nodes are registered in the tree. When a collision occurs, all the subsequent nodes (including the node 84) which are created in the middle of the edge leading to the node 84 are discarded.
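The steps (i) to (iii) can be sketched as follows. The midpoint interpolation standing in for expression (27) is an illustrative assumption, as are the `plan` and `collides` callables standing in for the local path planning and the collision check:

```python
def interpolate(theta_k, theta_k1, alpha=0.5):
    """(i) Joint angle at the created node: assumed here to be linear
    interpolation between the k-th and (k+1)th nodes (stand-in for (27))."""
    return [a + alpha * (b - a) for a, b in zip(theta_k, theta_k1)]

def insert_intermediate(theta_k, theta_k1, plan, collides):
    """(ii)-(iii) Plan locally from the interpolated joint angle and keep
    the intermediate node only if the resulting posture is collision free."""
    theta_mid = interpolate(theta_k, theta_k1)
    theta_out = plan(theta_mid)      # (ii) local path planning
    if collides(theta_out):          # (iii) discard on collision
        return None
    return theta_out
```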

[0097]

As shown in Fig. 14, regarding a node 86 which is subsequent to the node 85 ((k+2)th node), when the distance between the node 86 and the goal node is within a certain range, the motion/posture generation unit 30 tries to create an edge between the node 86 and the goal node. When the edge can be created, it is determined that a path can be created, and the processing is ended.

[0098]

Fig. 15 is a diagram illustrating the flow of motion trajectory optimization processing performed by the optimization unit 50. The optimization unit 50 obtains a trajectory closest to the ideal posture from a reference motion trajectory (motion posture trajectory) .

The optimization unit 50 inputs a robot motion trajectory (joint angle trajectory), which is obtained before the optimization, in the joint space (CS) (S601). Herein, the motion trajectory obtained by the motion/posture generation unit 30 is input.

[0099]

The optimization unit 50 divides the input joint angle trajectory into M segments based on a predetermined unit (S602). In general, a constraint point moving distance of 2-5 cm or a joint angle range of 1-5 deg is set as the predetermined unit.

As described with reference to Figs. 9 and 10, the optimization unit 50 performs optimization processing on each node (joint angle) among the divided nodes m (0 ≤ m < M) (S603).

The optimization unit 50 repeats the optimization processing on the M nodes (S604, S605). When the optimization processing on the M nodes is ended, an interpolation between the divided nodes is carried out. As the interpolation method, linear interpolation or prioritized task control may be employed so as to perform the interpolation in more detailed units. As a result, the trajectory generated using random search techniques can be smoother.
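The division of S602 and the subsequent interpolation can be sketched as follows, using linear interpolation, the simpler of the two methods mentioned above; the segment count and substep count are illustrative choices:

```python
def divide(trajectory, m):
    """S602: pick M nodes (joint-angle vectors) from the input trajectory."""
    step = max(1, len(trajectory) // m)
    return trajectory[::step][:m]

def interpolate_nodes(nodes, substeps):
    """Linear interpolation between consecutive optimized nodes, producing a
    smoother trajectory in more detailed units."""
    out = []
    for a, b in zip(nodes, nodes[1:]):
        for i in range(substeps):
            t = i / substeps
            out.append([x + t * (y - x) for x, y in zip(a, b)])
    out.append(list(nodes[-1]))
    return out
```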

[0100]

As has been explained so far, in accordance with the present invention, a joint angle trajectory reaching from the initial posture to the final target posture is generated in the joint space (CS) by combining a search algorithm based on sampling with whole body cooperation IK calculation using task control with priorities. Therefore, by providing only the final target value of a task (for example, the final position and posture of the fingertips), a joint angle trajectory satisfying the constraints can be planned. Further, when a joint angle trajectory is to be calculated, a joint angle trajectory reaching the final target value can be planned while avoiding a collision with an object within the environment, by making a collision check between the posture of the robot 100 obtained from the calculated joint angles and the objects within the environment. Furthermore, the collision-free posture or trajectory can be optimized by combining the whole body cooperation IK calculation with a mathematical optimization technique and updating the joint angles so as to get closer to an ideal posture at the final target value.

[0101]

Further, in accordance with the present invention, priority is set for a plurality of constraint points, so that when a plurality of constraint positions that cannot be simultaneously satisfied are input, calculation can be performed such that the constraints are satisfied in accordance with the priority. In contrast, if no priority is set, only results that satisfy none of the constraints can be obtained in most cases. For example, in the case of a bipedal robot or the like, in which the robot itself needs to be controlled into a stable state on the premise that the constraints of the legs are satisfied, priority may be assigned in a similar manner to that of the present invention so that the risk that the robot could topple over is reduced.

[0102]

Note that the present invention is not limited to the above-described exemplary embodiments and can be modified as appropriate without departing from the spirit of the present invention. For example, although the above-described exemplary embodiments are explained on the assumption that the optimization unit 50 performs optimization processing on a trajectory generated by the motion/posture generation unit 30, the optimization unit 50 may perform optimization processing on a trajectory generated by a trajectory creating method in the related art.

Industrial Applicability

[0103]

The present invention is applicable to a robot control system that makes, for example, a motion plan of a robot to which a plurality of tasks are provided, a motion data creation apparatus and its creating method.

Reference Signs List

[0104]

1 CONTROL SYSTEM

10 INTERFACE UNIT

20 MOTION DATA CREATION APPARATUS

30 MOTION/POSTURE GENERATION UNIT

40 OBJECT DATA STORE UNIT

50 OPTIMIZATION UNIT

60 ROBOT CONTROL UNIT

70 WHOLE BODY COOPERATION IK CONTROL UNIT

71 JOINT ANGULAR VELOCITY CALCULATION UNIT

72 INTEGRATOR

73 ROBOT FORWARD KINEMATICS MODEL

74 JACOBIAN MATRIX CALCULATION UNIT

75, 76 GAIN

81, 83, 84, 86 NODE

82, 85 EDGE

100 ROBOT

101 HEAD

102-128 JOINT

130, 131 SOLE

141R, 141L HAND

LR RIGHT LEG LINK

LL LEFT LEG LINK

AR RIGHT ARM LINK

AL LEFT ARM LINK

BD BODY TRUNK