Adaptive grasping strategy of robot based on Gaussian process

CHEN Youdong  GUO Jiaxin  TAO Yong

Citation: CHEN Youdong, GUO Jiaxin, TAO Yong, et al. Adaptive grasping strategy of robot based on Gaussian process[J]. Journal of Beijing University of Aeronautics and Astronautics, 2017, 43(9): 1738-1745. doi: 10.13700/j.bh.1001-5965.2016.0660 (in Chinese)

doi: 10.13700/j.bh.1001-5965.2016.0660
Funds:

National High-tech Research and Development Program of China 2014AA041601

Beijing Science and Technology Plan D161100003116002

    Author biographies:

    CHEN Youdong   Male, Ph.D., associate professor, master's supervisor; research interests: robot control systems and easy robot programming

    GUO Jiaxin   Male, master's degree candidate; research interest: human-robot collaboration

    TAO Yong   Male, Ph.D., lecturer; research interests: mechatronics and intelligent robot applications

    Corresponding author:

    CHEN Youdong, E-mail: chenyd@buaa.edu.cn

  • CLC number: TP242.6

  • Abstract:

    During robotic grasping tasks, the pose of the target object changes frequently. To enable the robot to adapt to such pose changes during motion, an adaptive grasping strategy based on Gaussian process is proposed. The method establishes a mapping from the observation space to the joint space, so that the robot learns from samples, eliminating the need to calibrate the robot vision system or to solve the inverse kinematics. First, the robot is dragged to grasp the object, and the observation variables of the object and the joint angles of the robot are recorded. Then, a Gaussian process model is trained on the recorded samples, associating observation variables with joint angles. Finally, when a new observation variable is obtained, the robot's joint angles are computed by the trained Gaussian process model. After training, a UR3 robot grasped the objects successfully.
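
    The workflow above — record drag-teaching demonstrations, fit a Gaussian process on (observation variable, joint angle) pairs, and query the model for new observations — can be illustrated with a short regression sketch. The Python snippet below is a minimal sketch under assumptions of its own, not the authors' implementation: scikit-learn's GaussianProcessRegressor with an RBF kernel stands in for the paper's Gaussian process model (whose kernel and hyperparameters are not reproduced here), and the first four rows of Table 1 serve as training data.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel

    # Observation variables (x/pixel, y/pixel, θ/(°)), rows 1-4 of Table 1.
    X_train = np.array([
        [805.0, 647.0, 87.0],
        [964.0, 700.0, 70.0],
        [919.0, 557.0, 80.0],
        [951.0, 632.0, 32.0],
    ])

    # Corresponding joint angles/(°): base, shoulder, elbow, wrists 1-3.
    Y_train = np.array([
        [ 3.2, -120.5,  -78.5, -63.2, 115.9, -78.4],
        [14.7, -121.4,  -80.8, -50.2, 107.6, -62.7],
        [13.0, -110.5,  -96.2, -50.2, 107.6, -62.7],
        [16.7, -116.1,  -91.4, -39.5,  99.7, -15.4],
    ])

    # One multi-output GP over all six joints; the length scale is an assumed
    # starting value that fitting refines by maximizing marginal likelihood.
    kernel = ConstantKernel(1.0) * RBF(length_scale=100.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_train, Y_train)

    # Query the trained model with a new observation (row 1 of Table 3).
    x_new = np.array([[831.0, 630.0, 6.0]])
    joint_angles = gp.predict(x_new)
    print("Predicted joint angles/(°):", np.round(joint_angles[0], 1))

    In the paper the model is trained on all 12 demonstrations of Table 1; only four rows are used here to keep the sketch short, so the printed prediction will not match row 1 of Table 3.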


  • Figure 1.  Adaptive grasping based on Gaussian process

    Figure 2.  Experimental platform

    Figure 3.  UR3 robot and its joint axes

    Figure 4.  Pose of target object

    Figure 5.  Structure of end effector

    Figure 6.  Manual drag teaching programming

    Figure 7.  Observation variables of target object

    Figure 8.  Adaptive grasping of robot

    Figure 9.  Distribution of training and testing samples in 3D space

    Figure 10.  Distribution of training and testing samples on the pixel plane

    Table 1.  Training data

    No.   Observation variables        Corresponding joint angles/(°)
          x/pixel  y/pixel  θ/(°)      Base   Shoulder  Elbow    Wrist 1  Wrist 2  Wrist 3
    1     805      647      87         3.2    -120.5    -78.5    -63.2    115.9    -78.4
    2     964      700      70         14.7   -121.4    -80.8    -50.2    107.6    -62.7
    3     919      557      80         13.0   -110.5    -96.2    -50.2    107.6    -62.7
    4     951      632      32         16.7   -116.1    -91.4    -39.5    99.7     -15.4
    5     895      577      9          19.3   -112.3    -95.9    -39.4    89.2     7.24
    6     969      483      12         26.5   -105.3    -104.6   -39.4    90.0     7.7
    7     809      516      16         15.7   -108.8    -101.2   -39.5    91.5     -2.2
    8     873      593      31         18.2   -111.5    -102.2   -28.9    91.6     -12.4
    9     975      584      52         21.87  -111.2    -101.6   -31.1    95.9     -26.1
    10    820      711      63         10.2   -120.4    -83.9    -45.4    103.6    -49.8
    11    1044     655      29         5.9    -113.2    -99.8    -32.3    109.0    -31.6
    12    803      594      39         5.9    -113.2    -99.8    -32.3    109.0    -31.6

    Table 2.  Joint angles corresponding to target objects in Fig. 7

    No.   Predicted joint angles/(°)
          Base   Shoulder  Elbow    Wrist 1  Wrist 2  Wrist 3
    1     16.3   -104.4    -109.1   -33.9    89.9     5.4
    2     12.6   -121.3    -81.5    -50.7    110.3    -65.5

    Table 3.  Information of testing samples

    No.   Observation variables        Corresponding joint angles/(°)
          x/pixel  y/pixel  θ/(°)      Base   Shoulder  Elbow    Wrist 1  Wrist 2  Wrist 3
    1     831      630      6          15.0   -115.6    -94.4    -32.6    91.9     -32.6
    2     966      646      16         21.9   -116.6    -92.5    -33.5    90.6     5.2
    3     906      558      28         18.5   -110.7    -99.3    -37.7    94.0     -10.5
    4     774      472      18         13.1   -104.8    -107.7   -37.2    94.1     -8.6
    5     712      637      52         2.5    -117.0    -89.4    -45.5    108.1    -47.5
    6     903      666      30         16.1   -118.3    -89.2    -37.6    96.7     -13.1
    7     1012     616      54         21.1   -114.9    -92.0    -42.7    98.6     -30.2
    8     1020     489      77         21.4   -106.2    -101.7   -49.1    102.2    -52.5
    9     722      429      33         9.0    -102.1    -110.1   -41.8    98.9     -27.0
    10    649      610      27         1.9    -114.9    -93.9    -40.1    103.0    -26.2
    11    908      749      51         12.6   -124.4    -79.8    -42.4    103.6    -34.2
    12    637      780      15         -0.7   -126.7    -79.4    -36.3    103.3    -15.3

    Table 4.  Testing samples that could not be grasped

    No.   Observation variables
          x/pixel  y/pixel  θ/(°)
    1     1063     968      9
    2     968      692      3
    3     526      813      59
    4     840      853      43
    5     852      453      25
    6     519      671      11

    Table 5.  Predicted pose of target object

    No.   Observation variables
          x/pixel  y/pixel  θ/(°)
    1     813      653      88
    2     966      701      69
    3     906      547      80
    4     941      629      31
    5     884      569      9
    6     977      485      13
    7     816      524      14
    8     870      589      32
    9     983      593      51
    10    818      709      63
    11    1045     655      29
    12    806      594      39
Publication history
  • Received: 2016-08-10
  • Accepted: 2016-12-09
  • Published online: 2017-09-20
