
Intelligent human–robot collaborative handover system for arbitrary objects based on 6D pose recognition

  • Abstract: In daily practice, there is a large demand for handing over diverse objects between people; collaborative robots can take over these simple, time-consuming, and labor-intensive tasks. To address the problem that imprecise recognition of object poses during human–robot collaborative handover makes accurate grasping difficult, a 6D object pose recognition network based on the perspective-n-point (PnP) algorithm is introduced to precisely identify the pose of the object to be handed over; an improved method for producing datasets of handover objects is proposed to enable accurate recognition of arbitrary objects; and precise pose localization and accurate grasping are achieved through vision-system calibration, coordinate transformation, and an improved grasping scheme. To verify the effectiveness of the proposed human–robot collaborative handover system, comparative handover experiments were conducted on the LineMod dataset and a self-built dataset. The results show that, for objects from the self-built dataset, the proposed system achieves an average deviation distance of 1.97 cm, an average handover success rate of 76%, and an average handover time of 30 s; when the grasping posture is not considered, the success rate reaches 89%. The system is robust and has promising application prospects.
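The PnP-based pose recognition summarized above follows a standard keypoint-plus-EPnP pipeline, so a minimal sketch is given below using OpenCV. It assumes the network has already voted 2D keypoint locations; the 3D model keypoints, camera intrinsics, and the ground-truth pose used to synthesize the 2D points are illustrative placeholders, not values from the paper.

```python
import cv2
import numpy as np

# Hypothetical 3D keypoints on the object model, in meters
# (in the real pipeline these come from the object's reconstructed mesh).
object_points = np.array([
    [0.00, 0.00, 0.00], [0.06, 0.00, 0.00], [0.00, 0.06, 0.00],
    [0.00, 0.00, 0.06], [0.06, 0.06, 0.00], [0.06, 0.00, 0.06],
    [0.00, 0.06, 0.06], [0.06, 0.06, 0.06],
], dtype=np.float64)

# Placeholder camera intrinsics (fx, fy, cx, cy) from intrinsic calibration.
K = np.array([[615.0,   0.0, 320.0],
              [  0.0, 615.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)  # assume an already-undistorted image

# Synthesize the network's 2D keypoint votes by projecting with a known
# pose, so the recovered pose can be checked against it.
rvec_gt = np.array([0.1, 0.2, 0.3])
tvec_gt = np.array([0.02, -0.01, 0.50])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)
image_points = image_points.reshape(-1, 2)

# EPnP wrapped in RANSAC, tolerating outlier keypoint votes.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, K, dist,
    flags=cv2.SOLVEPNP_EPNP, reprojectionError=3.0)
R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
# [R | tvec] is the 6D pose of the object in the camera frame.
```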

     

Abstract: In daily practice, there are many instances of diverse object handover between humans. For example, on an automobile production line, workers pick up parts and deliver them to colleagues, or receive parts from them and put the parts in the appropriate position. Similarly, in households, children assist bedridden elderly people by passing them a cup of water, and in medical surgery, assistants take over surgical tools used by doctors. These tasks require a considerable amount of time and manpower. In these scenarios, the target object must be delivered efficiently and quickly while prioritizing safety. Collaborative robots can serve as human colleagues to perform these simple, time-consuming, and laborious tasks. We expect humans and robots to hand over objects seamlessly, naturally, and efficiently, just as humans hand over objects to each other. This paper proposes a 6-dimensional (6D) pose recognition-based human–robot collaborative handover system to address the problem of inaccurate object grasping caused by imprecise recognition of object poses during the human–robot collaborative handover process. The main contents are as follows. To solve the 6D pose recognition problem, a residual network (ResNet) is introduced to perform semantic segmentation and key-point vector-field prediction on the image, and random sample consensus (RANSAC) voting is used to predict key-point coordinates; an improved efficient perspective-n-point (EPnP) algorithm then estimates the object pose, which improves accuracy. By analyzing the advantages and disadvantages of the LineMod dataset and drawing on recent 3-dimensional (3D) reconstruction techniques, an improved dataset production method is proposed that enables accurate identification of everyday objects and reduces the time required for dataset production. The transformation from the object coordinate system to the camera coordinate system, and then to the robot base coordinate system, is obtained through camera intrinsic calibration and hand–eye calibration; thus, the pose of the target object in the robot base coordinate system is determined. Furthermore, a grasping method that computes an effective position and orientation is proposed to realize precise object pose localization and accurate grasping. A handover experiment platform was set up to validate the effectiveness of the proposed system, with four volunteers conducting 80 handover experiments. The results show that the average deviation distance of the proposed human–robot handover system is 1.97 cm, the average handover success rate is 76%, and the average handover time is 30 s, while the average success rate reaches 89% when the grasping posture is not considered. These results demonstrate that the proposed human–robot collaborative handover system is robust and can be applied to different scenarios and interactive objects, with promising application prospects.
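The calibration and coordinate-transformation step can likewise be sketched with homogeneous transforms. The sketch below assumes the hand–eye (eye-to-hand) calibration, e.g. via OpenCV's cv2.calibrateHandEye, has already yielded the camera pose in the robot base frame; all matrix values are placeholders rather than the paper's calibration results.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Pose of the object in the camera frame (e.g., R and tvec from the PnP step).
T_cam_obj = to_homogeneous(np.eye(3), [0.02, -0.01, 0.50])

# Camera frame expressed in the robot base frame, obtained once from
# hand-eye calibration (placeholder values).
T_base_cam = to_homogeneous(
    np.array([[ 0, -1,  0],
              [-1,  0,  0],
              [ 0,  0, -1]], dtype=np.float64),
    [0.40, 0.00, 0.80])

# Chain the transforms: object pose in the robot base frame,
# which is the quantity the grasping scheme consumes.
T_base_obj = T_base_cam @ T_cam_obj
grasp_position = T_base_obj[:3, 3]
grasp_rotation = T_base_obj[:3, :3]
print("object position in base frame:", grasp_position)
```

Chaining T_base_cam with the PnP result in this way gives the grasp target directly in the robot's own frame, so the grasp position and orientation can be computed without any further per-object calibration.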

     
