A Visual Servoing Approach to Human-Robot Interactive Object Transfer

By: Ying Wang
  • Number of pages: 192
  • Format: PDF
  • ISBN: 978-3-7412-0412-8
  • EAN: 9783741204128
  • Publication date: 12/05/2016
  • Digital protection: Digital watermarking
  • Size: 11 MB
  • Additional info: pdf
  • Publisher: Books on Demand

Abstract

Taking human factors into account, this visual servoing approach aims to provide robots with real-time situational information so that they can accomplish tasks in direct, close-proximity collaboration with people. A hybrid visual servoing algorithm, combining classical position-based and image-based visual servoing, is applied over the whole task space. A model-based tracker monitors human activity by matching a human skeleton representation to the person's appearance in the camera image.
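As background for readers unfamiliar with the two classical schemes, the sketch below illustrates how such a hybrid controller might be structured: an image-based command v = -λ L⁺(s - s*) driving image features toward their references, a position-based command driving a 3-D pose error to zero, and a blending weight between them. The function names, the weight w, and the linear blending rule are illustrative assumptions, not the book's actual algorithm.

```python
import numpy as np

def ibvs_velocity(s, s_star, L, lam=0.5):
    # Image-based visual servoing: v = -lambda * L^+ (s - s*),
    # where L is the interaction matrix (image Jacobian) of the features.
    error = s - s_star
    return -lam * np.linalg.pinv(L) @ error

def pbvs_velocity(t_err, theta_u, lam=0.5):
    # Position-based visual servoing: drive the 3-D pose error to zero.
    # t_err: translation error (3,), theta_u: axis-angle rotation error (3,).
    return -lam * np.concatenate([t_err, theta_u])

def hybrid_velocity(s, s_star, L, t_err, theta_u, w, lam=0.5):
    # Hypothetical hybrid scheme: blend the two commands with w in [0, 1].
    # w could, for instance, grow as features approach the image border,
    # favoring IBVS to keep the target in view; the book's switching rule
    # may differ.
    v_ibvs = ibvs_velocity(s, s_star, L, lam)
    v_pbvs = pbvs_velocity(t_err, theta_u, lam)
    return w * v_ibvs + (1.0 - w) * v_pbvs
```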
Grasping algorithms are implemented to compute grasp points from a geometric model of the robot gripper. Since the major challenges of human-robot interactive object transfer are visual occlusion and grasp planning, this work proposes a new method for visually guiding a robot in the presence of partial visual occlusion and elaborates a solution for adaptive robotic grasping.
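As one illustration of computing grasp points against a gripper model, the minimal sketch below searches a 2-D object contour for an antipodal pair of contact points that fits within the gripper's maximum opening. The inputs (contour points with outward unit normals), the friction-cone test, and the scoring are assumptions for illustration; the book's actual grasping algorithm is not reproduced here.

```python
import numpy as np

def antipodal_grasp_points(contour, normals, max_opening,
                           friction_angle=np.deg2rad(15)):
    # contour: (N, 2) object boundary points; normals: (N, 2) outward unit
    # normals; max_opening: widest span the gripper can grasp (assumed known
    # from the gripper's geometric model).
    best, best_score = None, np.inf
    for i in range(len(contour)):
        for j in range(i + 1, len(contour)):
            width = np.linalg.norm(contour[j] - contour[i])
            if width == 0.0 or width > max_opening:
                continue  # degenerate pair, or wider than the gripper opens
            axis = (contour[j] - contour[i]) / width  # grasp axis, i -> j
            # Antipodal condition: each outward normal lies within the
            # friction cone around the grasp axis (pointing away from the
            # opposite contact).
            a1 = np.arccos(np.clip(np.dot(normals[i], -axis), -1.0, 1.0))
            a2 = np.arccos(np.clip(np.dot(normals[j],  axis), -1.0, 1.0))
            if a1 < friction_angle and a2 < friction_angle:
                score = a1 + a2  # prefer the most closely antipodal pair
                if score < best_score:
                    best, best_score = (i, j), score
    return best  # indices of the chosen grasp points, or None if none fit
```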