# Our Techniques

## Basic Functions

This page explains the basic functions of our robot. These techniques are combined to achieve complex robot actions.

### Object Recognition

YolactEdge [1] is used for object recognition, and its output is used to estimate the grasping point of each object. This is an essential process for the robot to find and grasp an object, and it is an area that we, Hibikino-Musashi@Home, focus on.

![Object recognition by YolactEdge](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture1.jpg)
![Object grasping point estimation](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture2.jpg)
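As an illustration of how a grasping point can be derived from an instance mask, here is a minimal sketch. It assumes the mask comes from an instance segmentation model such as YolactEdge and that a registered depth image and camera intrinsics are available; the centroid heuristic and the function name are our simplifications for illustration, not the exact production pipeline.

```python
import numpy as np

def grasp_point_from_mask(mask, depth, fx, fy, cx, cy):
    """Estimate a 3D grasp point (camera frame) from one instance mask.

    mask  : HxW boolean array from the instance segmentation model.
    depth : HxW depth image in meters, registered to the color image.
    fx, fy, cx, cy : pinhole camera intrinsics.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    u, v = xs.mean(), ys.mean()          # grasp at the mask centroid
    z = float(np.median(depth[ys, xs]))  # robust depth over the mask
    # Back-project the pixel (u, v) at depth z into 3D.
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Toy example: a rectangular mask at 0.8 m.
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 300:360] = True
depth = np.full((480, 640), 0.8)
print(grasp_point_from_mask(mask, depth, 525.0, 525.0, 320.0, 240.0))
```

In practice the grasp point also depends on the gripper geometry and the object's shape; the centroid is only a reasonable default for small, roughly convex objects.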
### Human Detection

Human attributes are recognized using CSRA [2], and the pointing position is estimated using LightWeightHumanPose [3]. These functions are used to describe the attributes of humans and to bring a pointed-at object to a human.

![Human attribute recognition](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture3.jpg)
![Pointing position estimation](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture4.jpg)
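The pointed-at position can be recovered geometrically once the pose keypoints are lifted to 3D. The following sketch is a simplified illustration rather than our actual code: it extends the elbow-to-wrist ray until it intersects the floor plane z = 0.

```python
import numpy as np

def pointing_position(elbow, wrist):
    """Intersect the elbow->wrist ray with the floor plane z = 0.

    elbow, wrist : 3D keypoints (x, y, z) with the z-axis pointing up,
    e.g. 2D pose keypoints back-projected using a depth image.
    """
    direction = wrist - elbow
    if direction[2] >= 0:             # the arm is not pointing downward
        return None
    t = -wrist[2] / direction[2]      # ray parameter where z reaches 0
    return wrist + t * direction

# Example: an arm at roughly shoulder height pointing down and ahead.
print(pointing_position(np.array([0.0, 0.0, 1.3]),
                        np.array([0.3, 0.1, 1.1])))
```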
### Voice Recognition

Noise is reduced with noisereduce [4], and speech is recognized with Vosk [5]. Voice recognition is important for the robot to communicate with humans, so we have developed a robust recognition system that can recognize spoken words even in a noisy crowd.

![Voice recognition](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture5.jpg)
![Noise reduction](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture6.jpg)
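A minimal version of this two-stage pipeline can be assembled from the two libraries' public APIs. The sketch below assumes a 16 kHz, 16-bit mono WAV recording and a downloaded Vosk model directory; the file path and model name are placeholders, and the real system streams audio from the robot's microphone instead of a file.

```python
import json
import wave

import noisereduce as nr
import numpy as np
from vosk import KaldiRecognizer, Model

# Load a 16 kHz, 16-bit mono recording (path is a placeholder).
wf = wave.open("command.wav", "rb")
rate = wf.getframerate()
audio = np.frombuffer(wf.readframes(wf.getnframes()), dtype=np.int16)

# Stage 1: spectral-gating noise reduction with noisereduce.
denoised = nr.reduce_noise(y=audio.astype(np.float32), sr=rate)

# Stage 2: offline speech recognition with Vosk.
model = Model("vosk-model-small-en-us-0.15")  # any downloaded Vosk model
rec = KaldiRecognizer(model, rate)
rec.AcceptWaveform(denoised.astype(np.int16).tobytes())
print(json.loads(rec.FinalResult())["text"])
```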
### Environment Recognition

The placeable area and the travelable area are estimated by semantic segmentation. These techniques are used to place an object in an empty space on a shelf and to travel while avoiding obstacles on the floor.

![Placeable area estimation](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture7.jpg)
![Travelable area estimation](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture8.jpg)
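As one way to turn a "placeable" segmentation mask into a concrete placement target, the sketch below picks the free pixel with the largest clearance via a distance transform. This heuristic and the function name are our illustrative choices, not the published method.

```python
import cv2
import numpy as np

def placement_pixel(placeable_mask):
    """Pick the free pixel with the largest clearance.

    placeable_mask : HxW uint8 mask (255 = placeable, 0 = occupied),
    e.g. the "free shelf" class of the semantic segmentation output.
    """
    # Distance from each free pixel to the nearest occupied pixel.
    dist = cv2.distanceTransform(placeable_mask, cv2.DIST_L2, 5)
    v, u = np.unravel_index(np.argmax(dist), dist.shape)
    return int(u), int(v)

# Toy example: one free rectangle on an otherwise occupied shelf.
mask = np.zeros((100, 200), dtype=np.uint8)
mask[20:80, 50:150] = 255
print(placement_pixel(mask))  # a pixel deep inside the free region
```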
## Dataset Generation and Sim2Real [6]

A large amount of training data must be prepared to train the object recognition system, so Hibikino-Musashi@Home has developed dataset generation software that uses a 3D simulator.

First, an object is scanned with a 3D scanner to obtain its 3D model. The scanned model is then imported into the simulator to generate a dataset: objects are placed randomly in a simulated room, and the background image is changed randomly. This randomization makes it possible to train a recognition system with high accuracy. With this method, 100,000 training samples with masks and labels can be generated automatically within one hour.

![3D model scanning](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture9-1024x264.jpg)
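The generation loop amounts to domain randomization: sample object placements and a background, render, and record the masks and labels that the renderer already knows. The sketch below runs against a stub because the real interface is simulator-specific; `SimStub`, `compose_scene`, and the object IDs are hypothetical stand-ins, not our software's API.

```python
import random

import numpy as np

class SimStub:
    """Stand-in for the real 3D simulator, which renders scanned 3D
    models in a simulated room. This stub only mimics the interface
    so the generation loop below runs end to end."""

    def __init__(self, size=(480, 640)):
        self.size = size

    def compose_scene(self, object_ids, background_id):
        h, w = self.size
        rgb = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
        # One binary mask per placed object, known exactly by the renderer.
        masks = [np.zeros((h, w), dtype=np.uint8) for _ in object_ids]
        return rgb, masks

OBJECTS = ["cup", "bottle", "snack"]   # scanned 3D models (hypothetical IDs)
BACKGROUNDS = list(range(100))         # pool of random background images

sim = SimStub()
dataset = []
for _ in range(1000):  # the real system generates ~100,000 samples per hour
    objs = random.sample(OBJECTS, k=random.randint(1, len(OBJECTS)))
    rgb, masks = sim.compose_scene(objs, random.choice(BACKGROUNDS))
    # Masks and labels come for free from the simulator: no hand annotation.
    dataset.append({"image": rgb, "masks": masks, "labels": objs})
```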
## Open-source Simulator

Hibikino-Musashi@Home has published its development system as open source:

https://github.com/Hibikino-Musashi-Home/hma_wrs_sim_ws

This repository is the ROS workspace for the tidy-up robot developed by Hibikino-Musashi@Home. The workspace uses an open-source HSR simulator [7], so developers who do not have a physical HSR can easily start development using the libraries we have developed.

![Open-source HSR simulator](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture10.jpg)

## Research Activities

### Brain-inspired Artificial Intelligence [8]

We have developed a brain-inspired artificial intelligence that adopts brain functions to learn a family's preferences and behavior from a small amount of robot experience. It can acquire knowledge that differs in each home environment and that is difficult to acquire with deep learning. Our goal is to implement the model in digital or analog circuits to build a highly efficient, low-power system for robots.

![Brain-inspired AI model](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture11.jpg)

### Analog Chip and Soft Hand [9]

This object recognition system uses tactile information captured by sensors made of soft materials and processes it on an analog chip. The tactile sensor is made of silicone rubber containing liquid metal, so the hand can hold an object gently without damaging it. The processing circuit is implemented on an analog chip only a few millimeters in size that achieves a performance and power efficiency of 300 TOPS/W. Combining the chip with this dedicated robot sensor can improve recognition accuracy, and the approach can lead to robot systems with long battery life.

![Analog chip and soft hand](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/Picture12.jpg)

## References

[1] H. Liu et al., ICRA, 2021. https://ieeexplore.ieee.org/abstract/document/9561858
[2] K. Zhu et al., ICCV, 2021. https://openaccess.thecvf.com/content/ICCV2021/html/Zhu_Residual_Attention_A_Simple_but_Effective_Method_for_Multi-Label_Recognition_ICCV_2021_paper.html
[4] T. Sainburg et al., PLoS Computational Biology, 2020. https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1008228
[5] vosk-api. https://github.com/alphacep/vosk-api
[6] T. Ono et al., Advanced Robotics, 2022. https://www.tandfonline.com/doi/full/10.1080/01691864.2022.2115315
[7] hsrb_robocup_dspl_docker. https://github.com/hsr-project/hsrb_robocup_dspl_docker
[8] H. Nakagawa et al., IEEE Access, 2022. https://ieeexplore.ieee.org/document/9759428
[9] M. Yamaguchi et al., IJCNN, 2019. https://ieeexplore.ieee.org/abstract/document/8852325

[Technology poster (PDF)](https://www.brain.kyutech.ac.jp/~hma/wordpress/wp-content/uploads/2022/11/hma_tech2022_en_v2.pdf)