Each entry below lists, in order and where available: authors; title; conference; publisher; pages; month, day, and year; location; DOI; SlideShare link; YouTube / presentation video link; project page link; acceptance rate [%]; and awards. Entries are numbered from newest to oldest.
1. --. In Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology (UIST '25), ACM, conditionally accepted, 2025, Busan, Korea. Acceptance rate: 22%.
2. Takashi Amesaka, Takumi Yamamoto, Hiroki Watanabe, Buntarou Shizuki, Yuta Sugiura. FlexEar-Tips: Shape-Adjustable Ear Tips Using Pressure Control. The ACM CHI Conference on Human Factors in Computing Systems (CHI 2025), ACM, Article No. 350, pp. 1-13, April 26 - May 1, 2025, Yokohama, Japan. DOI: https://doi.org/10.1145/3706598.3714177. Project: https://lclab.org/projects/flexear-tips. Acceptance rate: 24.9%.
3. Fumika Oguri, Katsutoshi Masai, Yuta Sugiura, Yuichi Itoh. MaGEL: A Soft, Transparent Input Device Enabling Deformation Gesture Recognition. Proceedings of the 30th International Conference on Intelligent User Interfaces (IUI '25), ACM, pp. 982-992, March 24-27, 2025, Cagliari, Italy. DOI: https://doi.org/10.1145/3708359.3712100. Acceptance rate: 25%.
4. Riku Kitamura, Kenji Yamada, Takumi Yamamoto, Yuta Sugiura. Ambient Display utilizing Anisotropy of Tatami. In Proceedings of the 19th International Conference on Tangible, Embedded, and Embodied Interaction (TEI 2025), ACM, Article No. 3, pp. 1-15, March 4-7, 2025, Bordeaux, France. DOI: https://doi.org/10.1145/3689050.3704924. Slides: https://www.slideshare.net/slideshow/ambient-display-utilizing-anisotropy-of-tatami/276666522. Video: https://youtu.be/iOuDdAVULf4. Project: https://lclab.org/projects/tatapixel.
5. Takumi Yamamoto, Suguru Kanoga, Yuta Sugiura. Comparison of Nine Deep Regressors in Continuous Blood Pressure Estimation Using Single-Channel Photoplethysmograms under the PulseDB. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 165-170, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10871083.
6. Shogo Hanayama, Riku Kitamura, Takumi Yamamoto, Takashi Amesaka, Liwei Chan, Yuta Sugiura. SkinRing: Ring-shaped Device Enabling Wear Direction-Independent Gesture Input on Side of Finger. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 386-392, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10871054. Slides: https://www.slideshare.net/slideshow/skinring-ring-shaped-device-enabling-wear-direction-independent-gesture-input-on-side-of-finger/275308174. Video: https://youtu.be/sdn2rF-fYeQ?si=vX2otDKX_sVajVpr. Project: https://lclab.org/projects/skinring.
7. Takehisa Furuuchi, Takumi Yamamoto, Katsutoshi Masai, Takashi Amesaka, Yuta Sugiura. GestEarrings: Developing Gesture-Based Input Techniques for Earrings. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 897-904, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10871112. Slides: https://www.slideshare.net/slideshow/gestearrings-developing-gesture-based-input-techniques-for-earrings/275359306. Video: https://youtu.be/F5DLPrIcyyo?si=DSQA3M9Jyf6PqL2K. Project: https://lclab.org/projects/gestearrings.
8. Sarii Yamamoto, Jia Jun Wang, Liwei Chan, Yuta Sugiura. PuzMaty: Supporting Puzzle Mat Design Creation. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 919-923, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10871081. Slides: https://www.slideshare.net/slideshow/puzmaty-supporting-puzzle-mats-design-creation/275644116. Video: https://youtu.be/3JY5e2Zb9y8?si=u3myQ9CGEbb-6Gvs. Project: https://lclab.org/projects/puzmaty.
9. Hiyori Tsuji, Takumi Yamamoto, Maiko Kobayashi, Kyoshiro Sasaki, Noriko Aso, Yuta Sugiura. CradlePosture: Camera-Based Approach for Estimating Neonate's Posture Based on Caregiver's Holding Behaviors. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 434-439, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10870587. Slides: https://www.slideshare.net/slideshow/cradleposture-camera-based-approach-for-estimating-neonate-s-posture-based-on-caregiver-s-holding-behaviors/276550061. Video: https://youtu.be/HxD0Nbj8HZ8?si=oApUSbCk2OXt8M_Y. Project: https://lclab.org/projects/cradleposture.
10. Natsumi Onosato, Naoharu Sawada, Yohei Kawasaki, Masahiko Takeda, Masaki Inoue, Yuta Sugiura. Detachable Robot that Moves the Baby Bouncer. The 2025 17th IEEE/SICE International Symposium on System Integration (SII 2025), IEEE, pp. 134-140, January 21-24, 2025, Munich, Germany. DOI: https://doi.org/10.1109/SII59315.2025.10870928. Slides: https://www.slideshare.net/slideshow/detachablerobotthatmovesthebabybouncer-pdf/275407239. Video: https://youtu.be/hlwi5b6mGcc?si=mZorDF3FMa2YkgyZ. Project: https://lclab.org/projects/bouncer.
11. Yuxuan Sun, Yuta Sugiura. Wrist-worn Haptic Design for 3D Perception of the Surrounding Airflow in Virtual Reality. The 16th Asia-Pacific Workshop on Mixed and Augmented Reality (APMAR 2024), November 29-30, 2024, Kyoto, Japan. Paper: https://ceur-ws.org/Vol-3907/paper8.pdf. Slides: https://www.slideshare.net/slideshow/wrist-worn-haptic-design-for-3d-perception-of-the-surrounding-airflow-in-virtual-reality-apmar-2024/273985545. Best Presentation Award.
12. Yurina Mizuho and Yuta Sugiura. A Comparison of Violin Bowing Pressure and Position among Expert Players and Beginners. Part of proceedings of the 6th International Conference AsiaHaptics 2024, to appear, October 28-30, 2024, Sunway, Malaysia. Preprint: https://arxiv.org/abs/2411.05126. Slides: https://www.slideshare.net/slideshow/a-comparison-of-violin-bowing-pressure-and-position-among-expert-players-and-beginners. Video: https://youtu.be/vqVmNPk_-78?si=lnxjukMQhvA2NZ8_. Project: https://lclab.org/projects/violin-analytics.
13. Takumi Yamamoto, Rin Yoshimura, Yuta Sugiura. EnchantedClothes: Visual and Tactile Feedback with an Abdomen-Attached Robot through Clothes. Part of proceedings of the 6th International Conference AsiaHaptics 2024, to appear, October 28-30, 2024, Sunway, Malaysia. Preprint: https://arxiv.org/abs/2411.05102. Slides: https://www.slideshare.net/slideshow/enchanted-clothes-visual-and-tactile-feedback-with-an-abdomen-attached-robot-through-clothes/272879615. Video: https://www.youtube.com/watch?v=-IlUqC7eR4M. Project: https://lclab.org/projects/enchantedclothes.
14. Shunta Suzuki, Takashi Amesaka, Hiroki Watanabe, Buntarou Shizuki, Yuta Sugiura. EarHover: Mid-Air Gesture Recognition for Hearables Using Sound Leakage Signals. In Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology (UIST '24), ACM, Article No. 129, pp. 1-13, October 14-17, 2024, Pittsburgh, USA. DOI: https://doi.org/10.1145/3654777.3676367. Slides: https://www.slideshare.net/slideshow/earhover-mid-air-gesture-recognition-for-hearables-using-sound-leakage-signals/272652761. Video: https://www.youtube.com/watch?v=FvmiPpnCMA0. Project: https://lclab.org/projects/earhover. Acceptance rate: 24%. UIST '24 Best Paper Award.
15. Yurina Mizuho, Yohei Kawasaki, Takashi Amesaka and Yuta Sugiura. EarAuthCam: Personal Identification and Authentication Method Using Ear Images Acquired with a Camera-Equipped Hearable Device. The Augmented Humans (AHs) International Conference 2024, ACM, pp. 119-130, April 4-6, 2024, Melbourne, Australia. DOI: https://dl.acm.org/doi/10.1145/3652920.3653059. Slides: https://www.slideshare.net/sugiuralab/earauthcam-personal-identification-and-authentication-method-using-ear-images-acquired-with-a-cameraequipped-hearable-device?from_m_app=ios. Video: https://youtu.be/vqVmNPk_-78?si=uX9aqmbvyHhzBjBt. Project: https://lclab.org/projects/earauthcam.
16. Yuto Ueda, Anusha Withana, Yuta Sugiura. Tactile Presentation of Orchestral Conductor's Motion Trajectory. The 2024 16th IEEE/SICE International Symposium on System Integration (SII 2024), IEEE, pp. 546-553, January 8-11, 2024, Ha Long, Vietnam. DOI: https://doi.org/10.1109/SII58957.2024.10417570. Slides: https://www.slideshare.net/slideshows/tactile-presentation-of-orchestral-conductors-motion-trajectory/265890200. Video: https://youtu.be/iju0jcEvgz4. Project: https://lclab.org/projects/tact.
17. Hiyori Tsuji, Takumi Yamamoto, Sora Yamaji, Maiko Kobayashi, Kyoshiro Sasaki, Noriko Aso, Yuta Sugiura. Smartphone-Based Teaching System for Neonate Soothing Motions. The 2024 16th IEEE/SICE International Symposium on System Integration (SII 2024), IEEE, pp. 178-183, January 8-11, 2024, Ha Long, Vietnam. DOI: https://doi.org/10.1109/SII58957.2024.10417485. Slides: available on SlideShare. Video: https://youtu.be/aOAkpsWATSQ?si=HB-pjYLM4pna19ZD. Project: https://lclab.org/projects/teaching-system-for-soothing-motions.
18. Sarii Yamamoto, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura. Pinch Force Measurement Using a Geomagnetic Sensor. The 2024 16th IEEE/SICE International Symposium on System Integration (SII 2024), IEEE, pp. 284-287, January 8-11, 2024, Ha Long, Vietnam. DOI: https://doi.org/10.1109/SII58957.2024.10417164. Slides: https://www.slideshare.net/slideshows/pinch-force-measurement-using-a-geomagnetic-sensor/266136157. Video: https://youtu.be/pcHNgfJX7_k?si=Cmi69zwgbuxFU3j6. Project: https://lclab.org/projects/pinch-force-measurement.
19. Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura. Converting Tatamis into Touch Sensors by Measuring Capacitance. The 2024 16th IEEE/SICE International Symposium on System Integration (SII 2024), IEEE, pp. 554-558, January 8-11, 2024, Ha Long, Vietnam. DOI: https://doi.org/10.1109/SII58957.2024.10417676. Slides: https://www.slideshare.net/sugiuralab/converting-tatamis-into-touch-sensors-by-measuring-capacitance?from_m_app=ios. Video: https://youtu.be/DtQgBpZaPag?si=kwE1G-Cq1og7uA7-. Project: https://lclab.org/projects/tatami-sensor.
20. Yohei Kawasaki, Yuta Sugiura. Identification and Authentication Using Clavicles. In Proceedings of the SICE Annual Conference 2023, IEEE, pp. 1141-1145, September 6-9, 2023, Mie, Japan. DOI: https://doi.org/10.23919/SICE59929.2023.10354211. Slides: https://www.slideshare.net/sugiuralab/identification-and-authentication-using-clavicles-ee36. Video: https://youtu.be/IxKy9cK4Vn0?si=Kszl4Dr7PKFOR1Yz.
21. Masaya Tashiro, Ashif Aminulloh Fathnan, Yuta Sugiura, Akira Uchiyama, Hiroki Wakatsuchi. Multifunctional Metasurface-Based Sensors Operating at a Single Frequency. 2023 Seventeenth International Congress on Artificial Materials for Novel Wave Phenomena (Metamaterials), IEEE, pp. X-379 - X-381, September 11-16, 2023, Chania, Greece. DOI: https://doi.org/10.1109/Metamaterials58257.2023.10289290.
22. Yurina Mizuho, Riku Kitamura, Yuta Sugiura. Estimation of Violin Bow Pressure Using Photo-Reflective Sensors. In Proceedings of the 25th International Conference on Multimodal Interaction (ICMI 2023), ACM, pp. 216-223, October 9-13, 2023, Paris, France. DOI: https://dl.acm.org/doi/10.1145/3577190.3614172. Slides: https://www.slideshare.net/sugiuralab/estimation-of-violin-bow-pressure-using-photoreflective-sensors. Video: https://youtu.be/aGZc6PNfcSg. Project: https://lclab.org/projects/bow-pressure-sensing. Acceptance rate: 37%.
23. Riku Kitamura, Takumi Yamamoto, Yuta Sugiura. TouchLog: Finger Micro Gesture Recognition Using Photo-Reflective Sensors. In Proceedings of the 2023 ACM International Symposium on Wearable Computers (ISWC '23), ACM, pp. 92-97, October 8-12, 2023, Cancún, Mexico. DOI: https://dl.acm.org/doi/10.1145/3594738.3611371. Slides: https://www.slideshare.net/sugiuralab/touchlog-finger-micro-gesture-recognition-using-photoreflective-sensors?from_m_app=ios. Video: https://youtu.be/9vYuoJjoOCQ?si=jKf2d5o2Ep3jbS6w. Project: https://lclab.org/projects/touchlog.
24. Takashi Amesaka, Hiroki Watanabe, Masanori Sugimoto, Yuta Sugiura, Buntarou Shizuki. User Authentication Method for Hearables Using Sound Leakage Signals. In Proceedings of the 2023 ACM International Symposium on Wearable Computers (ISWC '23), ACM, pp. 119-123, October 8-12, 2023, Cancún, Mexico. DOI: https://dl.acm.org/doi/abs/10.1145/3594738.3611376. Video: https://youtu.be/Zx-j2w_WIeE?si=KcdwR49bgd_cwCf1. Project: https://lclab.org/projects/sound-leakage-authentication.
25. Takumi Yamamoto, Ryohei Baba, Yuta Sugiura. Augmented Sports of Badminton by Changing Opening Status of Shuttle's Feathers. In Proceedings of the 15th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR 2023), August 18-19, 2023, Taipei, Taiwan. Paper: https://ceur-ws.org/Vol-3467/short2.pdf. Slides: https://www.slideshare.net/sugiuralab/augmented-sports-of-badminton-by-changing-opening-status-of-shuttles-featherspdf. Video: https://youtu.be/idJE3FbFNao?si=CZo0t2Uvb3HkoqsV. Project: https://lclab.org/projects/badminton.
26. Naoharu Sawada, Takumi Yamamoto, Yuta Sugiura. A Virtual Window Using Curtains and Image Projection. In Proceedings of the 15th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR 2023), August 18-19, 2023, Taipei, Taiwan. Paper: https://ceur-ws.org/Vol-3467/short4.pdf. Slides: https://www.slideshare.net/sugiuralab/a-virtual-window-using-curtains-and-image-projection. Project: https://lclab.org/projects/curtain.
27. Sarii Yamamoto, Fei Gu, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura. Maintenance-Free Smart Hand Dynamometer. In Proceedings of the 45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2023), IEEE, pp. 1-5, July 24-27, 2023, Sydney, Australia. DOI: https://doi.org/10.1109/EMBC40787.2023.10340847. Slides: https://www.slideshare.net/sugiuralab/maintenancefree-smart-hand-dynamometerpdf-260046046. Video: https://youtu.be/hf1gJsZYLiw?si=S3EPRE5JniFBMmml. Project: https://lclab.org/projects/sgd.
28. Takumi Yamamoto, Katsutoshi Masai, Anusha Withana, Yuta Sugiura. Masktrap: Designing and Identifying Gestures to Transform Mask Strap into an Input Interface. Proceedings of the 28th International Conference on Intelligent User Interfaces (IUI '23), ACM, pp. 762-775, March 27-31, 2023, Sydney, Australia. DOI: https://dl.acm.org/doi/10.1145/3581641.3584062. Slides: https://www.slideshare.net/sugiuralab/masktrap-designing-and-identifying-gestures-to-transform-mask-strap-into-an-input-interface?from_search=0. Video: https://www.youtube.com/watch?v=CU4q24YHiN8. Project: https://lclab.org/projects/maskstrap. Acceptance rate: 24.1%.
29. Chengshuo Xia, Yuta Sugiura. Virtual IMU Data Augmentation by Spring-joint Model for Motion Exercises Recognition without Using Real Data. In Proceedings of the 2022 ACM International Symposium on Wearable Computers (ISWC '22), ACM, pp. 79-83, September 11-15, 2022, Atlanta, USA and Cambridge, UK. DOI: https://doi.org/10.1145/3544794.3558460. Slides: https://www.slideshare.net/sugiuralab/virtual-imu-data-augmentation-by-springjoint-model-for-motion-exercises-recognition-without-using-real-data-258122560. Video: https://youtu.be/RuRWUNi0tVQ?list=PLmr5ZJP6hj9tu2eSTX594WKhc1ufh-ccr. Project: https://lclab.org/projects/virtualspring. Acceptance rate: 37.5%.
30. Chengshuo Xia, Tsubasa Maruyama, Haruki Toda, Mitsunori Tada, Koji Fujita, Yuta Sugiura. Knee Osteoarthritis Classification System Examination on Wearable Daily-use IMU Layout. In Proceedings of the 2022 ACM International Symposium on Wearable Computers (ISWC '22), ACM, pp. 74-78, September 11-15, 2022, Atlanta, USA and Cambridge, UK. DOI: https://doi.org/10.1145/3544794.3558459. Slides: https://www.slideshare.net/sugiuralab/knee-osteoarthritis-classification-system-examination-on-wearable-dailyuse-imu-layout. Video: https://youtu.be/lK7o5Y78rq8?list=PLmr5ZJP6hj9tu2eSTX594WKhc1ufh-ccr. Project: https://lclab.org/projects/kneeoa. Acceptance rate: 37.5%.
31. Yohei Kawasaki, Yuta Sugiura. Identification and Authentication Using Blink with Smart Glasses. In Proceedings of the SICE Annual Conference 2022, IEEE, pp. 1251-1256, September 6-9, 2022, Kumamoto, Japan. DOI: https://doi.org/10.23919/SICE56594.2022.9905842. Slides: https://www.slideshare.net/sugiuralab/identification-and-authentication-using-blink-with-smart-glasses. Video: https://youtu.be/WcW6W0-FeYc?list=PLmr5ZJP6hj9tu2eSTX594WKhc1ufh-ccr. Project: https://lclab.org/projects/blink.
32. Chengshuo Xia, Yuta Sugiura. Designing a Customized Wearable Human Activity Recognition System based on Virtual Avatar and Synthetic Acceleration Data. The 17th International Symposium of 3DAHM, July 16-19, 2022, Tokyo / online.
33. Motoyasu Masui, Yoshinari Takegawa, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata. High-Speed Thermochromism Control Method Integrating Water Cooling Circuits and Electric Heating Circuits Printed with Conductive Silver Nanoparticle Ink. In Proceedings of the 24th International Conference on Human-Computer Interaction (HCII '22), Springer, pp. 66-80, June 26, 2022, online. DOI: https://doi.org/10.1007/978-3-031-05409-9_6.
34. Xiang Zhang, Kaori Ikematsu, Kunihiro Kato, Yuta Sugiura. ReflecTouch: Detecting Grasp Posture of Smartphone Using Corneal Reflection Images. The ACM CHI Conference on Human Factors in Computing Systems (CHI 2022), ACM, Article No. 289, pp. 1-8, April 30, 2022, hybrid onsite (New Orleans, LA, USA). DOI: https://doi.org/10.1145/3491102.3517440. Slides: https://www.slideshare.net/sugiuralab/reflectouch-chi-2022. Video: https://youtu.be/aB-Aq6sQPio. Project: https://lclab.org/projects/reflectouch.
35. Kana Matsuo, Koji Fujita, Takafumi Koyama, Shingo Morishita, Yuta Sugiura. Cervical Spine Range of Motion Measurement Utilizing Image Analysis. In Proceedings of the 17th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 4: VISAPP, SciTePress, pp. 861-867, February 6-8, 2022, online. DOI: https://doi.org/10.5220/0010819400003124. Slides: https://www.slideshare.net/sugiuralab/cervical-spine-range-of-motion-measurement-utilizing-image-analysis-visapp2022. Video: https://youtu.be/y6HBflYr8VM. Project: https://lclab.org/projects/csrom.
36. Ryota Matsui, Takafumi Koyama, Koji Fujita, Hideo Saito and Yuta Sugiura. Video-Based Hand Tracking for Screening Cervical Myelopathy. International Symposium on Visual Computing (ISVC 2021), Springer, pp. 3-14, October 4-6, 2021, online. DOI: https://doi.org/10.1007/978-3-030-90436-4_1. Slides: https://www.slideshare.net/sugiuralab/videobased-hand-tracking-for-screening-cervical-myelopathy-isvc2021. Video: https://youtu.be/Aiw6t16cQEI. Project: https://lclab.org/projects/cm-screening-via-rgb-cam.
37. Motoyasu Masui, Yoshinari Takegawa, Nonoka Nitta, Yutaka Tokuda, Yuta Sugiura, Katsutoshi Masai, Keiji Hirata. PerformEyebrow: Design and Implementation of an Artificial Eyebrow Device Enabling Augmented Facial Expression. In Proceedings of the 23rd International Conference on Human-Computer Interaction (HCII 2021), Springer, pp. 584-597, July 3, 2021, online. DOI: https://doi.org/10.1007/978-3-030-78468-3_40.
38. Miyu Fujii, Kaho Kato, Chengshuo Xia, Yuta Sugiura. Personal Identification using Gait Data on Slipper-device with Accelerometer. Asian CHI Symposium 2021, ACM, pp. 74-79, May 7-8, 2021, online. DOI: https://doi.org/10.1145/3429360.3468185. Slides: https://www.slideshare.net/sugiuralab/personal-identification-using-gait-data-on-slipperdevice-with-accelerometer-asian-chi-2021-symposium. Video: https://youtu.be/TFnMUIVZlHQ. Project: https://lclab.org/projects/slipper.
39. Xinrui Fang, Chengshuo Xia, Yuta Sugiura. FacialPen: Using Facial Detection to Augment Pen-Based Interaction. Asian CHI Symposium 2021, ACM, pp. 1-8, May 7-8, 2021, online. DOI: https://doi.org/10.1145/3429360.3467672. Slides: https://www.slideshare.net/sugiuralab/facialpen-using-facial-detection-to-augment-penbased-interaction-asian-chi-2021-symposium. Video: https://youtu.be/gJRd2L0ohEE. Project: https://lclab.org/projects/facial-pen.
40. Kaho Kato, Chengshuo Xia, Yuta Sugiura. Exercise Recognition System using Facial Image Information from a Mobile Device. The 2021 IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech 2021), IEEE, pp. 268-272, March 9-11, 2021, Nara, Japan. DOI: https://doi.org/10.1109/LifeTech52111.2021.9391782. Slides: https://www.slideshare.net/sugiuralab/exercise-recognition-system-usingfacial-image-information-from-a-mobile-device-ieee-lifetech-2021. Video: https://youtu.be/33nIKSSLpHU. Project: https://lclab.org/projects/exerciserecognition.
41. Ryota Matsui, Kaho Kato, Yuta Sugiura. Human Movement Recognition Using Internal Sensors of a Smartphone-based HMD. The 27th International Display Workshops (IDW '20), December 9-11, 2020, online. DOI: https://doi.org/10.36463/idw.2020.0935. Slides: https://www.slideshare.net/sugiuralab/human-movement-recognition-using-internal-sensors-of-a-smartphonebased-hmd-idw-2020. Video: https://youtu.be/cBzsYys8zt8. Project: https://lclab.org/projects/hmdgesture.
42. Yoshinari Takegawa, Yutaka Tokuda, Akino Umezawa, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Sugiura, Maki Sugimoto, Diego Martinez-Plasencia, Sriram Subramanian, Keiji Hirata. Digital Full-face Mask Display with Expression Recognition using Embedded Photo Reflective Sensor Arrays. 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, pp. 101-108, November 9-13, 2020, online. DOI: https://doi.org/10.1109/ISMAR50242.2020.00030. Video: https://youtu.be/Qow0lkh8mRQ. Project: https://lclab.org/projects/e2-mask-z.
43. Katsutoshi Masai, Kai Kunze, Daisuke Sakamoto, Yuta Sugiura, Maki Sugimoto. Face Command – User-defined Facial Gestures on Smart Glasses. 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR), IEEE, pp. 374-386, November 9-13, 2020, online. DOI: https://doi.org/10.1109/ISMAR50242.2020.00064. Project: https://lclab.org/projects/facecommands.
44. Shusuke Sato, Yuta Sugiura. Anomaly Movement Detection System using Autoencoder to Support Beginner Lure Operation. The SICE Annual Conference 2020 (SICE 2020), September 26, 2020, online. Project: https://lclab.org/projects/iolure.
45. Chengshuo Xia, Yuta Sugiura. A Study of Wearable Accelerometers Layout for Human Activity Recognition. Asian CHI Symposium 2020, April 25, 2020, online. Slides: https://www.slideshare.net/sugiuralab/a-study-of-wearable-accelerometers-layout-for-human-activity-recognitionasianchi2020. Project: https://lclab.org/projects/sensoroptimization. Asian CHI Symposium 2020 Best Paper Award.
46. Masaru Watanabe, Yuta Sugiura, Hideo Saito, Takafumi Koyama, Koji Fujita. Detection of cervical myelopathy with Leap Motion Sensor by random forests. The 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech 2020), IEEE, pp. 214-216, March 10-11, 2020, Kyoto, Japan. DOI: https://doi.org/10.1109/LifeTech48969.2020.1570620097.
47. Chengshuo Xia, Yuta Sugiura. Wearable Accelerometer Optimal Positions for Human Motion Recognition. The 2020 IEEE 2nd Global Conference on Life Sciences and Technologies (LifeTech 2020), IEEE, pp. 19-20, March 10-11, 2020, Kyoto, Japan. DOI: https://doi.org/10.1109/LifeTech48969.2020.1570618961. Slides: https://www.slideshare.net/sugiuralab/wearable-accelerometer-optimal-positions-for-human-motion-recognitionlifetech2020. Project: https://lclab.org/projects/sensoroptimization.
48. Kaho Kato, Kohei Matsumura, Yuta Sugiura. AroundSense: An Input Method for Gestures around a Smartphone. The 26th International Display Workshops (IDW '19), November 27-29, 2019, Sapporo, Japan. Paper: https://confit.atlas.jp/guide/event/idw2019ln/subject/INPp1-4L/date. Slides: https://www.slideshare.net/sugiuralab/aroundsense-an-input-method-for-gestures-around-a-smartphoneidw-19. Project: https://lclab.org/projects/aroundsense. Outstanding Poster Paper Award.
49. Fumihiko Nakamura, Katsuhiro Suzuki, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, Maki Sugimoto. Automatic Labeling of Training Data by Vowel Recognition for Mouth Shape Recognition with Optical Sensors Embedded in Head-Mounted Display. In Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments (ICAT-EGVE '19), The Eurographics Association, pp. 9-16, September 11-13, 2019, Tokyo, Japan. Paper: https://diglib.eg.org/handle/10.2312/egve20191274. Project: https://lclab.org/projects/mouth-capture.
50. Kentaro Ino, Kosuke Takahashi, Mariko Isogawa, Yoshinori Kusachi, Dan Mikami, Yuta Sugiura, Hideo Saito. Body Shape and Centre of Mass Estimation Using Multi-View Images. In Proceedings of the 37th International Society of Biomechanics in Sports Conference (ISBS 2019), pp. 161-164, July 21-25, 2019, Miami, USA. Paper: https://commons.nmu.edu/cgi/viewcontent.cgi?article=1688&context=isbs.
51. Nagisa Matsumoto, Chihiro Suzuki, Koji Fujita, Yuta Sugiura. A Training System for Swallowing Ability by Visualizing the Throat Position. In Proceedings of the 21st International Conference on Human-Computer Interaction (HCII 2019), Springer, pp. 501-511, July 29-31, 2019, Florida, USA. DOI: https://doi.org/10.1007/978-3-030-22219-2_37. Slides: https://www.slideshare.net/sugiuralab/a-training-system-for-swallowing-ability-by-visualizing-the-throat-position-hcii-2019-full-paper. Project: https://lclab.org/projects/enge.
52. Yuki Ishikawa, Ryo Hachiuma, Naoto Ienaga, Wakaba Kuno, Yuta Sugiura, Hideo Saito. Semantic Segmentation of 3D Point Cloud to Virtually Manipulate Real Living Space. In Proceedings of the 12th Asia Pacific Workshop on Mixed and Augmented Reality (APMAR 2019), IEEE, pp. 1-7, March 28-29, 2019, Nara, Japan. DOI: https://doi.org/10.1109/APMAR.2019.8709156. Project: https://lclab.org/projects/semantic-segmentation.
53. Naoto Ienaga, Yuta Sugiura, Hideo Saito, Koji Fujita. Self-assessment Application of Flexion and Extension. In Proceedings of the 2019 IEEE 1st Global Conference on Life Sciences and Technologies (LifeTech 2019), IEEE, pp. 150-152, March 12-14, 2019, Kyoto, Japan. Paper: https://ieeexplore.ieee.org/document/8884003.
54. Takumi Kobayashi, Yuta Sugiura, Hideo Saito, Yuji Uema. Automatic Eyeglasses Replacement for a 3D Virtual Try-on System. In Proceedings of the 10th Augmented Human International Conference (AH '19), ACM, pp. 30:1-30:4, March 11-12, 2019, Reims, France. DOI: https://doi.org/10.1145/3311823.3311854. Slides: https://www.slideshare.net/sugiuralab/automatic-eyeglasses-replacement-for-a-3d-virtual-tryon-system-ah2019-short-paper. Project: https://lclab.org/projects/ar-eyeglasses. Acceptance rate: 49%.
55. Ayane Saito, Wakaba Kuno, Wataru Kawai, Natsuki Miyata, Yuta Sugiura. Estimation of Fingertip Contact Force by Measuring Skin Deformation and Posture with Photo-reflective Sensors. In Proceedings of the 10th Augmented Human International Conference (AH '19), ACM, pp. 2:1-2:6, March 11-12, 2019, Reims, France. DOI: https://doi.org/10.1145/3311823.3311824. Slides: https://www.slideshare.net/sugiuralab/estimation-of-fingertip-contact-force-by-measuring-skin-deformation-and-posture-with-photoreflective-sensors-ah-2019-full-paper. Project: https://lclab.org/projects/touch-log. Acceptance rate: 49%.
56. Naoto Ienaga, Wataru Kawai, Koji Fujita, Natsuki Miyata, Yuta Sugiura, Hideo Saito. A Thumb Tip Wearable Device Consisting of Multiple Cameras to Measure Thumb Posture. International Workshop on Attention/Intention Understanding (AIU 2018), ACCV 2018, Springer, pp. 31-38, December 2-6, 2018, Perth, Australia. DOI: https://doi.org/10.1007/978-3-030-21074-8_3. Project: https://lclab.org/projects/multiple-cameras-to-measure-thumb-posture.
57. Konomi Inaba, Akihiko Murai, Yuta Sugiura. Center of Pressure Estimation and Gait Pattern Recognition Using Shoes with Photo-reflective Sensors. In Proceedings of the 30th Australian Conference on Computer-Human Interaction (OzCHI '18), ACM, pp. 224-228, December 4-7, 2018, Melbourne, Australia. DOI: https://doi.org/10.1145/3292147.3292189. Slides: https://www.slideshare.net/sugiuralab/center-of-pressure-estimation-and-gait-pattern-recognition-using-shoes-with-photoreflective-sensors-ozchi2018. Project: https://lclab.org/projects/senshoe. Acceptance rate: 40%.
58. Yuta Sugiura, Hikaru Ibayashi, Toby Chong, Daisuke Sakamoto, Natsuki Miyata, Mitsunori Tada, Takashi Okuma, Takeshi Kurata, Takashi Shinmura, Masaaki Mochimaru, and Takeo Igarashi. An Asymmetric Collaborative System for Architectural-scale Space Design. In Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '18), ACM, Article 21, 6 pages, December 2-3, 2018, Tokyo, Japan. DOI: https://doi.org/10.1145/3284398.3284416. Video: https://www.youtube.com/watch?v=Fn446IApjrM. Project: https://lclab.org/projects/dollhouse-vr.
59. Yuta Sugiura, Toby Chong, Wataru Kawai, and Bruce H. Thomas. Public/Private Interactive Wearable Projection Display. In Proceedings of the 16th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '18), ACM, Article 10, 6 pages, December 2-3, 2018, Tokyo, Japan. DOI: https://doi.org/10.1145/3284398.3284415. Slides: https://www.slideshare.net/sugiuralab/publicprivate-interactive-wearable-projection-display-vrcai2018. Video: https://www.youtube.com/watch?v=6Ii7Mn6-7dE. Project: https://lclab.org/projects/publicprivatedisplay.
60. Kentaro Ino, Naoto Ienaga, Yuta Sugiura, Hideo Saito, Natsuki Miyata, Mitsunori Tada. Grasping Hand Pose Estimation from RGB Images using Digital Human Model by Convolutional Neural Network. In Proceedings of 3DBODY.TECH 2018 - 9th Int. Conf. and Exh. on 3D Body Scanning and Processing Technologies, pp. 154-160, October 16-17, 2018, Lugano, Switzerland. DOI: https://doi.org/10.15221/18.154. Project: https://lclab.org/projects/grasphandposeestimation.
61. Takumi Kobayashi, Naoto Ienaga, Yuta Sugiura, Hideo Saito, Natsuki Miyata, Mitsunori Tada. A Simple 3D Scanning System of the Human Foot Using a Smartphone with a Depth Camera. In Proceedings of 3DBODY.TECH 2018 - 9th Int. Conf. and Exh. on 3D Body Scanning and Processing Technologies, pp. 161-169, October 16-17, 2018, Lugano, Switzerland. DOI: https://doi.org/10.15221/18.161. Project: https://lclab.org/projects/footreconstruction.
62. Kei Saito, Katsutoshi Masai, Yuta Sugiura, Toshitaka Kimura, Maki Sugimoto. Development of a Virtual Environment for Motion Analysis of Tennis Service Returns. In Proceedings of the 1st International Workshop on Multimedia Content Analysis in Sports (MMSports '18), ACM, pp. 59-66, October 26, 2018, Seoul, Republic of Korea. DOI: https://doi.org/10.1145/3265845.3265854. Project: https://im-lab.net/tennis_vr/.
63. Kentaro Yagi, Kunihiro Hasegawa, Yuta Sugiura, Hideo Saito. Estimation of Runners' Number of Steps, Stride Length and Speed Transition from Video of a 100-Meter Race. In Proceedings of the 1st International Workshop on Multimedia Content Analysis in Sports (MMSports '18), ACM, pp. 87-95, October 26, 2018, Seoul, Republic of Korea. DOI: https://doi.org/10.1145/3265845.3265850. Project: https://lclab.org/projects/estimating-a-runners-stride-length-and-frequency.
64. Kosuke Kikui, Yuta Itoh, Makoto Yamada, Yuta Sugiura, Maki Sugimoto. Intra-/Inter-user Adaptation Framework for Wearable Gesture Sensing Device. In Proceedings of the 2018 ACM International Symposium on Wearable Computers (ISWC '18), ACM, pp. 21-24, September 8-12, 2018, Singapore. DOI: https://doi.org/10.1145/3267242.3267256. Project: https://lclab.org/projects/intra-inter-user-adaptation.
65. Takuma Hashimoto, Suzanne Low, Koji Fujita, Risa Usumi, Hiroshi Yanagihara, Chihiro Takahashi, Maki Sugimoto, Yuta Sugiura. TongueInput: Input Method by Tongue Gestures Using Optical Sensors Embedded in Mouthpiece. In Proceedings of the SICE Annual Conference 2018, IEEE, pp. 1219-1224 (6 pages), September 11-14, 2018, Nara, Japan. DOI: https://doi.org/10.23919/SICE.2018.8492690. Slides: https://www.slideshare.net/sugiuralab/tongueinput-input-method-by-tongue-gestures-using-optical-sensors-embedded-in-mousepiece. Project: https://lclab.org/projects/tongueinput.
66. Junya Taira, Suzanne Low, Maki Sugimoto, Yuta Sugiura. Detecting Position of a Device by Swept Frequency of Microwave on Two-Dimensional Communication System. In Proceedings of the SICE Annual Conference 2018, IEEE, pp. 1213-1218 (6 pages), September 11-14, 2018, Nara, Japan. DOI: https://doi.org/10.23919/SICE.2018.8492609. Slides: https://www.slideshare.net/sugiuralab/detecting-position-of-a-device-by-swept-frequency-of-microwave-on-twodimensional-communication-system. Project: https://lclab.org/projects/tdc.
67. Moeko Iwasaki, Suzanne Low, Mitsunori Tada, Yuta Sugiura, Hideo Saito, Maki Sugimoto. 3D Shape Reconstruction of Human Foot using Distance Sensors. In Proceedings of the SICE Annual Conference 2018, 6 pages, September 11-14, 2018, Nara, Japan. Slides: https://www.slideshare.net/sugiuralab/3d-shape-reconstruction-of-human-foot-using-distance-sensors. Project: https://lclab.org/projects/hulahoop.
68. Kentaro Yagi, Kunihiro Hasegawa, Yuta Sugiura, Hideo Saito. Estimating a Runner's Stride Length and Frequency from a Race Video by Using Ground Stitching. 36th Conference of the International Society of Biomechanics in Sports, pp. 298-301, September 10-14, 2018, Auckland, New Zealand. DOI: https://dl.acm.org/doi/10.1145/3265845.3265850. Project: https://lclab.org/projects/estimating-a-runners-stride-length-and-frequency.
69. Katsutoshi Masai, Yuta Sugiura, and Maki Sugimoto. FaceRubbing: Input Technique by Rubbing Face using Optical Sensors on Smart Eyewear for Facial Expression Recognition. In Proceedings of the 9th Augmented Human International Conference (AH '18), ACM, 5 pages, February 7-9, 2018, Seoul, Republic of Korea. DOI: https://doi.org/10.1145/3174910.3174924.
70. Nao Asano, Katsutoshi Masai, Yuta Sugiura, and Maki Sugimoto. Facial Performance Capture by Embedded Photo Reflective Sensors on A Smart Eyewear. In Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments (ICAT-EGVE '17), The Eurographics Association, 8 pages, November 22-24, 2017, Adelaide, Australia. DOI: http://dx.doi.org/10.2312/egve.20171334. Video: https://www.youtube.com/watch?v=W_1reOv0Jsc&feature=youtu.be. Project: http://im-lab.net/facial-performance-capture-by-embedded-photo-reflective-sensors-on-a-smart-eyewear/.
71. Wakaba Kuno, Yuta Sugiura, Nao Asano, Wataru Kawai, and Maki Sugimoto. 3D Reconstruction of Hand Postures by Measuring Skin Deformation on Back Hand. In Proceedings of the International Conference on Artificial Reality and Telexistence & Eurographics Symposium on Virtual Environments (ICAT-EGVE '17), The Eurographics Association, 8 pages, November 22-24, 2017, Adelaide, Australia. DOI: http://dx.doi.org/10.2312/egve.20171362. Slides: https://www.slideshare.net/sugiuralab/3d-reconstruction-of-hand-postures-by-measuring-skin-deformation-on-back-hand-icategve-2017. Video: https://www.youtube.com/watch?v=x5yXRdHrsEk. Project: https://lclab.org/projects/finger-posture-estimation.
72. Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura. CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-mounted Display. In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST '17), ACM, Article 19, 8 pages, November 8-10, 2017, Gothenburg, Sweden. DOI: https://doi.org/10.1145/3139131.3139146. Slides: https://www.slideshare.net/sugiuralab/cheekinput-turning-your-cheek-into-an-input-surface-by-embedded-optical-sensors-on-a-headmounted-display. Video: https://www.youtube.com/watch?v=oQKq8lrEihU. Project: https://lclab.org/projects/cheekinput.
73. Naomi Furui, Katsuhiro Suzuki, Yuta Sugiura, and Maki Sugimoto. SofTouch: Turning Soft Objects into Touch Interfaces by Detachable Photo Sensor Modules. In Proceedings of the 16th International Conference on Entertainment Computing 2017 (ICEC '17), Springer, pp. 47-58, September 19-21, 2017, Tsukuba, Japan. DOI: https://doi.org/10.1007/978-3-319-66715-7_6. Project: https://lclab.org/projects/softouch. Best Paper Award.
74. Yan Zhao, Yuta Sugiura, Mitsunori Tada and Jun Mitani. InsTangible: A Tangible User Interface Combining Pop-up Cards with Conductive Ink Printing. In Proceedings of the 16th International Conference on Entertainment Computing 2017 (ICEC '17), Springer, pp. 72-80, September 19-21, 2017, Tsukuba, Japan. DOI: https://doi.org/10.1007/978-3-319-66715-7_8. Video: https://www.youtube.com/watch?v=IIExb45vkx0. Project: https://lclab.org/projects/instangible.
75. Yuta Sugiura, Fumihiko Nakamura, Wataru Kawai, Takashi Kikuchi, and Maki Sugimoto. Behind the palm: Hand gesture recognition through measuring skin deformation on back of hand by using optical sensors. SICE Annual Conference 2017 (SICE '17), IEEE, pp. 1082-1087, September 19-22, 2017, Kanazawa, Japan. DOI: https://doi.org/10.23919/SICE.2017.8105457. Video: https://www.youtube.com/watch?v=nktzxEik7jI. Project: https://lclab.org/projects/behind-the-palm.
76. Takashi Kikuchi, Yuta Sugiura, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas. EarTouch: Turning the Ear into an Input Surface. In Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '17), ACM, 6 pages, September 4-7, 2017, Vienna, Austria. DOI: https://doi.org/10.1145/3098279.3098538. Slides: https://www.slideshare.net/sugiuralab/eartouch-turning-the-ear-into-an-input-surface. Video: https://www.youtube.com/watch?v=yF2VeTvcFo0. Project: https://lclab.org/projects/eartouch. Acceptance rate: 20%.
77. Natsuki Miyata, Takehiro Honoki, Yuta Sugiura, and Yusuke Maeda. An Interactive Assessment of Robustness and Comfort in Human Grasps. Proceedings of the 5th International Digital Human Modeling Symposium (DHM '17), BAuA, pp. 184-190, June 26-28, 2017, Bonn, Germany. Paper: https://www.baua.de/DE/Angebote/Publikationen/Berichte/Gd91.html?pk_campaign=DOI. Video: https://www.youtube.com/watch?v=ccQzNdNB5Dw&feature=youtu.be. Project: https://lclab.org/projects/wrap-sense.
78. Naoki Kashiwagi, Yuta Sugiura, Natsuki Miyata, Mitsunori Tada, Maki Sugimoto, Hideo Saito. Measuring Grasp Posture Using an Embedded Camera. The First International Workshop on Human Activity Analysis with Highly Diverse Cameras (HDC2017), IEEE, pp. 42-47, March 24-31, 2017, Santa Rosa, CA, USA. DOI: https://doi.org/10.1109/WACVW.2017.14. Video: https://www.youtube.com/watch?v=qngdJKUBtPg. Project: https://lclab.org/projects/handsensing. Acceptance rate: 60%.
79. Arashi Shimazaki, Yuta Sugiura, Dan Mikami, Toshitaka Kimura, and Maki Sugimoto. MuscleVR: detecting muscle shape deformation using a full body suit. In Proceedings of the 8th Augmented Human International Conference (AH '17), ACM, Article 15, 8 pages, March 16-18, 2017, Silicon Valley, California, USA. DOI: https://doi.org/10.1145/3041164.3041184. Video: https://www.youtube.com/watch?v=jPP4HWAB1ww. Project: https://lclab.org/projects/musclevr. Acceptance rate: 29%.
80. Katsutoshi Masai, Yuta Sugiura, and Maki Sugimoto. ACTUATE racket: designing intervention of user's performance through controlling angle of racket surface. In Proceedings of the 8th Augmented Human International Conference (AH '17), ACM, Article 31, 5 pages, March 16-18, 2017, Silicon Valley, California, USA. DOI: https://doi.org/10.1145/3041164.3041200. Video: https://www.youtube.com/embed/WnV6OGQrmgE. Project: https://lclab.org/projects/actuateracket. Acceptance rate: 53%.
81. Katsuhiro Suzuki, Fumihiko Nakamura, Jiu Otsuka, Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. Recognition and Mapping of Facial Expressions to Avatar by Embedded Photo Reflective Sensors in Head Mounted Display. Virtual Reality (VR '17), IEEE, pp. 177-185, March 18-22, 2017, Los Angeles, CA, USA. DOI: https://doi.org/10.1109/VR.2017.7892245. Video: https://www.youtube.com/watch?v=UDjNBZ14mUI. Project: https://lclab.org/projects/affectivehmd. Acceptance rate: 22.4%.
82. Katsutoshi Masai, Yuta Itoh, Yuta Sugiura, and Maki Sugimoto. Analysis of Multiple Users' Experience in Daily Life Using Wearable Device for Facial Expression Recognition. In Proceedings of the 13th International Conference on Advances in Computer Entertainment Technology (ACE '16), ACM, Article 52, 5 pages, November 9-12, 2016, Osaka, Japan. DOI: https://doi.org/10.1145/3001773.3014351. Video: https://www.youtube.com/watch?v=9PMzpsDg518. Project: https://lclab.org/projects/affectivewear.
83. Natsuki Miyata, Takehiro Honoki, Yusuke Maeda, Yui Endo, Mitsunori Tada, and Yuta Sugiura. Grasp sensing for daily-life observation - concept proposal and prototype implementation for cylindrical object -. The 4th International Digital Human Modeling Symposium (DHM '16), June 15-17, 2016, Montréal, Canada. Video: https://www.youtube.com/embed/ccQzNdNB5Dw. Project: https://lclab.org/projects/wrap-sense.
84. Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, and Maki Sugimoto. Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear. In Proceedings of the 21st International Conference on Intelligent User Interfaces (IUI '16), ACM, pp. 317-326, March 7-10, 2016, Sonoma, California, USA. DOI: https://doi.org/10.1145/2856767.2856770. Acceptance rate: 25%.
85. Masaharu Hirose, Karin Iwazaki, Kozue Nojiri, Minato Takeda, Yuta Sugiura, and Masahiko Inami. Gravitamine spice: a system that changes the perception of eating through virtual weight sensation. In Proceedings of the 6th Augmented Human International Conference (AH '15), ACM, pp. 33-40, March 9-10, 2015, Singapore. DOI: https://doi.org/10.1145/2735711.2735795. Project: https://lclab.org/projects/gravitamine-spice. Acceptance rate: 28%.
86. Yuta Sugiura, Koki Toda, Takayuki Hoshi, Youichi Kamiyama, Takeo Igarashi, and Masahiko Inami. Graffiti fur: turning your carpet into a computer display. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology (UIST '14), ACM, pp. 149-156, October 5-8, 2014, Hawaii, USA. DOI: https://doi.org/10.1145/2642918.2647370. Slides: https://www.slideshare.net/sugiuralab/graffiti-fur-turning-your-carpet-into-a-computer-display-uist2014-240137184. Video: https://www.youtube.com/watch?v=L0hrETGddLQ. Project: https://lclab.org/projects/graffiti-fur. Acceptance rate: 22%. Best Talk Award.
87. Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, Michita Imai. Augmenting a Wearable Display with Skin Surface as an Expanded Input Area. In Proceedings of HCI International 2014 (HCII '14), Springer, Volume 8518, pp. 606-614, June 22-27, 2014, Crete, Greece. DOI: https://doi.org/10.1007/978-3-319-07626-3_57. Video: https://www.youtube.com/watch?v=3d5MSQanwAE. Project: https://lclab.org/projects/senskin.
88. Suzanne Low, Yuta Sugiura, Dixon Lo, and Masahiko Inami. Pressure detection on mobile phone by camera and flash. In Proceedings of the 5th Augmented Human International Conference (AH '14), ACM, Article 11, 4 pages, March 7-9, 2014, Kobe, Japan. DOI: https://doi.org/10.1145/2582051.2582062. Video: https://www.youtube.com/watch?v=2MQkFmQr_TI. Project: https://lclab.org/projects/pressure-detection. Acceptance rate: 36%.
89. Kozue Nojiri, Suzanne Low, Koki Toda, Yuta Sugiura, Youichi Kamiyama, and Masahiko Inami. Present information through afterimage with eyes closed. In Proceedings of the 5th Augmented Human International Conference (AH '14), ACM, Article 3, 4 pages, March 7-9, 2014, Kobe, Japan. DOI: https://doi.org/10.1145/2582051.2582054. Project: https://lclab.org/projects/afterimage. Acceptance rate: 36%.
90. Shunsuke Koyama, Yuta Sugiura, Masa Ogata, Anusha Withana, Yuji Uema, Makoto Honda, Sayaka Yoshizu, Chihiro Sannomiya, Kazunari Nawa, and Masahiko Inami. Multi-touch steering wheel for in-car tertiary applications using infrared sensors. In Proceedings of the 5th Augmented Human International Conference (AH '14), ACM, Article 5, 4 pages, March 7-9, 2014, Kobe, Japan. DOI: https://doi.org/10.1145/2582051.2582056. Video: https://www.youtube.com/embed/Sx8fnU92Ly0. Project: https://lclab.org/projects/multitouchhandle. Acceptance rate: 36%.
91. Suzanne Low, Yuta Sugiura, Kevin Fan and Masahiko Inami. Cuddly: Enchant Your Soft Objects with a Mobile Phone. In Proceedings of Advances in Computer Entertainment 2013 (ACE '13), Springer, pp. 138-151, November 13-15, 2013, Boekelo, Netherlands. DOI: https://doi.org/10.1007/978-3-319-03161-3_10. Video: https://www.youtube.com/watch?v=K3iEFW4qbB0. Project: https://lclab.org/projects/cuddly. Acceptance rate: 22%. Best Presentation Award.
92. Tsubasa Yamamoto, Yuta Sugiura, Suzanne Low, Koki Toda, Kouta Minamizawa, Maki Sugimoto and Masahiko Inami. PukaPuCam: Enhance Travel Logging Experience through Third-Person View Camera Attached to Balloons. In Proceedings of Advances in Computer Entertainment 2013 (ACE '13), Springer, pp. 428-439, November 13-15, 2013, Boekelo, Netherlands. DOI: https://doi.org/10.1007/978-3-319-03161-3_32. Video: https://www.youtube.com/watch?v=YA1RCn-vIwo. Project: https://lclab.org/projects/pukapucam. Acceptance rate: 22%.
93. Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai. SenSkin: adapting skin as a soft interface. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (UIST '13), ACM, pp. 539-544, October 8-11, 2013, St Andrews, UK. DOI: https://doi.org/10.1145/2501988.2502039. Video: https://www.youtube.com/watch?v=3d5MSQanwAE. Project: https://lclab.org/projects/senskin. Acceptance rate: 19%.
94. Masa Ogata, Yuta Sugiura, Hirotaka Osawa, and Michita Imai. FlashTouch: data communication through touchscreens. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), ACM, pp. 2321-2324, April 27 - May 2, 2013, Paris, France. DOI: https://doi.org/10.1145/2470654.2481320. Video: https://www.youtube.com/watch?v=yt2eaBRZ-sg. Project: https://lclab.org/projects/flashtouch. Acceptance rate: 20%.
95. Kevin Fan, Hideyuki Izumi, Yuta Sugiura, Kouta Minamizawa, Sohei Wakisaka, Masahiko Inami, Naotaka Fujii, and Susumu Tachi. Reality jockey: lifting the barrier between alternate realities through audio and haptic feedback. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13), ACM, pp. 2557-2566, April 27 - May 2, 2013, Paris, France. DOI: https://doi.org/10.1145/2470654.2481353. Video: https://www.youtube.com/watch?v=io45rJNNY9M. Project: https://lclab.org/projects/realityjockey. Acceptance rate: 20%.
96. Yasutoshi Makino, Yuta Sugiura, Masa Ogata, and Masahiko Inami. Tangential force sensing system on forearm. In Proceedings of the 4th Augmented Human International Conference (AH '13), ACM, pp. 29-34, March 7-8, 2013, Stuttgart, Germany. DOI: https://doi.org/10.1145/2459236.2459242. Video: https://www.youtube.com/watch?v=WvfcoYPXUEQ. Acceptance rate: 71%.
97. Yuta Sugiura, Masahiko Inami, and Takeo Igarashi. A thin stretchable interface for tangential force measurement. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12), ACM, pp. 529-536, October 7-10, 2012, Cambridge, MA, USA. DOI: https://doi.org/10.1145/2380116.2380182. Slides: https://www.slideshare.net/sugiuralab/a-thin-stretchable-interfacefor-tangential-force-measurement-uist-2012. Video: https://www.youtube.com/watch?v=9dw-rjjk3so. Project: https://lclab.org/projects/metaskin. Acceptance rate: 21%.
98. Masa Ogata, Yuta Sugiura, Hirotaka Osawa, and Michita Imai. iRing: intelligent ring using infrared reflection. In Proceedings of the 25th Annual ACM Symposium on User Interface Software and Technology (UIST '12), ACM, pp. 131-136, October 7-10, 2012, Cambridge, MA, USA. DOI: https://doi.org/10.1145/2380116.2380135. Video: https://www.youtube.com/watch?v=DcF7AWhgdP4. Project: https://lclab.org/projects/iring. Acceptance rate: 21%.
99. Masa Ogata, Yuta Sugiura, Hirotaka Osawa, and Michita Imai. Pygmy: a ring-shaped robotic device that promotes the presence of an agent on human hand. In Proceedings of the 10th Asia Pacific Conference on Computer Human Interaction (APCHI '12), ACM, pp. 85-92, August 28-31, 2012, Matsue-city, Shimane, Japan. DOI: https://doi.org/10.1145/2350046.2350067. Acceptance rate: 26.5%.