Call for Research Interns (Jan/Feb 2026)

About the Wearable and Interactive Technology Lab

The Wearable and Interactive Technology Lab in the School of Electrical Engineering at KAIST focuses on the design, development, and evaluation of wearable, physical, and tangible interactive computing systems. Bringing together design perspectives, computer science skills, and psychological methods, the WIT lab conceives, creates, and studies the next generation of human-computer interfaces.

Why join WIT Lab?

Conduct research on emerging technology

Work with a graduate student mentor

Work towards a research paper

  • How to
    • Select one or more topics that you would like to work on
    • Write a brief one-paragraph statement explaining why you want to work on each topic. If you select more than one topic, write a statement for each.
    • Send an email with this content, plus your CV and transcript, to Ian Oakley <ianoakley@kaist.ac.kr>
  • Logistics
    • Full-time internships only (no part-time participation)
    • Remote work is possible (coordinate the details with your mentor)
    • A stipend will be paid for the two months (Jan/Feb)
    • The internship can be completed as EE495 independent research (1 credit)
  • Timeline
    • Wed 5 Nov: Announcement
    • Wed 26 Nov: Application deadline (Fri 21 Nov for URP)
    • Wed 3 Dec: Acceptance notification and mentor matching
    • Dec 3–28: Pre-internship meetings with your mentor
    • Dec 29 – Feb 27: Internship (8 weeks, skipping the Lunar New Year week)

Egocentric Full-Body Motion Capture via Smart Glasses

Mentor: Hyunyoung Han (hyhan@kaist.ac.kr, Website)

Required skills: 3D modeling

Background: This research explores novel approaches to egocentric motion capture using commercially available smart eyewear. We will investigate optical and computational methods to expand sensing capabilities within the constraints of head-mounted devices, enabling full-body motion tracking without requiring instrumented environments or additional wearable sensors [1, 2]. The system addresses key challenges in self-occluded body tracking [3] through customizable hardware configurations and specialized computer vision techniques.
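The self-occlusion challenge has a simple geometric core: a head-mounted camera views the body from a steep angle, so limbs easily leave the field of view or hide behind the torso. A minimal Python sketch of this (pinhole projection; all joint positions and the field of view are hypothetical):

```python
import math

def project_egocentric(joint, fov_deg=120.0):
    """Project a 3D joint (x, y, z in the head-camera frame, z along the
    optical axis) into normalized image coordinates with a pinhole model.
    Returns None when the joint is invisible -- the geometric root of the
    out-of-view / self-occlusion problem in egocentric motion capture."""
    x, y, z = joint
    if z <= 0:                                       # behind the camera
        return None
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)  # focal length for this FOV
    u, v = f * x / z, f * y / z                      # pinhole projection
    if abs(u) > 1.0 or abs(v) > 1.0:                 # outside the image plane
        return None
    return (u, v)

# Hypothetical joints in a head-mounted, downward-facing camera frame (meters):
head_cam_joints = {
    "left_wrist":  (0.25, 0.10, 0.40),   # raised hand: easily seen
    "left_ankle":  (0.10, 0.05, 1.50),   # far away but still in view
    "right_ankle": (1.60, 0.00, 0.30),   # far off-axis: out of view
}
for name, p in head_cam_joints.items():
    print(name, project_egocentric(p))
```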

Expected Outcomes: Prototype for egocentric motion capture system, research paper

References

[1] Kang, T., Lee, K., Zhang, J., & Lee, Y. (2023, December). Ego3DPose: Capturing 3D cues from binocular egocentric views. In SIGGRAPH Asia 2023 Conference Papers (pp. 1-10).

[2] Dai, P., Zhang, Y., Liu, T., Fan, Z., Du, T., Su, Z., ... & Li, Z. (2024). HMD-Poser: On-device real-time human motion tracking from scalable sparse observations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 874-884).

[3] Zhang, S., Ma, Q., Zhang, Y., Aliakbarian, S., Cosker, D., & Tang, S. (2023). Probabilistic human mesh recovery in 3D scenes from egocentric views. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 7989-8000).

[Figures: the self-occlusion problem; previous approaches to egocentric motion capture]

Reading Webtoons in XR

Mentor: Ammar Al-Taie (ammar@kaist.ac.kr)

Required skills: Unity Development

Background: Webtoons are popular because they use familiar smartphone interaction techniques (scrolling, swiping, and tapping) to navigate panels. However, extended reality (XR) glasses are expected to replace smartphones as the glasses become smaller and lighter. In this project, we will research how webtoons can be presented and navigated in XR, leveraging 3D object rendering, spatial affordances, and a range of sensors, including eye and face tracking. We will also investigate how XR can be used to read webtoons while walking or travelling on public transport.
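The actual prototype would be built in Unity, but the core layout idea can be shown language-neutrally: a continuous scroll offset maps to 3D positions for panels floating in front of the user. A minimal Python sketch (all names, spacings, and distances are hypothetical):

```python
def panel_positions(scroll, n_panels, spacing=0.6, depth=2.0):
    """Lay out webtoon panels as quads floating in front of an XR user.
    `scroll` is a continuous offset in panel units: the current panel
    sits at eye level, panels already read drift upward, and upcoming
    panels queue below. Returns (x, y, z) per panel in meters, with z
    the distance from the viewer."""
    return [(0.0, (scroll - i) * spacing, depth) for i in range(n_panels)]

def focused_panel(scroll, n_panels):
    """Index of the panel currently centered in the user's view."""
    return max(0, min(n_panels - 1, round(scroll)))

positions = panel_positions(2.0, 5)
print(focused_panel(2.0, 5), positions[2])  # panel 2 centered at (0.0, 0.0, 2.0)
```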

Expected Outcomes: Prototype XR webtoon concept and an experiment testing the concept.

[Figure: converting webtoons to XR]

Expanding the Design Space of Running Interfaces

Mentor: Ammar Al-Taie (ammar@kaist.ac.kr)

Required skills: Unity development or electronics prototyping, plus a passion for running 🏃

Background: Running is one of the most popular physical activities worldwide. However, current devices, such as smartwatches and earbuds, are constrained to small screens or simple audio and vibration feedback. In this internship, we will investigate the types of tasks runners perform while running and research how new devices could better support those tasks. We will develop a prototype device and then conduct an experiment to test it with real runners.

Expected Outcomes: Prototype running device, and an experiment testing that device

[Figure: a runner distracted by their smartwatch]

Keyboard Typing on an Unmodified Smartwatch Using Sonar

Mentor: Jiwan Kim (mail: kjwan4435@gmail.com, web: http://jiwan.kim/)

Required skills: Android programming, Python data processing

Background: Interaction with smartwatches is limited by the small size of their touchscreens and by occlusion, especially during text input. Many previous works have explored smartwatch text entry, but they either rely on external devices or still suffer from occlusion. We propose SonarType, which senses around-device movement without external sensors or occlusion while mitigating interference from nearby moving objects.
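As a rough illustration of the sensing principle (not the project's actual pipeline): the speaker emits an inaudible tone, and motion near the device shows up as a Doppler shift in the echo recorded by the microphone. A minimal Python sketch with simulated audio, using the Goertzel algorithm for cheap single-frequency detection; the sample rate, carrier frequency, and signal model are all assumptions:

```python
import math

FS = 48_000          # audio sample rate (assumed)
F0 = 20_000          # inaudible carrier emitted by the speaker (assumed)
C  = 343.0           # speed of sound, m/s

def goertzel_power(samples, freq, fs=FS):
    """Power of `samples` at a single frequency (Goertzel algorithm):
    cheap narrowband detection, well suited to tracking one sonar tone."""
    w = 2.0 * math.pi * freq / fs
    coeff = 2.0 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def doppler_velocity(samples, search_hz=200, step_hz=5):
    """Estimate the radial velocity of a reflector from the Doppler shift
    of the echo: scan frequency bins around the carrier, take the peak."""
    best_f = max(
        (F0 + d for d in range(-search_hz, search_hz + 1, step_hz)),
        key=lambda f: goertzel_power(samples, f),
    )
    shift = best_f - F0
    return C * shift / (2 * F0)   # two-way Doppler for a moving reflector

# Simulate an echo from a finger moving toward the watch (+50 Hz two-way
# shift, i.e. about 0.43 m/s); a real system would read microphone frames.
n = 4096
echo = [math.sin(2 * math.pi * (F0 + 50) * t / FS) for t in range(n)]
v = doppler_velocity(echo)
print(f"estimated velocity: {v:.2f} m/s")
```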

Expected Outcomes: Mobile application prototype, data analysis, and a user experiment

References

[1] Sonar sensing on the smartwatch: Kim, J., & Oakley, I. (2022, April). SonarID: Using sonar to identify fingers on a smartwatch. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (pp. 1-10).

[2] Around-device gesture tracking using sonar: Wang, W., Liu, A. X., & Sun, K. (2016, October). Device-free gesture tracking using acoustic signals. In Proceedings of the 22nd Annual International Conference on Mobile Computing and Networking (pp. 82-94).

[3] Text entry for small touchscreens: Gong, J., Xu, Z., Guo, Q., Seyed, T., Chen, X. A., Bi, X., & Yang, X. D. (2018, April). WrisText: One-handed text entry on smartwatch using wrist gestures. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (pp. 1-14).

[Figures: WrisText (CHI’18); LLAP (MobiCom’16); our system]

Design and Development of Gaze-Pinch Input Interaction on Tablet/Desktop

Mentor: Mingyu Han (mghan@kaist.ac.kr)

Required skills: Android and Python programming

Background: The current pointing and selection method in the Apple Vision Pro combines gaze (eye) for pointing with a pinch (hand) for selection in virtual reality. What if we brought this gaze + pinch interaction to tablets and desktops using only a webcam?
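As a sketch of the interaction logic: gaze provides the pointer and a thumb-index pinch confirms selection. The landmark indices follow MediaPipe's 21-point hand convention (thumb tip = 4, index fingertip = 8); everything else here, including the thresholds and target names, is hypothetical:

```python
import math

THUMB_TIP, INDEX_TIP = 4, 8   # MediaPipe hand-landmark indices

def is_pinching(landmarks, threshold=0.05):
    """Detect a pinch from 21 hand landmarks in normalized image
    coordinates (as produced by a hand tracker such as MediaPipe):
    thumb tip and index fingertip closer than `threshold`."""
    return math.dist(landmarks[THUMB_TIP], landmarks[INDEX_TIP]) < threshold

def gaze_pinch_select(gaze_xy, targets, landmarks, radius=0.08):
    """Gaze points, pinch confirms: return the target nearest the gaze
    point when a pinch is detected, else None. `targets` maps names to
    normalized screen coordinates."""
    if not is_pinching(landmarks):
        return None
    name, pos = min(targets.items(), key=lambda t: math.dist(gaze_xy, t[1]))
    return name if math.dist(gaze_xy, pos) <= radius else None

# Synthetic frame: 21 landmarks with thumb and index tips touching.
hand = [(0.5, 0.5, 0.0)] * 21
hand[THUMB_TIP] = (0.40, 0.40, 0.0)
hand[INDEX_TIP] = (0.41, 0.41, 0.0)
buttons = {"ok": (0.30, 0.30), "cancel": (0.70, 0.30)}
print(gaze_pinch_select((0.32, 0.28), buttons, hand))  # prints "ok"
```

In a real prototype the gaze point would come from a webcam gaze estimator (as in TabletGaze [1]) and the landmarks from per-frame hand tracking; this only shows how the two streams combine into a selection event.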

[Figures. Left: gaze estimation from a phone’s built-in camera. Right: MediaPipe hand-joint detection.]

References

[1] Huang, Q., Veeraraghavan, A., & Sabharwal, A. (2017). TabletGaze: Dataset and analysis for unconstrained appearance-based gaze estimation in mobile tablets. Machine Vision and Applications, 28(5), 445-461.

[2] Pfeuffer, K., & Gellersen, H. (2016). Gaze and touch interaction on tablets. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology.