Jason Wu – jasonwu@gatech.edu

Computers have evolved from tools for calculation and productivity into companions, constantly by our side and capable of learning more about us than any human ever could. I’m fascinated by the interdisciplinary field of Human-Computer Interaction (HCI), which analyzes this complex relationship between humans and computer systems through the lenses of computer science (CS), engineering, and psychology. Specifically, I’m interested in harnessing the capabilities of mobile and pervasive computing to build low-cost, easily accessible systems that collect data from our day-to-day activities and generate valuable, potentially life-saving medical insights. Ultimately, my mission is to contribute to the field of HCI by teaching computers to listen to, understand, and respond to humans so that in the future they become not only smarter assistants but also friendlier companions.

Interfaces that process and interpret user input allow computers to listen to us and perceive our intent. To explore and build these systems, I joined a cross-lab research group between the Georgia Tech Ubicomp Lab and the Contextual Computing Group (CCG) focused on creating gestural interfaces for wearable computers. During my time there, I investigated algorithms for robust gesture detection and alternative input modalities for smartwatches. By collecting and processing data from a smartwatch’s onboard sensors, applications can receive user input more rapidly and efficiently than through traditional interfaces. In particular, I worked on two systems that allow subtle, one-handed input: Whoosh, an interface using non-voice acoustics, and SynchroWatch, a synchronous gesture interface using magnetic sensing. On both projects, I worked on algorithm implementation and interface design and helped conduct user studies and evaluations. Ultimately, Whoosh was accepted to the 2016 ACM International Symposium on Wearable Computers (ISWC), and SynchroWatch was accepted for publication in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT). I’m proud of my contributions to these projects, which allow wearable computers to more easily gather data and receive user input from onboard sensors, and I intend to continue research in this area.

Indeed, input interfaces will continue to improve, allowing computers to better interpret our intent and collect more relevant data, but to be useful, computer systems must also understand that information and produce meaningful insight. A major application of this collected data lies in technology-assisted healthcare, where intelligent algorithms and applications can identify patterns in health data and provide alternative treatment options. As an undergraduate researcher at the Georgia Tech Ubicomp Lab, I investigated methods for monitoring and managing asthma symptoms. Specifically, I worked on the Deep Breath project, which aims to passively measure the lung function of asthma patients through gamification and signal processing techniques. Through building prototypes, conducting user studies, and analyzing data, I not only came to appreciate the importance of this research for asthma patients but also greatly developed my own skills as a researcher. Later, I had the opportunity to present my work to Ubicomp Lab researchers and faculty, as well as at the Georgia Tech Undergraduate Research Symposium. Deep Breath and health informatics systems like it are prime examples of how computer systems can improve users’ quality of life by intelligently processing and understanding their data. I am continuing my work on Deep Breath and am excited to pursue similar projects in the future.

While it is important for computer systems to understand and process queries, they often must also respond affectively and generate human-like replies. During my research internship at the University of Southern California’s Institute for Creative Technologies (ICT), I had the opportunity to explore affective computing by working with virtual humans: embodied agents capable of emulating human behavior. There, I developed NADiA (Neurally Assisted Dialog Agent), a mobile virtual human for affective, multi-modal interactions. By combining an affective neural language model, a virtual character animation system, and a neural network that infers the user’s emotional state from a smartphone camera, NADiA could sustain realistic, human-like conversation while remaining lightweight enough to run on a mobile phone. Eventually, I led a study to quantify the effectiveness of the system using subjective and objective metrics. As shown by the results of a perception study and a machine translation evaluation, NADiA outperformed existing conversational agents in perceived anthropomorphism and response quality. A paper detailing the NADiA system and evaluation results is currently under review for the International Conference on Autonomous Agents and Multiagent Systems (AAMAS). Affective systems like these will undoubtedly be essential for machines to communicate effectively with humans, and I am eager to participate in the research and development of this technology.

When I finish my undergraduate studies at Georgia Tech next spring, I will graduate not only with a BS in Computer Science but also with an unforgettable three-year academic and research experience that has put me well on my way to realizing my vision of human-machine integration. I’m proud of my achievements so far as a student and researcher, but my work and exploration in HCI have raised more questions than answers. Through my previous research, I have begun to fully appreciate the complexity and impact of the field. Thus, I have realized that to pursue these answers and my career goal of becoming a professor, I must continue to grow as an explorer and researcher by enrolling in a PhD program. In graduate school, I will seek opportunities in research and academia that enable me to explore fields beyond my own and contribute back to them. In addition, I am determined to deepen my understanding of CS and HCI and to develop my ability to pass that understanding on to others. In the future, I hope to be as educational and inspiring as my own professors and mentors.

The computer science graduate program at CMU is a prestigious one that accepts ambitious students and transforms them through rigorous coursework, mentorship from distinguished faculty, and opportunities to conduct exciting research. CMU is home to the Human-Computer Interaction Institute (HCII) and the Language Technologies Institute (LTI), which are leaders in developing next-generation human-computer interfaces and natural language processing. Projects such as xxxxx, xxxxx, the xxxxx lab’s research into xxxxx, and the xxxxx are pushing the boundaries of how computers sense their environment, perceive our emotions, and present us with information. In many ways, their missions align well with my own, and it would be an honor to participate in the research and development of these projects and others like them. Working on these projects will undoubtedly be challenging, but I am confident that my previous research experience and skill set will prove valuable. I consider this opportunity essential to realizing my goals of becoming a professor and advancing the field of computer science through both education and research.