Hospitals not Homes,
Applied not Speculative
If we want to make real, meaningful advances in developing and deploying robots in care settings anytime soon, we need to be more practical than we have been. General-purpose robotics is still very far away. Roboticists and designers need to work together more closely, rather than assuming that we’ll fit it all together later. And we need to figure out how to bridge the gap between the messy real world where we want robots to work and the highly controlled environments where robots actually work today.
Recent advances in robotics and artificial intelligence are simultaneously exciting and overblown. On one hand, it’s exciting to see the progress and to dream about the potential of general-purpose robots that can locomote, grasp, and manipulate any object, deployed at scale. On the other hand, the hype this generates is misguided and can even harm the advancement of the field by suggesting that we’ve solved robotics problems we are not close to solving. As we write this paper, Elon Musk claims he’ll be selling a humanoid robot by the end of 2025 [4]. This propagates the myth that robotics as a field will soon be solved – “it’s just around the corner, believe me.” Perhaps to a lesser extent, the academic HCI community relies on this myth as well – we imagine a future where the robot just works, so that we can leverage our design methods to investigate how humans will interact with robots. Who can blame us? It’s fun, exciting, and gets papers into CHI and DIS.
But roboticists will tell you that we’re still very far away from general-purpose robots that can locomote and grasp in the messy real world. We can make robots that do repetitive tasks in tightly controlled environments, and with a few notable exceptions (e.g., Diligent Robotics [9]) almost everything else is far off. Perhaps this is why Musk’s 2025 robot prophecy seems focused on robots working in his Tesla factory. The problem, of course, for this venue in particular, is that care settings are very far from tightly controlled environments with highly repetitive tasks. Many of the care tasks we would love for robots to engage in require interacting with, even touching, humans – obviously far from a controlled environment. While there is some work on safely making contact (e.g. [6]), again, there’s a long way to go.
We argue that the robotics community and the HCI/HRI community need to make some changes if our goal is to have real-world impact, rather than to spend resources designing for a future that will probably never come – or at least won’t look like the future we’re envisioning. First, while much care work is situated in the home, making it a natural target for research, we think that exploring the roles robots can play in hospitals has a better near-term outlook. Second, we think it’s important to look for ways for roboticists and designers to more deeply integrate our work, to combat our natural instinct to silo ourselves within our own research communities.
Home is where the majority of care work takes place – even in the context of a long hospital stay for something like a spinal cord injury, which results in a stay of at most three months in a rehabilitation hospital in the United States. The care tasks required for those patients continue for the rest of their lives, and for most, that care happens at home. So homes are a natural target for researchers thinking about deploying robots to carry out care tasks. But homes are not ideal places to deploy robots. First, they’re very unpredictable – basically the polar opposite of a tightly controlled environment. People have different floor plans, different furniture, different places they store things, different tolerances for cleanliness… we could go on. With so many variables, the robots are sure to be tripped up. And when they are inevitably tripped up, who is going to help fix or maintain the robot? Do we just employ an army of graduate students to drive around keeping everything running when we attempt a long-term robot deployment? Furthermore, our research very likely requires observing the research participants as they use the robots, to evaluate both the performance of the robot and the human experience of cohabitating with it. Should we sit in their home to observe them? Record video? Rely on self-report?
The hospital, while still far from a tightly controlled environment, provides a more structured environment that addresses many of the challenges found in the home. Furniture, floorplans, and layouts are much more predictable; hospitals are kept clean and free of clutter; and the entire building is ADA-compliant, which simplifies locomotion. Of course, the robots will still likely be tripped up – failure is inevitable for the foreseeable future. But maintaining 10 robots in the same building is a much easier task than maintaining 10 robots in 10 different locations. Even the question of observation and data collection is easier to address in a hospital setting – despite patients being temporary residents in their hospital rooms, they tend not to see the room as entirely their space, since hospital employees are constantly coming and going. Our own experiences embedding researchers in hospital rooms have shown that this is acceptable to patients; researcher-observers often fade into the background [3]. Of course, research in hospitals has its own difficulties. One significant challenge is getting enough buy-in from hospital decision makers to overcome the bureaucratic red tape and other hurdles to deploying in this potentially sensitive environment.
Roboticists and designers are both guilty of living and working in speculative environments that assume the other’s field will just figure it out. Roboticists typically identify a task to motivate their work that is either devoid of context (i.e., grab a thing off a table) or assumes that a context exists where their technical approach is useful – a thin motivation to justify building the thing they were going to build anyway. Designers aren’t any better, though: we consistently design situations and interactions without considering the practical details of whether and how a robot might actually accomplish the task at hand. This is almost certainly a waste of time: the future probably will not be the way we envision it, so it doesn’t matter much what we wish we could build if it’s not practical on a fairly near-term horizon. Wizard-of-Oz-style approaches like situated participatory design [10] are a nice way to bridge the gap and make design activities more grounded in real-world experience, but there’s a difference between teleoperated approaches and more autonomous approaches. At some point, we need to move towards designing with the materials we have, not the materials we wish we had.
To effectively integrate the research agendas of designers and roboticists, we first need to figure out how to get everyone to care about the work of the other: designers need to appreciate, on some level, the capabilities and research questions relevant to roboticists, and roboticists need to appreciate the complexity of designing effective, meaningful, and valuable human interactions with technology. Once we achieve buy-in, we can start to explore paths forward that satisfy the constraints of our respective fields, pushing to deploy robots that are implementable and useful. It’s a tall order, and we probably won’t get there quickly. But if it’s successful, both fields have a lot to gain – showcasing working prototypes in real-world settings and conducting long-term deployments are exceedingly rare today. The data we could collect from the robot, and from the humans who interact with it, would let us understand real usage and help us get away from speculating on what might be.
Okay, you’re convinced, but you don’t know where to start. How are we actually going to foster deeper research integration between roboticists and designers? We don’t have a silver bullet here, but we do have our own plans for what we want to do next. We’re also excited to have finally written these thoughts down after talking about collaborating for eight years. :)
We plan to put a dedicated robot platform in the hospital. This costs money, but it is paramount to making things easy, and luckily robots are getting cheaper. Putting the robot in the hospital means we don’t have to schlep it back and forth across campus or town to do experiments. Further, we don’t want it to sit broken in the hospital, so we need dedicated lab space on site to repair and update the robot in place when necessary. We’re very lucky to have access to a hospital with supportive leadership and employees.
We want to start working in areas we understand. Jason has done extensive work in understanding patient use of assistive technologies in the rehab hospital setting [1–3, 8]. Tucker has developed algorithms and systems for object grasping [7] and rearrangement [5]. We’re not going to start out trying to use a robot to bathe patients or do a bed turn, even if we’ve had robots pick up rubber duckies in the past. Instead, we want to work with patients and hospital staff to identify those aspects of manipulation of items in hospital rooms that don’t require delicate interaction with breakable things or even more precious humans.
We will look for opportunities to leverage the structure of the hospital, and the ability to augment the environment when necessary, to make things easier for the robot (e.g. [11]) or for the humans working with it. Further, we will begin by focusing on manipulating a small set of objects the robot has prior knowledge of. While Tucker has made a career of having robots manipulate novel objects, we can ensure more robust manipulation if the robot knows what it’s manipulating. Our initial thought is that the robot should retrieve objects upon request by the patient.
What we don’t know is the best interface for interacting with this robot, and we acknowledge that there’s a wealth of HRI research looking at many aspects of human-robot communication. To start, we just want a simple interface that we have experience with. To paraphrase Charlie Kemp, we need to close the loop quickly. Having a system that works okay allows us to evaluate the system instead of getting hung up on perfecting smaller components.
With this system in place, we can begin to ask the actual research questions. If a patient asks, “Grab me that iPad and set it down on the hospital bed,” is it good enough for the robot to work mostly autonomously, or does it require some degree of teleoperation? Note that for a spinal cord injury population, teleoperation might still be fine, and better than their current options (e.g., calling for a nurse). However, full teleoperation might be too difficult for some patients, and shared autonomy may be a better alternative. We seek to evaluate to what extent this is the case for our application with this population.
What happens when the robot fails? It could try again. The patient could try teleoperating. We could call in a medical assistant to help. We don’t know what the tradeoffs or failure rates are. We’ll see what happens in initial deployments – which will be fully observed; we’re not going to leave patients alone with the robot for a while – and then formalize this question more. This reflects one of our key goals: we want to study how the robot actually gets used in a real, day-to-day environment, not just speculate. Does it get used at all? Is it useful or just a novelty? Do the patients like it? Does it decrease employee workload? Does it decrease social interactions between patients and employees, and is that a problem? What are the breakdowns and shortcomings of the current deployment? Does its presence in the hospital spark ideas among employees or patients for other useful applications they think might be possible after seeing what this robot can do?
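The escalation path above – retry autonomy, fall back to teleoperation, then call in a human – can be sketched in a few lines. This is purely illustrative and not a description of our actual system: the callables `try_autonomous`, `try_teleop`, and `page_assistant` are hypothetical stand-ins for whatever those actions turn out to be, and the tradeoffs between the levels are exactly what the deployment is meant to measure.

```python
def fetch_with_escalation(try_autonomous, try_teleop, page_assistant, max_retries=2):
    """Attempt a fetch task, escalating through fallback levels on failure.

    Each argument is a callable; the first two return True on success.
    Returns which level ultimately completed the task.
    """
    for _ in range(max_retries):
        if try_autonomous():          # robot attempts the task on its own
            return "autonomous"
    if try_teleop():                  # patient (or staff) drives the robot
        return "teleoperated"
    page_assistant()                  # last resort: a human completes the task
    return "human"


# Example: autonomy fails twice, teleoperation succeeds.
outcome = fetch_with_escalation(
    try_autonomous=lambda: False,
    try_teleop=lambda: True,
    page_assistant=lambda: None,
)
# outcome == "teleoperated"
```

Even this toy version makes the open questions concrete: how many autonomous retries are acceptable to a patient, and who decides when to escalate – the robot, the patient, or an observer?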
These questions will all be on our list once we have something up and running. Finally, we promise not only to publish the papers answering each of these questions, but to report back on what we learned about doing the research itself – the process of this tight collaboration between roboticists and designers. Hopefully, it doesn’t take us another eight years to figure that paper out. But hey, if it does, it’ll still be well before we have general-purpose robots!
[1] Dawson, J., Fisher, E. and Wiese, J. 2024. Hospital Employee Experiences Caring for Patients in Smart Patient Rooms. Proceedings of the CHI Conference on Human Factors in Computing Systems (New York, NY, USA, May 2024), 1–16.
[2] Dawson, J., Kauffman, T. and Wiese, J. 2023. It Made Me Feel So Much More at Home Here: Patient Perspectives on Smart Home Technology Deployed at Scale in a Rehabilitation Hospital. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2023), 1–15.
[3] Dawson, J., Phanich, K.J. and Wiese, J. 2024. Reenvisioning Patient Education with Smart Hospital Patient Rooms. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7, 4 (Jan. 2024), 1–23.
[4] Hart, R. 2024. Elon Musk Says Tesla’s Optimus Robot Could Drive Company To $25 Trillion Valuation—Here’s What Experts Think. Forbes Magazine. (Jun. 2024).
[5] Huang, Y., Yuan, J., Kim, C., Pradhan, P., Chen, B., Fuxin, L. and Hermans, T. 2024. Out of Sight, Still in Mind: Reasoning and Planning about Unobserved Objects with Video Tracking Enabled Memory Models. IEEE International Conference on Robotics and Automation (ICRA) (2024).
[6] Madan, R., Valdez, S., Kim, D., Fang, S., Zhong, L., Virtue, D.T. and Bhattacharjee, T. 2024. RABBIT: A Robot-Assisted Bed Bathing System with Multimodal Perception and Integrated Compliance. Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction (New York, NY, USA, Mar. 2024), 472–481.
[7] Matak, M. and Hermans, T. 2023. Planning Visual-Tactile Precision Grasps via Complementary Use of Vision and Touch. IEEE Robotics and Automation Letters. 8, 2 (Feb. 2023), 768–775.
[8] Motahar, T. and Wiese, J. 2024. Investigating Technology Adoption Soon After Sustaining a Spinal Cord Injury. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 8, 1 (Mar. 2024), 1–24.
[9] Moxi. Diligent Robotics. https://www.diligentrobots.com/moxi. Accessed: 2024-06-17.
[10] Stegner, L., Senft, E. and Mutlu, B. 2023. Situated Participatory Design: A Method for In Situ Design of Robotic Interaction with Older Adults. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, Apr. 2023), 1–15.
[11] Xu, Z. and Cakmak, M. 2014. Enhanced robotic cleaning with a low-cost tool attachment. 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (Sep. 2014), 2595–2601.