TableBot
In the spring of 2022 I joined the course “Innovation Project”, whose theme that year was hybrid communication. After thorough research, my group and I decided to design a tabletop telepresence robot, which we called RemoteU. The project was a success, and we continued working on it through an independent project in the fall of 2022, now under the name TableBot, before finally evolving it into our master's thesis topic.
Our little robot has developed through its three iterations, and with every version, extensive research has been done to understand the implications of the upgrades.
This is the story of TableBot.
Iteration 1: RemoteU
The first iteration focused on developing the prototype. We wanted to know: is this even possible?
RemoteU was designed to be a DIY telepresence robot which would utilise the ecology of artefacts already in place at a hybrid meeting:
The body was fully 3D printed on a small 3D printer, making it accessible to many tech enthusiasts.
The components were cheap and easy to obtain.
The video call was facilitated by a smartphone to reduce technological complexity.
Since this was our first time developing a robot, several explorations and ideas were tested.
The evaluation was superficial, as this was only a proof of concept. Only two participants were in the co-located space, and the task was not well designed to showcase the robot, but the data we collected suggested there was a lot more to uncover with this little (almost toy-like) robot.
I designed a custom input device for the robot which allowed the user to control both the movement of the robot and the tilt of the phone. Because we used joysticks, the input device provided natural force feedback.
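To give a sense of how little logic this mapping takes, here is a minimal sketch in TypeScript. The names and values are illustrative assumptions, not our actual implementation:

```typescript
// Sketch: map two joystick axes to differential-drive speeds and phone tilt.
// Assumes axis values are already normalised to the range [-1, 1].

interface JoystickInput {
  driveX: number; // left/right on the drive stick
  driveY: number; // forward/backward on the drive stick
  tiltY: number;  // up/down on the tilt stick
}

interface RobotCommand {
  leftMotor: number;   // -1 (full reverse) .. 1 (full forward)
  rightMotor: number;
  tiltDegrees: number; // phone mount tilt angle
}

const MAX_TILT_DEGREES = 30; // assumed mechanical limit of the phone mount

function mapInput(input: JoystickInput): RobotCommand {
  // Classic arcade-drive mix: forward component plus turn component.
  const left = input.driveY + input.driveX;
  const right = input.driveY - input.driveX;
  // Clamp so combined driving and steering never exceed the motor range.
  const clamp = (v: number) => Math.max(-1, Math.min(1, v));
  return {
    leftMotor: clamp(left),
    rightMotor: clamp(right),
    tiltDegrees: input.tiltY * MAX_TILT_DEGREES,
  };
}
```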
Additionally, I 3D modelled every iteration of the robot.
Iteration 2: TableBot
Due to the cancellation of a course, we decided to continue the work on our robot. However, one team member was in Canada that semester, so we wanted to make as few changes as possible and instead run more thorough tests. This iteration thus focused on experiment design and UX.
Evaluation as a game
We wanted to test the control of the robot: what happens when the pilot has full control, versus when those in the primary space are responsible for moving the pilot's view? We thus needed a task that would encourage plenty of movement, conversation and argumentation.
We decided to make a collaborative version of the board game Codenames and test the robot in groups of five: one pilot and four people in the primary space, to challenge the pilot as much as possible.
Evaluations of RemoteU had shown us that the robot moved too slowly, so this iteration was fitted with large wheels with custom silicone tires for proper grip. The motors were upgraded as well, enabling the robot to move much faster.
We tested with 20 people across 4 groups and found suggestive data encouraging the use of tabletop telepresence robots that can be controlled by both the pilot and those in the primary space.
Around this time, we had become hooked. Nobody else was doing research on tabletop telepresence robots, and our findings were provocative compared to established research. We needed to continue - but next time we wanted to go BIG.
Iteration 3: TableBot
Master's Thesis
With all team members present again, we wanted to redesign TableBot completely in accordance with the findings from iteration 2. We needed a wide-angle camera, a bigger screen and better movement. We initiated this version by consulting lecturers from different fields to gain as many insights as possible.
The first order of business was to ditch the smartphone. We bought a 180-degree fisheye-lens camera module and sourced a 7” touchscreen and a Jabra conference microphone.
While my team members upgraded the steering system with ease-in and ease-out accelerations and built a WebRTC video-call system, I got to work on designing the body of the robot, using every mm of space I could get my hands on to keep the robot as small as possible.
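The idea behind the eased steering can be sketched in a few lines. The smoothstep curve and tuning values below are illustrative, not our exact implementation:

```typescript
// Sketch: ease-in/ease-out speed ramping using the classic smoothstep curve,
// so speed changes begin and end gently instead of jerking the robot around.

function smoothstep(t: number): number {
  // t in [0, 1] -> eased value in [0, 1], with zero slope at both ends
  const clamped = Math.max(0, Math.min(1, t));
  return clamped * clamped * (3 - 2 * clamped);
}

// Speed at `elapsedMs` into a ramp from `from` to `to` lasting `rampMs`.
function rampedSpeed(from: number, to: number, elapsedMs: number, rampMs: number): number {
  return from + (to - from) * smoothstep(elapsedMs / rampMs);
}

// Example: accelerating from standstill to full speed over half a second.
for (let ms = 0; ms <= 500; ms += 100) {
  console.log(`${ms} ms: speed ${rampedSpeed(0, 1, ms, 500).toFixed(2)}`);
}
```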
We even designed small accessories to reduce any possible sense of discomfort with the prototype.
I also got to design the graphical user interface from which the pilot would control the robot. We wanted a system where we could change how much the pilot could see, so every element of the view could be configured in the DOM and hidden from the pilot to avoid visual clutter.
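As a rough sketch of that idea (TypeScript; the element IDs are made up for illustration):

```typescript
// Sketch: show or hide pilot-view elements from a single config object.
// Every element stays in the DOM; hidden ones are simply not displayed,
// which makes it easy to vary how much the pilot can see.

interface ViewConfig {
  [elementId: string]: boolean; // true = visible to the pilot
}

function applyViewConfig(config: ViewConfig): void {
  for (const [elementId, visible] of Object.entries(config)) {
    const element = document.getElementById(elementId);
    if (element) {
      element.style.display = visible ? "" : "none";
    }
  }
}

// Example: a stripped-down pilot view with only the essentials visible.
applyViewConfig({
  "video-feed": true,
  "drive-controls": true,
  "battery-indicator": false,
  "debug-overlay": false,
});
```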
We wanted the design of this prototype to be as polished as possible, which included the interface.
To allow the pilot to discreetly point at things or indicate where they are looking, we added a small LED strip to the front of the robot.
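Mapping a pointing direction to an LED is straightforward; here is an illustrative sketch (the LED count is an assumption, and the angle range simply mirrors the camera's 180-degree view):

```typescript
// Sketch: pick which LED on the front strip to light for a pointing angle.
// Assumes LEDs are spread evenly across the robot's 180-degree field of view.

const LED_COUNT = 12;
const FIELD_OF_VIEW_DEGREES = 180;

// Map an angle (-90..90 degrees, 0 = straight ahead) to an LED index.
function ledIndexForAngle(angleDegrees: number): number {
  const half = FIELD_OF_VIEW_DEGREES / 2;
  const clamped = Math.max(-half, Math.min(half, angleDegrees));
  const fraction = (clamped + half) / FIELD_OF_VIEW_DEGREES;
  return Math.min(LED_COUNT - 1, Math.floor(fraction * LED_COUNT));
}

// Example: pointing slightly to the right lights an LED just right of centre.
console.log(ledIndexForAngle(15)); // 7 on a 12-LED strip
```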
This time we wanted to evaluate the new upgrades as thoroughly as possible, so we conducted three rounds of evaluation:
Quantitative exploration of the view of the pilot
Quantitative exploration of the view of the participants in the primary space
Qualitative exploration of the robot in a complex social setting (similar to the study in iteration 2)
We used the findings from the first two evaluations to inform the third.
At this point, we have yet to finalise the findings of our work, so stay tuned!