Juggling Extra Limbs: Identifying Control Strategies for Supernumerary Multi-Arms in Virtual Reality
This project investigates how users coordinate and control multiple virtual supernumerary limbs with varying levels of autonomy, revealing strategies such as command, demonstration, delegation, and labelling that shape multi-arm interaction in VR.
Overview
Juggling Extra Limbs investigates how users coordinate and control multiple virtual supernumerary limbs (VSLs) with varying levels of autonomy in VR. Through an exploratory Wizard-of-Oz study (N=14), we identify the interaction strategies participants naturally adopt — ranging from direct commands and demonstrations to delegation and labelling — and examine how autonomy levels affect task performance, embodiment, and perceived control when managing a four-armed virtual avatar.
Vision
Controlling a single extra limb is already challenging; managing multiple semi-autonomous supernumerary arms simultaneously is like juggling. As supernumerary limb research moves toward practical multi-arm systems for industry, surgery, and daily life, understanding how people actually want to coordinate extra limbs becomes critical. Rather than prescribing fixed control mappings, this project lets users freely explore strategies with autonomy-adaptive VSLs, revealing the control vocabulary and coordination patterns that should inform future multi-limb interfaces.
How It Works
Wizard-of-Oz Setup
A participant and a hidden human operator share a four-armed virtual avatar in VR (Meta Quest 3, Unity 2022). The participant controls the avatar's two primary arms directly, while the operator controls two back-mounted VSLs behind a physical partition, responding to the participant's voice commands, demonstrations, and gestures. Random error injections and control delays simulate realistic autonomous system behaviour.
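The error injections and control delays described above can be pictured as a noisy relay between the participant's instruction and the VSL's action. The sketch below is a minimal, hypothetical model of that channel (the class name, parameter values, and API are illustrative, not the study's actual implementation, which ran in Unity):

```python
import random

class WizardChannel:
    """Illustrative model of the operator-controlled VSL channel.

    Commands may be delayed or substituted with a wrong action to
    mimic an imperfect autonomous system. All parameter values are
    hypothetical placeholders.
    """

    def __init__(self, error_rate=0.1, delay_range=(0.2, 0.8), seed=None):
        self.error_rate = error_rate    # probability a command is mis-executed
        self.delay_range = delay_range  # simulated control latency, seconds
        self.rng = random.Random(seed)  # seeded for reproducible trials

    def relay(self, command, alternatives):
        """Return (executed_command, delay_seconds).

        With probability `error_rate`, a wrong action drawn from
        `alternatives` is executed instead of `command`.
        """
        delay = self.rng.uniform(*self.delay_range)
        if alternatives and self.rng.random() < self.error_rate:
            return self.rng.choice(alternatives), delay
        return command, delay
```

Seeding the random generator lets a simulated "autonomy profile" be replayed identically across participants, which matters when comparing conditions.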
Two Autonomy Levels
- Low Autonomy: VSLs respond only to explicit step-by-step instructions (commands, labelling, demonstrations). Each action requires direct participant guidance.
- High Autonomy: VSLs demonstrate advanced capabilities including object recognition, multi-step task execution, and autonomous planning. Participants can delegate complex tasks with abstract commands.
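The two levels above can be summarised as a gate on which instruction types a VSL will act on without step-by-step guidance. This is a hypothetical sketch of that mapping (the names `SUPPORTED` and `accept` are illustrative, not part of the study system):

```python
# Which of the four observed instruction types a VSL acts on
# at each autonomy level (illustrative, based on the study's
# condition descriptions).
SUPPORTED = {
    "low":  {"command", "labelling", "demonstration"},
    "high": {"command", "labelling", "demonstration", "delegation"},
}

def accept(instruction_type, autonomy):
    """Return True if a VSL at `autonomy` can act on the
    instruction; low-autonomy VSLs reject abstract delegation."""
    return instruction_type in SUPPORTED[autonomy]
```

Under low autonomy, an abstract delegation like "sort the red pieces" would be rejected and the participant would have to decompose it into explicit commands.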
Experimental Tasks
- Basic Control Task: A reaching task in which the number of target buttons increases from one to four, requiring participants to coordinate all four limbs simultaneously.
- Factory Task: A shape-sorting task under time pressure, where participants grasp, rotate, and insert objects into matching holes — demanding bilateral coordination between avatar limbs and VSLs.
Key Findings
A study with 14 participants across both autonomy levels revealed:
- Four Instruction Types: Participants naturally employed Commands (direct actions), Demonstrations (show-and-repeat), Labelling (naming objects/properties), and Delegation (abstract task assignment), with usage patterns shifting based on autonomy level.
- Faster Completion with High Autonomy: Participants completed both tasks significantly faster under high autonomy (p < 0.05), with lower error rates in the Factory task.
- Embodiment Trade-off: Low autonomy strengthened body ownership, agency, and self-location, while high autonomy diminished these — participants reported feeling more like observers when VSLs acted independently.
- Adaptive Strategy Switching: Participants dynamically switched between sequential and parallel control, kept delicate tasks manual, and delegated repetitive work — developing more efficient coordination as they grew familiar with the system.
- Task Organisation Matters: 10/14 participants used object labelling and categorisation (by colour or shape) to streamline multi-limb coordination.
Applications
- Industrial Multi-Arm Robotics: Informing control interfaces for workers operating multiple robotic assistants in manufacturing and assembly.
- Surgical Assistance: Guiding the design of multi-arm surgical systems where surgeons coordinate natural and robotic limbs.
- Adaptive Autonomy Systems: Designing context-aware autonomy that adjusts VSL independence based on task complexity and user preference.
- VR Training & Simulation: Creating training environments where users practise multi-limb coordination for future wearable robotic applications.
Team Members
Related Publications
Juggling Extra Limbs: Identifying Control Strategies for Supernumerary Multi-Arms in Virtual Reality
H Zhou, T Kip, Y Dong, A Bianchi, Z Sarsenbayeva, A Withana
Project Details
Timeline
Started: June 1, 2024
External Collaborators
- Tom Kip (University of Sydney)
- Andrea Bianchi (KAIST)