This thesis explores how an autonomous follow-me vehicle can be designed to deliver clear and trustworthy motion- and signal-based instructions to pilots during taxiing, addressing the operational challenges of clarity, safety, and confidence in complex airport environments. Building on trust theory, external human–machine interface (eHMI) strategies, and real-world case studies, the project develops UsherBot, a Stingray-inspired autonomous guidance vehicle created to bridge the communication gap between pilots and taxiing systems.
Guided by pilot needs, the design is shaped around three core criteria: easy recognition, low cognitive load, and adaptability to all weather conditions. The resulting concept integrates a curved rooftop display optimized for cockpit viewing angles, dynamic ground projection to visualize upcoming movements directly on the taxiway, and animated lighting cues aligned with familiar aviation signaling conventions. Together, these features form a multimodal system that communicates intent intuitively and supports pilots without verbal instructions.
The project contributes a forward-looking vision for autonomous ground mobility, showing how distinctive form language, behaviorally aligned signals, and predictable motion cues can work in synergy to build trust progressively through consistent, positive interactions. In doing so, UsherBot not only improves situational awareness and reduces cognitive workload but also points toward safer, more efficient, and sustainable airport operations in the era of increasing automation.