The Mission Execution Crew Assistant: Improving human-machine team resilience for long duration missions

Author: Neerincx, M.A. · Lindenberg, J. · Smets, N.J.J.M. · Bos, A. · Breebaart, L. · Grant, T. · Olmedo-Soler, A. · Brauer, U. · Wolff, M.
Type: article
Date: 2008
Institution: TNO Defensie en Veiligheid · DenV
Source: 59th International Astronautical Congress 2008, IAC 2008, 29 September 2008 through 3 October 2008, Glasgow, 12, 7910-7921
Series: International Astronautical Federation - 59th International Astronautical Congress 2008, IAC 2008
Identifier: 347479
ISBN: 9781615671601
Keywords: Cognitive engineering · Course of action · Evaluation results · Health management · Human factors · Human-in-the-loop evaluation · Human-machine · Learnability · Long duration missions · Mental loads · Mission execution · Resource management · Sensemaking · Simulation-based · Situation awareness · Standard requirements · Support functions · Support systems · Technical demands · User experience · Virtual environments · Diagnosis · Human engineering · Manned space flight · Ontology · Planning · Rational functions · User interfaces · Virtual reality

Abstract

Manned long-duration missions to the Moon and Mars place high operational, human-factors and technical demands on a distributed support system that enhances human-machine teams' capabilities to cope autonomously with unexpected, complex and potentially hazardous situations. Based on a situated Cognitive Engineering (sCE) method, we specified a theoretically and empirically founded Requirements Baseline (RB) for such a system (called the Mission Execution Crew Assistant; MECA) and its rationale, consisting of scenarios and use cases, user experience claims, and core support functions. The MECA system comprises distributed personal ePartners that help the team to assess the situation, to determine a suitable course of action to solve a problem, and to safeguard the astronauts from failures. In addition to standard requirements reviews, we tested and refined the RB via storyboarding and human-in-the-loop evaluations of a simulation-based prototype in a virtual environment with 15 participants. The evaluation results confirmed the claims on effectiveness, efficiency, satisfaction, learnability, situation awareness, trust and emotion. Issues for improvement and further research were identified and prioritized (e.g., acceptance of mental-load and emotion sensing). In general, the sCE method provided a reviewed set of 167 high-level requirements that explicitly refers to the tested scenarios, claims and core support functions on health management, diagnosis, prognosis and prediction, collaboration, resource management, planning, and sense-making. A first version of an ontology for this support was implemented in the prototype and will be used for further ePartner development.