Modern robots make decisions in many ways, but rely on their designers to choose which strategies to employ and when. Adopting a perspective of bounded rationality from the cognitive sciences, we develop a definition of decision making for constructivist robots to formulate their own decisions based on their mission-specific values. Our model, extending Kirsch (2019), defines decision making with an iterative algorithm that captures a broad range of possible strategies and problem domains. Briefly, a set of decision alternatives is first assessed against a set of relevant cues. These assessments are then aggregated into a multi-dimensional evaluation of each alternative, from which a preference ordering is created. Finally, based on the problem specification, a set of chosen alternatives is either accepted by a stopping rule or a new iteration is started, updating the sets of alternatives and cues. If no alternatives or cues remain after iterating, then the decision fails. Given this algorithm, we implement a toolbox of decision-making components as modules in a cognitive robot architecture and demonstrate a method of assembling them into complete decision strategies, represented as behavior trees, using an automated theorem prover. For proof of concept, we simulate three example strategies used by a domestic robot performing a search and retrieval task. We discuss new insights into designing and selecting decision-making strategies and make recommendations on how our proof of concept can be improved.
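
To make the iterative loop described above concrete, the following is a minimal sketch in Python, not the paper's implementation. All names (Alternative, Cue, decide, aggregate, stopping_rule, update) are illustrative assumptions, and the multi-dimensional evaluation is collapsed to a scalar for simplicity.

```python
from typing import Callable, Optional

# Illustrative sketch of the iterative decision loop; names are assumptions,
# not identifiers from the paper's toolbox or architecture.

Alternative = str                          # placeholder for a decision alternative
Cue = Callable[[Alternative], float]       # a cue assesses a single alternative


def decide(
    alternatives: set[Alternative],
    cues: list[Cue],
    aggregate: Callable[[list[float]], float],
    stopping_rule: Callable[[list[Alternative]], bool],
    update: Callable[[set[Alternative], list[Cue]], tuple[set[Alternative], list[Cue]]],
    max_iterations: int = 100,
) -> Optional[list[Alternative]]:
    """Iterate until a stopping rule accepts an ordering or no alternatives/cues remain."""
    for _ in range(max_iterations):
        if not alternatives or not cues:
            return None  # decision fails: nothing left to evaluate

        # 1. Assess every alternative with every relevant cue.
        assessments = {a: [cue(a) for cue in cues] for a in alternatives}

        # 2. Aggregate the per-cue assessments and derive a preference
        #    ordering (best alternative first).
        ordering = sorted(
            alternatives,
            key=lambda a: aggregate(assessments[a]),
            reverse=True,
        )

        # 3. Accept the chosen alternatives if the stopping rule is satisfied...
        if stopping_rule(ordering):
            return ordering

        # 4. ...otherwise start a new iteration with updated alternatives and cues.
        alternatives, cues = update(alternatives, cues)

    return None  # treat exhausting the iteration budget as a failed decision
```

In this sketch, a particular strategy is expressed through the supplied functions: for instance, a satisficing-style strategy could accept the ordering as soon as the top alternative exceeds a threshold, while update could prune poorly rated alternatives or introduce additional cues on each pass.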