When humans make inferences that go beyond limited, noisy, or ambiguous input data, background knowledge is necessary to generalize. Such inferences are important for designing intelligent artificial agents. Bayesian inference, a statistical method commonly used as a cognitive model of the brain, formalizes the integration of background knowledge, i.e. "priors", with sensory evidence. I research the effect of priors on the sense of agency (SoA) in an artificial agent. In humans, SoA is the subjective experience of control over one's actions and their consequences. In robotics, developing an agent with SoA is a popular challenge and a first step towards an artificial self.

This thesis designs an artificial agent in which the same prior knowledge improves SoA in unambiguous environments and induces incorrect, illusory SoA in noisy, ambiguous environments. First, I provide a comprehensive overview of the role of priors in Bayesian inference and the computational principles of a SoA. Second, I define the "point mass moving rubber hand illusion (mRHI)", a simulation for an artificial agent that simplifies the human mRHI experiment. Third, I use two parameter estimators, ordinary least squares (OLS) and expectation-maximization (EM), to calculate SoA in the point mass mRHI.

I found that SoA requires a prior belief about the causal relationship between outcome and action. In the point mass mRHI, the agent has to identify which of three point masses it can control with its force input. The agent has the prior knowledge that there is a causal relationship between its actions and the states of exactly one point mass. With OLS the agent exhibits both correct and incorrect SoA, but some of the results are not comparable to those of the human mRHI experiment. A likely cause is that this simplest reproduction of the mRHI models SoA as a binary variable and does not use Bayesian inference. The (partially) Bayesian parameter estimator EM calculates SoA as a (continuous) posterior probability and yields results similar to the human mRHI experiment. Comparing an OLS algorithm without a prior to the EM algorithm, I found that in unambiguous environments the Bayesian prior improves the agent's general SoA and its SoA over time, but does not improve the SoA for the point mass that the agent actually controls. In ambiguous environments, the prior generally biases the agent towards a SoA over the incorrect mass. However, in a noisy but slightly less ambiguous environment, the prior improves the agent's SoA. In short, the prior improves a SoA when the agent needs to make inferences that go beyond the data, but can also induce an incorrect SoA in a noisy and ambiguous environment.
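The attribution problem above can be illustrated with a minimal sketch. This is not the thesis's actual simulation or estimators; the dynamics, noise levels, and variable names are illustrative assumptions. One mass's velocity is driven by the agent's force, the others drift randomly. A per-mass OLS regression of velocity change on force gives a binary SoA (best-fitting mass), while weighting each mass's residual likelihood by a uniform prior over "which mass do I control" gives a continuous, posterior-style SoA in the spirit of the EM estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, noise = 200, 0.1, 0.1
forces = rng.normal(size=T)        # agent's force input at each step

# Three point masses: mass 0 responds to the force, the others drift randomly.
true_controlled = 0
vel = np.zeros((3, T))
for t in range(1, T):
    for k in range(3):
        drive = forces[t] if k == true_controlled else rng.normal()
        vel[k, t] = vel[k, t - 1] + dt * drive + noise * rng.normal()

def ols_fit(v):
    """Regress a mass's velocity change on the applied force (OLS)."""
    dv = np.diff(v) / dt
    f = forces[1:]
    beta = (f @ dv) / (f @ f)      # least-squares gain estimate
    return beta, np.var(dv - beta * f)

gains, resid_vars = zip(*(ols_fit(vel[k]) for k in range(3)))

# Binary SoA (OLS-style): attribute control to the best-fitting mass.
soa_binary = int(np.argmin(resid_vars))

# Continuous SoA: Gaussian residual log-likelihood per mass, combined with a
# prior belief that exactly one mass is controlled (uniform here), normalized
# into a posterior probability of control over each mass.
log_lik = np.array([-0.5 * (T - 1) * np.log(v) for v in resid_vars])
prior = np.ones(3) / 3
post = np.exp(log_lik - log_lik.max()) * prior
post /= post.sum()
```

Increasing `noise` or making the uncontrolled masses correlate with the force makes the environment more ambiguous, flattening the posterior and allowing the prior to pull SoA towards the wrong mass, which is the regime the abstract describes.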