We are interested in what it means for a system to be an agent. Suppose we observe a physical system interacting with its environment. When can we view that system as an agent that is using its inputs to update its knowledge about the world and choosing actions accordingly?
To some extent we have a choice about this, but to some extent we do not. Some systems can be consistently interpreted as agents, perhaps in multiple ways, while others cannot. We will systematically map the relationship between the properties of a physical system and the ways in which it can be interpreted as an agent. This will help us not only to understand the workings of the brain and our role as agents in the physical world, but also to build better artificial agents.
We will study this mathematically through what we term an ‘interpretation map’: a function that takes the state of a system and interprets it in terms of an agent’s beliefs and goals, expressed as Bayesian priors. In doing so we address the Big Question, “What is the relationship between mind and matter?”
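To make the idea of an interpretation map concrete, here is a minimal toy sketch. All names and numbers are illustrative assumptions, not part of the proposal: a physical system whose state is simply a tally of past observations, and an interpretation map that reads each state as a Bayesian posterior over two world hypotheses. The consistency condition being checked is that interpreting the system's next state gives the same belief as Bayes-updating the interpretation of its current state.

```python
import numpy as np

HYPOTHESES = ["H0", "H1"]
# Assumed likelihood of each observation (0 or 1) under each hypothesis.
LIKELIHOOD = np.array([[0.8, 0.2],   # P(obs | H0)
                       [0.3, 0.7]])  # P(obs | H1)
PRIOR = np.array([0.5, 0.5])

def step(state, obs):
    """Physical dynamics: the system just counts each kind of observation."""
    n0, n1 = state
    return (n0 + 1, n1) if obs == 0 else (n0, n1 + 1)

def interpret(state):
    """Interpretation map: read a physical state as a posterior over hypotheses."""
    n0, n1 = state
    post = PRIOR * LIKELIHOOD[:, 0] ** n0 * LIKELIHOOD[:, 1] ** n1
    return post / post.sum()

def bayes_update(belief, obs):
    """Standard Bayesian update of a belief on seeing `obs`."""
    post = belief * LIKELIHOOD[:, obs]
    return post / post.sum()

# Consistency check: the interpretation map commutes with the dynamics.
state = (0, 0)
for obs in [0, 1, 1, 0, 1]:
    updated = bayes_update(interpret(state), obs)
    state = step(state, obs)
    assert np.allclose(interpret(state), updated)
```

In this toy case the system can be consistently interpreted as a Bayesian reasoner; a system whose dynamics discarded the tallies would admit no such map, illustrating the distinction drawn above.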
Brains and most machine learning agents alike are built up from smaller components. This leads us to the question, “How can complex agents be built up from simpler components?” We will develop a compositional theory of physical agents, which will contribute to the foundations of machine learning and neuroscience, and lead to applications in artificial intelligence.
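One simple flavour of composition can be sketched as follows. This is an illustrative assumption, not the proposal's own theory: two component agents hold posteriors over the same hypotheses, derived from a shared prior and conditionally independent evidence, and the composite agent pools them by the standard rule of multiplying the posteriors and dividing out the shared prior.

```python
import numpy as np

PRIOR = np.array([0.5, 0.5])  # shared prior over two hypotheses (assumed)

def compose(belief_a, belief_b, prior=PRIOR):
    """Pool two posteriors built from `prior` and independent evidence."""
    pooled = belief_a * belief_b / prior
    return pooled / pooled.sum()

# Each component agent has updated the shared prior on its own sensor data
# (numbers are illustrative).
belief_a = np.array([0.8, 0.2])
belief_b = np.array([0.6, 0.4])
composite = compose(belief_a, belief_b)
```

Because both components favour the first hypothesis, the composite belief is more confident in it than either component alone, which is one desideratum a compositional theory of agents would need to make precise.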
Our project will contribute to answering the question, “Why do goal-directed agents exist in the physical world?” The concept of agency is relevant in fields as diverse as physics, biology, neuroscience, artificial life, artificial intelligence, and philosophy. By making the concept precise and, in principle, testable, we will provide a common language and a body of mathematical results that will allow greater transfer of knowledge between these fields.