Is Our Immune System Conscious?
I’m glad this title got your attention — it caught mine, too, when I realised it was possible!
Though I trained as a Child and Adolescent Psychiatrist, I’ve always had an interest in the interface between psychology and philosophy. I was a founder member of (and am still part of) the Philosophy Special Interest Group of the Royal College of Psychiatrists. I had phenomenology (which I describe below) drummed into me as a 1980s Manchester trainee, and still believe in its value, as my phenomenological approach contributed to correcting the under-diagnosis of ADHD in the UK in the late 1990s and early 2000s. Furthermore, I had moved toward infant and perinatal mental health, having spotted that infant psychiatry was, astonishingly, underserved. Here, my phenomenology seemed not to fit with the scaled hierarchy of the mother-child dyad. My attempts to combine these managed to catch Karl Friston’s attention, as he spotted that I was groping my way towards Active Inference, which he invented. What follows is where I’m up to, in what I hope is a digestible form. I’m hoping both to make Active Inference accessible and to show how its application might radically transform how we should think of ourselves.
Consciousness and Phenomenology
Explaining consciousness remains the El Dorado of both psychology (the study of mind or behaviour) and cognitive neuroscience (the study of mental phenomena through neuronal structure and function). Descartes famously described us as being made of two kinds of substances, mind and matter, with consciousness being a defining principle of the former. Whether this distinction is genuine or illusory haunts everyone in the field. Meanwhile, for most of us, our everyday experience continues to assert that we are ghosts in the machinery of our bodies, living somewhere behind our eyes.
However, what if our bodies support multiple kinds of consciousness, and the consciousness we know is only one type? As my example of our immune system shows, I’m not referring to psychological ideas of the “unconscious mind”, “altered states of consciousness” however caused, or rare psychopathological states where people segregate their consciousness into multiple personalities. Instead, I’m claiming that other systems in our Cartesian machines, far from being mere contextual influences on our experiential selves, have their own orders of consciousness, as subjective and inaccessible as the one we’re used to.
The Dimensions of Consciousness
For my proposal to make sense, we need some definition of consciousness that does not tie it irrevocably to brain function, but nonetheless says what we mean by it. I’ll therefore approach consciousness through what is called phenomenology. It’s a vast subject, but Figure 2 below summarises its essentials. So, we’re considering what it’s like to be conscious, not what kind of substrate is needed to support consciousness. To keep a very complex topic simple, I’m going to assume that self-consciousness is essential to consciousness (we’ll see why below) and that we can describe our everyday experience sufficiently well in three dimensions:
1. Awareness. This is our level of consciousness, from fully awake to comatose.
2. Identity. This is our recognition that we are different from our environment.
3. Subjectivity. This is the world we experience.
The Free Energy Principle and Active Inference
The Free Energy Principle (FEP) and Active Inference (AI) together describe an intellectual apparatus we can use to embody the phenomenological consciousness I’ve just described in a biologically plausible form. As part of computational neuroscience, they are best described mathematically. However, their overall architecture can be summarised in three segments that don’t require complex maths to grasp.
The Markov Blanket of Identity
Let’s start (relatively) simply, with a single-celled organism living in some water. It’s an example of a (hierarchically stacked) Non-Equilibrium Steady State (NESS). To survive, such a system needs to take in the stuff that keeps it going, and to avoid the stuff that will end it. We can describe this mathematically by saying that the boundary separating its internal states from its environment, through which it senses and acts, forms its Markov blanket. So long as it maintains its Markov blanket, the harmful stuff in its environment can’t touch it directly, and it can still get at the stuff it needs. However, in all environments stuff moves around, so our organism needs to adapt to maintain its Markov blanket. Put another way, it identifies itself through its Markov blanket, and if it fails to correctly identify itself, it will end and dissolve into equilibrium. So, before we can identify anything else, we must identify ourselves within our environment.
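For readers who want the formal version, the blanket is usually written as a conditional-independence statement. A minimal sketch (notation varies across the FEP literature) is:

```latex
% Internal states \mu and external states \eta become independent of one another
% once the blanket states b (the sensory and active states) are given:
p(\mu, \eta \mid b) = p(\mu \mid b)\, p(\eta \mid b)
```

In words: everything the inside “knows” about the outside, and vice versa, passes through the blanket.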
Active Inference & Subjectivity
Our organism has a dilemma: how can it know what’s out there without breaching its Markov blanket? If it has an idea of what might be out there, it can try some action and see whether the outcome is as it hopes. Any prediction error can then be corrected by further actions. It’s behaving like an investor: it respires hoping its environment will afford oxygen, or eats hoping that what it ingests is nourishing. While we think of sensation as something passively receptive, it is anything but, with both internal and external sensations being inferred from prediction and error correction. As Figure 4 shows, sensory states are in a feedback loop with internal states, while action states create a similar loop with external ones, enabling iterative error correction. So, we represent our world accurately but indirectly, using subjective representation.
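To make that perception-action loop concrete, here is a deliberately toy sketch in Python. It is not Friston’s formulation; the quantities (a hidden “temperature”, a preferred value, the update rates) are invented purely for illustration. The point is only that the agent never reads the external state directly: it revises a belief from noisy sensations, and acts on the world to reduce the mismatch.

```python
import random

random.seed(1)

external_temperature = 30.0   # hidden external state (never seen directly)
belief = 20.0                 # internal state: the organism's guess about it
preferred = 25.0              # the state the organism "expects" to occupy
learning_rate = 0.1           # how fast perception revises the belief
action_rate = 0.05            # how fast action changes the environment

for step in range(50):
    # Sensory state: a noisy sample crossing the Markov blanket.
    sensation = external_temperature + random.gauss(0.0, 0.5)

    # Perception: shrink prediction error by revising the internal belief.
    prediction_error = sensation - belief
    belief += learning_rate * prediction_error

    # Action: change the world so future sensations sit nearer the preferred state.
    external_temperature -= action_rate * (belief - preferred)

print(f"final belief: {belief:.1f}, final environment: {external_temperature:.1f}")
```

In this toy, perception revises the belief to match sensations while action nudges the environment toward the preferred state; in Active Inference proper, both moves are cast as minimising the same quantity, the free energy described next.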
Free Energy & Well-Being Across Scale
Confusingly, the “free energy” being talked about here is informational free energy. This quantity is high when different outcomes seem similarly probable, and low when a single outcome dominates. Our organism must find the “right” guess about its environment to survive, so it wants to reduce free energy as much as possible. Its success is a measure of its well-being.
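That intuition, that the quantity tracks how spread-out the probabilities are, can be illustrated with a toy entropy calculation (informational free energy proper also includes a term for how well the guess fits the evidence, which this sketch leaves out):

```python
import math

def entropy(probabilities):
    """Shannon entropy in nats: large when outcomes are equally likely,
    small when a single outcome dominates."""
    return -sum(p * math.log(p) for p in probabilities if p > 0)

# Two guesses about four possible states of the environment.
uncertain_guess = [0.25, 0.25, 0.25, 0.25]   # no outcome favoured
confident_guess = [0.94, 0.02, 0.02, 0.02]   # one outcome dominates

print(entropy(uncertain_guess))   # ~1.39 nats
print(entropy(confident_guess))   # ~0.29 nats
```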
Fortunately, it can model informational free energy thermodynamically, so it can embody well-being in its metabolism. However, environments differ dramatically with scale, so estimates of well-being must vary with them. Consider our blood. What looks like a thick red liquid to us is a fluid home to a whole range of differently specialised cells, whose well-being must be nurtured to keep us healthy. We are to those cells as our world is to us, but measuring their well-being compared to ours means using tools as different as a telescope and a ruler. While both can measure length, their outputs are so different that we need qualitatively different dimensions to capture them.
Putting all this together, we have ended up with a subjective, dimensional space in which estimates of our well-being are represented in terms of our world at various scales, obtained from our guesses about that world and our actions upon it. We’ve thereby described two of our conscious dimensions, subjectivity and identity, since this represented world is both experienced from our point of view and marked out as distinctly ours. Let’s regard degree of awareness as a function of scale, as we have room for more complexity as scale increases.
What is it like to be an immune system?
Like our brains, our immune systems can be described by FEP/AI. From what’s just been argued, they will also express our dimensions of consciousness. Unfortunately, I can no more prove that the immune system is conscious than I can prove that you are not an unconscious zombie, but we can look for suggestive signs, such as intentionality and choice. Intentionality and teleology are well recognised in the immune system; indeed, philosophy has focused on trying to remove any implication of consciousness from them. That may be mistaken.
Unsurprisingly, studies of the immune system’s impact on choices such as mate selection do not assess its agency, but it does influence such choices, and its responsiveness to social environments is well established. However, as our consciousness comes from a different system, immunological consciousness is inconceivable to us, so we will not notice it even when it is present. Despite this, a common need to ensure our bodies’ well-being will coordinate the two systems’ actions.
Does having a conscious immune system matter?
It is tempting, particularly from a conventionally materialist position, to say that immune consciousness is as irrelevant as everyday consciousness; they simply reflect a particular point of view about our mechanistic selves.
However, most of us regard intention as an important component of causation, and FEP/AI embodies it in our biology. If so, our “psychological” accounts of our behaviour are incomplete, as they only address the consequences of being a single type of agent, when it may be that we embody multiple, hierarchically arranged agents, working in cooperation to ensure our well-being and survival across different scales. Maybe it is time for behavioural science to move beyond the brain. Unlike Bataille and his colleagues, we no longer need to abandon reason to do so.