Content Naturalised: Bringing Meaning into the Physical World

What do thoughts mean? When you believe that the sky is blue, what gives that belief its specific content—its aboutness, its grip on the world, its internal coherence? How does your mind reach out into reality and latch onto something so seemingly external as the sky?


These are not idle questions. In philosophy of mind and cognitive science, they touch on a foundational issue: How can mental content be part of a natural, physical world? How can meaning arise from neurons, chemistry, and computation?


To ask this is to face the challenge of naturalising content—of explaining mental meaning in terms that are consistent with the scientific picture of the universe. It’s a bold ambition: to show that thoughts, like molecules and stars, belong to the natural order—not as magical exceptions, but as fully grounded phenomena.


In this blog post, we’ll explore what it means to naturalise content, why this project matters, and how it continues to shape modern theories of the mind.





The Problem of Intentionality



Philosophers use the term intentionality to describe the aboutness of mental states. Beliefs, desires, hopes, and fears are all about something. They have content—they refer, represent, or point toward objects, states of affairs, or possibilities.


But this kind of content raises a deep puzzle:


  • It’s semantic—it has meaning.
  • It’s normative—it can be right or wrong (you can believe falsely).
  • It’s perspectival—it’s tied to how things seem to the thinker.



These features don’t sit easily with the descriptive, mechanistic language of science. Electrons don’t represent anything. Cells don’t believe. So how can a brain—a physical system—produce states with meaning?


This is the heart of the naturalisation project: to explain how semantic content can arise from a world governed by physical laws.





Why Naturalise Content?



Naturalising content matters because:


  • It grounds psychology and neuroscience in a coherent ontology.
  • It helps us distinguish real mental states from mere patterns of behaviour.
  • It informs debates about AI, consciousness, and representation.
  • It enables us to explain mental error: how someone can misrepresent the world.



If content can be naturalised, then mental states can be part of science—not as mysterious add-ons, but as natural phenomena.





Approaches to Naturalising Content




1. Causal Theories of Content



These theories (e.g., Dretske, Stampe) propose that mental content is determined by causal relations between internal states and features of the world.


For example, your belief that there is a tree outside is about a tree because a tree caused your visual system to form that belief.


Pros: Links content directly to the physical world.


Challenges:


  • What about misrepresentation? If you believe there’s a tree but you’re hallucinating, what caused that belief? And if content is simply whatever causes the state, it becomes hard to see how the belief could ever be false (see the toy sketch after this list).
  • Pure causality doesn’t seem to capture normativity—the idea that beliefs can be true or false, right or wrong.
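To make the misrepresentation worry concrete, here is a deliberately simple toy model in Python. It is my own illustration, not Dretske’s or Stampe’s actual account: the `Stimulus` type and `detector_fires` function are invented for the example.

```python
# Toy sketch (illustrative only, not any philosopher's formal machinery):
# a detector whose "content" is fixed purely by what causes it to fire.

from dataclasses import dataclass

@dataclass
class Stimulus:
    looks_tree_shaped: bool  # the feature the detector actually responds to
    is_tree: bool            # whether a real tree is out there

def detector_fires(stimulus: Stimulus) -> bool:
    # The internal state is triggered by tree-shaped input, whatever its source.
    return stimulus.looks_tree_shaped

real_tree = Stimulus(looks_tree_shaped=True, is_tree=True)
stage_prop = Stimulus(looks_tree_shaped=True, is_tree=False)  # a convincing fake

for s in (real_tree, stage_prop):
    print(f"fired={detector_fires(s)}, actually a tree={s.is_tree}")

# Both stimuli produce the same internal state. If content is just "whatever
# causes the firing", the second case cannot be classed as a misrepresentation.
```

The point of the sketch is only this: from inside the detector, the tree and the prop are indistinguishable, so a bare causal link gives no purchase on error.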






2. Teleosemantic Theories



These theories (Millikan, Papineau) argue that mental content is tied to the biological function of mental states. A state’s content is what the mechanism that produces it was historically selected to track or respond to.


For example, a frog’s tongue flicks at small dark dots because the mechanism evolved to respond to flies. Even when it misfires (e.g., at pebbles), that mechanism keeps its teleological function, a kind of natural “purpose,” and it is this function that makes the misfire count as an error.
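A comparable toy sketch, again purely illustrative and not drawn from Millikan or Papineau (the `Sighting` type and `evaluate` function are invented here), shows how appealing to a mechanism’s selected function changes the verdict: the flick “says” fly, so a flick at a pebble can now be classified as an error.

```python
# Toy sketch: what the state represents is fixed by what the producing
# mechanism was selected to detect, not by whatever triggers it on an occasion.

from dataclasses import dataclass

@dataclass
class Sighting:
    small_dark_moving: bool  # what the frog's detector actually responds to
    is_fly: bool             # what the detector was selected to track

def tongue_flicks(sighting: Sighting) -> bool:
    return sighting.small_dark_moving

def evaluate(sighting: Sighting) -> str:
    if not tongue_flicks(sighting):
        return "no response"
    # The flick "says" fly, because detecting flies is the mechanism's function,
    # so flicking at a pebble counts as an error rather than a success.
    return "correct" if sighting.is_fly else "misrepresentation"

print(evaluate(Sighting(small_dark_moving=True, is_fly=True)))   # correct
print(evaluate(Sighting(small_dark_moving=True, is_fly=False)))  # misrepresentation
```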


Pros: Explains normativity in natural terms—based on evolutionary success.


Challenges:


  • May struggle to explain novel, abstract thoughts (e.g., mathematics).
  • Not all mental content seems tied to biological function (e.g., beliefs about fictional characters).






3. Inferential Role Semantics



Here, content is defined by a concept’s role in reasoning. A belief has the content it does because of how it connects to other beliefs, inferences, and actions.


For instance, the belief that “fire is hot” is tied to expectations (fire will burn), behaviours (avoid touching it), and inferences (if there’s smoke, there might be fire).
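The same “fire is hot” example can be sketched as a toy inference map. This is my own illustration with made-up links, not a published inferential-role model: the belief’s content is read off from what it licenses the thinker to infer and do.

```python
# Toy sketch of the inferential-role idea: a belief's content is characterised
# by the other beliefs and actions it connects to. The links below are invented.

inferential_role = {
    "fire is present": ["fire is hot", "there may be smoke"],
    "fire is hot": ["touching it will burn", "keep a safe distance"],
    "there is smoke": ["fire may be present"],
}

def consequences(belief: str, depth: int = 2) -> set[str]:
    """Follow the inference links to see what a belief commits the thinker to."""
    reached: set[str] = set()
    frontier = {belief}
    for _ in range(depth):
        frontier = {c for b in frontier for c in inferential_role.get(b, [])}
        reached |= frontier
    return reached

# On this view, two thinkers share the content "fire is hot" to the extent
# that the belief plays the same role in their reasoning.
print(consequences("fire is present"))
```

Notice that nothing in the map mentions the world outside the thinker, which is exactly the “narrowness” worry raised under Challenges below.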


Pros: Emphasises the internal structure of mental life—how ideas fit into reasoning.


Challenges:


  • Risks making content too “narrow”, fixed only by connections inside the thinker’s head.
  • May ignore the world-directedness of thought (what beliefs are about).






4. Hybrid and Ecological Models



Some thinkers combine internal and external views: content is both a result of functional roles and anchored in interaction with the world.


The mind, in this view, is not a passive container of meaning, but an active participant in a cognitive ecology. Meaning arises from dynamic relations between agents and their environments.


Pros: Captures the embodied, embedded nature of cognition.


Challenges: Still developing as a framework; less precise than more formal theories.





The Deeper Implication: Meaning Is Not Magic



At first, meaning seems irreducibly mysterious—a quality that science cannot touch. But the project of naturalising content suggests otherwise.


It tells us that:


  • Mental content is not supernatural.
  • Representational systems can evolve.
  • Error, reference, and perspective can all be understood within a natural framework.



This doesn’t mean we have all the answers. But it means that the old gap—between matter and meaning—is narrower than it once seemed.





Final Thoughts: Bridging the Inner and the Outer



To naturalise content is to take a stand: that minds belong in the world, not outside it. That belief, thought, and meaning can be studied alongside biology and physics—not reduced to them, but woven into the fabric of the natural order.


It is a way of saying: your mind is real, your meanings matter, and the science of tomorrow may one day explain why and how they do.


Because to understand the mind is not just to map the brain.


It is to trace the path from matter to meaning—and to discover that perhaps, in the end, they were never so far apart.