Family of Views in the Philosophy of Mind
This isn't about how to build a better toaster, so pay attention. We're discussing the Computational Theory of Mind (CTM), often just called computationalism. It's a rather persistent notion that your mind, this messy collection of thoughts and feelings, is essentially an information processing system. And that what you call cognition—your thinking, your remembering, your general ability to not walk into walls—is just a form of computation. It’s closely tied to functionalism, which, frankly, is just a more polite way of saying that what matters is what mental states do, not what they're made of. As if the material doesn't have a say in the matter. [1]
Overview
It started, as these things often do, with some rather ambitious fellows. Warren McCulloch and Walter Pitts were already dabbling in the idea back in 1943, suggesting that the whirring and clicking of neurons somehow explained cognition. [2] Then came Peter Putnam and Robert W. Fuller in 1964, adding their own flavor to the stew. [3] [4] But the modern iteration, the one that still manages to annoy people, was largely the work of Hilary Putnam in the early 1960s. He had a rather bright, if misguided, student, Jerry Fodor, who ran with it for decades. Of course, by the 1990s, even Putnam himself, along with the ever-skeptical John Searle, decided to throw some shade on the whole enterprise. [5] [6] [7]
The core idea, the one that keeps resurfacing like a bad penny, is that your brain, that squishy mass of tissue, is a computational system. It's physically realized by neural activity, which sounds impressive but just means it's in your brain. The variation among these theories hinges on how "computation" is even defined. Usually it's cashed out in terms of Turing machines—abstract devices that shuffle symbols around according to rules, depending on their internal state. The critical, and frankly convenient, aspect here is substrate independence: the license to ignore the messy physical details. [7] Whether the computation happens on silicon chips or in biological goo, it's the process, the manipulation of inputs and states according to rules, that supposedly matters. CTM doesn't just say the mind is like a computer program; it claims the mind is a computational system. [7]
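To see what "shuffling symbols around according to rules" amounts to, here is a minimal sketch of a Turing-style machine. The alphabet, transition table, and the unary-increment task are toy inventions of mine, not anything from the literature; the point is only that the machine is exhausted by its rules over symbols and internal states, with no mention of what physically realizes the tape.

```python
# A toy Turing-style machine: its behavior is fixed entirely by the
# transition table, not by what physically realizes the tape or states.
# (Hypothetical example: a unary incrementer that appends one '1'.)

# transitions: (state, symbol_read) -> (new_state, symbol_to_write, head_move)
TRANSITIONS = {
    ("scan", "1"): ("scan", "1", +1),   # skip over the existing 1s
    ("scan", "_"): ("done", "1", 0),    # first blank: write a 1 and halt
}

def run(tape, state="scan", head=0):
    """Apply the transition rules until the machine halts."""
    tape = dict(enumerate(tape))        # sparse tape; '_' means blank
    while state != "done":
        symbol = tape.get(head, "_")
        state, write, move = TRANSITIONS[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

print(run("111"))  # -> "1111": three became four by rule-following alone
```

Run the same table on relays, neurons, or marbles and, on the CTM picture, nothing of consequence changes; that indifference to substrate is the point.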
This whole affair almost always necessitates mental representation. You can't compute an actual, tangible object, can you? No, you need symbols, representations. The computer, or in this case the mind, interprets and represents things, then computes over those representations. It's a neat trick that ties CTM directly to the Representational Theory of Mind, which, naturally, focuses on these symbols. It's supposed to explain systematicity (if you can think "John loves Mary," you can think "Mary loves John") and productivity (the capacity to assemble indefinitely many novel thoughts from finite parts). [7] And of course, Fodor insisted on linking it all to his own language of thought hypothesis, a whole other can of worms involving semantics and complex representations.
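Since those two terms carry real weight in the argument, here is a toy sketch, entirely my own and nothing like Fodor's actual formalism: thoughts as nested symbol structures. Recombining parts gives you systematicity; recursion gives you productivity.

```python
# A hypothetical mini "language of thought" (my own toy, not Fodor's):
# thoughts are nested tuples built from a finite stock of parts.

def loves(x, y):
    return ("LOVES", x, y)

def believes(x, p):
    return ("BELIEVES", x, p)

# Systematicity: the capacity to represent one thought automatically
# brings the capacity to represent its recombinations.
t1 = loves("john", "mary")
t2 = loves("mary", "john")   # same parts, systematically related thought

# Productivity: recursion over the same finite rules yields indefinitely
# many novel, ever-larger thoughts.
t3 = believes("john", believes("mary", loves("john", "mary")))

print(t1, t2, t3, sep="\n")
```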
More recently, some have tried to draw a sharper line between the mind and cognition. This leads to the Computational Theory of Cognition (CTC), building on McCulloch and Pitts. CTC says neural computations explain cognition. Simple enough. But CTM goes further, asserting that even phenomenal consciousness, those elusive qualia, is computational. So CTM implies CTC, but CTC doesn't imply CTM. It leaves room for some aspects of the mind to be… less computational. A convenient escape hatch, perhaps.
"Computer Metaphor"
Let's be clear: CTM isn't just saying your mind is like a computer. That's the "computer metaphor," a rather pedestrian analogy. CTM is a much bolder, and I'd argue more arrogant, claim. It's not about software running on hardware. It's the assertion that a computational simulation of a mind is sufficient for the actual presence of a mind: that a mind can, in principle, be simulated, and that the simulation would itself be a mind.
And "computational system" isn't a nod to your sleek laptop. It’s about any system that manipulates symbols according to rules, a step-by-step process. Alan Turing laid out the theoretical groundwork for this with his Turing machine. [ citation needed ]
Criticism
Oh, there's plenty of criticism. Enough to fill a rather dismal library.
One of the earliest jabs came from John Searle and his infamous Chinese room thought experiment. He set out to dismantle the idea that artificial intelligence could possess intentionality or genuine understanding, and by extension that such systems could serve as models of the human mind. Imagine a man in a room, no knowledge of Chinese whatsoever, just a massive rulebook. Symbols go in, he follows the rules, different symbols come out. To an outsider, it looks like a conversation. Searle's point? The man inside doesn't understand a damn thing. He's just a symbol manipulator, devoid of genuine comprehension. This was a direct assault on the notion that computation equals mind.
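A toy rendering of the setup, my own sketch with a two-entry rulebook standing in for Searle's massive one: the room below converses by bare table lookup, and nothing in it understands anything.

```python
# The man in the room, as code: symbols in, rulebook lookup, symbols out.
# (A deliberately tiny stand-in for Searle's enormous rulebook.)

RULEBOOK = {
    "你好吗?": "我很好, 谢谢.",
    "今天天气如何?": "天气很好.",
}

def room(symbols_in):
    # No parsing, no meaning, no world: just a lookup.
    return RULEBOOK.get(symbols_in, "请再说一遍.")

print(room("你好吗?"))  # a fluent-looking reply, zero understanding
```

Searle's claim is that scaling the rulebook up changes nothing: syntax, however elaborate, never adds up to semantics.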
Searle also had a field day questioning what even constitutes a computation:
"The wall behind my back is right now implementing the WordStar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of WordStar. But if the wall is implementing WordStar, if it is a big enough wall it is implementing any program, including any program implemented in the brain." [10]
These are what you might call "insufficiency objections." They argue that computation, by itself, just isn't enough to capture certain mental capacities. Arguments about qualia, like Frank Jackson's knowledge argument, fall into this category. They target physicalist views broadly, but CTM is certainly within their crosshairs.
Then there are the criticisms aimed squarely at CTM. Even Jerry Fodor, a staunch proponent, admitted the theory was far from a complete explanation. He pointed out that much of our cognition is abductive and holistic, meaning it’s influenced by a vast web of beliefs. This leads to the infamous frame problem: how does a computational system know which beliefs are relevant and which aren't, especially when relevance is context-dependent, not just a local, syntactic property? [11]
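To feel why this bites computationally, consider a deliberately naive sketch; the belief base and relevance test are hypothetical inventions of mine. Nothing in the symbols themselves marks which beliefs a new fact touches, so the system has to scan all of them.

```python
# Toy frame problem: which beliefs does one new fact make stale?
# Relevance isn't written on the symbols, so the naive system checks
# everything. (Belief base and relevance test are invented examples.)

beliefs = {
    "the cat is on the mat": True,
    "the mat is in the kitchen": True,
    "paris is in france": True,
    # ...imagine millions more
}

def update(beliefs, new_fact, is_relevant):
    """Re-examine every belief; cost grows with the whole belief base."""
    checked = 0
    for b in list(beliefs):
        checked += 1                  # no local, syntactic filter exists
        if is_relevant(new_fact, b):
            beliefs[b] = False        # mark for revision
    return checked

n = update(beliefs, "the cat moved", lambda fact, b: "cat" in b)
print(f"checked {n} beliefs to absorb one fact")
```

Fodor's point is that no merely local, syntactic property of the symbols tells the system which of those checks it could have skipped.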
And let's not forget Hilary Putnam, the very architect of modern computationalism, turned critic. He questioned what even constitutes a computation, echoing Searle's concerns. His argument, in essence, was that every ordinary open physical system can be described as implementing every abstract finite automaton. [12] So the question of whether the mind can implement computational states becomes rather moot if everything does. Computationalists have been scrambling to define criteria for what counts as a genuine "implementation" ever since. [13] [14] [15]
Roger Penrose also weighed in, suggesting that human understanding of mathematics, particularly its non-algorithmic aspects, might be beyond the reach of standard Turing-complete computers. He invoked Gödel's incompleteness theorem to support this. The consensus? He was probably wrong. [16] [17]
Pancomputationalism
This whole mess leads to a rather unsettling question: what exactly does it take for a physical system to perform a computation? A simple answer involves mapping: a system performs computation C just in case there is a mapping from its physical states onto the states defined by the abstract computation C. [18] [12]
But Putnam and Searle, bless their critical hearts, argued this mapping is far too simplistic. It trivializes the whole idea. As Putnam put it, "everything is a Probabilistic Automaton under some Description." [20] Even a rock, a wall, a bucket of water – they're all computing systems if you squint hard enough and choose the right description. Gualtiero Piccinini has helpfully cataloged various flavors of this "pancomputationalism." [21]
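The trivialization is easy to exhibit. Under the simple mapping account, "implementing" an automaton takes nothing more than a correspondence between physical states and computational states, and such a correspondence can always be gerrymandered after the fact. The sketch below is my own construction in the spirit of Putnam's argument, not his formal proof: it maps an arbitrary sequence of distinct "wall states" onto any run of any finite automaton you please.

```python
# Putnam-style trivialization of the simple mapping account (a toy
# construction of mine): any sequence of distinct physical states can be
# mapped, state by state, onto any run of any finite automaton.

# Arbitrary, distinct "physical states" of a wall over five instants:
wall_states = ["w0", "w1", "w2", "w3", "w4"]

# Any automaton trajectory we like, say a two-state parity machine's:
desired_run = ["even", "odd", "even", "odd", "even"]

# Gerrymandered interpretation: pair them off. Done - under this
# description the wall "computes" parity; under others, anything else.
interpretation = dict(zip(wall_states, desired_run))

print([interpretation[w] for w in wall_states])  # the wall's "computation"
```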
To counter this trivialization, philosophers have proposed more restrictive accounts of what constitutes a computational system. These often involve causal, semantic, syntactic, or mechanistic criteria. [22] The mechanistic account, for instance, was developed by Gualtiero Piccinini in 2007, attempting to pin down what makes a physical system truly computational. [23]
Notable Theorists
Since you're here, let's run through some names.
- Daniel Dennett proposed the multiple drafts model. Consciousness, he argues, isn't some singular event. It's a messy, distributed process, a blur across space and time in the brain. Consciousness is the computation; there's no ghostly observer stepping in.
- Jerry Fodor, as mentioned, saw mental states as relationships between individuals and mental representations. He championed the language of thought (LOT), believing it wasn't just a useful metaphor but was actually encoded in the brain. His thinking evolved, naturally, but his foundational work on computation and representation is significant. [24] [25]
- David Marr offered a three-tiered approach: the computational level (what is computed and why), the algorithmic level (how, via which representations and procedures), and the implementational level (the physical substrate). A worked example follows this list. [26]
- Ulric Neisser, who coined the term "cognitive psychology," viewed minds as dynamic information processors, their operations amenable to computational description.
- Steven Pinker, in his quest to make cognitive science accessible, popularized CTM in books like How the Mind Works. He sees language, for instance, as an evolved, innate capacity.
- Hilary Putnam, despite his later criticisms, initially proposed functionalism as a way to define consciousness based on computation, irrespective of the substrate. [27]
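Marr's three levels are easiest to see on a deliberately humdrum task. The sketch below is my own illustration, not Marr's (his worked cases were in vision): the computational-level specification stays fixed while the algorithmic and implementational levels vary independently beneath it.

```python
# Marr's levels on a mundane task (my own illustration, not Marr's):
#
# 1. Computational level - WHAT is computed and WHY:
#    given a sequence, return its elements in ascending order.
#
# 2. Algorithmic level - HOW, via which representations and procedures:
#    two different algorithms compute the very same function.

def insertion_sort(xs):
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def merge_sort(xs):
    if len(xs) <= 1:
        return list(xs)
    mid = len(xs) // 2
    left, right = merge_sort(xs[:mid]), merge_sort(xs[mid:])
    merged = []
    while left and right:
        merged.append(left.pop(0) if left[0] <= right[0] else right.pop(0))
    return merged + left + right

# 3. Implementational level - the physical substrate: this run happens in
#    a Python interpreter on silicon, but neurons would do just as well.
data = [3, 1, 4, 1, 5]
assert insertion_sort(data) == merge_sort(data) == [1, 1, 3, 4, 5]
print(merge_sort(data))
```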