You know the way Google search will sometimes finish your sentences for you? Or, when you’re typing an email, there’s some ghostly predictive text that floats just in front of your cursor? Well, there’s a new kid on the block that makes these gadgets look like toy tricks out of a Christmas cracker. Give it a sentence of Jane Austen and it will finish the paragraph in the same style. Give it a philosophical conjecture and it will fill the page with near-coherent academic ruminations. GPT-3 is essentially just predicting what words should come next, following on from the prompt it’s been given. The machine does so well partly because it’s been trained on an unimaginably huge database of samples of English (reputedly, $13 million worth of training). A similar machine, DeepMind’s AlphaFold, can predict, from a sequence of amino acids, how the resulting protein will fold, short-cutting months of lab work and in some cases years of human ingenuity. But what is going on inside the machine? What is it keeping track of inside its huge neural network “brain”?
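The prediction task itself is surprisingly simple to state. As a minimal sketch (emphatically not how GPT-3 works internally, which uses a vast neural network rather than word counts), here is next-word prediction by counting, using the blog’s own Jane Austen example; the function name is mine:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows each word in a
# training text, then predict the most frequent continuation.
corpus = ("it is a truth universally acknowledged that a single man "
          "in possession of a good fortune must be in want of a wife").split()

following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the word most often seen after `word` in the corpus.
    return following[word].most_common(1)[0][0]

print(predict_next("of"))  # "of" is followed by "a" every time it appears
```

GPT-3 replaces these raw counts with a learned function over enormous amounts of text, but the objective is the same: given what came before, guess what comes next.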
We face the same question, of course, when we look at the human brain—a seemingly inscrutable organ of even greater complexity. Yet neuroscience is beginning to make sense of what’s going on inside: of patterns of activity distributed across millions of neurons, flowing into other patterns; coupling and modulating; unfolding in a way that opens the organism to the world outside, projected through its inner space of needs and drives, bathed in the wash of past experience, reaching out to control and modify that world to its own agenda. We can now see what some of these patterns of activity are, and we have an inkling of what they are doing, of how they track the environment, and subserve behaviour.
Neuroscientists are recording these patterns with new techniques. But what do the patterns mean? How should they be understood? Neuroscience is increasingly tackling these questions by asking what the activation patterns represent. For example, “representational similarity analysis” (RSA) is used to ask whether the human brain processes images in the same way as the brain of the macaque monkey. Surprisingly, similar techniques can be used to compare the human brain to an AI computer system trained to perform the same task. These AIs are deep neural networks, cousins of the seemingly unfathomable GPT-3 and AlphaFold brains we met at the start. Astoundingly, it turns out that sometimes the deep neural network is processing images in roughly the same way as the human brain. In a general sense, both are performing the same computations en route to working out that they are looking at a picture of two cats on a sofa. In other cases, we see the brain using a hexagonal code to represent physical space—and more abstract conceptual spaces—and to reason about them.
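The core logic of representational similarity analysis can be sketched in a few lines. The idea is to summarise each system (brain region or network layer) by its representational dissimilarity matrix, the pairwise distances between its activity patterns for a set of stimuli, and then compare those matrices across systems. This is a minimal illustration of that logic, not any particular lab’s pipeline; the function names and toy data are mine:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(activations):
    # activations: (n_stimuli, n_units) matrix of activity patterns.
    # Returns the representational dissimilarity matrix (condensed form):
    # the Euclidean distance between every pair of stimulus patterns.
    return pdist(activations, metric="euclidean")

def rsa_similarity(acts_a, acts_b):
    # Spearman correlation between the two systems' RDMs. A high value
    # means both systems carve up the stimuli with the same geometry,
    # even though their "neurons" are entirely different.
    rho, _ = spearmanr(rdm(acts_a), rdm(acts_b))
    return rho

# Toy example: a second system with the same representational geometry
# as the first, expressed in a rotated basis of units.
rng = np.random.default_rng(0)
brain = rng.normal(size=(8, 50))                      # 8 stimuli x 50 "neurons"
rotation, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal matrix
network = brain @ rotation                            # same geometry, new units

print(round(rsa_similarity(brain, network), 3))       # close to 1.0
```

Because the comparison is between dissimilarity structures rather than raw activations, RSA can relate a 50-neuron recording to a 4096-unit network layer, or a human brain to a macaque's, without any one-to-one mapping between units.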
All of this means that representation has become something of a hot topic in cognitive neuroscience. Representation has always been around, of course, working away in the foundations ever since the “cognitive revolution” showed that we could explain behaviour in terms of internal processing without having to feel embarrassed about intelligent homunculi or ghosts in the machine. What we have now are much better ways to see those representations in the brain and to marry them up with the computational story about how the organism intelligently deals with its environment.
“representation is the crucial link for connecting brain activity with functional, adaptive behaviour”
What we still need is a proper understanding of what representation is—an understanding of how there come to be things in the head which stand in for, and allow creatures to deal with, things in their environment. A once-unconventional idea in contemporary philosophy (originating with Ruth Millikan, David Papineau, and Karen Neander) is that this is intimately tied up with function—biological functions based on natural selection. Although a connection to function may have always been implicit in some scientific practice, it is now being recognised explicitly (Hunt et al. 2012, Richards et al. 2019). For example, in a recent manifesto for the role of representation in computational cognitive neuroscience, Kriegeskorte and Diedrichsen (2019) argue that representation is the crucial link for connecting brain activity with functional, adaptive behaviour. Meanwhile in philosophy, appealing to natural teleology to explain representation has moved into the mainstream, being embraced by researchers from diverse disciplinary starting points (David Haig, Robert Williams), alongside recent landmark contributions from early advocates (Karen Neander, Ruth Millikan).
The devil is in the details, of course, but it is beginning to look as if we have the main ingredients in place: internal states that stand in useful relations to things in the environment, internal processing which relies on those relations, and the functions that this processing serves for the organism. Just as the cognitive sciences are coming to lean on representation ever more heavily, it seems that we now have the resources to understand this foundational notion.