Warning: imminent, unrestricted discussion of Chappie, its themes, story, and ending.
As it usually goes in science fiction, Chappie (Neill Blomkamp, 2015) revolves around a small network of philosophical problems, scientific specialties and technological goals.
Philosophically, it’s a film about the hard problem of consciousness, proposed by David Chalmers in 1995. As it usually goes in science fiction, the story’s conclusion and the problem’s go hand in hand. That is, during its narrative climax Chappie proposes a particular answer to the existential questions that inspire it: What is conscious experience? How does it relate to the brain’s processes for handling information? And the biggest one: is it possible to answer these questions at all?
Technologically, Chappie’s answer is a bit whimsical. Scientifically, the answer is simple, stubborn, and perfectly in line with my personal position on this. Is it possible to know what consciousness is? Chappie answers: yes.
I’ll do my best to understand why, in reaching that answer, Chappie’s intellectual victory is a triumph not over the villain who wants to see him dead but over his creator and best friend.
The hard problem
I always try to describe the hard problem of consciousness respectfully. It’s an awfully interesting concept, and many thinkers I deeply respect take it very seriously. I tend to think it’s not as important as it presents itself, and that seen a certain way it’s not really a problem, but none of that prevents me from finding it fascinating.
Understanding the problem is useful but not essential for enjoying Chappie or, frankly, for reading the rest of this article. Suffice it to say that many people are convinced that no analysis of the human brain, no matter how comprehensive, can possibly reveal the nature of consciousness. For those interested, I’ll take the liberty of describing the hard problem, as I understand it, as briefly as I can:
The idea is that neuroscience is advancing day by day in its efforts to understand the mechanisms by which the nervous system receives inputs and processes information. Each decade will find us more enlightened about it. Eventually, we’ll have perhaps not a complete but a comprehensive idea of how internal brain mechanisms operate.
We’ll understand how signals from the senses reach the brain and how neurons react, to those signals and to each other; how they store and process data in ways that eventually result in the body’s specific movements, postures and sound emissions, ranging from sudden laughs to annoyed grunts or words, which in turn can be conversations or speeches or voice messages or improvised songs.
Applied to a simpler machine, we can imagine a clock’s internal components taking the finger movement that winds it and turning it into a steady, rhythmic motion of its hands. The gears are pushing each other, and the clock doesn’t know what’s happening or have any experience of the event. If for some reason we knew the clock was perceiving what happens, our understanding of its mechanism wouldn’t help us understand its ability to perceive.
In the future of neuroscience, we could imagine the brain’s inner components in motion, generating behaviors in the human body. We could visualize neurons exercising their capabilities and would understand how this makes that brain’s owner display a certain behavior, but we wouldn’t necessarily understand how she has any experience of the world. We can understand how the brain processes images, for example, but we’ll never know how it generates the experience of what the color green is like.
We know consciousness exists only because we experience it from the inside, and we assume other consciousnesses exist only because other bodies display the same symptoms we do. This is the hard part of the hard problem: not only do we not know what consciousness is, we don’t know what it could be or where to start looking.
If we can imagine the moving gears of a clock showing the time without the clock knowing of its own existence, we can imagine the working neurons of a person who goes through her day and communicates without any internal notion of being in the world. This hypothetical person, with a brain mechanically identical to our own but lacking personal consciousness, is what in this context is called a philosophical zombie. The important thing is that if such people existed, there’d be no conceivable material way to distinguish them from the rest of us, and therefore whatever distinguishes us from them is mysterious.
You don’t have to be religious to admire the hard problem or to think it matters, but it’s easy to see how the proposition suggests that conscious experience must have some immaterial basis, one that doesn’t act according to the laws of nature as we understand them.
The hard problem is a daunting presence in the field of artificial intelligence research, and therefore in the world of science fiction. It suggests we can program robots to act as if they had a personal experience, but that we have no way of verifying they do.
Chappie, then, asks whether its protagonist, the robot named Chappie, whose consciousness was fully programmed by the young Deon Wilson, is truly capable of having an inner life and an experience of the world, or whether he’s just designed to fake it.
The answer it gives us, of course, is affirmative. Chappie has a consciousness, and his experience of the world is as valid as any human being’s. It’s not a particularly original response: twentieth-century fiction is full of thinking, feeling machines.
Common sense is dedicated, among other things, to identifying human experience as unique and irreplicable. Science fiction is dedicated, among other things, to taking down the various manifestations of this particular article of popular wisdom wherever they’re found.
As it usually goes in science fiction, all the main characters have personal positions on the matter, and important plot points contain discussions in which their views are contrasted.
“You’re not data”
A contradictory character in this sense is Deon Wilson, Chappie’s maker. The following exchange occurs around the 80-minute mark, when Chappie is considering installing his consciousness into a new body with a healthy battery:
Chappie: —Deon, this could save me. I need a new body, remember?
Deon: —No, it can’t save you, Chappie. The problem is much greater than your battery.
Deon: —Because you are conscious. You cannot be copied because you’re not data. We don’t know what consciousness is, so we cannot move it.
Chappie: —Chappie can figure it. I can know what it is, and then I can move me.
Deon: —You can’t move it, I’m sorry!
Deon’s position is strange, and to some extent seems forced to push the story forward.
It’s important to note that Deon doesn’t doubt Chappie’s personal experience. Deon believes (knows) Chappie to be alive and aware, but also believes his consciousness can’t be copied from one robot to another.
It isn’t immediately clear how this view holds. Fifteen minutes into the film, the very same Deon successfully finishes compiling the CONSCIOUSNESS.DAT file. Twenty-five minutes in, he’s installing the program in Chappie’s body from his laptop.
It’s clear that what he installed, what he effectively transferred from one robot (his computer) to another, isn’t merely “information” in the narrow sense he later uses to discourage his creation. Chappie initially lacks encyclopedic information about the world; he can’t even speak until he starts mimicking his peers’ English.
In all fairness to Deon, he never says the proposition is absolutely unfeasible. Maybe what’s impossible, in his eyes, is doing it fast enough. In the practical frame of the story, however, both positions are essentially identical: either for lack of time or by logical necessity, Chappie’s plan is doomed to fail.
The easy problem
Marking the beginning of what tends to be called the technological singularity, Chappie, a result of human creativity, manages to be more creative than the intelligence that created him, solving a problem his creator considered impossible.
Using his unmediated access to the internet (which he dubs the sum of all human knowledge) and a neurotransmitter helmet capable of transferring the wearer’s mental instructions to a remote unit, Chappie solves the hard problem.
As it usually goes in science fiction, at least in stories that predict answers to a currently pending question, Chappie’s solution is never explained in detail and we’re offered but glimmers of the bright object.
An initial description uses terms bordering on the mystical, perhaps with the excuse that Chappie’s explaining it to Yolandi, who knows more of superstition than neuroscience:
Consciousness is like energy. This helmet reads energy from you and me. I just need to figure out how to get it out.
It’s hard to imagine someone with a professional neuroscientist’s competence using the term “energy” so gratuitously. It’s never explained in what sense this energy is not “information,” or how it makes sense that, not being information, the helmet can “read” it. But eventually Chappie finds out everything he needs to.
Chappie: —I know what consciousness is. This helmet can read it.
The situation’s positivist thrust is notable. Chappie’s intellectual triumph is the triumph of scientific research, overcoming obstacles traditionally, and mistakenly, relegated to other branches of human inquiry.
In fact, Chappie’s processing power allows him to reach his conclusion in record time, establishing a causal link between research and Progress far more firmly than real, human, slow, bumpy research ever could.
Chappie even gets, suspiciously, answers that would actually require accumulated data from brain scans of experimental subjects he has no access to. You could argue he does have access to the thousands of scans published to date, but it’s clear his findings would require statistics on parameters his contemporaries wouldn’t have even begun to monitor.
Chalmers didn’t describe just the hard problem. The easy problem of consciousness, finding out through scientific research how the nervous system processes information, is also part of his point. In fact, the point is that there are two distinct problems, the easy one and the hard one, where much of today’s neuroscience sees only one; and that neuroscience progresses through the former believing this will solve the latter, which is not possible.
Asking how thoroughly we must understand the brain mechanism so that we know what consciousness is would be as incongruous as asking how thoroughly we must shuffle a deck of cards so that the sun turns green.
Chappie (both the film and the character) brings the problem to the table and then walks right through it, not without some insolence, as if it never existed. Chappie works hard on the easy problem and easily finds a solution for the hard one. Chappie shuffles those cards so badly the sun goes full lettuce.
“A fundamental spiritual problem”
That the main obstacle in Chappie’s scientific endeavor is Deon’s opposition is all the more curious considering the film already has an antagonist, Vincent Moore, to represent opposition to the story’s central thesis. His character introduction, during the film’s opening news segment, is quite explicit about it:
Anderson Cooper: —Before the success of the ubiquitous human-sized police robots, there was a bigger bad boy on the block: the Moose. Vincent Moore is a weapons designer and a former soldier. He has a fundamental spiritual issue with artificial intelligence.
Vincent Moore: —I have a robot that is indestructible. It is operated by a thinking, adaptable, humane, moral human being.
Chappie’s advanced awareness doesn’t even exist at this point, and yet Vincent objects to the comparatively rudimentary intelligence of Deon’s inventions. The reasons behind this opposition are swiftly established: Deon’s success eclipsed his own, and he condemns artificial intelligence on spiritual grounds. Hugh Jackman’s natural accent also indicates that Vincent is Australian, just like David Chalmers.
His hostile demeanor’s origins are also established, somewhat stereotypically: he was in the military, and his main interest, even outside the army, is weapons. Eventually we learn of his fondness for boxing and American football.
Later, pointing at the exposed insides of Chappie’s head, Vincent says:
Your simple A.I. program makes you think you’re real. But you know what’s in here, huh? Nothing. A bunch of wires, man.
Eavesdropping on the above-quoted conversation between robot and maker, perhaps internally facing the possibility that Chappie is indeed alive and conscious, Vincent crosses himself. He never gets to say Deon is playing God, but evidently some mystical assessment of consciousness in its organic expression explains his aversion to a consciousness that was artificially manufactured.
Vincent is a jealous, weapons-loving, God-fearing soldier. His personal brand of Christianity is opposed, what a coincidence, to scientific advances led by his rival in the popularity contest secretly taking place inside his head.
He’s also a hypocrite: he convinced himself that Chappie merely believes himself to be real, never realizing that the fact that Chappie believes anything at all goes against his worldview. He wants to fight crime by creating a killing machine (at no point do we get the impression the Moose can apprehend someone without destroying her in the process). He boasts of the moral superiority of human consciousness, but when he gets to use his own, he tells his creation to perpetrate atrocities no other robot in the film comes close to committing.
Should not and can not, respectively
And yet, again, when Chappie deciphers the secret of consciousness, he’s not triumphing over Vincent’s overt hatred but over his creator’s resigned skepticism.
This is narratively clever: if Vincent believed Chappie couldn’t be saved, his attempts to destroy him would make less sense. It’s also clever that the salvation of Chappie’s artificial intelligence finally depends on the neurotransmitter helmet Vincent himself invented, precisely to keep artificial intelligence out of his own project.
In principle, it should be noted, Vincent opposes Chappie’s existence, his potential popularity, or his freedom of action, but not his ability to copy himself to another robot. While he doesn’t get a say in the matter (and while we might expect he would object to perpetuating an aberration’s life), his conviction that Chappie is “just” a program would at least prevent him from asserting, as Deon does, that Chappie is doomed to die before being copied to a different unit.
Which explains why Vincent is not the one opposing Chappie on this specific point. But why is it Deon? Why this contradiction? The first one to believe a machine can express feelings suddenly decides no human technology can move feelings from one point to another. The first one to imagine a scientific experiment that can write poetry suddenly decides the secret of consciousness exceeds the capabilities of science.
The film’s first minutes make us believe the role of the enthusiastic scientist will be occupied by Deon, the character who is an actual scientific researcher. But Chappie, when it comes to avoiding his own death, usurps it without hesitation and prevails ideologically over both Vincent and Deon, for whom science should not and can not, respectively, decipher consciousness.
Science fiction’s paradox
If Deon’s well-intentioned antagonism can’t be explained inside the movie’s fiction, we may try to explain what function it fulfills in the narrative. This would be an extra-diegetic explanation, but it needs to be more satisfying than “character A does X because if he didn’t, the story couldn’t move forward.”
Part of it could be explained by a narrative thirst to include the aforementioned singularity. Chappie doesn’t go as far as creating an intelligence superior to his own, but at least his intelligence not only meets but exceeds his maker’s capabilities. The result is an abrupt technological advance, the sudden invention of cognitive transfer between robots and humans (whose verisimilitude doesn’t worry me at the moment). Technological singularity is a jump in the rate of human intellectual progress, a melody that unsettles by skipping forward several measures without warning.
It seems to me however that there is a deeper principle operating at the heart of Deon’s contradiction. A principle spanning science fiction and the values it represents.
As I said at the beginning, most science fiction stories are born from a fascination with a current intellectual problem, with some emphasis on the philosophical implications of certain technological achievements. Yet it’s more accurate to say that there’s a particular way to be fascinated by a problem, and when we recognize it in a story we call it science fiction.
For a problem to fascinate us, it is almost a necessity that we still haven’t figured it out. There’s no doubting the aesthetic pleasure of an elegant solution, but I suspect that’s not what’s behind the kind of stories I mean.
Paradoxically, for a story like Chappie to have something to contribute to the problem at hand it is almost a necessity to propose a solution, or to pose a future in which such a solution is found.
Writing about space travel requires us to be fascinated by the vastness of the universe, interstellar distances, the problem that you can’t cross the abyss without dying several centuries before reaching the nearest star. It also requires us to imagine a ship capable of doing just that, or a culture capable of dealing with the waiting, or some other solution that eludes me.
Writing about the adventures of an artificial consciousness requires us to be fascinated by the hardness of the hard problem, our apparent intrinsic inability to discover the exact nature of personal experience. It also requires us to imagine ourselves tearing the problem down, to visualize a cognitive researcher at his eureka moment, or a robot saying “I know what consciousness is and this helmet can read it.”
In order to set in motion the events that make Chappie what it is, Deon has to be that future conquest of the intellect, he has to reach his eureka and decipher consciousness, to program it and install it on a dying anthropomorphic robot.
He also has to be the shocked amazement that invades us today, when consciousness seems indecipherable. He must consider Chalmers’ reflections and marvel at the complexity of the phenomenon of consciousness, for which we can’t find an explanation and, frankly, don’t even know what one would look like.
A science fiction story needs to contain these two elements. Deon just got stuck between two expressions of the same fascination.