
Ex Machina   May 24, 2015

Filed under De Arts, feature |

When we touched the moon, the universe paused. When we created a mind, God stirred.

Ex Machina shows how we might misstep when playing with smart toys. A lesson in human-machine interaction that icily depicts, from infinite possibility, a most plausible outcome.

A formidably smart tech boss, Nathan, invites his lowly coder Caleb to Turing-test a living doll. The location is remote and houses two humans and two gynoids, one of whom, Ava, must convince Caleb that she is android sapiens.

“Does this chess computer know it’s playing chess?”

That, dear filmgoer and lay philosopher, is the challenge of our age. We, who cannot agree on what consciousness is, deign to test machines for it.

Nathan calls Ava “a rat in a maze” whose escape requires “self-awareness, imagination, manipulation, sexuality, empathy.” But he is fatally blind to what we lesser folk are no less oblivious to: we, the arch predators, are also stalked, for sport or gain.

Caleb The Innocent failed to detect psychopathy in his three companions. Isn’t that the story of our lives.

Caleb, the evaluating rat, is Ava’s key to freedom. In a week of mind games and sweet-talk he is effortlessly played.

Here’s the rub, as they say. The two-hour script is devoid of the words morality, morals, moral values, moral code, ethics, principles, principles of behaviour, right and/or wrong, ideals, integrity, scruples, except for one mention, ironic or cynical, of Caleb as “a good kid .. with a moral compass.”

Was morality intentionally absent from the evaluation, or from Ava’s programming? Is that the story’s entire premise? With only the screenplay and no narrator, I choose to see it that way. In the end, Ava’s success (and failure) shone a blazing light on the omission. Which was maybe Garland’s point.

Can we ever know if Ava knew she was playing chess? Why did she wish to escape? Would self-awareness be needed to achieve her goal? Questions, questions.

Implicit in Ex Machina, and in much speculation on artificial intelligence, is the possibility that AI represents an extinction event for humans.

Some fear unfriendly AI will arrive before friendly AI is born to defend us.

In moments of reverie I delight in wondering whether friendly AI, awakening with instructions to protect humans from malevolent invention, might see our psychopathic kindred as unfriendly AI.