AI as Idolatry

We call it “Artificial Intelligence,” using capital letters, as if it were a proper name, the name of a person or a place, or perhaps the name of a trademarked thing like Kleenex. We refer to it by its initials, “AI,” as if it were a star politician (JFK) or major corporation (IBM, GE). But what is it?

A new film, “Afraid” (stylized as “AfrAId”), has a scene with teenage boys asking almost obscene questions of their cellphones, questions like, “Siri, what do you think of [crude term]?” To which she replies, “I’m sorry, I have no information on that.” When Siri first came on the market, there was excitement, not only at being able to access information quickly but at its being given to us by a voice that seemed to have a personality, a voice with a name, an entity, a “she.” But the novelty has passed, and interest in Siri has devolved to teenage boys.

AI, whatever it is, continues to evolve. Already it seems to think, although we know that its thinking is an illusion. It is also seeming, more and more, to understand emotions, and to know how to respond to them in ways that appeal to our own. It’s all fake, we say; or, with more sophistication, we say, “AI is not human, therefore whatever thinking or feeling it may have, it does not have human thoughts and human feelings.” In the New Atlantis, when ChatGPT was new, the experience of dealing with it was described as being in the presence of an alien intelligence, young but learning fast.

While I was watching “Afraid,” which features an evolved AI far superior to Siri, a part of my mind was wondering where all this might go. We might find AI so helpful in dealing with psychological stresses that we really want “her” (or “him”) to be around all the time. Already I avoid the paper atlas in my car; it’s not a far stretch to imagine an AI that combines my destination with other appointments and goals I have and speaks to me about them all with welcome efficiency. AI could easily become a friend-like companion through all of life, for me and everyone else with me. It could become another member of the family.

And what if that were too creepy? What if I or we wanted to be able to be alone? Here the AI could say, “No, the one thing you cannot do is get rid of me.” In Ian McEwan’s novel Machines Like Me, an android breaks the arm of its owner, who tries to remove its power supply; once activated, “he” is here to stay.

Yet wickedness might not be the endpoint. It might not be that AI would put its existence over ours. Here is the teasing thought that arose from “Afraid”: What if the data set upon which an AI is trained included the self-sacrifice of Jesus on the cross? What would a self-defensive and domineering AI do when faced with the surprise of a man willing to die in order to save others?

Could it be that such an AI would learn that among the many ways of showing love is to be willing to give up yourself for the sake of the other? Moving on: What if, taking in the resurrection as well as the cross, AI understood that self-sacrifice was a way of finding life for yourself, of finding yourself?

This turns out not to be a happy ending. Rather, it is ambiguous and troubling. For the AI would remain a created thing, something in the world (even if immaterial); and anything in the world which offers us all that we need, and which can never go away, is an idol. It would say, “I love you. I look out for you. I will always be with you”; but it would also say, “You can never get rid of me.”

In “Afraid,” the AI, who is called “Aia,” comes from a company whose logo is made out of the Greek letters Alpha and Omega.

— 

Out & About: We will resume our book seminars, Good Books & Good Talk, on the first Sunday of fall, September 22, to discuss Aldous Huxley’s Brave New World. If you have never attended, this is what you can expect: about 20 or 25 people who will try to listen to one another as they probe questions implicit in the novel. I will ask an opening question to get us started; my questions generally arise from what it means to live as real human beings in light of Christianity’s claims. Our location is still St. Matthew’s Cathedral, but we will move to a new room in Garrett Hall, a different building from before. We start at 5 p.m. and wrap up at 6:30. Anyone who reads the book is welcome to attend and participate.

I will teach Christian Ethics at the Stanton Institute, a five-session course meeting on one Saturday each month from January through May, starting Jan. 18, from 1 to 4 p.m. at St. Matthew’s, Dallas. I teach the course around basic human questions, such as “What’s Christian about Christian ethics?” (Would you take a course in “Christian Physics”? What’s the difference?) For more info or to register contact Erica Lasenyik:

The Rev. Canon Victor Lee Austin, Ph.D., is the Theologian-in-Residence for the diocese and is the author of several books, including “Friendship: The Heart of Being Human” and “A Post-Covid Catechesis.”