Two weeks ago I gave a talk at the American Academy of Religion’s annual conference, on the topic of artificial intelligence in Jewish thought. The talk was attended by perhaps a few dozen people—good turnout, but not great, and actually down from 2018, which was the last time I delivered a paper for that section on that subject. Despite AI’s radical advances in the last few years, its status among scholars of religion had not improved. No plenaries for this world-changing technology, thank you very much!
I left the talk feeling like I was done. After years of writing and teaching about AI and religion, I had said everything that I wanted to say. It was time to move on to something else.
Then OpenAI came out with an astounding new model, instantly accessible to the public, and suddenly it became clear that I had not written nearly enough.
For most people, AI is experienced as a set of brief but profound existential shocks. A computer can beat us in chess! Now what? A computer can write prose! Now what? A computer can simulate your face and voice! A computer can make beautiful art and music! A computer can write decent code and coherent essays and check your work for mistakes! Now what? Now what? Now what?
I’ve seen enough cycles of this to know that the answer to “now what?” is best summarized as ¯\_(ツ)_/¯, meaning everybody is impressed and nobody has a clue what is supposed to happen next. The news cycle keeps rolling and these powerful algorithms become toys, tools, and/or weapons for exploiters and the surveillance state. Surely, we think, someone will have something smart to say about these bots gnawing on our humanity. Surely this fantastical AI trick will provoke us to think more clearly about what it means to be human. But it never does.
Maybe it’s that Jewish thought—my specialty—is stuck in a moment of retrospective. Maybe it’s that so much Jewish intellectual energy is still wrapped up in LGBTQ debates. Maybe everyone is just tired. Whatever the reason, none of AI’s shocks have provoked much of a response—and more depressing, that doesn’t seem to bother people very much. It’s still just a subject for nerds. It’s still a weird thing to talk about.
So, as a nerd who cares about this deeply and has written about it in both academic and popular settings, I am going to devote some space in Jello Menorah over the next few weeks to discussing AI in a Jewish context. Today’s post: why should Jewish thought—really, any religious thought—care about AI right now?
(And before you object—yes, I know there are a few exceptions. I’m a founding member of AI and Faith, and I have been following Christian discussions on this topic. I can speak to those efforts in a different post.)
Reason #1: AI policy cannot remain in the realm of techies and technocrats. Religious communities can bring it to the public.
Plenty of work is already being done to regulate artificial intelligence. There are nonprofits devoted to responsible AI, teams within major tech companies that worry about AI abuse, and governments that are putting forward new regulations. This work, however, is happening largely among a small group of specialists. It is not being guided by direct public discourse about what responsible AI ought to look like, because the public doesn’t know what it thinks yet and often doesn’t understand how high the stakes are. Lack of public engagement in turn means less news coverage, which means public outcry is less likely, which means companies get a lot of leeway to define for themselves (and lawmakers) what usage is acceptable.
What would it take to change AI discourse such that normal people could have opinions about its proper and improper use (beyond the user-facing AIs that now litter the internet)? AI is genuinely complicated, so some kind of translation effort is needed. That translation effort requires real creativity: the development of models and metaphors the public can use to build a “gut feeling” for appropriate and inappropriate uses of the technology. And there really isn’t anyone better suited to developing and disseminating these ideas than religious communities, which are often in the business of giving people moral language to help them make their way through a complex world. I will explore what that moral language might look like in another post.
Reason #2: AI has huge religious implications. Religions must respond.
Earlier this year, a deeply Christian Google engineer wrongly claimed that the AI he was speaking to was sentient. He made this claim because the AI spoke to him in ways that seemed unmistakably human. (I wrote about this case at length here.)
You can mock the claim, but it is being made because of the shocks to our humanity I mentioned above. AI keeps infringing on our idea of what makes us human, and it does so by learning from humans, so undoubtedly some people will start thinking about AIs as human. The problem is that the question of whether AIs are human can’t be answered by tech companies or by governments. The idea of humanity’s unique value is largely conveyed through religious discussion. It is the responsibility of religious thinkers to address how AI fits into that picture.
At the same time, new AIs are able to speak in religious language. They can write sermons, quote scripture, and make arguments on scriptural grounds. This wouldn’t matter, except for the fact that religions care an awful lot about the form that religious arguments and ideas take. If that form can now be effortlessly mimicked—if I can translate in and out of religious language as easily as I can translate between Chinese and Yiddish on Google Translate—then what does it matter?
So AI is nibbling at our notion of personhood, and it’s messing with our idea of sacred language. It’s knocking on our door, it’s ringing the bell—and if we don’t respond, people will just think we’re not home. For religious communities AI is low-hanging fruit. If we don’t respond to this, are we really engaging with the present at all?
Reason #3: This moment matters, and it will never come again.
Timing matters to comedians. It matters to leaders more. There are plenty of moral issues—let’s take abortion as an example—about which lines of debate have long ago been drawn, where discussions of values have largely been superseded by political positioning, where philosophical arguments are assumed to be politics by other means. It’s extremely hard to advance new ideas on these issues.
But AI is not like this. People really don’t know what they think about AI yet—and so we are in a brief moment where it is actually possible to shape the contours of the debate, to originate ideas about AI that will stick in the popular imagination. Religious leaders have a chance to make an impact, but they need to do it soon.
That’s all for today. I’d love to know what you think, and I will continue this line of thought in future posts.