AI has no business writing Bible commentaries
The humanity of Torah must remain integral to its production and transmission.
Over the last few days, an untold number of people have directed my attention towards one rabbi’s attempt to use ChatGPT to create a sort of Torah commentary. The commentary is written in a variety of styles, from Dr. Seuss to Shakespeare. The project is called “The AI Torah Commentary” or “Impossible Torah,” and it is supposed to be completed within a matter of weeks, which would almost certainly make it the quickest Bible commentary ever written. (I’m not linking to it because I know how the Internet works and I would like to keep this conversation civil.)
I don’t think this is a good idea at all, and I’ll explain why in a second—but before doing that I want to note three things:
1. Should this person have known better? No. That’s the whole point. New technologies almost inevitably send people in radically different ethical directions, and hashing out exactly why something feels morally repugnant is important in establishing new moral guidance. This is how we move from gut responses to principles. This isn’t a story with good guys and bad guys; it’s about establishing what is good and bad in the first place.
2. Establishing that AI Torah ≠ Torah is a good first step towards developing an ethical intuition. It’s worth spending the time to explain why Torah must remain a human project.
3. I would not normally critique someone’s books so severely or so publicly. I love experimental books. These books merit an exception, however, because—as will become clear—they are in fact not authored texts. They are computer output pretending to be commentary on Scripture.
Okay. Without further ado, here is why this project should never have been made.
It frustrates Jewish ethical leadership around AI
Last year, at the height of the NFT craze, someone started selling pieces of the Torah as NFTs. (If you don’t understand that sentence, just know that it means monetizing the Bible through cryptocurrency.) For the equivalent of a few thousand dollars, you could own a digital token that represented Isaiah, or Jeremiah. You could even buy the verse that says you shouldn’t place a stumbling block before the blind—which is ironic, since NFTs and much of the cryptoverse turned out to be a huge scam.
I get why people do this. People who care about Torah want it to matter, and that often means attaching it to popular culture. But Torah+modernity isn’t good by default, and combining them mindlessly can end up signaling that Torah tacitly supports something it almost certainly does not. Just as Torah NFTs implied that NFTs as a whole were fine, AI Torah suggests that AI as a whole is fine. You cannot speak to the Jewish ethics of ChatGPT when you have already placed a book it generated on the shelf next to your set of Talmud.
This doesn’t mean Torah can’t be experimental. It can be and it should be. But experiments say something, and experiments with Torah often say “this is ok.” Make sure you know what they’re saying.
Torah should be malleable—but not this malleable
I have an idea for an AI-written book. It’s called The Twilight & Fifty Shades of Grey Bible Commentary. It’s like a regular Bible commentary, except it’s a lot raunchier and it has teenage vampires. The New York Times review? “This book should not have been written.” But I did it anyway, because I could. (Not really.)
The reason I can make a book like this is that the cost of translation—not just into other languages, but into other styles of writing—has fallen precipitously over the last couple of decades. Whereas languages and linguistic registers were once reliable shibboleths and useful as sources of authority in and of themselves, the trivialization of translation means that genres of writing—especially sacred writing—that once relied on their form to communicate authority must now rely on content, values, and personalities.
Learning to live with sacred texts in a world of low-cost translation is going to be an adjustment process—but that process is not helped by ham-fistedly shoving sacred ideas into marquee secular names, which serves no purpose other than to demonstrate that anything can puppet anything. A book that is unironically marketed as a Torah commentary should not be sitting at the same level of culture as instructions for how to take peanut butter out of a VCR written in the style of the King James Bible. In fact, the decision not to translate a text in an age when anything and everything is translatable is arguably what it means to treat a text with reverence: we respect it for how it exists, and not how we would like it to exist.
Of course, things are not so black and white. A good amount of modern feminist and LGBTQ Torah requires translating ideas that originated outside of Judaism into Torah terms. This is a careful process, and I’d argue it is a sacred process, but part of what gives it gravitas is that it is done with extreme care, extreme reverence for the texts—and a huge amount of love and creativity. The feminist midrash in books like Dirshuni is the opposite of shallow, a point to which I will return below. Conducting this process by tossing select prompts into ChatGPT and asking people to take the output seriously would be rightfully insulting to both feminist thought and Torah. (There is much more to say on this point, including the tension between the tradition of mourning the Bible’s translation into Greek and the institution of the meturgeman, the now mostly defunct functionary who translated the Bible aloud, in real time, in the synagogue.)
Torah should be learned from people
When the printing press was invented, many rabbis disapproved of its use as a method for transmitting Torah. One reason for this was the sense that Torah was supposed to be learned in conversation with other people, not through static text on dead pages. Living traditions are supposed to be learned from the living.
The printing press won, but the critics were also proven right: the press permanently changed the way that people interacted with Torah by making it much easier to separate a teacher from their words. Right or wrong, something really was sacrificed.
With AI, we are faced with another choice: do we want to push people one degree further away from the transmitters of Torah by having them engage not with human creativity, but with a probabilistic aggregate of that creativity? I don’t see the argument for introducing that extra layer. Lots of people want to teach Torah. Let them teach you.
AI Torah by itself is shallow and will always be shallow
One of the defining characteristics of Torah is that it is multilayered; the texts are deep, they are worthy of careful study, and they never run out of potential meanings. AI texts, on the other hand, are about as shallow as shallow can be. Even the “author” doesn’t understand why its word sequences make sense; the words barely have one meaning, let alone two or more. Why introduce a book into the study hall that clearly is not worthy of deep study? Why introduce the suspicion that any given book of Torah is just so much random noise? Who does that serve?
These thoughts are incomplete, but the conclusion should be quite clear: whatever applications you think are appropriate for AI intervention, the writing of Torah should not be among them. Setting this norm now is an important part of establishing what a Jewish moral response to AI ought to look like; indeed, interpreting this prohibition further will likely give rise to additional norms. There is a lot of work to do in building out an ethics for AI, but this seems like an excellent place to start.
P.S. For those who want to go deeper, I will be speaking at Arizona State University’s upcoming (free, virtual) conference on Judaism and AI, the first conference of its kind. You can find the schedule and register here.