The Jewish AI Conversation is Ready for its Next Phase
How to move from hot takes to a school of thought
Dear readers: what follows is an edited version of the keynote address I gave to the first conference on AI and Judaism, held at Arizona State University in February. It is a big-picture take on why Jewish thinkers should engage with AI and how they can do so more effectively.
The shotgun wedding between Jewish thought and artificial intelligence is off to a rocky start. Right now, around the world, Jewish thinkers of all types are trying to figure out what to do about AI: what to say about it, how to think about it, how to teach with it, where it does and does not belong, and how it should be regulated.
As someone who has been studying AI for years, I’m worried about how the field is unfolding. Lots of people have first takes, but few have seconds. Too many pieces are being written by people who haven’t bothered to see what else is out there. Despite a surge of interest, it’s unclear if the work is advancing or what it is trying to achieve. No canon of texts is forming. No school of thought is emerging. Humanities scholars don’t know how they’re supposed to talk about a subject that emerged five minutes ago, and rabbis don’t know how to guide their communities in the face of a world-changing technology. As engineers and ethicists debate whether AI will make our lives better or worse, or simply end up killing us all, Jewish thinkers are still trying to demonstrate that they have something to say. There are good foundations, but it’s time to build something. AI matters too much, and Judaism has much to offer and much to receive. At this critical juncture, we need to take a breath to assess where we’ve come from, where we are, and where we ought to go.
There have already been two waves of Jewish AI thought
You wouldn’t know it, but the history of Jewish–AI thought has already had two waves, separated by half a century. The first one was brief but visionary. The current one is largely reactive.
Let’s start with the second. In the last five years there have been more than a dozen academic articles, a book, countless sermons, and op-eds galore on AI in Jewish thought. This work has mainly focused on questions of agency: basically, how ethical decision-making works in self-driving cars, military drones, medical robots, and other devices that wield lethal power. Until ChatGPT came along, this was the biggest question in the public AI conversation.
This recent work is important, but it’s unfocused. It’s important because we are still in the “extractive” stage of Jewish–AI thought, in which scholars are combing the Jewish textual tradition to unearth ideas that might be instructive in the present moment. It’s unfocused, though, because of the questionable way that the texts are being used.
Let’s pick an easy example: Jewish thinkers who compare Tesla vehicles to animals. The basic argument is that both are “self-driving,” and so Jewish laws about liability for animal damage should tell us something about damage caused by self-driving cars. Is the comparison wrong? No—but it’s also not particularly useful. Yes, the rabbis understood that owner liability should vary based on the animal’s autonomy and its history of violence—but do we really need the rabbis to tell us something that carmakers and lawmakers already know? This comparison doesn’t actually help the AI conversation; it only serves the small group of people who want to feel that Judaism and AI discourse are loosely connected.
But there’s another way to talk about this. Reading this modern literature, you’d be forgiven for not knowing that Jews first started writing about AI more than fifty years ago, just a few years after the 1956 Dartmouth workshop that founded the field. Those first essays, published by Norman Lamm and Azriel Rosenfeld in the pages of Tradition, were not interested in agency, or even in computers. Instead, they saw AI—together with extraterrestrial life, attempts to synthesize proteins, and genetically edited human beings—as part of a possible future where “personhood” had blurry edges and humanity’s creative abilities started to rhyme with those of God.
These essays are important because they weren’t trying to regulate technology directly. Instead, they wanted to help the public develop moral intuitions about developments unprecedented in all of human history.
It’s not an accident that these still-relevant essays appeared long before any commercially available products. It’s the very fact that they lean into a sci-fi premise that has made them timeless. This is another lesson: if Jewish thought only responds to the technology that is already in the world, it will always lag behind. Jewish leadership on AI requires doing what tech companies are doing: looking into the future and helping to shape it.
Inward-facing questions and outward-facing questions: ethics, pedagogy, theology
The first thing we can do is ask more forward-looking questions. The second is to realize that the current conversation about Judaism and AI mushes together two conversations that face in opposite directions.
One of those conversations could be summarized by the question: “How might Jewish ideas influence AI’s development and usage?” This question is often mistaken for the entire field, and it was the starting point of both the first wave of scholarship and the current one. Jewish thinkers like this question because it implies a hierarchical relationship between Judaism and the future, in which Judaism gets to be a static, eternal font of enduring wisdom that continues to be relevant today. Because of this implication, this question is theologically pretty safe. It should not be a surprise that variations on this outward-facing question are the starting point for Jewish engagement with almost every new technology and every new cultural phenomenon.
The inward-facing question—“How might AI influence Jewish life and thought?”—is more intimidating, in part because the influence it describes is more likely to come to pass. This question isn’t just about AI applications in Jewish life—ChatGPT telling you whether your food is kosher and so on—but about the shape of education, and even the transformation of Jewish theology. Intimidating as it is, answers to this question are much more likely to be actionable, and it is in this smaller idea space that Jewish leaders are best positioned to develop and implement their ideas.
Both of these questions are worth asking, but their goals are quite different, so let’s play them out. For the outward-facing question, the most important applications are ethical: how much responsibility humans bear for what their AIs say and do; what obligations humans have to program morality into their machines; how to balance the responsibility of developers against the responsibility of end users; and whether there is a minimum level of morality that all AI systems must contain, as Isaac Asimov suggested with his Three Laws of Robotics. Given that some AI researchers believe the technology poses an existential threat to humanity, the last question is particularly important.
A second set of ethical applications involves presentation. For example: is it ever acceptable to pass off AI output as human output, a phenomenon that Astra Taylor has called “fauxtomation”? There are also applications around transparency. For example, can a developer decline to explain how its AI works in order to limit its legal liability? Should we be talking to a sophisticated AI as if it were a person, or should we condemn developers for obfuscating their own roles? These questions mostly concern generative AIs, and so they have received far less attention, though they are just as pressing.
The inward-facing scholarship has ethical applications, too, but they will be narrower. Religiously observant Jews might consider, for example, whether one can outsource the fulfillment of a mitzvah (commandment) to an AI, or whether one can use AI systems to circumvent Sabbath regulations or to answer a question of Jewish law. There are also miniaturized versions of the outward-facing questions, which are crucial for the establishment of broad societal norms. For example, a synagogue might consider whether to use facial recognition software to identify congregants as part of a security system, or to automatically alert authorities in the face of a perceived threat.
There are perceptual ethical questions, too. One might ask, for example, whether it’s acceptable to use an AI to write a sermon, illustrate a haggadah, or look for relevant sources for a class. These questions may seem quaint, but they are nonetheless crucial for establishing where people do and do not care about knowing that they are interacting with actual human beings and human-generated ideas. They are also important for testing whether there is something specific to religious interactions that makes people less willing to accept AI interventions, and whether a guarantee of human interaction ought to become a defining feature of religious community.
This brings us to the realm of pedagogy, where concerns extend beyond the secular public’s general anxiety about the automation of school assignments and the lack of clarity around which skills are worth learning for the future. Yes, Jewish pedagogues face these questions, but they also face the problem that Jewish learning is not just a means but an end of religious life: that learning Torah is supposed to be something more than useful, and that the provenance of religious materials matters just as much as their substance.
Inward-facing pedagogical problems are less about how to teach and more about what it means to mark certain texts as sacred when the cost of both text production and translation approaches zero. If the format of a text is no longer enough to communicate its sacredness or authority, do we need a new way of thinking about sacred texts—or is it worth maintaining a clear distinction between the existing canon and AI mimicry, much as the rabbis of the Talmud distinguished between the written Torah (the Hebrew Bible) and everything else? These questions have interesting parallels in the relatively recent debate about whether a Torah scroll could be “written” using a silk screen process—an idea that was rejected by almost all rabbis in favor of allowing scribes to continue their inefficient but sacred work.
Finally, there is theology. AI discourse has been infused with theological significance since its inception: some of its creators have imagined themselves to be gods, the fantastical opaque complexity of AI systems has led some to ascribe divinity to them, and we are currently engaged in a debate about whether AI can be a person. Oddly, all of these are secular debates: yes, they use religious language, but they do not profess to speak on behalf of any religion in particular. Those who develop ideas within religious contexts must take note and speak up, because nobody is waiting for them to come up with a theology of AI. It will develop whether they are involved or not.
For outward-facing work, the religiosity of AI discourse lets scholars of religion engage directly in public conversations. These conversations will likely investigate the divine nature of human creativity and its tension with the need for humility. Some of this work has already been done; for example, Norman Lamm’s proposal that a human super-creator should imitate God but not impersonate God continues to resonate as a healthy shorthand for how religiosity can be made compatible with both power and ethical responsibility.
Of course, the key theological question is whether AI should ever be granted personhood, and this is one area where Jewish thought actually has quite a lot to contribute. In conversations around humanoid animals, angels, demons, and golems, Jewish texts are filled with all sorts of non-human beings who are granted some degree of personhood without theological fuss. This attitude suggests that personhood exists on a gradient, and that humanity should not be worried that non-human persons will diminish its own value. This magnanimity may turn out to be one of the most distinctive features of Jewish AI theology, and if implemented it would almost certainly have a direct impact on the way that AI is developed, deployed, and destroyed.
It is not just the personhood of AI that is at stake; it is AI’s relationship to its creator, as well. This relationship, many have noticed, bears eerie similarities to the human relationship with God: both are between creator and created, both are relationships premised on some shared special value—and, of course, both creators are filled with anxiety about their progeny going off the rails. This puts Jewish thinkers in the strange position of asking whether Jewish theology itself is a kind of precedent for AI—and if it is, what this precedent can tell us about effectively managing an uncontrollable creation. The precedential re-evaluation of Jewish theology, which is not so different from a new parent re-evaluating their own parents’ decisions, strikes me as one of the most exciting directions of development. This, truly, would be unprecedented.
Slow responses to AI still matter
This work promises to be thrilling and transformative for Judaism at least, and possibly for AI as well. It also faces two significant challenges: speed and register.
Technology has picked up speed, but AI has moved particularly fast. In the space of less than thirty years, AIs have gone from beating grandmasters at chess to diagnosing medical conditions with accuracy that rivals or exceeds that of doctors. The contrast with the pace of academic or religious thought is almost comical. An entire generation of large language models can come and go in the time it takes to publish an academic article or organize a symposium. Prior to the 20th century, Jewish responses to new technologies tended to lag by decades, if not centuries, and even today the best responses take many months to appear. These delays are so egregious that scholarship on new technology can’t afford to be too specific, since the referenced technologies may be long gone by the time the scholarship appears. In short, the pace of this work makes it categorically unable to converse directly with the AI of the moment. Tortoises and hares have a hard time talking to one another.
I am going to talk about what it means to speed up in a different piece—stay tuned for that!—but speeding up is not going to be a viable option for every thinker, especially those in the academy. While speedy work matters, deliberate thinkers who move at moderate speeds can play an important role, too. Such responses can have a value of their own: as an anchor in the old world, a sort of protest against the feeling that accelerated living is inevitable, a second draft to counterbalance the breathless first draft of history.
This is a viable position, but if we are slow, we need to be slow on purpose and calibrate our work to be more than just a wordier and pricier version of yesterday’s news. Ironically, some of the best slow responses to AI were written when AI was just a theoretical possibility, since those thinkers were free to dream big about all the ways that personhood might get blurry. We, too, can dream big, but doing so now requires ignoring whatever shiny AI application is dangling in front of us and instead imagining where things might go next. We need to be thinking three steps ahead of the moment, to all the fantastical futures that may or may not come to pass. We need to assume that, however impressive AI has already become, more impressive achievements are on the way.
If responses are very slow, they may even abstract away from AI altogether. Rather than speak about the personhood of AI, for example, we might speak about the many ways in which the notion of personhood is being discussed with regard to intelligent animals, extraterrestrial life, and human fetuses (see my conversation with Sara Ronis about this). It is possible to imagine slow, holistic responses that bridge these fields into a larger narrative about the trajectory of how we understand humanity.
Though I have set this up as a binary, the choice between fast and slow will likely resolve into a little bit of both, with some scholars choosing to streamline their processes while others save their firepower for the big-perspective moments. Whatever the choice, however, the speed of a response must be chosen intentionally, not inherited from existing patterns of behavior.
Religious and academic thought about AI need to be better differentiated
The second problem emerges from the first. The urgency of the issue has led to a huge demand for scholars and educators who can speak to the “Jewish perspective” on AI even though such a perspective has barely had time to form. This has created a strong incentive for educators to cobble something together on their own.
For scholars writing outside of the academy, this is more or less fine. Religious leaders are constantly being asked to weigh in on new technological advances, and they frequently do so with a mixture of common sense and source analysis. Since their mandate is both exploratory and creative, constituents fully expect these thinkers to decide which sources ought to be relevant.
For academics, especially historians, the situation is somewhat trickier. If you accept the premise that AI is truly unprecedented, then there is no objective way to attach it to historical sources; the best one can do is describe what leaders within the Jewish community are saying—but as we’ve established already, they aren’t saying much. Making matters worse, there is no sharp division between Jewish leaders and academic scholars, because many people wear both hats. Because of this, it should come as no surprise that there is little stylistic difference between, say, a responsum from the Conservative movement and an article in an academic publication.
This blurry middle space has real consequences. If scholars are confused about which methodologies are acceptable, or if they worry their writing will be seen as thinly disguised religious rhetoric, they will simply write less. That will inevitably slow the field’s growth, since Jewish leaders often look to academics for ideas. However symbiotic the relationship, we need to clarify the difference between the confessional and academic registers of Jewish writing about AI.
I think the solution to this problem lies in the inward/outward distinction. Academics, who are ostensibly writing for a non-sectarian audience, are better situated to suggest outward-facing policy positions, and they should be bolder in putting forward specific proposals or frameworks that might help us better regulate computer intelligence. Academics are better positioned to do this not just because their audience is non-sectarian, but because they do not need to restrict themselves to Jewish sources. If a robust policy framework involves ideas from Jewish theology, Buddhist cosmology, and American law, it is only the academic who can easily combine the three.
Scholars writing in Jewish contexts will struggle to influence policy, but they have two unique and underutilized abilities. First, only religious leaders can say how AI ought to influence Jewish law and Jewish thought. Second, religious authorities alone can develop attitudes towards artificial intelligence within Jewish communities themselves. This may seem like a small prize, but given that religious communities are the precise places where values are cultivated—the value of putting away electronics on Shabbat is a prime example—we ought to think about these communities as laboratories for new technological norms.
If the audience for academic scholarship on AI is the larger public, the audience for religious scholarship should be religious communities. Both religious and academic scholars should contribute to the current process of discovering relevant sources, but academics with a taste for prescription should use their ability to synthesize that content with other ideas, past and present, to develop robust public policy recommendations. Religious leaders, on the other hand, should focus on building out internal policies and developing Jewish thought and law itself.
Why do this?
The prospects for growth in this field are truly astounding. It is rare to find a field of Jewish studies so ready for development, with so much new to say and learn. At the same time, I have a hard time shaking the feeling that nobody, in fact, wants any of this: that the AI conversation can continue without Jewish involvement. Given the real possibility that all of this work is a powerless sideshow to the conversations taking place closer to the centers of power, it is reasonable to ask whether it is worth expending energy on resolving a global issue from the perspective of a minority group.
I think the answer is a resounding yes. The 21st century promises to be a time in which the world constantly throws never-before-seen moral problems at us, problems that nonetheless require an ethical response. To ignore these problems is to leave a hole in one’s moral code—and those holes will only get bigger as AI’s role in society increases, in the worst case leading to a future in which most of the major problems of the day are moral issues about which Judaism has nothing to say at all. This state, which I call ethical obsolescence, undermines any Jewish claim to being a moral force in the world.
That, at least, is the inward-facing rationale. But there is an outward-facing one as well. Consider that public discourse about new technologies is almost always dominated by their developers and the journalists who cover them. Both of these groups can raise moral questions, but neither can develop a new moral code: for developers it often means acting against their own interests, and for journalists it goes beyond the scope of their profession. Without a direct effort to create new moral codes for AI, there may not be a check on this technology until something disastrous happens, and even then the public may not have a clear sense of what went wrong. It is precisely because religious thinkers have not been invited to weigh in on AI that their insight is valuable; in fact, I think we might consider this work a kind of intervention, one that attempts to counter an unprecedented technology with unprecedented moral guidance and leadership. AI may be the first technology to deserve this treatment, but it will certainly not be the last. Let us use this moment wisely to reimagine what it means for Jewish thought to encounter the future.