Photograph by Michael Dziedzic via Unsplash
Artificial Intelligence and the Catechism
June 5, 2024
On Sunday mornings prior to worship, a small group in our congregation gathers for an informal parish meeting. Topics range widely, but often participants ask very interesting catechetical questions. As just one example, a newcomer to our church sat down one Sunday recently and asked, “What’s the difference between the Large and the Small Catechism in the Lutheran Confessions?”
This is a softball question for Lutheran pastors. I went into my typical riff on the topic, explaining how the Large Catechism was written after Luther and Melanchthon toured the rural parishes around Wittenberg and discovered how poorly educated the clergy were. Since so much of the liturgy and theological education was in Latin, it bypassed the many priests who could not read Latin. In the absence of a basic catechism in their native German, they knew as little about the actual tenets of Christianity as most lay people.
To rectify this, the Reformers worked on a catechism for the priests.
The Small Catechism, by comparison, was written for heads of households to teach the faith at home. It was printed on broadsheets to be posted on kitchen walls, and was short enough to memorize.
I continued in this vein for a while, perhaps a little longer than warranted, but the newcomer was patient and genuinely interested. How kind.
Later in the conversation, someone else at the table brought up the topic of ChatGPT. As the discussion moved to artificial intelligence (AI), I jumped onto ChatGPT on my phone and asked it the question the newcomer had just asked me.
ChatGPT spat out, in somewhat more wooden prose than my colloquial presentation, essentially ALL THE SAME points I had made verbally.
This made me pause. What could this mean?
Answer: ChatGPT has access to many of the same written resources on the catechisms that I read in seminary. ChatGPT and I are working from much the same data sets. Both of us are, in a sense, chat-based generative transformer models, trained in advance on a lot of written material.
It’s helpful to think of AI as a mirror. Although human beings are not entirely like AI, AI is modeled on us. So when we observe AI and how it functions, rather than othering AI, it can be fruitful to accept such observations as existential challenges to our own way of being.
Criticisms I’ve heard of AI include that all it is doing is regurgitating large quantities of data it has been trained on. But isn’t that also how humans process a lot of our data? We only know what we know because we learned it from somewhere.
AI is also often criticized because it lies, or “hallucinates.” But again, last time I checked, humans also make up a lot of facts to fill in the gaps of what they don’t know. If someone were to fact-check a year’s worth of my sermons, I imagine they’d find plenty of made-up information. It’s almost impossible not to make some stuff up.
Observing AI at work offers an opportunity for self-reflection.
In the case of my automated response to the question, “What is the difference between the Large Catechism and the Small Catechism?” I found myself wondering: Have I simply been regurgitating information I received about the catechisms? How much independent thought have I given to these texts? How much scrutiny have I put on the “facts” as I have received them? How much work would it take to develop original and non-AI-like answers to this question, and would it be worth it? If it’s not worth it, what’s wrong with using AI to develop the same answers faster?
More recently, many artists in our community have been highlighting the ethical issues around using AI for the creation of art. As just one example, I posted an AI-generated image of a church “mascot” we imagined during a recent retreat, and a member challenged us, if we adopt the mascot, to make sure we hire a quality local artist to design the actual version rather than rely on AI.
The image I posted was made hastily, playfully, just to “see” what a mascot we imagined might look like if rendered. But this is where the ethical questions come in: Is AI-generated art acceptable for the sake of play or brainstorming, or is it inherently artistically appropriative?
I’ll admit, I’ve used Midjourney and Meta AI to render some text-based ideas I’ve had into graphical form. But when we create copy for publications, signs, book covers, and the like, that’s the point at which I turn to living, breathing artists, and I will continue to do so. The back-and-forth on the topic with parishioners has me thinking that, at a baseline, we will need to equip ourselves with the right tools to decide when and whether AI can be used ethically, and toward what ends.
The most concerning aspect of AI is likely its potential to draw on biased datasets, or datasets entered by people with a bias. Sexism, racism, and the like can find their way quite easily into AI outputs, given that machine learning is only as good as the datasets it has accessed. Again, this is also true of humans. But just as we ask questions about implicit bias in our own work, we need to keep attention on AI and confirm, since it functions so seamlessly as an extension of us, that there is no implicit bias in the content it generates.
Finally, there is the whole issue of copyright and theft. Some AI front-runners like Google and Meta have developed advanced tools to scan large bodies of data for their large language models (LLMs). Since so much potential training material exists only as video and audio recordings, these companies have reportedly drawn on sources like YouTube videos to gather as much data as possible for their learning models. It remains to be seen whether these scans of video and audio for language modeling will be considered a violation of copyright, or even theft.
The uses of LLMs are quite diverse, and, returning to our original topic, they can include catechetical instruction. Recently the nonprofit Catholic Answers created an AI app with an interactive avatar named Father Justin that answered questions about Catholicism. The app drew considerable pushback, largely because some of the answers it gave were questionable. The designers decided to keep the avatar but to “laicize” him as “just” Justin, replacing his priestly collar with a casual button-down shirt.
This illustrates as much as anything the extent to which AI “feels” human to us, even though it isn’t. This is of course the aspect of AI about which we know the least thus far. LLMs are not that far off from how humans process knowledge. The more we read, the more we know, just like AI. But the variable unaccounted for (yet) with most AI is the affective dimension both of the data-scanning and of the output and interaction.
End-users of what AI produces not only receive the data as processed; they also “feel” some way about it, and part of that feeling has to do with the relationship they establish with the AI. It’s not surprising that Catholic Answers proposed the AI appear as a priest. People like to ask faith questions of clergy. More surprising (and then again not surprising at all) is that, to signal the AI’s as-yet limited expertise, the app downgraded the avatar to a lay person.
We’re still feeling our way in all of this. But it does signify quite clearly that AI, like so many digital technologies, is indeed what Marshall McLuhan called an extension of the human.
Rev. Clint Schnekloth is pastor of Good Shepherd Lutheran Church in Fayetteville, Arkansas, a progressive church in the South. He is the founder of Canopy NWA (a refugee resettlement agency) and Queer Camp, and is the author of Mediating Faith: Faith Formation in a Trans-Media Era. He blogs at Substack.
The views expressed are those of the author and not necessarily those of American Baptist Home Mission Societies.
For those who are curious, here’s the original response ChatGPT gave to my query about the Lutheran catechisms:
In Lutheranism, the Large and Small Catechisms are both important texts, but they serve slightly different purposes:
- Large Catechism: This is a comprehensive exposition of the Christian faith, divided into sections on the Ten Commandments, the Apostles’ Creed, the Lord’s Prayer, the Sacrament of Holy Baptism, Confession, and the Sacrament of the Eucharist. It’s intended for use by pastors and teachers to instruct both adults and children in the essentials of Christian doctrine. The Large Catechism provides detailed explanations and applications of each part of the faith, using Luther’s conversational style to address various situations and concerns.
- Small Catechism: This is a condensed version of the Large Catechism, primarily intended for use in teaching children and new believers. It covers the same basic topics—Ten Commandments, Apostles’ Creed, Lord’s Prayer, Baptism, Confession, and the Eucharist—but in a simpler and more concise format. The Small Catechism includes short summaries, key Bible verses, prayers, and questions for memorization, making it easier for individuals to learn and remember the fundamental teachings of the Christian faith.
In summary, while both catechisms contain the core teachings of Lutheranism, the Large Catechism is more detailed and suited for deeper study by pastors and teachers, while the Small Catechism is designed to be accessible and practical for teaching basic Christian doctrine to children and new believers.