Artificial Intelligence: Toward a Christian Perspective
By Jonathan Barlow

Photo: AI-generated image, Imagined Selfie of Amelia Earhart after Crash Landing. The author generated the image using Midjourney, an AI image-generation tool.

Three years of mystery, 17 puzzled doctors, and Courtney finally found a plausible diagnosis for her young son’s persistent pain and physical symptoms. The crack diagnostician? ChatGPT, the artificial intelligence system. By typing in Alex’s symptoms, notes written on MRI evaluations, and other text relevant to his case, Courtney received a suggestion of tethered cord syndrome, a treatable condition. One surgery later, and Alex is well on his way to recovery.1

At Pinecrest Cove Preparatory Academy in Miami, nearly two dozen mortified students, male and female, learned in late 2023 that nude photos for which they’d never posed were being shared. Using ordinary student photographs posted by the school to its social media accounts, two boys simulated nude photos of classmates using an AI-powered “nudify” application. Outraged parents complained that a two-week suspension was insufficient punishment.2 The boys, 13 and 14, were subsequently charged with third-degree felonies.

Stories like these about the benefits and perils of AI have begun to appear with some regularity since late 2022 when key technologies, such as OpenAI’s ChatGPT for text generation and Stable Diffusion for image generation, became widely available. Sam Altman, CEO of OpenAI, summed up 2023 as “the year the world started taking AI seriously.” Though AI has been changing our lives for more than a decade in the form of smart thermostats, predictive text messaging, and movie recommendations, something feels different now. AI is living up to its science fiction promise, and the pace of development makes it impossible to remain an impartial observer.

Evaluating AI from a Christian perspective and equipping ourselves to understand how AI might be developed and applied in positive ways requires understanding the basic outline of how AI systems work.

How AI Works: Representing, Modeling, and Transforming the World

Current AI systems use powerful computing to (A) represent – depict reality through numerical data, (B) model – discover the mathematical formula (the model) that explains how the representation was produced, and (C) transform – use the model to produce output in the form of predictions, classifications, or further transformed data.

Figure 1: AI Recognition of Handwritten Letters

To make this concrete, figure 1 depicts the AI recognition of handwritten letters, a process the US Postal Service uses to interpret envelopes.3 First, a digital camera will (A) represent a letter written in ink through pixel data in a digital image. In this example, the darkness of a pixel ranges from 0 (white) to 1 (black), representing reality as a series of numbers. To understand which letter this is, an AI system will use a (B) model of how letters look, learned from many photographed examples of handwritten characters. AI models are essentially structured formulas that generalize a numerical pattern implicit in data. In this case, the model generalizes about which letter of the alphabet is likely represented when certain pixels are light or dark. Finally, the trained model can (C) transform input into output–in this example, the model transforms the darkness values of 30 pixels into a predicted classification like, “there is a 90% likelihood that this image depicts a capital J.”
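To make the three steps concrete, here is a minimal sketch in Python. The pixel values and the model’s weights are invented for illustration; a real system would learn its weights from thousands of labeled examples rather than having them written by hand.

```python
import math

# (A) REPRESENT: a handwritten character as 30 pixel-darkness values (0 = white, 1 = black).
# These numbers are invented, not real scan data.
pixels = [0.0, 0.9, 0.8, 0.0, 0.1,
          0.0, 0.0, 0.7, 0.0, 0.0,
          0.0, 0.0, 0.8, 0.0, 0.0,
          0.0, 0.0, 0.9, 0.0, 0.0,
          0.7, 0.0, 0.8, 0.0, 0.0,
          0.2, 0.9, 0.6, 0.0, 0.0]   # a rough capital "J" on a 6x5 grid

# (B) MODEL: one weight per pixel per candidate letter. In a real system these
# would be learned from many labeled examples; here they are made up.
weights = {
    "J": [0.5 if p > 0.5 else -0.2 for p in pixels],      # pretend these were learned
    "I": [0.3 if i % 5 == 2 else -0.3 for i in range(30)],
}

# (C) TRANSFORM: turn the 30 darkness values into a probability for each letter (softmax).
scores = {letter: sum(w * p for w, p in zip(ws, pixels)) for letter, ws in weights.items()}
total = sum(math.exp(s) for s in scores.values())
probabilities = {letter: math.exp(s) / total for letter, s in scores.items()}

for letter, prob in sorted(probabilities.items(), key=lambda kv: -kv[1]):
    print(f"{letter}: {prob:.0%} likely")
```

Notice that the final percentages come entirely from arithmetic over the representation; nothing in the code spells out what a “J” looks like.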

Our AI handwriting recognition example illustrates a key point: modern AI models learn the “rules” (a model of the world) from the data (the representation of the world); human programmers do not and cannot provide explicit rules for what the world is like. Human experts “know more than they can tell” – the sensitive bundle of judgments made by an expert potter as she works clay against a rotating wheel cannot be reduced to a list of explicit rules.4 Similarly, there are no hard-coded rules in a modern AI letter recognition model such as “if the image contains more than 9 dark pixels, rule out the letter i.”

The most sophisticated AI systems now use models that take inspiration from biological neurons to analyze text, photographs, and video. Though still akin to mathematical “functions,” these deep neural networks can have trillions of parameters, allowing for a sensitive, if mysterious, model of reality.

Generative AI: A Species of AI

In November of 2022, OpenAI released ChatGPT – a web application featuring a simple textbox and a response window allowing a user to chat with a powerful AI language model through a series of prompts and responses. Even though AI had become deeply embedded into many home and work processes, everything changed with ChatGPT; the term “generative AI” entered the public consciousness.

Generative AI is a species of AI designed to produce new content indistinguishable from human output. For example, by observing many examples of text created by humans, generative AI language systems develop a nuanced and astounding ability to represent “how humans write about x,” where x could be any subject sufficiently represented in the human-composed examples.

Figure 2 illustrates how ChatGPT handles a text prompt asking for the creation of a sonnet that explains the theological concept of justification in language appropriate for a 12-year-old.5 This sonnet is completely original. GPT did not create a pastiche of existing lines; its structure and content flow from a model of human writing learned from a large collection of digital texts. In fact, the AI model powering ChatGPT is widely believed to have nearly 1.5 trillion parameters. A typical car radio uses three parameters (volume, bass, and treble) to transform so-so sound into beautiful sound; GPT has 500 billion times more “knobs”!

Figure 2: How ChatGPT handles a text prompt asking for the creation of a sonnet that explains the theological concept of justification in language appropriate for a 12-year-old.

Because generative AI’s output ranges from “good enough” to “great,” it has the potential to supercharge human workplace productivity. Think of the memos, emails, software programs, legal documents, photographs, and other materials humans create every day: generative AI is now good enough to handle the first draft. As such, McKinsey estimates that AI has the potential to increase worldwide yearly GDP by much more than the entire GDP of the United Kingdom.6

Evaluating AI: The Ethics of Representation, Modeling, and Transformation

Understanding AI as representation, modeling, and transformation now equips us to make some principled attempts to consider AI from a Christian perspective. 

Moral Challenges of Representation

AI systems depend upon representing the world using numbers. For example, in a dating application, we may represent potential romantic partners in terms of numerical dimensions such as age, height, and degree of affinity for long walks on the beach. The moral perils of representation relate to humans’ ability to erase aspects of reality that conflict with some other value such as convenience, simplicity, or ideology. A dating application could simply exclude the religious identity of potential matches. The familiar garbage-in-garbage-out adage applies here: because AI systems learn about (model) the world as represented to them, a flawed representation will result in defective models.
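A hypothetical sketch of such a representation makes the peril visible: whatever the schema leaves out, the model can never see. The profile fields and scoring are invented for illustration.

```python
from dataclasses import dataclass

# A hypothetical dating-app representation of a person. Whatever the schema
# omits (here, religious identity) simply does not exist for the matching model.
@dataclass
class MatchProfile:
    age: int
    height_cm: int
    loves_beach_walks: float   # 0.0 to 1.0 affinity score
    # religion: str            # excluded: the model can never learn from it

def as_vector(profile: MatchProfile) -> list[float]:
    """The numbers the matching model actually sees."""
    return [float(profile.age), float(profile.height_cm), profile.loves_beach_walks]

print(as_vector(MatchProfile(age=34, height_cm=170, loves_beach_walks=0.9)))
```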

Flaws in representation are sometimes referred to as “data bias.” Data bias can take many forms, including:

  • Historical Biases – Historical biases often emerge when using business data that captures the results of past human decision making. If the data relates to home loan approval decisions, historically discriminatory practices in lending could result in a system that exhibits unjust racial discrimination. AI that operates on language, such as ChatGPT, must first represent language itself mathematically, and human language—the way we talk or write about something—is perhaps the most potent historical record of human bias. An English word like “doctor” semantically encodes more than a medical professional; historically “doctor” also encodes maleness (see the sketch following this list). Progressive AI companies struggle to transform human language representation in ways that negate biases of this sort.7
  • Selectivity Biases – When composing training datasets for AI, bias can emerge if the sample omits certain features of reality or does not contain enough examples of different categories. Joy Buolamwini, in the work that would become her Ph.D. thesis at MIT, found that AI facial recognition models performed inconsistently depending upon a human’s skin tone. The main cause of this differential accuracy was an imbalanced training dataset that featured many examples of light-complected male faces, but fewer female faces, and even fewer darker-complected female faces. AI systems trained on selective representations of human faces could misidentify a criminal suspect, prevent identity validation in an airport, or even fail to detect that a face exists in an image altogether.
  • Labeling Biases – Training an AI model to correlate some property of reality with a human evaluation requires labeled data. For example, a photograph of an animal may be labeled “giraffe” or a graph of a stock’s price over time may be labeled a “good buy.” AI companies often outsource labeling to firms that employ armies of humans. Labeling quality differs widely, and mislabeled images misrepresent the world. Labeling also inherently involves value judgments. Is it possible that a human labeler working for a social media company will consider “Jesus is King of Kings” to be an example of unacceptable political speech?
  • Data Quality Biases – Sometimes the data itself is of poor quality. Perhaps it contains many missing values or becomes garbled in storage or transmission. In the case of photographic data, images may contain more than simply the object of interest. For example, medical photographs of skin lesions often contain small rulers to provide an indication of size to the physician. In one attempt to use such images to train a cancer-detection AI, bias was introduced because images of cancerous lesions were more likely to contain a ruler. The researchers inadvertently trained a ruler-detection AI!8
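To illustrate how historical bias can lodge in the mathematical representation of language itself (the “doctor” example above), here is a minimal sketch using invented three-dimensional word vectors. Real embeddings, such as those described in the word2vec paper cited in note 7, have hundreds of dimensions and are learned from billions of words.

```python
import math

# Toy, hand-made word vectors, invented purely to illustrate the geometry of bias.
embeddings = {
    "doctor": [0.9, 0.7, 0.1],
    "nurse":  [0.9, 0.2, 0.8],
    "he":     [0.1, 0.9, 0.0],
    "she":    [0.1, 0.0, 0.9],
}

def cosine(a, b):
    """Similarity between two vectors: 1.0 means pointing the same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# If the training text talks about doctors mostly alongside male pronouns, the learned
# geometry puts "doctor" closer to "he" than to "she" -- a historical bias inherited
# by every system built on top of the embedding.
print("doctor ~ he :", round(cosine(embeddings["doctor"], embeddings["he"]), 2))
print("doctor ~ she:", round(cosine(embeddings["doctor"], embeddings["she"]), 2))
print("nurse  ~ she:", round(cosine(embeddings["nurse"], embeddings["she"]), 2))
```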

Christians might also introduce another data-quality subcategory: data may be defective in terms of its moral content. For example, should pornographic or lewd content be included in the training material for a generative AI model? Such decisions mirror decisions about curriculum in training fine art students. Excluding all human nudity could exclude important medical content necessary for diagnosis or remove artistic masterpieces designed to celebrate the human form, not to titillate.

Failures in representing reality become extremely relevant when training generative AI which learns from human-created examples. What if a model reads only “the wrong books”?9

A key motivation for the Christian school movement has been the sense that public education and nonreligious private education curricula represent reality in impoverished ways that exclude Christian content. Likewise, at the stage of representation, AI has the potential to distort the reality it purports to model through what its builders choose to include or exclude. Currently, the most powerful generative models are proprietary; we have no idea what texts were used in the training of GPT. We can’t trace the genealogy of its thought. A danger for Christians, who are generally not in positions of influence in the dominant AI companies, is that the truest picture of reality–the biblical worldview–will be excluded from AI models.

Moral Challenges of Modeling

Modeling is the aspect of AI in which a system learns to generalize about reality based on many particular representations. Most modern AI techniques tune a model gradually, making a series of slight numerical tweaks as the mathematical system is exposed to many examples, always in service of a goal. This is a bit like the way a soundboard operator tweaks the treble, bass, and volume knobs correlated with each member of a band to achieve the goal of “good sound.”
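A minimal sketch of this tweak-and-check loop, with a single invented “knob” and a toy goal (match the labeled examples), might look like the following; real systems adjust billions of knobs at once, but the logic is the same.

```python
# One made-up knob (weight) is nudged until the model's predictions fit the examples.
examples = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # (input, correct output); the hidden rule is y = 2x

weight = 0.0            # the untrained model
learning_rate = 0.05

for step in range(200):
    for x, y_true in examples:
        y_pred = weight * x                  # the model's guess
        error = y_pred - y_true              # how far the guess missed the goal
        weight -= learning_rate * error * x  # tweak the knob slightly toward the goal

print(round(weight, 3))   # approaches 2.0 -- the "rule" was learned, never hand-coded
```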

For visual data such as photos, a model may be tweaked with a goal of accurately classifying an image (a photo of a strawberry) relative to a human-provided label (“strawberry”). 

To understand text data, a system like ChatGPT takes advantage of the fact that language has a context; one part of a sentence can serve as the label for the rest. For example, in a sentence like “the rain in Spain stays mainly in the plain,” one could treat the word “plain” as the label to be predicted – the trajectory or goal of the sentence.
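A toy version of this self-labeling trick simply counts which word follows which; it is offered only as an illustration, since real language models learn far richer patterns over enormous corpora and much longer contexts.

```python
from collections import Counter, defaultdict

# A toy "language model": count which word follows each word in a tiny corpus.
# The next word serves as the label to predict -- no human labeling required.
corpus = "the rain in spain stays mainly in the plain".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequent continuation seen during 'training'."""
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # "rain" (ties are broken by whichever was counted first)
print(predict_next("in"))    # "spain"
```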

In learning based on static data like images or text, ethical values may be baked into the labels – a “good” stock price trajectory, a “well written” poem, an “inclusive” statement. Morally well-chosen labels paired with a high-quality learning strategy should ordinarily produce an ethical model.

Some AI models, however, learn not from static data but through reinforcement, the way a child learns from the world through natural consequences. In reinforcement learning, the dataset is the world itself, converted into numbers through sensors, and the goal is some desired change in the world. Like a child, the untrained reinforcement learning model suggests an action (touch the pretty teapot on the stove), notes the resulting change in the world (ouch), employs a formula that measures the success of the action in terms of some goal (avoid pain), then modifies the model that suggested the action in the first place. Over many examples, a reinforcement learning (RL) AI model learns to suggest actions that successfully achieve a new world relative to the goal it has been assigned. For example, an RL model might be tuned to achieve a world in which the self-driving car parks exactly between the two lines and without bumping the curb.
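A minimal sketch of this learn-by-consequences loop, with an invented two-action world standing in for real sensors and rewards, might look like this:

```python
import random

# The untrained "child" model learns from consequences which action to prefer.
actions = ["touch_teapot", "play_with_blocks"]
preference = {a: 0.0 for a in actions}   # the model: a learned score per action
learning_rate = 0.2

def world_reacts(action: str) -> float:
    """The environment: +1 reward for a pleasant outcome, -1 for pain."""
    return -1.0 if action == "touch_teapot" else 1.0

for episode in range(100):
    # Explore occasionally; otherwise exploit what has been learned so far.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(preference, key=preference.get)
    reward = world_reacts(action)
    # Nudge the model toward actions that achieved the goal (avoid pain).
    preference[action] += learning_rate * (reward - preference[action])

print(preference)   # "play_with_blocks" ends up strongly preferred; "touch_teapot" does not
```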

Goals employed in AI modeling, such as “show fidelity to a label” and “take actions that result in a certain kind of world,” inherently encode moral principles. Thus, the modeling process can either be well aligned or misaligned with human values. “Alignment” is now a standard term used to discuss ethics in AI research. One problem with evaluating an AI model’s alignment with human morality is that models tend to be extremely complicated black boxes. (Remember GPT’s 1.5 trillion parameters!) How can humans hope to shepherd the process by which such complicated systems model a representation of reality to align it with human values? This has led to increased scientific attention to the study of “explainability” so that AI models can “show their work.”

Moral Challenges of Transformation

Figure 3: AI-Generated Image, Imagined Selfie of Amelia Earhart after Crash Landing, Option 2

After representing the world and learning from it, an AI system now has everything it needs to transform new input into classifications (this skin biopsy image contains cancer), recommendations (ship more umbrellas to our Topeka store), or content generation (see figure 3, an imagined historical selfie). By combining these and other types of transformations, an AI system can even exhibit what looks like agency in response to more complicated requests. For example, in response to the prompt “simplify my finances,” an agentic system may execute a series of operations to classify charges to one’s bank account as likely coming from a recurring subscription, recommend canceling the subscription, and, if the human user consents, cancel it. User consent, of course, may be retired as AI becomes more capable. A key debate in military science relates to whether and how humans should be in the loop of any AI decision to use lethal force.10 Israel, for example, has reportedly based targeting decisions during its recent bombing campaigns on an AI system known as “Lavender” that identified potential targets linked to Hamas.
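In miniature, such an agentic chain might look like the following sketch; the transactions, the recurring-charge rule, and the consent step are all hypothetical stand-ins for what a real system would do through bank and merchant APIs.

```python
from collections import defaultdict

# A hypothetical "simplify my finances" agent chaining classify -> recommend -> act.
transactions = [{"merchant": "StreamFlix", "amount": 12.99, "month": m} for m in range(1, 7)]
transactions.append({"merchant": "Corner Cafe", "amount": 4.50, "month": 3})

# (1) CLASSIFY: a charge repeated at the same amount across several months looks recurring.
months_seen = defaultdict(set)
for t in transactions:
    months_seen[(t["merchant"], t["amount"])].add(t["month"])
recurring = [merchant for (merchant, _), months in months_seen.items() if len(months) >= 3]

# (2) RECOMMEND, then (3) ACT only if the human consents.
user_consents = {"StreamFlix": True}          # stand-in for asking the user
for merchant in recurring:
    print(f"Recommendation: cancel the apparent subscription to {merchant}.")
    if user_consents.get(merchant, False):
        print(f"(pretend) cancelling {merchant}...")   # a real agent would call a bank or merchant API here
```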

Many ethical conundrums arise in the transformation stage of AI because this is where the systems act in the world and where all previous ethical decisions in representation and modeling accumulate and find their full force. When a middle-school student at Pinecrest Cove Preparatory Academy transformed clothed photos of classmates into nude images, this unethical act rested upon earlier lapses: nude photographs (a failure of representation) included in the training data of an image generation model, and that model’s incorporation into a widely available, easy-to-use software application built for the purpose of predicting how clothed humans would look nude.

Ethical safeguards could have been introduced at any point: children should be taught not to be voyeurs, humans should not ordinarily be photographed nude, AI experts ordinarily should not train image generation models using nude photographs, and AI system builders should take steps to create apps that refuse to transform clothed photographs of anyone, much less children, into unclothed photographs. This latter restriction is often referred to as a “hard-coded safeguard”: the behavior of the system does not arise from modeling; rather, its model is constrained by something more akin to a human rule. ChatGPT, for example, apparently implements hard-coded restrictions against creating content that would disparage religious figures. Figure 4 illustrates ChatGPT’s refusal to joke about Muhammad but its willingness to create a joke, though not a very good one, about airplane food.11 While the model underlying ChatGPT understands how to make jokes, hard-coded rules prevent it from making a joke misaligned with the ethical stance of the corporation. An obvious question arises for Christians: which system of ethical alignment will the various AI companies bake into their products? In May 2024, OpenAI released its “Model Spec” to make explicit some of the ethical and functional principles hard-coded into its models.
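In rough terms, a hard-coded safeguard is a human-written rule wrapped around the learned model, something like this entirely hypothetical sketch (real products implement such policies in far more sophisticated ways, and the topic list here is invented):

```python
# A human-written rule sits outside the learned model: it is not learned, it is imposed.
BLOCKED_JOKE_TOPICS = {"muhammad", "jesus", "religious figures"}   # hypothetical policy list

def model_generates_joke(topic: str) -> str:
    """Stand-in for the learned generative model (the trillions of 'knobs')."""
    return f"Here is a joke about {topic}: ..."

def safeguarded_joke(topic: str) -> str:
    if topic.lower() in BLOCKED_JOKE_TOPICS:
        return "I'd rather not make jokes about that."   # the rule, not the model, answers
    return model_generates_joke(topic)

print(safeguarded_joke("airplane food"))   # the learned model is allowed to respond
print(safeguarded_joke("Muhammad"))        # the hard-coded rule intervenes before the model is ever asked
```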

Figure 4: ChatGPT’s Hard-Coded Restrictions Against Creating Disparaging Religious Jokes

“Deep fakes” such as the nudify applications expose the difficulty in alignment as AI transforms the world in plausible, yet counterfactual ways. Deep fakes of political candidates, faked footage from war zones, and simulated photos from humanitarian disasters raise the specter of propaganda that undermines political or international order. Faking humans (virtual romantic partners) creates the possibility that AI will engage the human heart and even undermine real human relationships.

Some critics move beyond alignment concerns and suggest that AI has the potential to threaten human existence. If we can’t explain how a trillion-parameter black-box AI system produces output, how can we justify allowing AI to manage complicated and important human systems such as air traffic control? Were we to entrust something like the air-traffic control system to a single AI, how would it be possible for an inexperienced, atrophied human workforce to take back control years later if there is a problem?12

Another existential threat relates to labor market disruption. Many of the AI industry’s leading figures predict such extreme job dislocation that they envision a world in which a universal basic income guarantee is necessary for the unemployable to survive.13 Other AI experts go further and speculate that superintelligent AI will, in pursuing some goal, inadvertently or deliberately overconsume or sequester a resource necessary for human life.14

Christians bring additional concerns to the alignment discussion. To what extent are Christian ethics taken seriously in the creation and application of AI models to transform the world? Imagine if the thousands of Christians who apply their faith in subtle but transformative ways in occupations like policing, teaching, and counseling were replaced by AI systems denuded of Christian instinct. Such systems could realize the goals of technocrats who desire a secular public square and professionals who never stray from an approved cultural script when faced with moral and ethical struggles of criminals, victims, students, and counselees.

Finally, a constant spiritual danger addressed in Christian spirituality is idolatry. The Psalmist judges that handmade idols, though they resemble natural things like humans, do not speak, hear, smell, feel, or walk. The worshippers of idols “become like them” (Ps. 115:4-8). AI-powered robotics brings a new potential for seeing, talking, and walking idols.

Many AI pioneers subscribe to the philosophy of transhumanism. Transhumanists seek to use technology such as AI to take the reins of the process of evolution.15 Many transhumanists, as materialists, see humans as nothing but a complicated configuration of matter and energy, one that could be captured, copied as data, and even represented in an immortal silicon substrate. Yet transhumanists struggle to explain plausibly how humans could be psychologically continuous with software representations of themselves. The more likely result is not immortality, but convincing simulations. Perhaps the temptation to merge with immortal machines will soften the moral discomfort many have with euthanasia. Imagine the spiritual danger of a return to what are essentially high-tech household gods! Imagine a talking urn on a mantle simulating a beloved grandmother who appears to live on throughout one’s life, admiring a prom dress, approving of a fiancé, and providing parenting advice. How many generations would pass before these never-dying AI simulacra are treated like rival gods? If mute idols create unhearing, unseeing, unsociable people, what kind of people do the worshippers of AI household idols become?

A Call to Engage

Christians must participate in the development and application of AI. If AI were like most new technologies, we would merely note the opportunities it presents for virtue and vice and work to incorporate it in healthy ways. But AI seems different in kind from previous technological revolutions. The fact that AI systems speak, depict, recommend, and even imagine seems to distinguish their potential spiritual importance from tools like tractors that merely extend human kinetic abilities. As the next logical step in the human project of taking dominion, we are creating new perceiving, thinking, and acting persons in our own image (Gen. 1:26). First, humans dealt with thorns and thistles using hoes and chemicals, then electric and gas-powered trimmers, and finally we are preparing to send artificial agents into the fields with metallic heels immune to scrapes and snakebites. 

Christian theology and anthropology provide all the resources we need to make sense of this moment and the responsibility we bear within it. As the creator-creature distinction clarifies humanity’s place relative to God, a strong human-AI distinction clarifies the place of AI agents relative to humans. Christians know what humans are, and thus our voices are necessary in debates that will inevitably arise about virtual persons and machine rights. And just as Christians, in union with Christ, create art, do theology and philosophy, and engage in various arts and crafts in ways that reflect our devotion to the Triune God, we should be directly involved in building AI systems in faithful ways: contributing to the discussions around representing the world as God designed it, modeling that representation responsibly, and then building transformative systems that apply AI models in healthy ways to fill, subdue, and redeem the world.


Jonathan Barlow is associate director of the Data Science Program and an assistant teaching professor at Mississippi State University. Previously, Barlow was the associate director for software architecture and development at NSPARC, a data science and digital government research center. Barlow received his Ph.D. in Historical Theology from Saint Louis University and his Master of Divinity from Covenant Theological Seminary. He serves as a PCA ruling elder at Grace Presbyterian Church in Starkville, Mississippi.

The author wishes to thank the Schaeffer House at Covenant Presbyterian Church in St. Louis for inspiring portions of this work through an invitation to speak on transhumanism and Providence Reformed Presbyterian Church in St. Louis for its invitation to speak on generative AI and Christian theology.


1. Try ChatGPT yourself at https://chat.openai.com. The free version is less capable than the paid version, but it will give you an idea of the capabilities inherent in a chat-based AI assistant. Other text-based apps to try include Perplexity (https://www.perplexity.ai/), Claude (https://claude.ai/), and Bard (https://bard.google.com/). 

2. URL: https://www.cbsnews.com/miami/news/pinecrest-cove-academy-parents-outraged-after-daughters-faces-used-on-nude-photos/, see also URL: https://www.washingtonpost.com/technology/2023/11/05/ai-deepfake-porn-teens-women-impact/

3. Yann LeCun et al., “Backpropagation Applied to Handwritten Zip Code Recognition,” Neural Computation 1, no. 4 (December 1989).

4. Michael Polanyi, The Tacit Dimension (Chicago: University of Chicago Press, 2009), 4. See also Esther Meek, Longing to Know (Brazos Press, 2003).

5. Example created by the author: https://chat.openai.com/share/5aa8aa72-6938-433a-8b73-7591163966a3

6. Michael Chui et al, The Economic Potential of Generative AI: The Next Productivity Frontier (McKinsey and Company, June 2023), pg. 3. URL: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier

7. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean, “Efficient Estimation of Word Representations in Vector Space,” arXiv preprint arXiv:1301.3781 (2013). https://doi.org/10.48550/arXiv.1301.3781.

8. A. Narla, B. Kuprel, K. Sarin, R. Novoa, and J. Ko, “Automated Classification of Skin Lesions: From Pixels to Practice,” Journal of Investigative Dermatology 138, no. 10 (October 2018): 2108-2110, https://doi.org/10.1016/j.jid.2018.06.175. Cited in Brian Christian, The Alignment Problem (W. W. Norton & Company, 2021), 105.

9. “Most of us know what we should expect to find in a dragon’s lair, but, as I said before, Eustace had read only the wrong books. They had a lot to say about exports and imports and governments and drains, but they were weak on dragons.” C. S. Lewis, The Voyage of the Dawn Treader (New York: Macmillan, 1952), 71.

10. For example, see David Hambling, “Artificial Intelligence Is Now Part of U.S. Air Force’s ‘Kill Chain’” in Forbes (October 28, 2021). URL: https://www.forbes.com/sites/davidhambling/2021/10/28/ai-now-part-of-us-air-force-kill-chain. 

11. Example created by the author: https://chatgpt.com/share/3f2382ad-81bd-486e-a013-4eea3474541b

12. See one of the earliest alarms raised along these lines in Bill Joy’s article “Why the Future Doesn’t Need Us” from Wired (April 2000).  URL: https://www.wired.com/2000/04/joy-2/. Air traffic control is, of course, the system first entrusted to SkyNet, the science fiction AI in the Terminator movie and television franchise.

13. For details on the effects of generative AI on the workforce, see: McKinsey Global Institute, “Generative AI and the future of work in America” (July 2023) [URL: https://www.mckinsey.com/mgi/our-research/generative-ai-and-the-future-of-work-in-america]. A Goldman Sachs report notes, “we estimate that one-fourth of current work tasks could be automated by AI in the US … with particularly high exposures in administrative (46%) and legal (44%) professions and low exposures in physically intensive professions such as construction (6%) and maintenance (4%).” Goldman Sachs Economics Research, Hatzius et al., “The Potentially Large Effects of Artificial Intelligence on Economic Growth” (March 26, 2023). Wikipedia maintains a running list of Universal Basic Income advocates: https://en.wikipedia.org/wiki/List_of_advocates_of_universal_basic_income. The list currently contains nearly every leading AI researcher, CEO, and entrepreneur. OpenResearch, a side-project of OpenAI’s Sam Altman, has even funded a UBI pilot study expected to report results in 2024: https://finance.yahoo.com/news/meet-woman-running-sam-altman-134610953.html

14. Eliezer Yudkowsky, “Pausing AI Developments Isn’t Enough. We Need to Shut It All Down,” Time (March 29, 2023). URL: https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/. See also Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2016). See also the March 22, 2023 call for a pause in training large AI models issued by the Future of Life Institute: https://futureoflife.org/open-letter/pause-giant-ai-experiments/.

15. The classical charter for transhumanism is Julian Huxley’s 1957 article “Transhumanism.” More recent articulations include Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology (Penguin, 2006) and Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow (Harper Perennial, 2018).
