Even as a boy, Noel Weichbrodt dreamed of a future when computers could talk.
The depictions of artificial intelligence (AI) that he found in crinkled paperbacks and sci-fi TV shows captured his imagination: R. Daneel Olivaw, the humanoid robot detective of Isaac Asimov’s robot novels; Data from “Star Trek: The Next Generation.”
“[They were] hopelessly modernist, but tremendously inspiring,” Weichbrodt said.
He was hardly alone in his fascination. Popular TV shows and movies such as “Blade Runner,” “Ex Machina,” and “Westworld” have dazzled audiences for decades with futuristic fight scenes and philosophical musings on the limits of technology and what it means to be human. And when most of us hear “AI,” this is what we imagine — increasingly powerful and intelligent robots that eventually cross some magical threshold to personhood.
Already, machines have defeated humans at some of our most cherished intellectual games, including chess, “Jeopardy!”, and Go. In January 2019, Google’s DeepMind AI lab announced that its AlphaStar system had beaten two of the world’s best human players of StarCraft II, a video game. Many of these victories are a product of “deep learning,” a technique in which AI neural networks (algorithms loosely modeled on the human brain) are “trained” on huge data sets (records of Go or StarCraft games, for example) and make predictions based on patterns found in that data. The more data a neural network is trained on, the better its predictions generally become.
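The training idea described above can be sketched in miniature. This is a toy illustration only (one learned weight instead of millions, and invented data standing in for real game records): the model repeatedly nudges its parameters to shrink its prediction error on the examples it is shown.

```python
# A toy sketch of "training": a single-weight model fit to invented data.
# Hypothetical examples: (pieces captured, did the player win? 1 or 0)
data = [(1, 0), (2, 0), (5, 1), (7, 1), (8, 1)]

w, b = 0.0, 0.0   # the model's parameters, learned from the data
lr = 0.01         # learning rate: how big each corrective nudge is

for _ in range(2000):        # repeated passes over the data = "training"
    for x, y in data:
        pred = w * x + b     # the model's current guess
        err = pred - y       # how wrong the guess was
        w -= lr * err * x    # nudge parameters to reduce the error
        b -= lr * err

def predict(x):
    return w * x + b

# The model has learned the pattern in its data — more captures, more
# likely a win — and knows nothing whatsoever beyond that pattern.
print(predict(6) > 0.5)
```

Deep learning scales this same loop up to millions of parameters and millions of examples, which is why the quality and quantity of the data matter so much.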
Elsewhere, AI systems are writing articles, scanning MRIs, driving cars, and much, much more. The technology’s advance has been so rapid and its application so wide that some predict an impending moment of “singularity,” the point at which artificial intelligence becomes self-aware, triggering a massive technological leap forward and, ultimately, the destruction of the human race by a superior AI species.
Don’t worry. Computers aren’t going to come to life and take over the world (more on that shortly). Yet the rise of AI as an extraordinarily powerful technology has sweeping implications that we, as Christians, must deal with. It raises questions about what it means to be human, made in the image of God; about power and justice; and about the mark of sin and the beauty of grace stamped not only on our souls, but on the technology we create.
Weichbrodt never lost his fascination with AI. He studied computer science and philosophy at Covenant College, learned to code, and today works as a software engineer developing consumer-oriented AI applications. Every day, he teaches computers how to talk and how to listen, building the future he once dreamed of.
“We have an Alexa in our house, and I love watching my kids interact with it,” he said. “[I love] hearing them ask it a dictionary or encyclopedia question and get back a satisfying, thorough answer. They’re naturally curious, and here’s a wonderful machine that can use the power of the largest set of knowledge ever assembled to answer their questions about the world. That’s pretty amazing, and there’s a ton of AI work that goes into just being able to do that.”
The appeal of his sentient robot heroes notwithstanding, Weichbrodt’s professional experience and personal faith have convinced him that fears of AI developing consciousness are complete hogwash.
“It’s important to draw a distinction between specific intelligence and general intelligence,” he added. “It’s a mistake to believe that as these [AI] systems get smarter and more capable they will begin to generalize their capabilities to other domains. … It’s a mistake to say: ‘[AI] is working really well in this domain, therefore it’s generally intelligent.’”
In other words, AI systems are good at doing and learning very specific tasks. Quite often, they can do these tasks much better and faster than humans. But just because a particular AI technology can beat people at chess, that doesn’t mean it can do a different job, such as synthesizing human speech. Even the most complex neural networks of the future, capable of many different tasks, are still limited by the bounds of their programming.
“Human brains are irreducibly complex,” Weichbrodt said. “They can’t be reproduced on a computer.”
The imago Dei can’t be etched into an algorithm.
Calvin College computer science professor Derek Schuurman agrees. “The notion that machines could completely replicate a person presupposes that you can completely replicate what it means to be human just by looking at the physical and biological brain,” he said. “That’s a reductionistic view of what it means to be human. … Look at the creation story. Look at the image in Ezekiel of the valley of dry bones. Humans are created with the stuff of the earth but also the breath of God. … We don’t really understand how it works, but we know that a human created in the image of God is much, much more than just electrochemical reactions in the brain.”
According to Weichbrodt and Schuurman, we need not fear the rise of sentient machines hell-bent on ruling or destroying mankind. That sort of alarmism is built on an atheistic and reductionist view of the world — one that misunderstands the complexity of human beings as image bearers and replaces God with technology.
AI, properly understood, is simply an extraordinarily powerful tool. We shouldn’t be afraid of the tool itself. But as AI becomes more powerful and more widespread, we should be afraid of what sinful human beings can do with it.
Too Powerful to Ignore
AI is all around us. Nearly every week, it seems, the headlines blaze with reports of neural networks, smart devices, and a thousand wonderful or terrifying applications of this rapidly accelerating technology. Major companies such as Amazon, Apple, and Google have invested heavily in AI research, and many applications have become so commonplace that we barely even notice them anymore. Amazon AI tracks your browsing and purchase history to suggest new products to buy. Alexa responds to your questions and commands. Google’s search engine adapts over time to churn out better search suggestions and results.
But when you look beyond your browser, AI’s impact is far more dramatic. China has undertaken the Herculean task of becoming the dominant global leader in AI by 2030, investing tens of billions of dollars and sparking fears that it will use the technology’s remarkable powers to win economic and military hegemony. Increasingly, AI is being utilized in nearly every business sector, from finance to food service, entertainment to security. Global consulting company Accenture estimates that AI will double economic growth rates and increase labor productivity in developed countries by 40 percent by 2035. Others predict widespread disruption and massive job losses as algorithms and autonomous vehicles replace workers.
“Anyone who is in computer science right now can’t ignore AI,” Schuurman said. “It’s reshaping the discipline, [and it’s] shaping or misshaping our world.”
Schuurman argues that not only responsible computer scientists but also responsible Christians ought to be paying attention to AI, grappling with its implications, and discerning a way forward influenced by a Christian worldview.
Sin Encoded into Technology
Inherent to that worldview is the idea that human beings are fallen and sinful. AI, like all technology, is not itself evil, but its applications can be distorted by sin, often with tragic results.
Schuurman cites the example of a company that uses an AI-driven algorithm to assign scores to potential new hires based on data about candidates. Let’s say the company has a history of not hiring women or people of certain ethnic backgrounds. Those patterns, possibly rooted in misogyny or racial bias, get codified into the algorithm’s training data. The result: an AI algorithm that has learned to correlate race and gender with bad hiring scores — a destructive and self-fulfilling cycle caused by sin encoded into technology.
“The biases begin in choosing what data you feed in, where you get your data, how much data,” said Schuurman. “There are all kinds of values embedded in the data itself.”
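The self-fulfilling loop Schuurman describes can be sketched in a few lines. This is an invented illustration, not any real hiring system: a “model” that simply scores candidates by the historical hire rate of people in the same group faithfully reproduces whatever bias that history contains.

```python
from collections import defaultdict

# Invented historical hiring records: (group, was the candidate hired?)
history = [
    ("men", True), ("men", True), ("men", True), ("men", False),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def train(records):
    """Learn each group's historical hire rate — patterns, nothing more."""
    counts = defaultdict(lambda: [0, 0])   # group -> [hires, total]
    for group, hired in records:
        counts[group][0] += int(hired)
        counts[group][1] += 1
    return {g: hires / total for g, (hires, total) in counts.items()}

scores = train(history)
# The past bias is now the model's "objective" score for new candidates.
print(scores)   # {'men': 0.75, 'women': 0.25}
```

A real system would use many more features, but any feature correlated with group membership — a name, a zip code, a college — lets the same pattern in through the side door.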
In a similar example, criminal-risk algorithms are increasingly being used in the justice system to help assess a defendant’s likelihood of recidivism and determine the severity of a sentence. Here, neural networks analyze data from a defendant’s profile, including arrest records and convictions, but also data patterns more distantly correlated with crime — family members in prison, for example, or whether the defendant grew up in a high-crime neighborhood.
Researchers argue that many of these data points often function as proxies for race or class. The neural network makes inferences based on historical statistics, which may themselves be biased based on, for example, a racist police department’s history of arresting far more people from certain racial or socioeconomic backgrounds. This data can cause the algorithm to suggest a higher chance of recidivism and harsher sentences for defendants from poor neighborhoods or minority groups. This is exactly what happened in several real-life case studies: The algorithm, trained on biased data, began to label black defendants with clean records as higher risk to re-offend than white defendants.
“Technology is not neutral,” Schuurman said. “Data is not neutral. The allure is that this is cold, hard math. The truth is [that] biases are sort of baked in and you have to actively seek justice in the systems we design.”
AI and Tyranny
On a macro level, there is also a very real concern that AI tools work in favor of the unscrupulously powerful. In an October 2018 article in The Atlantic titled “Why Technology Favors Tyranny,” historian Yuval Noah Harari writes: “There is no particular reason to believe that AI will develop consciousness as it becomes more intelligent. We should instead fear AI because it will probably always obey its human masters, and never rebel. AI is a tool and a weapon unlike any other that human beings have developed; it will almost certainly allow the already powerful to consolidate their power further.”
Governments that prioritize the rights of citizens may put limits on what can legally be done with AI technology. Tyrants with little interest in individual freedoms may harness the technology to create surveillance states, manipulate public opinion, or wage war. Unfettered by pesky regulations or rule of law, big data may give authoritarian governments a competitive advantage over democracies.
Already, China uses AI-powered facial-recognition systems in tandem with millions of CCTV cameras to track its citizens in major cities across the country. (Police in the United States and Britain have used similar facial-recognition algorithms in urban security cameras.)
The technology has reportedly already been used to identify and capture numerous criminals. But then, a police state’s definition of a criminal is rarely ideal. China is further harnessing big data to implement a “social credit system” in which individuals are given a “social credit score” based on their behavior. This score influences everything from a person’s eligibility for a loan to their ability to travel freely throughout the country. Should an AI-equipped camera spot you jaywalking or participating in a protest, your score goes down. The system is unfinished, but as AI facial-recognition technology continues to improve, aspiring dystopias will become increasingly efficient.
“[We] encode society’s problems into technology,” said Weichbrodt. “The more powerful a technology, the more sin can be encoded into it.”
Technology as a Tool of Redemption
Yet, for every example of AI technology being abused, there are examples of people harnessing this powerful technology to pursue justice, mercy, and human flourishing.
In medicine, deep-learning algorithms have been used to detect the presence of tuberculosis in chest X-rays, analyze images to find and pinpoint cancer, detect diabetic changes in patients’ retinas, and correlate medical histories to predict heart attacks — all faster and often more accurately than human doctors. AI drones are being developed to monitor and manage crops, localizing treatment for insects and disease, ultimately saving farmers money and increasing yields. Automated environmental monitoring allows scientists a better, real-time look at climate change. Humanitarian relief workers can use AI to determine how best to allocate and deploy limited resources, while other applications help detect fraud, translate foreign languages, and assist the disabled.
This is the other implication of approaching AI from a biblical worldview: Yes, this technology and many of its uses are marred by sin — capable of empowering tyrants, bolstering prejudice, and executing enormous evil. But, in the hands of human beings made in the image of God, AI applications can be equally redemptive — tools to pursue justice and mercy, to steward the earth and foster beauty, to heal and bring hope.
“We don’t need to reject any technology out of hand,” Schuurman said. “The question ‘Is AI good or bad?’ is a false dichotomy. You could ask the same question of any area of creation. ‘Is music good or bad?’ That’s not the right question. The question is: ‘Is this particular cultural activity, AI in this case, being employed in ways that are obedient to God, making this world [more like] the one He’s called us to unfold, or is it being pointed in a direction that’s disobedient?’”
ANDREW SHAUGHNESSY is a freelance writer in Portland, Oregon. A graduate of Covenant College, he has lived and worked in England, South Sudan, and India, honing his craft with a focus on nonprofits, startups, and international affairs.
Photography by Paper Boat Creative / Getty