Christians aren’t talking about the most extreme risks of AI. Here’s why we should
By Vesa Hautala | 1,300 words, reading time ~6 min
Like everyone else these days, Christians are talking about AI. We are exploring how AI intersects with faith, debating its role in the church, pondering how AI relates to the concept of the image of God, and considering the ethical ramifications of widespread AI adoption. These are all important conversations, but something is missing in the Christian discourse: the potential catastrophic risks posed by advanced AI systems.
Warnings from AI Experts
Many experts are calling attention to extreme scenarios. Over 300 AI scientists and 400 other notable figures signed a statement asserting that mitigating the risk of extinction from AI should be a global priority. Two of the three “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, are warning about catastrophic risks from advanced artificial intelligence. In a survey of nearly 3,000 AI researchers, 38% gave a probability of 10% or more that the outcome of human-level machine intelligence would be “extremely bad”. Government policymakers in the US are taking concerns about AI catastrophe seriously.
What exactly are the risks that have captured the attention of AI pioneers and policymakers alike? They fall roughly into two main categories:
1. Advanced AI could be used for harmful purposes like creating bioweapons, running sophisticated disinformation campaigns, or enabling authoritarian surveillance and control.
2. Powerful AI systems could behave in unpredictable ways that are misaligned with human values.
Risk Category 1: AI Misused for Harmful Purposes
AI already demonstrates capabilities that make catastrophic scenarios of the first type plausible. AI is used in drug development and biological research, and the same tools could be misused to create potent pathogens and toxins. In China, the government notoriously uses AI to monitor and control its population and to oppress the Uyghurs in Xinjiang. An AI-powered target identification system with a known 10% error rate was reportedly used to carry out bombings with heavy civilian casualties.
As AI keeps advancing, it could radically enhance human capabilities for destruction. Technological development has already made it possible to wipe out entire cities with a single missile or to engineer plagues that could kill millions. The difference is that AI could make such capabilities more accessible and harder to control.
Risk Category 2: Unpredictable and Misaligned AI Behaviour
The second category of risk stems from the unpredictability and potential uncontrollability of advanced AI systems. State-of-the-art AIs are mostly black boxes. We don't fully understand how they arrive at their outputs. This lack of transparency (or interpretability, as it’s called in this context) becomes increasingly problematic as systems become more powerful.
We do not know how to make AI fully align with the intentions of its creators. Current AI systems can be unpredictable, even for the people who create them. This is evidenced by phenomena like ChatGPT “jailbreaks” where users find ways to make the AI violate its intended constraints, the unhinged Sydney persona displayed by an early version of Bing Chat, and numerous other examples of AI systems behaving in unexpected ways. AIs may try to pass themselves off as human, and they may persuade people, including leaders, of things that are not true.
More advanced systems would have more capacity for deception and other unwanted behaviours. The more capable an AI is, the more damage it could do if it is not acting within its intended goals. Future AI systems with human-level general intelligence or beyond would be especially dangerous if misaligned: something smarter than yourself is inherently unpredictable and hard to control. Even short of systems going completely rogue, misalignment would be very dangerous in powerful systems that are likely to control an increasing share of the economy, national security, and more. AI is increasingly embedded in health care, utilities, media, and weapons systems, and the trend is likely to continue.
The Need for Preparedness
The most extreme risks should be part of the discussion because it is prudent to be prepared. As with forecasting the weather, we can’t be certain what the future will bring, but we can consider a range of different scenarios and plan accordingly. The question isn’t whether we can be certain about the risks, but whether there is a significant enough possibility to warrant serious consideration and action. Consider nuclear war. We haven’t had one in the almost 80 years nuclear weapons have existed, despite the intense tensions of the Cold War. The annual risk is estimated at about 1%. Yet it absolutely makes sense to take the risk of nuclear war extremely seriously.
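To see why even a small annual probability matters, here is a rough back-of-the-envelope illustration (assuming, purely for simplicity, that the roughly 1% risk is independent from year to year):

$$P(\text{at least one nuclear war in 80 years}) = 1 - (1 - 0.01)^{80} \approx 0.55$$

Under that simplified assumption, a 1% annual risk compounds to better-than-even odds over the lifetime of the technology. The same logic applies to low-probability, high-stakes risks from AI: a small chance of catastrophe, sustained year after year, is not something we can responsibly ignore.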
The Economic Incentives Driving AI Development
The economic incentives for creating ever more capable systems are enormous. Companies are racing towards AI systems with better general reasoning and the ability to act independently in the real world. OpenAI and Microsoft are explicitly striving for AGI (Artificial General Intelligence). An astounding amount of money is being poured into AI development: chipmaker Nvidia has risen to become one of the world’s most valuable companies, worth over $3 trillion, and OpenAI CEO Sam Altman has spoken of $7 trillion in total investments to reshape the semiconductor industry for the needs of AI training.
We do not know when (if ever) systems with human-level general intelligence or beyond will be built, but the possibility that they will be, and what that would mean, deserves consideration. The collective prediction for achieving human-level artificial general intelligence on the forecasting platform Metaculus is around 2032. In the survey of AI researchers mentioned above, “the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047”. The potential benefits of such systems would be almost unimaginable, but so would the risks.
How Should Christians Approach Concerns About Catastrophic Risks?
First, we must resist the temptation to dismiss them outright. Some might label them science fiction, but in a world where AIs now converse with us in a human-like manner and the speed of progress keeps surprising even experts, “it sounds like sci-fi” is no longer a strong argument. The actual claims and arguments must be discussed.
Others might argue that God won't allow such catastrophes to occur. While it's true that God didn't allow a nuclear exchange between the US and the Soviet Union, this doesn't mean people would have been right to dismiss worries about nuclear war. God's providence should never be used as an excuse for irresponsibility. God has allowed humans to make horrible choices throughout history, and he might allow us to make horrible mistakes with AI. This doesn't mean we should.
We also can’t rely on the outcome being good by default. It would be naïve to expect safety issues to sort themselves out automatically. “We’ll fix it once we get there” is not a viable approach either; it is much better to strengthen your levees before a possible hurricane than afterwards.
Second, we should bring our Christian worldview to bear on these discussions. Our perspective can serve as a check against unbridled techno-optimism. We understand human fallibility, the potential for technology to be misused, and the need for good stewardship of powerful tools.
Christians should discuss concerns about catastrophic risks from AI and work out how to think about them as Christians. Perhaps there are unique ways Christians can contribute to the wider discussion. Whatever your views on the likelihood of catastrophe, now is a prime time to influence how our society handles AI. The development of technology with the potential to transform our future requires a response. The future of humanity will be shaped by the choices we make in the coming years about AI development and regulation.
Further reading: