In an age when technology is advancing at an almost unimaginable pace, few figures have more influence over the future than Sam Altman, CEO of OpenAI. His company’s products, like ChatGPT, are already reshaping the way millions of people learn, work, and interact. But behind the excitement over productivity gains and futuristic tools lies a set of deeper, more serious questions about morality, personal privacy, religion, government control, and the future of education.
Altman himself has been unusually candid about both the promise and the peril of artificial intelligence (AI). His comments offer a roadmap, not only of where AI might take us, but also of the values and freedoms we must fight to preserve.
Be informed, not misled.
The following is written by Rick Wilson, Sec/Treasurer of Gary Randall Ministries.
Morality in the Machine Age
AI is not inherently moral. It reflects the priorities, assumptions, and biases of its human creators. Altman acknowledges the difficulty:
“How do we decide whose values we align to, who gets to set the rules for this… In terms of how they’re going to use these systems, that’s all going to be… difficult, to put it lightly, for society to agree on.” (Search Engine Journal)
In practice, this means that those who control AI may have the power to define morality for billions of people, shaping not only what information is accessible, but also how that information is presented. For Christians and conservatives, this raises a red flag: if AI is built on a foundation hostile to biblical truth, it will inevitably become a cultural megaphone for secular ideologies.
Altman says he’s “reasonably optimistic” about solving the technical problem of “alignment” (ensuring AI follows human-set rules), but the bigger question remains: which rules, and whose morality, will prevail?
The Privacy Predicament
Altman has also painted a vivid picture of AI’s potential to become a digital shadow over every aspect of our lives:
“You can imagine [AI] will be a full recording of your life… You can also imagine the privacy concerns that it would present.” (Forbes)
He has floated the idea of “AI privilege”, a legal protection similar to doctor-patient or attorney-client confidentiality, so that personal interactions with AI remain private. While this might help, the concern remains: who ultimately controls that data, and who can access it?
In a society where big tech has already shown a willingness to deplatform and silence dissenting voices, giving a handful of corporations (or government agencies) the ability to track, store, and analyze every conversation could be catastrophic for individual liberty.
Projects like Altman’s “World ID”, a biometric system to verify human identity, are marketed as tools to protect against bots and fraud. But skeptics warn that such systems, if centralized, could become the backbone of a digital ID regime, enabling surveillance on a scale once reserved for dystopian fiction.
Technology as a New Religion?
While Altman himself does not describe AI in spiritual terms, many observers have noted a kind of techno-utopian faith in Silicon Valley circles, a belief that superintelligent AI could act as a savior of humanity. The Guardian warns this thinking can “echo religious beliefs” and potentially sideline democratic principles in favor of rule by an elite class of technologists.
For Christians, the concern is twofold: first, that AI will be imbued with values contrary to God’s Word, and second, that people, especially younger generations, will begin to place their trust in machines instead of their Creator. If AI is treated as an all-knowing oracle, the temptation to treat it as an ultimate authority will be strong.
Scripture warns against placing our faith in the works of human hands. As AI becomes more capable, the need to ground our moral and spiritual compass in eternal truth will only grow more urgent.
Government Control: The Double-Edged Sword
Altman has compared regulating AI to regulating nuclear technology:
“We can imagine a world where we have to put in place a licensing regime, where the most powerful systems require oversight.” (TIME)
Early in his advocacy, he supported robust regulation. More recently, however, he has warned that requiring government approval for new AI tools could stifle innovation and weaken U.S. competitiveness, especially against authoritarian regimes like China that are racing to deploy AI for surveillance and military purposes.
Here lies the tension: without oversight, AI development risks running wild. But with too much centralized control, especially in the hands of an administration hostile to certain viewpoints, AI could become a powerful tool of censorship and political enforcement. Altman has called for a U.S.-led democratic coalition to set global AI norms, warning that authoritarian powers will otherwise define the future.
Education: From Classrooms to AI Tutors
Perhaps the most radical change Altman envisions is in education. He predicts that the traditional college model may be obsolete within two decades, observing of the students growing up today:
“AI will always be smarter than they are.” (Times of India)
In his view, students will soon have personal AI tutors capable of teaching any subject, in any language, at any pace. This could democratize high-quality education, but it could also become a vehicle for mass indoctrination if the AI’s content is shaped by biased or anti-religious assumptions.
Altman foresees a shift away from memorization toward creativity and problem-solving. While that may sound appealing, we must ask: creativity in service of what values? Without a moral anchor, education becomes merely a tool for producing “effective” workers rather than virtuous citizens.
The Crossroads Before Us
Taken together, Altman’s remarks point to a future where AI could either enhance human dignity and freedom or undermine them in ways we can barely imagine. The technology’s direction will be determined less by its raw capability than by the worldview of those who control it.
For conservatives, Christians, and freedom-minded citizens, the implications are clear:
- Morality: AI will embody someone’s moral framework. If we are not active in shaping that framework, it will be shaped without us—and likely against us.
- Privacy: Without strong safeguards, AI could become the ultimate surveillance tool.
- Religion: Technology cannot replace God, but a culture without discernment might treat it as a new form of authority.
- Government: Oversight is necessary, but it must be balanced to prevent both corporate abuse and state overreach.
- Education: AI’s teaching power is immense, but so is its potential to mold young minds according to a single ideology.
A Call to Vigilance
The debate over AI is not merely about efficiency, innovation, or market competition. It is about the kind of society we want to live in—and whether we will safeguard the freedoms that have defined America since its founding.
We must insist on transparency about how AI systems are trained, demand clear privacy protections, and work to ensure that a diverse range of moral and philosophical viewpoints is represented in their design. Most importantly, we must refuse to cede our ethical and spiritual authority to machines, no matter how “smart” they appear.
As Altman himself admits,
“What happens if an AI reads everything you’ve ever written online … and then sends you one message customized for you that really changes the way you think about the world?” (TIME)
That possibility should give every freedom-loving citizen pause. AI may well become the most powerful technology ever created, but whether it strengthens or erodes our moral, spiritual, and civic foundations will depend entirely on the choices we make today.
Takeaway
AI’s future is not inevitable; it is being written in real time by policymakers, tech leaders, and cultural influencers. If we want that future to honor morality, protect privacy, respect religious conviction, restrain government overreach, and strengthen—not weaken—education, we must engage now. This is not just a technology story. It is a story about the preservation of truth, freedom, and the values that make civilization possible.