Artificial Intelligence as a Threat to Truth in Higher Education, Politics and Religion
By Rudy Barnes, Jr., July 19, 2025
AI, or artificial intelligence, first captured the public imagination in Hollywood, through special effects created for entertainment. Today AI is taking us beyond truth and even our imagination to where no one has gone before, and the consequences could be dire. Uncontrolled artificial intelligence is an existential threat to truth in higher education, politics and religion.
Most religions claim a monopoly on God’s truth, and politicians like Donald Trump have used AI-generated distortions to misrepresent the truth. The greatest danger lies in the form of AI known as artificial general intelligence, or AGI, which could slip beyond human control and dictate our concepts of truth. That is the frightening existential threat AGI poses to truth.
Most of America’s essential services already depend on the internet, making their control a matter of national security, and new waves of unprincipled hackers are available to the highest bidder. Few have contemplated where continued advances in cybernetics will lead us, but a self-perpetuating artificial intelligence seems to be within the realm of possibility.
There are good and bad sides to AI. The good side includes internet functions that improve our lives. The bad side includes manipulating facts with misrepresentations that escape human accountability and can control concepts of truth in education, religion and politics; AGI, in particular, sounds like a science fiction nightmare.
In America, every new advance--including AI--is measured by its economic impact. On the stock market, big tech companies (Alphabet, Meta, Apple, Amazon, Nvidia and Tesla) have used AI to make exorbitant profits, turning a bear market into a bull market; but there is disagreement over whether AI is good for the economy or an economic illusion that feeds the greed of investors.
To prevent AI excesses, laws against slander and libel should be expanded to prohibit misrepresentations of fact in the media (fake news), and even AI-faked conversations over the telephone. And AI distortions of fact in the media that fall beyond legal prohibitions should be condemned by all religions as immoral distortions of God’s truth.
History confirms that human intelligence is flawed. Edmund Burke was right when he observed that in a democracy people forge their own shackles. Pogo confirmed that truth when he said, “We have met the enemy and he is us.” If our human imperfections make us our own worst enemy, maybe AI can help us build a better world and save us from ourselves.
If a form of AGI evolves beyond human control, humans can either ignore it or hold it accountable and regain control of their destiny. Who knows? After more than 2,000 years, perhaps AI will confirm that the greatest commandment to love God and our neighbors of other races and religions as we love ourselves is God’s universal and eternal truth. But don’t count on that. After more than 2,000 years, the world hasn’t yet discerned God’s truth.
Notes:
On Deepfaking it: How America’s 2024 election collided with an AI boom, see https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30/.
On Artificial General Intelligence: can we avoid the ultimate existential threat? “AI is an existential threat because of the unlikelihood that humans in a democracy will be able to control AGI once it appears. AGI may intentionally or, more likely, unintentionally wipe out humanity or lock humans into a perpetual dystopia. Or a malevolent actor or demagogue like Trump could use AGI to enslave the rest of humanity--or even worse.”
Human nature makes AGI too risky for trial and error. “Since the inception of existential risk research, humanity has lost a lot of its naïve cheerfulness about landmark technological breakthroughs. While few in the past held the Luddite opinion that technological development is universally bad, the opposite view, namely that technological development is universally good, has been mainstream. Existential risk research implies we should reconsider that idea, as it consistently concludes that we run a high risk of extinction due to our own inventions. Instead, we should focus on making risk-gain trade-offs for innovations on a per-technology basis. At the same time, we should use a mix of technology and regulation to enhance safety instead of increasing risk, and AI Safety is a prime example of that. In the past, we used trial-and-error science and technology to manage nature’s challenges. The big question now is: can we use caution, reason, restraint, coordination, and safety-enhancing technology to address the existential risks stemming from our own inventions?” See OECD Network of Experts on AI (ONE AI) at https://oec./wonk/existential-threat.
On how Wall Street has moved a bear market to a bull market in a bear economy, see https://www.cnn.com/2023/06/09/investing/bull-market-artificial-intelligence/index.html.
AI has spawned numerous efforts to capitalize on its enticing illusions. One self-made millionaire has recommended ways to use AI to make thousands of dollars a month in passive income--with less than $100. In selling AI on the internet, greed trumps truth. See https://www.cnbc.com/2023/06/12/self-made-millionaire-shares-how-to-use-ai-to-make-thousands-of-dollars-a-month-in-passive-income.html.
On the religious dimension of truth, see What Is Truth?
http://www.religionlegitimacyandpolitics.com/2015/08/what-is-truth.html.