Friday, June 16, 2023

Musings on Artificial Intelligence as a Deepfaking Fad or as an Existential Threat

By Rudy Barnes, Jr., June 17, 2023


It all began in Hollywood, where special effects were created for entertainment purposes.  Today artificial intelligence (AI) is taking us beyond our imagination, to where no one has gone before, and the consequences could be dire.  The very idea of creating an uncontrolled superior intelligence is an existential threat to humankind, even beyond that of nuclear weapons.


The 2024 election cycle will preview deepfaking AI that misrepresents truth, but we’ve seen AI’s distortions before and can accept the challenge of finding the truth.  It’s the idea that a superior form of intelligence beyond human control could control our destiny that’s the frightening existential threat.  It’s known as artificial general intelligence, or AGI.


Most of America’s essential services are already controlled by the internet, making their control a matter of national security, and there are new waves of unprincipled hackers available to the highest bidder.  Few have contemplated where continued advances in cybernetics will lead us, but a self-perpetuating artificial intelligence seems to be within the realm of possibility.


There are good and bad sides to AI and AGI.  The good side includes internet functions that improve our lives.  The bad side includes manipulating facts with fake news, and functions that go beyond human accountability in controlling our destiny.  Scientific speculation about where AGI can take us sounds like something out of a science-fiction nightmare.


In America, every new advance--including AI--is measured by its economic impact.  On the stock market, big tech companies (Alphabet, Meta, Apple, Amazon, Nvidia, and Tesla) have used AI to make exorbitant profits, turning a bear market into a bull market; but there is disagreement over whether AI is good for the economy, or an economic illusion based on the greed of investors.


To prevent AI excesses, laws against slander and libel should be expanded to prohibit misrepresentations of fact in the media (fake news), and even faked telephone conversations.  And AI distortions of fact in the media that fall beyond legal prohibitions should be condemned by all religions as immoral distortions of God’s truth.


History confirms that human intelligence is flawed, and that Edmund Burke was right when he observed that in a democracy the people forge their own shackles.  Pogo confirmed that truth when he said, “We have met the enemy and he is us.”  If our human imperfections make us our own worst enemy, maybe AGI can help us build a better world and save us from ourselves.


If a form of AGI evolves beyond human control, so be it.  Humans can either ignore it or learn to live with it while seeking to regain control of their destiny.  Who knows?  After more than 2,000 years, perhaps AI and AGI will confirm that the greatest commandment to love God and our neighbors of other races and religions as we love ourselves is God’s universal and eternal truth.  But don’t hold your breath; these days the world doesn’t seem able to discern God’s truth. 



Notes:

 

On “Deepfaking it: How America’s 2024 election collides with an AI boom,” see https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30/?utm.


On Artificial General Intelligence: can we avoid the ultimate existential threat?  “AI is an existential threat because of the unlikelihood that humans will be able to control an AGI/ASI once it appears. The AI may intentionally or, more likely, unintentionally wipe out humanity or lock humans into a perpetual dystopia. Or a malevolent actor may use AI to enslave the rest of humanity, or even worse.”

Human nature makes AGI too risky for trial and error.  “Since the inception of existential risk research, humanity has lost a lot of its naïve cheerfulness about landmark technological breakthroughs. While few in the past held the Luddite opinion that technological development is universally bad, the opposite view, namely that technological development is universally good, has been mainstream. Existential risk research implies we should reconsider that idea, as it consistently concludes that we run a high risk of extinction due to our own inventions. Instead, we should focus on making risk-gain trade-offs for innovations on a per-technology basis. At the same time, we should use a mix of technology and regulation to enhance safety instead of increasing risk, and AI Safety is a prime example of that. In the past, we used trial-and-error science and technology to manage nature’s challenges. The big question now is: can we use caution, reason, restraint, coordination, and safety-enhancing technology to address the existential risks stemming from our own inventions?” See https://oecd.ai/en/wonk/existential-threat.


On how Wall Street has moved a bear market to a bull market in a bear economy, see https://www.cnn.com/2023/06/09/investing/bull-market-artificial-intelligence/index.html.


AI has spawned numerous innovative efforts to capitalize on enticing AI illusions.  One self-made millionaire has recommended ways to use AI to make thousands of dollars a month in passive income--with less than $100.  In selling AI on the internet, greed trumps truth.  See https://www.cnbc.com/2023/06/12/self-made-millionaire-shares-how-to-use-ai-to-make-thousands-of-dollars-a-month-in-passive-income.html.


On the religious dimension of truth, see What Is Truth? at http://www.religionlegitimacyandpolitics.com/2015/08/what-is-truth.html.


