Saturday, November 25, 2023

Musings on Entering a New Era of A.I. Without Understanding its Potential Dangers

By Rudy Barnes, Jr., November 25, 2023


I don’t pretend to understand the potential of Artificial Intelligence (A.I.)--for good or for ill.  I have heard only its plaudits and its dangers, and for me, the dangers seem to outweigh the potential good.  In time we’ll find out, since A.I. has been embraced by the capitalism that has shaped our culture.  A.I. will likely redefine both our capitalism and our culture, but we don’t yet know how.


Will A.I. undermine the essence of ethics and morality by eliminating their human restraints, or will it open the door to creativity and a new era of prosperity?  It seems we are already past the point of limiting it, leaving any restraints on A.I. to the marketplace.  That scares me.  I understand that capitalism is ingrained in our democracy, but our freedom requires restraints.


Is it really possible for A.I. to defy control by its human creators?  It sounds like science fiction, but it has become a very real fear for many, including me--a retired lawyer and pastor.  I worry that it could enable the Ayn Rand faithful who run Wall Street to ignore the altruistic moral standards that have kept American capitalism in check.


A.I. is relevant (and perhaps critical) to the effectiveness of traditional religious and secular moral standards of legitimacy as they relate to politics and the economy.  Humanity’s control over its creations is the issue, and the jury remains out.  Imagine a world in which major decisions are made by automatons rather than humans.


Ezra Klein has concluded, “I don’t mean to be too pessimistic; but if the capabilities of these [A.I.] systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.”


The fundamental risk of A.I. is in giving free-market capitalism control over moral decisions that have traditionally been made by humans.  Religion, law, moral standards, and politics have long helped humanity control its destiny.  As Klein points out, if A.I. can override human controls, there is no “off switch.”


The danger of a new era of A.I. is that we may have already begun a journey into unknown territory with no opportunity to turn back.  Since the beginning of history, humans have controlled their mechanical instruments.  If humans ever forfeit control over their creations, our concepts of religion, law, moral standards of legitimacy, and politics must be redefined.


The potential dangers of A.I. involve matters of religion, philosophy, and politics; and I oppose the idea that capitalism, with greed as its primary motivation, should be the means of resolving the complex issues of A.I.  All religions and civic organizations in America should contribute to a national discussion of the potential dangers of A.I.



Notes:


Ezra Klein has opined: “Science fiction writers and artificial intelligence researchers have long feared the machine you can’t turn off.  As a powerful A.I. is developed, its designers are thrilled, then unsettled, then terrified. They go to pull the plug, only to learn the A.I. has copied its code elsewhere, perhaps everywhere.

At the heart of OpenAI is — or perhaps was — a mission. A.I. was too powerful a technology to be controlled by profit-seeking corporations or power-seeking states. It needed to be developed by an organization, as OpenAI’s charter puts it, ‘acting in the best interests of humanity.’ OpenAI was built to be that organization. It began life as a nonprofit. When it became clear that developing A.I. would require tens or hundreds of billions of dollars, it rebuilt itself around a dual structure, where the nonprofit — controlled by a board chosen for its commitment to OpenAI’s founding mission — would govern the for-profit, which would raise the money and commercialize the A.I. applications necessary to finance the mission.

‘It would be wise to view any investment in OpenAI Global LLC in the spirit of a donation,’ it warned. The company then went out and raised tens of billions of dollars from people who did not see their investments as donations. That certainly describes Microsoft, which invested more than $13 billion in OpenAI, owns 49 percent of the for-profit, put OpenAI’s technologies at the center of its product road map and has never described its strategy as philanthropic.

Ensuring that A.I. serves humanity was always a job too important to be left to corporations, no matter their internal structures. That’s the job of governments, at least in theory. On Oct. 30, the Biden administration released a major executive order ‘On the Safe, Secure and Trustworthy Development and Use of Artificial Intelligence.’ Broadly speaking, I’d describe this as an effort not to regulate A.I. but to lay out the infrastructure, definitions and concerns that will eventually be used to regulate A.I. For now, the order mostly calls for reports and analyses and consultations. But all of that is necessary to eventually build a working regulatory structure. Even so, this quite cautious early initiative met outrage among many in the Silicon Valley venture-capital class who accused the government of, among other things, attempting to ‘ban math,’ a reference to the enhanced scrutiny of more complex systems. Two weeks later, Britain announced that it would not regulate A.I. at all in the short term, preferring instead to maintain a ‘pro-innovation approach.’ The European Union’s proposed regulations may stall on concerns from France, Germany and Italy, all of whom worry that the scrutiny of more powerful systems will simply mean those systems are developed elsewhere.

I don’t mean to be too pessimistic. If A.I. develops as most technologies develop — in an incremental fashion, so regulators and companies and legal systems can keep up (or at least catch up) — then we’re on a good track. But if the capabilities of these systems continue to rise exponentially, as many inside the industry believe they will, then nothing I’ve seen in recent weeks makes me think we’ll be able to shut the systems down if they begin to slip out of our control. There is no off switch.” See Ezra Klein, “The Unsettling Lesson of the OpenAI Mess,” The New York Times, November 22, 2023, at https://www.nytimes.com/2023/11/22/opinion/openai-sam-altman.html.


David Brooks has opined, “The people in A.I. seem to be experiencing radically different brain states all at once. I’ve found it incredibly hard to write about A.I. because it is literally unknowable whether this technology is leading us to heaven or hell, and so my attitude about it shifts with my mood.

A.I. is a field that has brilliant people painting wildly diverging but also persuasive portraits of where this is going. Organizational culture is not easily built but is easy to destroy. The literal safety of the world is wrapped up in the question: Will a newly unleashed Altman preserve the fruitful contradiction, or will he succumb to the pressures of go-go-go?” See https://www.nytimes.com/2023/11/23/opinion/sam-altman-openai.html.


