I Hope I’m Wrong, Part 2

When I was writing last week’s post, I knew I was tackling a topic more complex than usual, but I felt an urgency to share my concern. The response surprised me – readership was three times greater than any post I’ve done. And since then, concerns about AI (Artificial Intelligence) and chatbots have been popping up almost daily.

The same Saturday my blog came out, a reader sent me a link to a CNN story about scammers who had obtained voice samples of a woman’s 15-year-old daughter.[i] Using AI, they created snippets of the girl crying out in distress. While the daughter was away on a ski trip, they called the mother, played the audio, claimed they had kidnapped her, and demanded a ransom. The mother was convinced it was her daughter, became frantic, and called 911. Fortunately, the dispatcher recognized it as a scam – the daughter was safe and sound. But not before her mother had experienced every parent’s nightmare.

On Monday, this appeared in the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.[ii] Here are some excerpts:

  • “Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.”
  • “His immediate concern is that the internet will be flooded with false photos, videos, and text, and the average person will ‘not be able to know what is true anymore.’”
  • “‘The idea that this stuff could actually get smarter than people — a few people believed that,’ he said. ‘But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.’”
  • “Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. ‘I don’t think they should scale this up more until they have understood whether they can control it,’ he said.”
  • “Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: ‘When you see something that is technically sweet, you go ahead and do it.’ He does not say that anymore.”

The next day I saw a column by Thomas Friedman: “We Are Opening the Lids on Two Giant Pandora’s Boxes.”[iii] He begins:

  • Merriam-Webster notes that a “Pandora’s box” can be “anything that looks ordinary but may produce unpredictable harmful results.” I’ve been thinking a lot about Pandora’s boxes lately, because we Homo sapiens are doing something we’ve never done before: lifting the lids on two giant Pandora’s boxes at the same time, without any idea of what could come flying out.

He says the first “box” is AI and the second is climate change. He notes several of the concerns I’ve already discussed, yet he believes that, properly used, AI could be a great benefit in many areas of modern life. He continues:

  • Add it all up and it says one thing: We as a society are on the cusp of having to decide on some very big trade-offs as we introduce generative A.I…
  • And government regulation alone will not save us. I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.
  • Because the wider we scale artificial intelligence, the more the golden rule needs to scale: Do unto others as you would wish them to do unto you. Because given the increasingly godlike powers we’re endowing ourselves with, we can all now do unto each other faster, cheaper and deeper than ever before.

Climate change is the second Pandora’s box he explores, one whose consequences are also still largely unknown. He hopes generative AI, used responsibly, could help us repair and better care for the natural world. But that will only happen if we are guided by moral and ethical values, not just technological glee. He ends with this:

  • Bottom line: These two big Pandora’s boxes are being opened. God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.

I recently rewatched the Lord of the Rings trilogy. The premise of the saga is that long ago, the evil ruler Sauron created an all-powerful ring. Whoever wears it can have total power over the peoples of Middle Earth. Frodo the Hobbit is chosen to make the long and perilous journey to destroy it. At several points in the films, characters who are good by nature happen to hold the ring, and as they do so, they begin to fall under its spell. Their faces become contorted, and only with great effort do they resist the temptation. Frodo has moments when he feels the temptation, and over time his resistance weakens. By the time he and Sam reach the great fire in which the ring can be destroyed, his resistance melts. He claims the ring for himself and puts it on. Suddenly, the creature Gollum appears. They fight. Gollum bites off Frodo’s finger with the ring and falls into the fire. The good guys win – barely. The power and promise of the Ring certainly remind me of the allure of AI. But no single heroic person can throw it into some mythic fire. It’s already everywhere.


Finally, it’s hard not to be drawn to the 3,000-year-old story of the temptation in the Garden of Eden. Put aside all the ways it’s been used and misused over the centuries, and the many interpretations. For now, just imagine the forbidden fruit is AI. Two people are placed in a wonderful world and told not to take on powers beyond what they can handle. A smooth-talking, non-human voice appears, saying they will be able to handle it – in fact, “You will be like divine beings.”[iv] They can’t resist, and they sample the mysterious power. They lose their paradise and are fated to struggle forever with the consequences of their actions.[v]

AI is like that forbidden fruit; it seductively promises to make us wise and powerful — an offer that is hard to refuse. We must decide for ourselves. Can we walk away and accept the limitations we have, and in so doing, preserve all that we know is good and noble and true?


I believe we must call on government, universities, and the private sector to rise to this challenge. In our daily lives, we need to be on guard against the way AI promises to make life easier if only we give it more and more control. I like Friedman’s rule: “The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.”

“Old and slow.” For me, that means spending time with real people in real time and real places, working together to protect and honor the human family and this sacred earth.


[i] “Mom, these bad men have me.” https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html

[ii] https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

[iii] https://www.nytimes.com/2023/05/02/opinion/ai-tech-climate-change.html

[iv] Genesis 3:5, Jewish Publication Society translation/commentary

[v] Peggy Noonan explores her perspective on the Garden of Eden, the unconscious, and the Apple logo: https://www.wsj.com/articles/artificial-intelligence-in-the-garden-of-eden-adam-eve-gates-zuckerberg-technology-god-internet-40a4477a

Lead image: “Artificial Intelligence The Game Changer!” mechomotive.com

Pandora image: https://radicaluncertainty.com/wp-content/uploads/2017/04/pandora-1536×1156.jpg