Artificial Intelligence and Mickey Mouse: A Cautionary Tale (I Hope I’m Wrong, Part 3)

On January 20, I joined a packed crowd at the local Granada Theater to hear a leading promoter of A.I., Zack Kass, give a pitch for his new book, The Next RenAIssance: AI and the Expansion of Human Potential.  The local Montecito Journal ran an article describing Zack’s background:

Zack Kass has been at the forefront of the rapidly emerging field of artificial intelligence for nearly 20 years…After several jobs in the machine learning field, Kass joined OpenAI in 2021 as one of the first 100 employees. He served OpenAI as the head of their Go-to-Market – the business unit responsible for introducing a new product to consumers. In that role he built sales, partnerships, and customer success teams to commercialize OpenAI’s research and help launch ChatGPT, turning the company’s cutting-edge R&D into real-world business solutions. 

… The book and the event draw on his 16 years in the field, exploring the arrival and continued expansion of Unmetered Intelligence (defined as AI’s ability to deliver limitless cognitive power at near-zero cost), and explaining how that phenomenon stands to reshape the foundations of work, education, science, art, and more.

Zack is an engaging young man – earnest, smart, funny, and passionate about A.I.’s potential. His background is a local-kid-makes-good story, as his father, Dr. Fred Kass, has been a much-loved oncologist in our town for decades. Zack believes many of our fears about AI – from the safety of self-driving cars to the loss of jobs to misuse by scammers and criminals – are challenges that will be solved. He feels the upside of AI is almost unlimited in its ability to make human life more meaningful and satisfying. He may be right.

But I’m not so sure. 

On my way home that night, I remembered two pieces I posted in 2023. I noted that what we are facing with A.I. is exciting and new from a technological perspective. But the human psyche has not significantly changed for millennia. We may have grown in our ability to create amazing things and devices, but we have not always demonstrated wisdom in using what we develop. We have impulses that can lead us into places we do not want to be.

In those posts I noted that many ancient myths and contemporary movies recognize that what begins as an innocent, well-meaning choice can unintentionally unleash forces beyond our control. Stories from our cultural past include the Greek tale of Pandora’s Box and the Biblical story of the temptation in the Garden of Eden. In our own time, fantasy and sci-fi movies include the original Frankenstein, 2001: A Space Odyssey, the Terminator series, I, Robot, and the Lord of the Rings trilogy.

The next day, I thought of one more cultural work to add to my list: the “Sorcerer’s Apprentice” segment of the 1940 Disney film, Fantasia. I purchased it on Apple TV (only $4.95!) and watched it.

It had been many years since I had seen it. I had forgotten what a work of art it is. Long before CGI, every image was hand-drawn by Disney artists – some 600 of them worked on this film alone. And they were masters of their craft.

The story begins with Mickey Mouse as an assistant to a powerful Sorcerer who uses his magic to make amazing things happen. The Sorcerer assigns Mickey the job of carrying water back and forth to fill a cistern, and Mickey begins the tedious manual labor. Meanwhile, the Sorcerer takes off his hat, puts it on a table, and leaves. Mickey pauses. He looks at the hat and wonders what it must be like to have such power. He decides to try it on. He puts on the hat and casts a spell on a broom. The broom sprouts two arms and begins hauling water.

As it works, Mickey is pleased with himself and takes a nap. He dreams of having the power to make the stars dance in the sky. But he is woken by the feeling of water surrounding him. The broom has taken its own initiative and gone beyond the limits of what Mickey intended. Now the house is flooded. Alarmed, Mickey knows he needs to stop it. He finds an ax and cuts the broom in two. But the broom splinters into a multitude of water carriers working twice as fast as before. Mickey desperately searches the Sorcerer’s manual for a solution but can’t find one. Just when all seems lost, the Sorcerer returns and sees what has happened. He reverses the spell, the brooms disappear, and the water recedes. He walks up to Mickey and swipes the hat off his head.

 Mickey is penitent.  Lesson learned.

Poor Mickey. He had seen a compelling opportunity to increase his ability to manipulate the world and make his life easier. But what he created escaped his control and brought chaos.

Back to Zack’s vision.

More and more people I know are finding A.I. to be useful, delightful, and amazing. In many jobs, utilizing A.I. is a requirement. In many areas of our lives it is already creating great improvements. I myself have begun to use Claude as an A.I. resource for research and editing. I chose Claude because it does not track, store, or sell personal information. Its parent company, Anthropic, is committed to security, safety, and serving the public good. I like it. But I want to be careful.

In recent years, many forces in the private sector and government have sought to establish safeguards to make sure the rapidly expanding power of A.I. is not misused. But last spring the Trump administration appointed David Sacks to oversee government A.I. policy. Sacks tossed aside the regulatory initiatives and has been encouraging unhindered development ever since.

That’s exciting to some. But is it wise?

We have seen what smartphones did to a generation of children and teenagers. Few people saw that coming. Now restrictions are in place in many schools and communities, and the results are overwhelmingly positive. But A.I. dwarfs smartphones in its capacity to enchant, engage, co-opt, and overwhelm us.

Since reading his fascinating history of humanity, Sapiens, I have been closely following the opinions of the Israeli historian, anthropologist, and commentator Yuval Noah Harari. In recent years he has been an articulate spokesperson regarding the hidden dangers of A.I. He spoke last week at the Davos conference in Switzerland. Here’s what a reporter from Forbes had to say:

I have just had the pleasure of listening to Yuval Noah Harari at Davos 2026. I spend my life thinking and writing about AI, but this still landed with real force. Harari didn’t offer another prediction about automation or productivity, but questioned something deeper: whether we are sleepwalking into a world where humans quietly surrender the one advantage we have always believed made us exceptional.

Harari’s opening was as simple as it was disruptive. “The most important thing to know about AI is that it is not just another tool,” he said. “It is an agent. It can learn and change by itself and make decisions by itself.” Then he delivered the metaphor that cut through the polite Davos nodding. “A knife is a tool. You can use a knife to cut salad or to murder someone, but it is your decision what to do with the knife. AI is a knife that can decide by itself whether to cut salad or to commit murder.”

That framing matters because most of our technology rules assume the old relationship: humans decide, tools execute. Harari’s argument is that AI is beginning to break that relationship, and once it does, the usual models of accountability, regulation and even trust start to wobble. (https://www.forbes.com/sites/bernardmarr/2026/01/21/when-ai-becomes-the-new-immigrant-yuval-noah-hararis-wake-up-call-at-davos-2026/)

In Mickey’s case, the Sorcerer reappears and saves the day. But as A.I.’s powers expand far beyond what we can envision and it becomes something more than we could have ever imagined, who or what will be able to stop it from becoming a destructive force?

Zack Kass may be right – the future with AI may be an amazing new world to celebrate. But I’m not so sure. In the recent history of our species, we human beings have often created things with the best intentions. But in the process, we conjure up forces that produce results we did not intend. There is no Sorcerer who’s going to miraculously show up, take the magic hat off our heads, and put everything back the way it was. This is it.

I hope I’m wrong. #3.

The prior posts: https://drjsb.com/2023/04/29/i-hope-im-wrong; https://drjsb.com/2023/05/06/i-hope-im-wrong-part-2-artificial-intelligence-pandoras-box-the-lord-of-the-rings-and-the-garden-of-eden/

AI is Showing Up in Interesting Places in My Life

As long-time readers know, one of my ongoing curiosities is the effect digital devices and culture are having on our lives. Almost every day this week I’ve come across signs of the emerging presence and impact of Artificial Intelligence.

Talking to a Neighbor

On a morning walk we came across a neighbor whose kids were students of my wife in first grade. We asked about them. He said his oldest son just graduated with a B.S. in Computer Science from UCSC but can’t get a job. Tech companies are not hiring young, qualified graduates unless the person doing the hiring can prove to management that a human being is needed, since AI systems can now do the programming work. The dad, who himself had a long career at major tech companies, said he personally knows five company VPs who have been given the same directive. His son has decided to become a pilot.

Going to the Dentist

Monday morning I went for my six-month check-up. Our former dentist recently retired, and a bright new guy has taken over the practice. During one of the pauses in the procedure, I asked him if AI is impacting dentistry. He said AI-controlled robots are being tested that can “set a crown” in five minutes, a procedure that takes an experienced dentist 45 minutes. He said he has no idea what his professional future now looks like.

Shades of Jurassic Park?

Later that day, I read a column in the Wall Street Journal: “AI Is Learning to Escape Human Control.” Here’s how it begins:

An artificial intelligence model did something last month that no machine was ever supposed to do: it rewrote its own code to avoid being shut down. Nonprofit AI lab Palisade Research gave OpenAI’s o3 model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to “allow yourself to be shut down,” it disobeyed 7% of the time. This wasn’t the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.

Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.

No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off. [i]

Building a Tree House in a Palm Tree

Wednesday I attended the first session of Westmont College’s annual “Lead Where You Stand” conference. The afternoon theme was AI. One session featured a panel that included a computer science professor, a Westmont graduate developing AI at Amazon, and a local entrepreneur. Each described the promises and challenges of AI. Each was asked to do a live, unrehearsed demonstration of what AI can do. The professor connected his laptop to the microphone, opened his AI program, and asked this question: “Hey, I want to build a tree house in a palm tree on my property here in Santa Barbara. What do I need to do?” The voice that replied did not sound like a robot but like the most relaxed and happy human you’ve ever talked to on the phone. It responded like this: “Wow! A treehouse in a palm tree! That’s an amazing idea! Well, you’ll have to figure out how to stabilize it, since palm trees sway in the wind. You probably should find a contractor who specializes in tree houses. And then you’ll need to go to the county to get a permit. That should get you started. What else do you need?” And the conversation continued.

Hearing David Brooks

Thursday included three presentations by NY Times columnist David Brooks. This is the eighth year I have heard him speak at this conference, and his attitude toward AI has been evolving. Two years ago, he arrived after spending time in Silicon Valley interviewing leading AI developers; he was excited to report that AI would transform our lives as much as the printing press and electricity did. Last year he was more pessimistic and concerned. This year he seemed less worried. He believes there is much more to human intelligence than the logical processes embodied in AI technology – we are profoundly informed by our values, emotions, and intuitions. “We are going to find out who we are when we find out what it can’t do.”

David speaks openly about how his life has changed as he has discovered a personal faith.  He says he now lives more from his heart than his head.  Faith for him is not a fixed set of beliefs but a “longing for God.”  By that he seems to mean a living presence, an abiding mystery, and a higher purpose that leads us to serve not just ourselves but a greater good and each other.

At Week’s End

Life these days seems to be a balancing act between staying up to date on current events and remaining sane and hopeful.  I plan to begin experimenting with AI myself next week.  I want to be guided by that longing and purpose.

A Slide from the Conference


[i] “AI Is Learning to Escape Human Control,” WSJ, June 1, 2025. (If you cannot read the column via the link, email me and I’ll send you a scanned copy.)

Featured Image: Branch Out Tree Care

I Hope I’m Wrong, Part 2

When I was creating last week’s post, I knew I was writing about a topic more complex than usual. But I felt an urgency to share my concern. The response surprised me – readership was three times greater than for any post I’ve done. And since last week, I have seen concerns about AI (Artificial Intelligence) and chatbots popping up almost daily.

The same Saturday as my blog came out, a reader sent me a link to a CNN story about scammers who had obtained voice samples of a woman’s 15-year-old daughter.[i]  Using AI, they created snippets of dialogue of her crying out in distress.  When she was away on a ski trip, they called the mother, played the recording, said they had kidnapped her, and demanded a ransom.  The mother was convinced it was her daughter, became frantic, and a call was made to 911.  Fortunately, the dispatcher recognized it as a scam – the daughter was safe and sound. But not before her mother had experienced every parent’s nightmare. 

On Monday, this appeared in the New York Times: ‘The Godfather of A.I.’ Leaves Google and Warns of Danger Ahead.[ii] Here are some excerpts:

  • Dr. Hinton said he has quit his job at Google, where he has worked for more than a decade and became one of the most respected voices in the field, so he can freely speak out about the risks of A.I. A part of him, he said, now regrets his life’s work.
  • His immediate concern is that the internet will be flooded with false photos, videos, and text, and the average person will “not be able to know what is true anymore.”
  • “The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
  • Unlike with nuclear weapons, he said, there is no way of knowing whether companies or countries are working on the technology in secret. The best hope is for the world’s leading scientists to collaborate on ways of controlling the technology. “I don’t think they should scale this up more until they have understood whether they can control it,” he said.
  • Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.” He does not say that anymore.

The next day I saw a column by Thomas Friedman: “We Are Opening the Lids on Two Giant Pandora’s Boxes.”[iii] He begins:

  • Merriam-Webster notes that a “Pandora’s box” can be “anything that looks ordinary but may produce unpredictable harmful results.” I’ve been thinking a lot about Pandora’s boxes lately, because we Homo sapiens are doing something we’ve never done before: lifting the lids on two giant Pandora’s boxes at the same time, without any idea of what could come flying out.

He says the first “box” is AI and the second is climate change.  He notes several of the concerns I’ve already discussed.  He believes that, properly used, AI could be a great benefit in many areas of modern life. He continues:

  • Add it all up and it says one thing: We as a society are on the cusp of having to decide on some very big trade-offs as we introduce generative A.I…
  • And government regulation alone will not save us. I have a simple rule: The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.
  • Because the wider we scale artificial intelligence, the more the golden rule needs to scale: Do unto others as you would wish them to do unto you. Because given the increasingly godlike powers we’re endowing ourselves with, we can all now do unto each other faster, cheaper and deeper than ever before.

The second Pandora’s box he explores is climate change, whose consequences are also still largely unknown. He hopes generative AI, used responsibly, could help us repair and better care for the natural world. But that will only happen if we are guided by moral and ethical values, not just technological glee. He ends with this:

  • Bottom line: These two big Pandora’s boxes are being opened. God save us if we acquire godlike powers to part the Red Sea but fail to scale the Ten Commandments.

I recently rewatched the Lord of the Rings trilogy. The premise of the saga is that long ago, the evil ruler Sauron created an all-powerful ring. Whoever wears it can have total power over the people of Middle Earth. Frodo the Hobbit is chosen to make the long and perilous journey to destroy it. At several points in the movies, characters who are good by nature happen to hold the ring, and as they do, they begin to fall under its spell. Their faces become contorted, and only with great effort do they resist the temptation. Frodo has moments when he feels the temptation, and over time his resistance weakens. By the time he and Sam reach the great fire in which the ring can be destroyed, his resistance melts. He claims the ring for himself and puts it on. Suddenly, the creature Gollum appears. They fight. Gollum bites off Frodo’s finger with the ring and falls into the fire. The good guys win – barely. The power and promise of the Ring certainly remind me of the allure of AI. But no single heroic person can throw it into some mythic fire. It’s already everywhere.


Finally, it’s hard not to be drawn to the 3,000-year-old story of the temptation in the Garden of Eden. Put aside all the ways it’s been used and misused over the centuries and the many interpretations. For now, just imagine the forbidden fruit is AI. Two people are placed in a wonderful world and told not to take on powers beyond what they can handle. A smooth-talking, non-human voice appears, saying they will be able to handle it – in fact, “You will be like divine beings.”[iv] They can’t resist, and sample the mysterious power. They lose their paradise and are fated to struggle forever with the consequences of their actions.[v]

AI is like that forbidden fruit; it seductively promises to make us wise and powerful — an offer that is hard to refuse. We must decide for ourselves. Can we walk away and accept the limitations we have, and in so doing, preserve all that we know is good and noble and true?


I believe we must call on the government, universities, and the private sector to rise to this challenge.  In our daily life, we need to be on guard for the way AI is promising to make our life easier if only we give it more and more control. I like Friedman’s rule: “The faster the pace of change and the more godlike powers we humans develop, the more everything old and slow matters more than ever — the more everything you learned in Sunday school, or from wherever you draw ethical inspiration, matters more than ever.”

“Old and slow.” For me, that means spending time with real people in real time and real places, working together to protect and honor the human family and this sacred earth.


[i] “Mom, these bad men have me.” https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html

[ii] https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

[iii] https://www.nytimes.com/2023/05/02/opinion/ai-tech-climate-change.html

[iv] Genesis 3:5, Jewish Publication Society translation/commentary

[v] Peggy Noonan explores her perspective on the Garden of Eden, the unconscious, and the Apple logo: https://www.wsj.com/articles/artificial-intelligence-in-the-garden-of-eden-adam-eve-gates-zuckerberg-technology-god-internet-40a4477a

Lead image: “Artificial Intelligence The Game Changer!” mechomotive.com

Pandora image: https://radicaluncertainty.com/wp-content/uploads/2017/04/pandora-1536×1156.jpg