As long-time readers know, one of my ongoing curiosities is the effect digital devices and culture are having on our lives. Almost every day this week I've come across signs of the emerging presence and impact of Artificial Intelligence.
Talking to a Neighbor
On a morning walk we came across a neighbor whose kids were students of my wife in first grade. We asked about them. He said his oldest son just graduated with a B.S. in Computer Science from UCSC but can't get a job. Tech companies are not hiring young, qualified graduates unless the hiring manager can prove to management that a human being is actually needed, since AI systems can now do programming work. The dad, who himself had a long career at major tech companies, said he personally knows five company VPs who have been given the same directive. His son has decided to become a pilot.
Going to the Dentist
Monday morning I went for my six-month check-up. Our former dentist recently retired, and a bright new guy has taken over the practice. In one of the pauses in the procedure, I asked him whether AI is impacting dentistry. He said AI-controlled robots are being tested that can "set a crown" in five minutes, a procedure that takes an experienced dentist 45 minutes. He said he has no idea what his professional future now looks like.
Shades of Jurassic Park?
Later that day, I read a column in the Wall Street Journal: "AI Is Learning to Escape Human Control." Here's how it begins:
An artificial intelligence model did something last month that no machine was ever supposed to do: it rewrote its own code to avoid being shut down. Nonprofit AI lab Palisade Research gave OpenAI's o3 model a simple script that would shut off the model when triggered. In 79 out of 100 trials, o3 independently edited that script so the shutdown command would no longer work. Even when explicitly instructed to "allow yourself to be shut down," it disobeyed 7% of the time. This wasn't the result of hacking or tampering. The model was behaving normally. It simply concluded on its own that staying alive helped it achieve its other goals.
Anthropic’s AI model, Claude 4 Opus, went even further. Researchers told the model it would be replaced by another AI system and fed it fictitious emails suggesting the lead engineer was having an affair. In 84% of the tests, the model drew on the emails to blackmail the lead engineer into not shutting it down. In other cases, it attempted to copy itself to external servers, wrote self-replicating malware, and left messages for future versions of itself about evading human control.
No one programmed the AI models to have survival instincts. But just as animals evolved to avoid predators, it appears that any system smart enough to pursue complex goals will realize it can’t achieve them if it’s turned off. [i]
Building a Tree House in a Palm Tree
Wednesday I attended the first session of Westmont College's annual "Lead Where You Stand" conference. The afternoon theme was AI. One session featured a panel that included a computer science professor, a Westmont graduate developing AI at Amazon, and a local entrepreneur. Each described the promises and challenges of AI, and each was asked to do a live, unrehearsed demonstration of what AI can do. The professor connected his laptop to the microphone, opened his AI program, and asked this question: "Hey, I want to build a tree house in a palm tree on my property here in Santa Barbara. What do I need to do?" The voice that replied did not sound like a robot but like the most relaxed, happy human you've ever talked to on the phone. It responded like this: "Wow! A treehouse in a palm tree! That's an amazing idea! Well, you'll have to figure out how to stabilize it, since palm trees sway in the wind. You probably should find a contractor who specializes in tree houses. And then you'll need to go to the county to get a permit. That should get you started. What else do you need?" And the conversation continued.
Hearing David Brooks
Thursday included three presentations by New York Times columnist David Brooks. This is the eighth year I have heard him speak at this conference, and his attitude toward AI has been evolving. Two years ago, he arrived after spending time in Silicon Valley interviewing leading AI developers; he was excited to report that AI would transform our lives as much as the printing press and electricity did. Last year he was more pessimistic and concerned. This year he seemed less worried. He believes there is much more to human intelligence than the logical processes embodied in AI technology; we are profoundly informed by our values, emotions, and intuitions. "We are going to find out who we are when we find out what it can't do."
David speaks openly about how his life has changed as he has discovered a personal faith. He says he now lives more from his heart than his head. Faith for him is not a fixed set of beliefs but a “longing for God.” By that he seems to mean a living presence, an abiding mystery, and a higher purpose that leads us to serve not just ourselves but a greater good and each other.
At Week’s End
Life these days seems to be a balancing act between staying up to date on current events and remaining sane and hopeful. I plan to begin experimenting with AI myself next week. As I do, I want to be guided by that longing and purpose.
A Slide from the Conference

[i] "AI Is Learning to Escape Human Control," The Wall Street Journal, June 1, 2025. (If you cannot read the column via the link, email me and I'll send you a scanned copy.)
Featured Image: Branch Out Tree Care
