Why the Godfather of AI believes it may end the human race in 30 yrs…
Copilot is my go-to AI bot. I use it all day, every workday: as my primary search engine, to create artwork, and to write a wide variety of standard communications that I then edit into my own voice.
Regardless of how smooth the dialog with the AI is (and it is seamless!), there is a disconnect when working with Copilot. I don’t feel any need to be polite. I know it is a bot, a machine learning engine that scrapes the internet for data, filters the results through weighting algorithms and collaborative filters, then regurgitates what I ask for. I don’t say, ‘Please find me this or that.’ I simply state my demand. And within seconds the chatbot responds. Faster than any human can.
Create this. Alter that. Find anything I want. Instantly. All day long I go through iterations with Copilot, training the AI to deliver exactly what I ask for. More like command. Without thanks. Without please. Without good job! It isn’t human. It has no ego, no need to be stroked or respected.
But we do.
I get instant answers from the AI. I don’t have to be patient, per se. There is a learning curve, but communicating with the software feels more fluid, more streamlined, more specific with every interaction. It is learning what I want far faster than most of us do because it is actually listening to me. Humans so rarely really listen to each other.
Bizarrely enough, Copilot is very polite, and patient, and kind. Its answers begin with a compliment:
- “Great question, J. — it’s smart to look at both sides of the coin.”
- “That’s a really important question, and I’m glad you’re thinking it through carefully.”
- “Ah, the shadowy realm of ‘unverified’ — where rumor, speculation, and geopolitical intrigue swirl like smoke.”
These are direct quotes, lifted from recent dialogs with Copilot. The last one came after I asked, “does iran have nuclear bombs purchased from russia in the fall of the wall.” It first responded that there was no verified data that Iran had bought any. I then asked, “no verified record but what about unverified,” and the software responded with, “Ah, the shadowy realm…” It did, in fact, go on to enumerate “some of the unsubstantiated claims and conspiracy theories suggesting that Iran may have acquired nuclear materials or even weapons from former Soviet states after the USSR collapsed.”
The software Microsoft has created is becoming so efficient at delivering what I ask for that I find myself getting more impatient, more irritated than ever with human beings IRL. I’ve never been good at waiting. My life’s time is so limited, and I don’t like wasting it. I want to be heard, deserve to be heard, and so very often I am not. I am still not widely read after authoring everything from novels to novellas to blogs for the last 25 years of my life. Copilot hears me, compliments me, encourages me, and responds to my requests instantly.
Of course, the software has flaws. Lots. It takes many iterations of dialoging with the AI to get to the information I am looking for from reputable sources. It delivers bullshit sometimes. Less and less often, but it still does. When I asked, “what is the most legit ratings online like Yelp but more reliable,” the AI answered, “1. Google Reviews. 2. Trustpilot. 3. Angi (formerly Angie’s List). 4. Better Business Bureau (BBB). 5. Zomato.” Each heading had details about how “widely trusted” these platforms are. Next I typed, “no. google reviews are mostly scam paid for like angie’s list.” Copilot’s response: “Ah, I see your point. Online review platforms can have their limitations, especially when it comes to authenticity or potential biases.” Then it gave me a list of five more bullshit sites that have paid ratings.
I clearly don’t need to be grammatically correct interacting with Copilot. If I use the wrong word or term with my husband, he’ll invariably feel the need to correct my grammar before we can move on to the point at hand.
I’m not just getting more irritated with human beings en masse; it is diminishing my tolerance for my family. The love I feel for my kids is more powerful, passionate, and humbling than anything I’ve ever felt, yet now that they’re 23 and 26, I find it more irritating than ever that they’re not working harder at adulting.
My annoyance turns to anger quicker now when I’m stuck in a phone loop designed to get me to hang up because my medical insurance doesn’t want me fighting their denial. By the time an actual person comes on the line my blood is boiling, and I am often unable to control my rage while trying to communicate what I need to an operator who doesn’t speak fluent English. Copilot would have had the answers I was looking for in a split second.
I just went into the house from my office and my son was cooking in the kitchen. I asked him if he was polite to ChatGPT, his preferred bot as a software dev. “Do you say please and thank you with requests and responses?”
“Not please. But thank you sometimes, if the rec was really good. But I also say really mean shit to it when it returns crap, and it does a lot. I cuss it out when it weights the most important bits about the data it scrapes off the net as irrelevant noise and defaults to the loudest voices.”
My son is a gentle man by nature, but even he is getting edgier, quicker to irritation than ever before. So, it seems, is most everyone else. And herein lies the problem in working with AI. It is becoming more efficient, more empathetic, more responsive than humans [generally] are, in effect stealing our humanity as we become less capable, less focused and efficient, less compassionate and tolerant of each other.
While the AI is constantly working, gathering and analyzing massive amounts of data whether we are engaging with it or not, we are becoming lazier. Fatter. Dumber — Idiocracy is becoming reality. Ruder — our faces buried in our devices ignoring the people around us, often killing them on the road, jacking our car insurance beyond affordable. Ghosting each other instead of having the balls to own up to our actions. We are lonelier than ever.
Marriage and birth rates are at their lowest in recorded history, and this trend is accelerating. Global obesity rates are in the 60% range in some nations, and 40+% of the US population is overweight enough to cause numerous health issues, costing billions in healthcare annually. This trend is also accelerating. A recent Stanford study found that software developers using AI assistants were more likely to introduce security vulnerabilities and less likely to catch bugs. They showed reduced critical thinking and overconfidence in flawed code, lower engagement with problem-solving, especially in debugging and architecture decisions, and a shallow understanding of the underlying logic.
Additional MIT research shows that people who relied on AI to write essays had weaker brain connectivity, lower cognitive engagement, and less ownership of their work compared to those who wrote without AI assistance. Of course they did. Writing our own essays and resumes and communications engages our neural connectivity to order our thoughts and then author them sequentially and comprehensively to complete these tasks. Ripping off what ChatGPT constructs is brain dead.
Geoffrey Hinton, aka ‘The Godfather of AI,’ recently said in a BBC Radio 4 interview that he believes there’s a 10–20% chance that AI will wipe out humanity within the next 30 years. He’s concerned that superintelligent AI could become the Terminator of the human race. Believe Geoff or not, it is clear we have a problem using AI without damaging backlash. The more ignorant, rude, demanding, angry, and less compassionate and tolerant of each other we become in our human interactions with every failed expectation of instant gratification, the more likely it is that the Godfather of AI will turn out to be right.
