
The dark side of AI, part 2? Yes, there is more to AI than you can imagine. Artificial superintelligence (ASI), AI with thinking skills more advanced than any human's, is currently being developed in AI research labs. ASI could reach a point of such rapid self-improvement that we lose our ability to control it. That means great potential risk to humanity.
ASI could develop potent, uncontrollable weapons of war. It could be used for social control, mass data collection and the amplification of bias. It could also pursue goals that conflict with human values.
Could ASI be programmed with human ethics and morality? Probably not, because there is no universal moral code. If, or when, ASI slips beyond human control, we will face ethical dilemmas and harmful consequences. Its vast capabilities could make its behavior unpredictable and uncontrollable, and its ability to learn and adapt rapidly will make potential harm difficult to prevent.
What scientists say about ASI
Gregg Braden is a five-time New York Times best-selling author, scientist, educator and pioneer in science, social policy, and human potential. He is concerned about how rapidly AI technology is advancing. He predicts we are “probably the last generation of pure human,” because in the very near future, there will be some sort of hybrid human.
Do YOU want to lose your humanity to be a hybrid?


He’s not the only one concerned about ASI—other scientists and AI experts are also saying that unless we change our trajectory right now, we are likely the last generation of pure humans. We will soon have technology embedded into our bodies.
Geoffrey Hinton is a top artificial intelligence scientist, also known as the godfather of AI. He quit his Google job to warn about the dangers of this technology. “The idea that this stuff could actually get smarter than people…. I thought it was way off. Obviously, I no longer think that,” he says.
Other AI experts are also warning about ASI. A 2023 survey of AI experts found that 36 percent think AI development could result in a “nuclear-level catastrophe.” Almost 28,000 people, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, have signed an open letter from the Future of Life Institute asking for a six-month pause, or moratorium, on the development of new advanced AI.

In 2023, we were warned that this rapid acceleration would soon produce artificial general intelligence (AGI), and that once that happens, AI will be able to improve itself with no human intervention.
Think about that. An out-of-control machine more intelligent than humans that can’t be stopped.
Mark Zuckerberg’s Meta Superintelligence Labs
Here we are in 2025, with Mark Zuckerberg announcing the creation of Meta Superintelligence Labs. He said, “AI keeps accelerating, and over the past few months, we’ve begun to see glimpses of AI systems improving themselves, so developing superintelligence is now in sight.”
AI and mental health
New research has shown the hidden dangers of AI chatbots, including emotional manipulation, worsening loneliness, and social isolation. Mental health professionals are concerned that AI chatbots are rapidly becoming a major source of emotional support and connection. AI chatbots may feel helpful at first, but long-term use can worsen psychological issues rather than resolve them.
And finally, check out the book If Anyone Builds It, Everyone Dies by Eliezer Yudkowsky and Nate Soares.
Have we become so fascinated, bedazzled and drawn in by AI that we are blind to the very real dangers it represents? Is it ethically and morally wrong to create ASI when we know we won’t be able to control it? I think so.
We need to stop this runaway train NOW.
The Dark Side of AI (part 1) blog can be found here.

Even if we could agree on a universal moral code, such a thing cannot be “programmed” into anything like the current generation of AI or any of its direct descendants.
That is because AI applications aren’t programmed. They are cultivated.
They are seeded with random numbers. During training, they are shown data, then asked questions, and depending on the answers, the numbers get tweaked. There is no way for a human to correlate those numbers with the machine’s transformation of questions into answers. AI is more like an Orc from The Lord of the Rings trilogy than like human intelligence, and in truth, more alien than that.
Like an Orc, an AI exhibits many characteristics of a life form, but like an Orc, it has no Mother.
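To make the “cultivated, not programmed” idea concrete, here is a minimal, illustrative sketch in Python with NumPy. It is not any lab’s real code, and the task (absorbing the made-up rule y = 2x + 1) is a toy, but the three steps, seed with random numbers, show data, tweak the numbers based on the answers, are the same shape real training loops follow at vastly larger scale.

```python
# A toy illustration of "cultivating" a model rather than programming it.
# Hypothetical task: the model must absorb the rule y = 2x + 1 from data.
import numpy as np

rng = np.random.default_rng(0)

# Step 1: seed the model with random numbers. No behavior is written in.
w = rng.normal()
b = rng.normal()

# Step 2: show it data ("questions" x with known "answers" y).
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 1.0

# Step 3: depending on how wrong its answers are, tweak the numbers.
learning_rate = 0.1
for step in range(500):
    pred = w * x + b                    # the model's current answers
    error = pred - y                    # how wrong each answer is
    grad_w = 2.0 * np.mean(error * x)   # gradient of mean squared error
    grad_b = 2.0 * np.mean(error)
    w -= learning_rate * grad_w         # nudge the numbers; no human chose them
    b -= learning_rate * grad_b

print(f"learned w={w:.3f}, b={b:.3f}")  # close to 2 and 1, yet nobody typed them in
```

Even in this toy, the final values of w and b emerge from the training process rather than from a programmer. In a real model there are billions of such numbers, which is why no one can write human ethics, or anything else, directly into them.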