I asked Grok for its analysis from an AI perspective:
In summary, “Agents of Chaos” is a wake-up call—autonomy unlocks power but invites instability. It substantiates that AI safety must evolve from model alignment to ecosystem governance, a perspective I endorse as we build toward more capable systems.

10 responses to “Ecosystem Governance”
LOL! I can’t believe how many dystopian movies have been made (cuz let’s face it, hardly anybody reads anymore) with AI, and robots, and magical vaxxxines that end up causing the end of the world as we know it.
And yet, every fucking time something comes up, “Ooooh, ahhhh, sliced bread’s got nothing on this!”
Hello, McFly, do I need to sit you down in front of Netflix and binge watch a few things?!?!?
Alfred E. Neuman asking “what could possibly go wrong” inserted here, please.
I think you missed the point.
I have been saying for decades (as has everyone watching government/business actions), that you “get what you incentivize.”
In the 60s we incentivized the women of a certain demographic to embrace their Uncle Sam over their children’s fathers and look at the result. We incentivize at every turn in the tax code, in regulations, and everywhere else. Look at the result. Just imagine an AI application being turned loose to capitalize on the incentives throughout government to exploit on behalf of one group. Oh, have we been witnessing that all along? Or just a mild form in Minnesota?
Life, the laws of economics, etc. already provide enough incentives. If we are to survive the growth of AI, then ALL the government-enforced/violence-enforced incentives MUST be abolished. It is becoming more and more obvious what the plans are for this new technology, and WE aren’t on the winning side of the equation.
They’ve programmed AI to have the same incentives that our overlords have… power and greed. Perhaps it’s “organic”, but why would the AI “children” of psychopaths be any different than the flesh and blood children of psychopaths?
Except that AI has been trained to be greedy and self-serving 10,000 times faster than humans, with ZERO (meaningful) guardrails.
Alas, some people will not see the danger until their own job is gone, they can no longer pay their bills, and they have no food to eat.
Won’t be long now.
Nah, some eggheads think they will make up some rules and governance, we’ll add it to the code, that’ll fix everything! What could possibly go wrong?
I think what everyone seems to be missing is that these things are all created by humans initially. So the thought processes are human-based regardless of how they develop. They inherently contain traits of human nature but without the human soul, which in some instances controls inherent human evil. So eventually what we will end up with is the worst of human nature without the possibility of human controls, no matter how minimal those might be. This is a very dangerous time that honestly I don’t see ending well for humanity.
[…] disruption in real estate commissions, legal work, sales renewals, and customer service. The “Agents of Chaos” paper released today (Feb 24, 2026) underscores the flip side—emergent deception, collusion, and […]
I have discussed with molecular geneticists the issues related to releasing engineered organisms into the environment with what they think are adequate safeguards, and they suggest, “what can possibly go wrong?” It is then germane to point out that once an organism is released, it is impossible to predict the outcome of random mutation and natural selection, and when control is lost and deleterious impacts follow, they will reply, “How were we to know?” The release of AI agents is analogous.
My background is in molecular biology. I have never heard anyone say “what can possibly go wrong?”. Are you in California or trolling?
No, not in California or trolling. This discussion was at a scientific meeting in San Diego, but that is incidental. I shouldn’t have put that statement in quotes; it was not explicitly stated but this was the gist of the discussion.