The AGI possibilities... Terrorism, Authoritarianism, Loss of Control
With bombs flying about tonight, from all sides, we need to talk about authoritarianism, about AI and AGI, how they could change our world, and how we could save it.
No AI was used in the writing of this post.
This is a short post, and not in line with the schedule, but with bombs flying and instability creeping up on us all at a fast pace, I felt it was time to talk about AI safety: why we need it and what's being done, especially at a time when militaries in every country will be using it to their own advantage if they can.
This column has recently discussed the ethics needed when using AI or AGI in a military capacity, and the UK Ministry of Defence's development of ethical cards to discuss possible scenarios.
Bombs are currently flying across the Middle East, drawing in more countries, and primarily hurting innocent civilians and destroying infrastructure.
In South Sudan, Doctors Without Borders has been forced to permanently close its hospital in Ulang and halt support to 13 primary health facilities in the county as a result of escalating insecurity. Meanwhile, the war in Ukraine continues.
We must focus on what AGI might mean in this context.
The power that AI, or AGI (Artificial General Intelligence), could wield is immense.
Some say AGI isn't in use yet; some say that it must be, else why all the narrative?
If AGI enters the scene, its capability could eliminate the need for agreement with generals and those who control military might.
If humans can be taken out of the equation for war, the incentive for leaders to take care of their electorate, their own populations, may also be removed.
Relatable dramatizations exploring authoritarianism
We've discussed this scenario in a previous post about the risks AI poses in a global context, showing how AGI, with all its future-scenario thinking, could pre-empt war by launching a missile.
In less than eight minutes, the scenario below, presented as if it were the launch of a new mobile/cell phone, explores the capability of autonomous weapons.
The actor describes and demonstrates, on the dummy to his left, mini-drones that are able to target specific individuals based on their gender, their age, their ethnicity, and to direct a blow to the front of the skull to "destroy the contents", the brain shown struck with a wedge of orange explosion. He says all this to light applause and zero shocked gasps.

Watch the full video below, but know that there is a call to action if you want to stop this sort of AGI; the Future of Life Institute and I aren't masochists!
The Future of Life Institute has created a page to inform and empower people with knowledge about what is really happening and what they can do to influence it:
https://autonomousweapons.org/
Who’s looking at the safety of AI?
Let me uplift you for a second, because I aim to. There's hope.
Not everyone is vying for an AI future that makes humans obsolete. Some very good people are working to highlight AI safety, and one of them made the Slaughterbots video above: the Future of Life Institute, which takes a very strong position on AI, stating:
“We oppose developing AI that poses large-scale risks to humanity, including via power concentration, and favour AI built to solve real human problems. We believe frontier AI is currently being developed in an unsafe and unaccountable manner.”
Then there's Blue Dot Impact, a unique organisation championing and teaching AI and AGI safety, which illustrates the most serious harms AGI represents.
They offer a free course, in which they state that very few people are working on the catastrophic AGI risks, despite the fact that these could cause the most harm. The course describes these risks as Terrorism, Authoritarianism and Loss of Control.
“AGI could result in such rapid power shifts that it could lead to authoritarian regimes in traditionally ‘stable’ countries. Separately, AI could escalate conflicts. Decision-makers might trust flawed recommendations about escalation, lethal autonomous weapons could inadvertently initiate or expand violent conflicts, and faster AI decisions might encourage escalation and reduce time available to de-escalate situations.”
It's an excellent course of four units that cover AI safety incredibly well; sign up if you want to know more.
As you know, only a tiny cut of the $644bn being spent globally on AI is going to safety.
So please think about funding writers like me.
I will soon be collaborating with other professionals to make videos and posts that help connect, educate and expand the conversations and the policies that steer our AI and AGI towards safer levels.
We are applying for funding to get us going. Help us!
If you want to get involved, then drop me a line and let me know.
If you like what you read here, donate, subscribe and share.
It sounds cliché, but this work needs to be done, and we’re ready to say the things that need to be said.
Please support this publication and keep me writing. We all know journalism pays a pittance.
All it costs is a cup of coffee: £4. I know coffee is expensive, but this is for the whole month, so it's an absolute bargain!
Posts are locked to free subscribers after two weeks, so subscribe to get them in your inbox!