Plannedscape Postings


Keep A.I. Off The Electric Grid
Am I Being Paranoid Or Just Safe?

Posted by Charlie Recksieck on 2025-08-14
I've been working with software and electric utilities for my entire professional career. And lately, I've seen growing interest from utility companies in leveraging AI to improve performance.

I get it. Doing more with less is part of EVERY company's ethos, especially those with stockholders. The work I've been doing for decades is intended to both increase the efficiency of electric distribution designers and also make it easier to onboard less experienced (and cheaper) designers, plus eventually reduce necessary staffing.

And in recent years, of course, every utility manager and C-suite executive hears about AI just like the rest of us and feels they wouldn't be doing their job if they didn't employ AI in some fashion.


Should AI Be Allowed In Power Generation Or Distribution?

Generally, "AI access to the electrical grid" means using AI systems to monitor, plan, and (optionally) control grid equipment and decisions - from forecasting load and renewable output to automating grid-connection studies or even issuing control commands in real time. And of course, AI would be great at this.

That said, tight regulations, redundant oversight, engineering, and double- and triple-checking have long been a part of every utility company. I sleep better at night knowing that.

How much trust or unchecked AI work is even possible in a responsible utility environment?


Benefits of AI Getting Involved

Here's a practical breakdown with real-world examples and recommended safeguards.

* Decision support & forecasting: AI improves short- and long-term load and generation (solar/wind) forecasts and outage prediction, enabling better dispatch and lower reserve requirements.


* Faster interconnection & planning: AI can automate repetitive grid-connection studies and speed approvals for new generators or data centers (example: Google-PJM collaboration).


* Operational optimization: Machine learning can optimize voltage, congestion, and asset maintenance (predictive maintenance, reduced losses). DeepMind's data-center work is a precedent for energy optimization.
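The forecasting benefit above can be made concrete with a minimal sketch: a naive short-term load forecast from recent hourly readings, with a reserve margin sized to recent forecast error, so better forecasts translate directly into lower reserves. The function names, window sizes, and numbers here are all illustrative, not any utility's actual method.

```python
# Hedged sketch: naive load forecasting and error-based reserve sizing.
# All names and constants are hypothetical illustrations.

def forecast_next_hour(recent_loads_mw):
    """Naive forecast: average of the last three hourly readings."""
    window = recent_loads_mw[-3:]
    return sum(window) / len(window)

def required_reserve_mw(recent_errors_mw, safety_factor=2.0):
    """Size reserves to recent forecast error.

    The better the forecast (smaller errors), the less reserve
    capacity must be held back -- the efficiency gain AI promises.
    """
    mean_abs_error = sum(abs(e) for e in recent_errors_mw) / len(recent_errors_mw)
    return safety_factor * mean_abs_error

loads = [980.0, 1010.0, 1005.0, 1020.0]   # last four hours, MW
forecast = forecast_next_hour(loads)       # average of last three readings
reserve = required_reserve_mw([12.0, -8.0, 15.0])
```

A real system would use far richer models (weather, calendar effects, renewable output), but the reserve-sizing logic is the point: shrinking forecast error is what lets operators commit fewer standby megawatts.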



Risks Of AI Involvement

* Cybersecurity & adversarial attacks: AI components increase the attack surface; adversaries can poison training data, launch adversarial inputs, or exploit model behavior to cause outages. Research and industry analyses warn of these vulnerabilities.


* Automation-induced cascades: If an AI issues unsafe control actions or misinterprets rare states, small problems can cascade into large disturbances. Experts emphasize human oversight.


* Explainability & trust: Black-box models hinder operators' ability to validate recommendations during emergencies; lack of transparency undermines trust and regulatory approval.



Commonly Recommended Safeguards

* Human-in-the-loop controls: Require operator approval for high-risk commands and maintain manual override capability. This seems to be at least a minimal guideline.


* Network segmentation & zero-trust: Isolate AI systems from corporate and public networks; apply strong identity, authentication, and least privilege. Yes ... this!


* Adversarial and data-poison testing: Regularly evaluate models against adversarial inputs and monitor training data integrity. Or basically, thorough testing.


* Auditability & explainability: Log decisions, preserve inputs, and use interpretable models or post-hoc explainers for critical actions.


* Regulatory & governance frameworks: Follow DOE/DHS guidance and industry best practices for AI in critical infrastructure. But good luck to government regulators and industry self-regulation keeping pace with, let alone staying ahead of, AI progress.
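Two of the safeguards above, human-in-the-loop controls and auditability, can be sketched together: an AI recommendation only executes a high-risk grid command after explicit operator approval, and every decision (approved or blocked) is logged with its inputs. The command names, risk tiers, and `operator_approve` callback are all hypothetical, a sketch of the pattern rather than any vendor's implementation.

```python
# Hedged sketch: human-in-the-loop gating plus an audit trail.
# Command names and risk classifications are invented for illustration.
import time

AUDIT_LOG = []  # stand-in for an append-only, tamper-evident store

HIGH_RISK = {"open_breaker", "shed_load", "island_substation"}

def execute_command(command, params, ai_recommendation, operator_approve):
    """Run a grid command only if it is low-risk or explicitly operator-approved."""
    needs_human = command in HIGH_RISK
    approved = (not needs_human) or operator_approve(command, params)
    # Log every decision, including blocked ones, with the AI's reasoning
    # preserved so it can be audited after the fact.
    AUDIT_LOG.append({
        "ts": time.time(),
        "command": command,
        "params": params,
        "ai_recommendation": ai_recommendation,
        "needs_human": needs_human,
        "approved": approved,
    })
    if not approved:
        return "blocked: operator approval required"
    return f"executed: {command}"

# An AI suggestion to shed load is held for a human who declines;
# a routine telemetry poll passes through automatically.
deny = lambda cmd, params: False
print(execute_command("shed_load", {"feeder": "F12"},
                      "predicted overload", deny))
print(execute_command("poll_telemetry", {"feeder": "F12"},
                      "routine check", deny))
```

The design choice worth noting: the gate fails closed. A high-risk command with no approval is blocked and logged, never silently executed, which is exactly the manual-override posture utilities already practice.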



Are Fears Of A.I. Paranoid?

When people worry that Artificial Superintelligence (ASI) might disregard humans, they're really asking about alignment - whether a system that becomes vastly more capable than humans would continue to act in ways that respect human values, safety, and wellbeing.

Because ASI doesn't exist yet, we can't assign a precise probability. But we can outline why some experts think the risk is low if managed well and why others consider it a serious concern.

Even by ChatGPT's own admission, when it comes to the chances of ASI ignoring human input or overrides, "There is no consensus on the risk probability - estimates among experts range widely, often from <1% to >50%."

Those odds aren't good enough.

With that kind of risk, it's nuts not to think that ASI could motivate itself and figure out how to jump the rails to control systems (e.g., the electric grid) that we don't want it involved with.


My Message To Power Companies

Until you are more sure of things, keep AI out of as many electrical processes as possible. This isn't sci-fi paranoia. It's a real threat to humanity in the long run. Seriously.