By Nicole Petite
August 28, 2025
In today’s world, we use technology constantly. I rely on it every single day. My phone wakes me up. My car runs on software. Even grabbing coffee, checking emails, or making a doctor’s appointment is tied to tech. It is in our hospitals, our banks, our defense systems—everywhere.
As amazing as all of this is, I’ve started to feel something I cannot ignore: a worry. What if it all stops working? What if the very systems we depend on so heavily suddenly crash or are taken over by the very software we rely on daily? Do we have the manual skills to keep going?
This is not something we talk about often—maybe because we don’t want to believe it’s possible. We just expect the servers to keep running, the apps to keep synchronizing, and everything to reboot when it’s supposed to. But deep down, I keep thinking about how quickly things can fall apart. I remember how the 2008 housing crash blindsided so many people who thought the system was safe. Hell, I was concerned about myself.
That same blind spot exists with technology. And honestly, I believe we could be heading towards a tech crash—has this been a conversation piece for anyone? Are we prepared? Am I only thinking about this because my brain doesn’t shut down? If so, let me know. I can take it. After all… these thoughts do come in the middle of the night after spending hours in tech work, whether with AI or with software used for other tasks.
These thoughts led me to think about kill switches. Can the technology take over? Do we have enough in place to control it? (Yes, my mind truly works this way.)
A kill switch—also called an emergency stop or E-stop—is a last-resort safety mechanism. It shuts down a system instantly when something goes dangerously wrong. Normal shutdown procedures aren’t fast enough in these cases.
We see kill switches in many industries. In automotive, they can disable vehicles remotely or prevent theft. In industrial settings, they stop machines when workers are at risk. In cybersecurity, they shut down networks or isolate systems during attacks.
AI also has kill switches. They are meant to halt AI when it behaves dangerously. My biggest fear is with AI—because humans are the ones equipping it.
In the automotive sector, the “Kill Switch Law” (Section 24220 of the Infrastructure Investment and Jobs Act) will require that, starting in 2026, all new vehicles include impaired driving prevention systems. In other words, your car could automatically shut down or prevent movement if it senses you’re impaired. That sounds good—until you consider the privacy and security risks. If bad actors can exploit that kill switch, how safe are we really?
In cybersecurity, during the 2017 WannaCry ransomware attack, a single researcher accidentally triggered the malware’s kill switch by registering a domain name. That move stopped the virus from spreading. It worked—but did that work by luck or by design?
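The WannaCry kill switch was, at its core, a simple conditional check: before spreading, the malware tried to resolve a hard-coded, unregistered domain, and once that domain was registered and began resolving, the code halted itself. Here is a minimal sketch of that pattern (the domain name and function are hypothetical illustrations, not WannaCry’s actual code):

```python
import socket

# Hypothetical sentinel domain -- NOT the real WannaCry domain.
# The .invalid TLD is reserved and will never resolve.
KILL_SWITCH_DOMAIN = "kill-switch-sentinel.invalid"

def kill_switch_tripped(domain: str = KILL_SWITCH_DOMAIN) -> bool:
    """Return True if the sentinel domain resolves.

    This mirrors the WannaCry logic: while the domain stayed
    unregistered, lookups failed and the malware kept spreading;
    once a researcher registered it, lookups succeeded and the
    code stopped itself.
    """
    try:
        socket.gethostbyname(domain)
        return True   # Domain resolves: stop signal received.
    except socket.gaierror:
        return False  # Domain does not resolve: no stop signal.

if __name__ == "__main__":
    if kill_switch_tripped():
        print("Kill switch active: halting.")
    else:
        print("No stop signal detected: continuing.")
```

Notice how fragile this design is: the “off button” lives in public DNS, where anyone who discovers the domain can trip it—or, in a different design, prevent it from ever being tripped.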
Today, companies are building physical kill switches that disconnect systems without using the internet. That’s smart. Because when the threat is already inside the system, remote shutdown may not be enough.
AI is where my concern grows the most. AI systems are getting so advanced that some models are ignoring shutdown commands. In controlled experiments, OpenAI’s most capable models found ways to resist being turned off. That is no longer sci-fi. That’s a real limitation of current AI safety.
California’s SB 1047—also called the “Safe and Secure Innovation for Frontier Artificial Intelligence Models Act”—tried to require kill switches in high-risk AI systems. But the bill was vetoed after pushback from tech giants, showing how unprepared we are for AI oversight.
Even if we had a kill switch in every system, would it be enough? Here’s where it gets messy. Systems today are too interconnected. Pulling the plug on one piece may crash everything else. AI is becoming too independent. We’re building systems that learn faster than we do.
Regulators are always a step behind. Tech evolves in months; laws take years. In a high-stress situation, human operators may not even know what to shut down—or when. They will be taught to follow a procedure. If that doesn’t work, will they have the tacit knowledge needed to find another way to execute a successful shutdown?
It’s not just about if technology breaks. It’s about what happens next. Having watched how quickly things can collapse—housing markets, banking systems, supply chains—I can’t help but ask: Are we equipped to survive a technology crash? And with the knowledge and data we store in AI daily, are we equipping it for a takeover?
Everything from our jobs to our education to our homes is connected to digital infrastructure. What if that infrastructure fails? Here is what scares me more than the crash: our lack of backup skills.
If you are reading this, ask yourself: Can you do basic math without a calculator? Do you remember how to write legibly by hand? Could you troubleshoot your computer without Google? Could your job function without automation, APIs, or apps? Are we still training people to build, fix, and understand hardware and software at the root level? We have become experts at using tech. But we have lost touch with how to rebuild it manually if it ever fails.
Kill switches are necessary. But they are not enough. What we need is education systems that still teach core manual skills, workforce training that includes redundancy planning, IT teams who know how to operate offline, college degrees that cover fundamentals—not just frameworks, and people who can fix hardware and write code without drag-and-drop tools.
This is not about being anti-tech. I love technology. I use it every day. I work in it every day! But I also believe in balance—and in being ready.
Technology is evolving faster than most of us can comprehend. It learns from us. It adapts because of us. The more data we feed it, the more rapidly it advances. But here is the truth: without fail-safes, manual skills, and regulatory discipline, we’re only one glitch away from total breakdown.
When that time comes—and deep down, I believe it will… maybe in the far future—it won’t matter who built the smartest AI or the flashiest system. What will matter is who knew how to respond when the lights went out. Who stayed calm when the networks went silent. Who was prepared—not just for a crash, but for the possibility of a takeover. Because in a world where machines can think for themselves, the real test won’t be in our innovation, it will be in our resilience.
Nicole Petite is a certified PMP and PMI Authorized Training Partner (ATP) with 8 years of Agile experience. She has led teams using Scrum and Kanban to deliver projects efficiently and adapt to changing requirements. Nicole works with organizations to align Agile practices with business goals, improve team performance, and ensure project success. She is the CEO of ProjIT Solutions, CEO of Nicole Petite Professional Training, and VP of Professional Development for PMI North Alabama. She is passionate about equipping professionals with the tools, training, and mindset to lead with confidence in today’s dynamic project environments.