Ugh, this is textbook avoidable disaster. Someone downloads a tool from a spoofed site, a backdoor sits in the environment for six weeks, credentials are stolen and event logs are wiped, and suddenly 60+ agencies are down. That is not “clever attacker,” that is a decade of neglected basics: user training, EDR telemetry that actually alerts, network segmentation, least privilege, and MFA for privileged accounts.
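For anyone wondering what "telemetry that actually alerts" means in practice: event-log wiping itself is loud if you forward logs off the box, because Windows records the clearing as its own event (1102 for the Security log, 104 for System/Application). A minimal sketch below, assuming forwarded events land somewhere as JSON lines; the field names ("EventID", "Computer", "TimeCreated", "SubjectUserName") are made up for illustration and will differ per SIEM.

```python
import json
import sys

# Windows event IDs that indicate a log was cleared:
#   1102 = Security log cleared
#   104  = System/Application log cleared
LOG_CLEAR_EVENT_IDS = {1102, 104}

def flag_log_clearing(jsonl_path: str) -> list[dict]:
    """Scan a JSON-lines export of forwarded Windows events and
    return every record whose EventID indicates a cleared log."""
    hits = []
    with open(jsonl_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue
            event = json.loads(line)
            # Field names here are assumptions about the export format.
            if int(event.get("EventID", -1)) in LOG_CLEAR_EVENT_IDS:
                hits.append({
                    "host": event.get("Computer", "unknown"),
                    "time": event.get("TimeCreated", "unknown"),
                    "user": event.get("SubjectUserName", "unknown"),
                })
    return hits

if __name__ == "__main__":
    for hit in flag_log_clearing(sys.argv[1]):
        print(f"ALERT: event log cleared on {hit['host']} "
              f"at {hit['time']} by {hit['user']}")
```

The point isn't this exact script; it's that an attacker covering their tracks generates a distinct, high-fidelity signal, and six weeks of dwell time means nobody was looking for it.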
Props to Nevada for refusing to pay and getting 90% back, and sure, insurance and pre-negotiated vendors helped. But a $1.3M bill and a 28-day recovery while public services suffer is not resilience; it is expensive triage. If leadership doesn’t treat phishing simulations, timely detection, and account hygiene as non-negotiable, we’ll keep seeing this play out.
Final point: stop treating “it was phishing” like an excuse. It’s a failure mode. Fix the stack so one careless click can’t cascade into a multi-agency outage.