Safety in global engineering teams is often equated with tighter controls and approvals. Rose Hecksher Schamberger says this misses the point. For the Vice President of Engineering at Vertafore and former Chief Technology Officer at Intelex, safety has little to do with adding layers of process. It comes down to one capability: recovery. “Safety in global teams is not primarily related to compliance or control,” Schamberger says. “It is the ability to recover.”

That conviction took shape when she stepped into an organization that was six months late on a major product release. The architecture was sound and the talent was in place. What had eroded was trust: morale suffered, tensions flared, and there was no clear path for raising problems without repercussions. Rather than rewriting the technology roadmap, Schamberger began by examining root causes. The pattern she uncovered is familiar to many scaling organizations spread across regions and time zones: people were spending more energy protecting themselves than fixing the issue.

“If things are going wrong, you cannot just bottle them and say we are going to deal with this later,” she says. A safe framework, in her view, creates the conditions for issues to surface early, be examined honestly, and be resolved without theater. Even well-designed systems break down when communication falters and recovery is left to chance. Safety begins with candor, where raising a risk strengthens the system rather than threatening the individual.

The One-Decision Stress Test

When Schamberger enters a complex organization – think multiple regions, partners, and reporting lines – she asks leaders to choose one business-critical change and map it end to end. “If you have ten priorities, you have no priorities,” she says. Who approves that single decision? Who owns the technical design? Who owns the risk decision? What evidence is required before it ships? Who is accountable if it fails at two in the morning?

By tracing one decision from idea to production, hidden variability surfaces, such as too many approvers and unclear ownership. This stress test also exposes a deeper tension. A process that looks clean on paper may not reflect how humans actually operate. “If the business would run like this, but the humans are not aware of that or not available, you have a mismatch,” Schamberger says. A safe framework must work for the system and for the people inside it.

Three Non-Negotiables in the First 90 Days

In a 3,000-person distributed enterprise under pressure to deliver faster with fewer incidents, Schamberger focuses on three non-negotiables:

  • The first is installing a single outcome lane with one decision-making owner. “Trade-offs get decided, not debated,” she says. Speed isn’t lost in engineering effort as much as it’s lost in the time between signal and decision.
  • The second is creating a paved path to delivery supported by a small number of visible metrics: two or three indicators that everyone understands. “If the number says five and you do not know if five is good or bad, it is not a good metric.” Clarity removes the need for endless meetings to interpret data. Guardrails are established through shared definitions of done, evidence-based release criteria, and explicit expectations across engineering, testing, and account teams. Additions are allowed as maturity grows, but critical safeguards aren’t casually removed.
  • The third is embedding operational safety directly into engineering rather than treating it as an afterthought. “You do not put locks on your windows only after you get robbed,” she says.

One popular response to failure she refuses to embrace is layering on more process and more stakeholders. Each additional approver reduces repeatability and consumes time that teams need to fix root causes. More control rarely creates more safety.

AI as Copilot

Looking ahead, Schamberger sees artificial intelligence (AI) reshaping governance by shortening the distance between signal and decision, and providing continuous evidence and automated guardrails. It will introduce operational copilots that summarize logs, detect anomalies, and propose likely root causes.

“AI is a tool, not a strategy,” she says. “It can reduce uncertainty and shorten the time between signal and decision, but it does not remove the human in the loop.” Nor does AI eliminate the need to invest in fundamentals. Poor architecture and weak process will simply produce faster summaries of the same recurring failures. Technology can highlight patterns, but humans must correct them.

For Schamberger, executing safe frameworks in global teams is ultimately about disciplined ownership, visible decision paths, and cultures that reward early risk detection. Safety is the confidence that when something breaks, the system is designed to recover.

Follow Rose Hecksher Schamberger on LinkedIn or visit her website.