Noreja Blog

Two for One: The Better Each Team Performs, the Worse the System Gets

Written by Lukas Pfahlsberger | May 12, 2026 7:00:00 AM

In most organizations, the teams are not the problem. Individual departments are often well-run, focused, and genuinely trying to do their jobs well. And yet, end-to-end performance keeps disappointing. Projects stall. Customers wait. Handoffs fail. The numbers look fine in each team's report, but the overall process is not delivering what it should.

This is the paradox of local optimization. When every part of a system is optimized independently, the system as a whole can still underperform. In fact, improving local efficiency can sometimes make the broader process worse.

In this edition of Two for One, we examine why local optimization is one of the most common and least visible sources of process failure — and which two structural levers help organizations shift from fragmented performance to genuine end-to-end process performance.

Why Local Optimization Creates System-Level Failure

The logic of local optimization is intuitive. Organizations are divided into departments for good reasons. Different teams bring different expertise. Specialization improves quality. Measuring each unit's performance creates accountability. These are sound management principles, and they work — up to a point.

The problem begins when each team starts optimizing for its own metrics without visibility into what happens before or after its step. A procurement team that reduces its own processing time may create a bottleneck downstream by releasing too many items at once. A sales team that maximizes its conversion rate may overwhelm a delivery function that was not built to handle the volume. A support team that closes tickets quickly may not be addressing the root causes that keep generating new ones.

None of these teams is doing anything wrong in isolation. Each is responding rationally to the incentives and metrics it has been given. But the aggregate effect of all this rational local behavior is a system that does not perform as intended. Work gets stuck at interfaces. Priorities conflict across teams. Resources get allocated to local wins at the expense of overall throughput.

This is not a new observation. Operations management and systems thinking have described this dynamic for decades. And yet it remains one of the most persistent problems in practice, precisely because it is so hard to see from any single vantage point. Each team is performing. The system is not. And the gap between those two realities often goes unmanaged.

When Local Performance Becomes the Real Problem

The clearest sign that local optimization is driving system failure is a pattern that shows up in many organizations: each team reports improvement, but end-to-end lead times are not getting shorter. Customer satisfaction is not improving. Rework and escalations keep accumulating. Costs keep rising despite efficiency gains inside individual units.

What is happening is that improvements inside one step are not translating into improvements across the full process. They are being absorbed by buffers, queues, or misaligned priorities at the next interface. Or they are creating new pressures that manifest elsewhere in the system.

Another telltale sign is measurement fragmentation. When each team tracks its own KPIs, and those KPIs are not connected to a shared definition of what the end-to-end process is supposed to deliver, there is no organizational view of where value is actually being created or lost. Teams become very good at measuring themselves and very poor at understanding what they are contributing to together.

This is where the distinction between process efficiency and process effectiveness becomes important. A team can be highly efficient — processing its inputs quickly, minimizing errors, meeting its SLAs — while the overall process remains ineffective because the right things are not getting through in the right sequence at the right time. Efficiency at the local level does not guarantee effectiveness at the system level. And in complex, cross-functional processes, it often actively works against it.

The Hidden Costs of Local Optimization in End-to-End Process Performance

The costs of this pattern are real, but they tend to be invisible on any single team's dashboard.

The most direct cost is wasted throughput capacity. When teams optimize independently, they tend to create local surpluses and shortages that do not match. One step finishes faster than the next can absorb. Work accumulates at interfaces. The overall cycle time stays long even though each individual step is shorter. This is one of the core insights of lean and flow-based operations: the speed of the system is determined not by its fastest steps but by its slowest step, the constraint, and the queues that build up around it.
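The dynamic is easy to demonstrate with a toy model. The sketch below simulates a hypothetical two-step process in which step B is the constraint: the processing times (10, 12, and 5 minutes per item) are illustrative assumptions, not data from any real process. Doubling the local efficiency of step A leaves end-to-end output unchanged and only grows the queue between the steps.

```python
# Toy model of a two-step process: items flow A -> B.
# B is the constraint, so speeding up A cannot raise system output;
# it only inflates the queue sitting between the two steps.

def simulate(minutes_per_item_a, minutes_per_item_b, hours=8):
    """Return (items completed end-to-end, items queued between A and B)."""
    total_minutes = hours * 60
    produced_by_a = total_minutes // minutes_per_item_a
    # B can only finish what A produced, capped by its own capacity.
    completed_by_b = min(produced_by_a, total_minutes // minutes_per_item_b)
    queued = produced_by_a - completed_by_b
    return completed_by_b, queued

# Baseline: A takes 10 min/item, B takes 12 min/item (assumed figures).
baseline = simulate(10, 12)
# "Improvement": A doubles its local efficiency to 5 min/item.
after = simulate(5, 12)

print(baseline)  # (40, 8)  -> 40 items delivered, 8 waiting at the interface
print(after)     # (40, 56) -> still 40 delivered, the queue grew sevenfold
```

On step A's own dashboard, the second scenario is an unambiguous success. Measured at the system output, nothing improved at all.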

There are also coordination costs that grow as each team builds its own workarounds for the gaps between steps. Shadow processes emerge. Informal escalation paths develop. People spend increasing amounts of time on activities that exist solely to compensate for poor interface design — double-checking, re-entering data, manually routing items that should flow automatically.

From a business process management perspective, this is a structural problem, not a behavioral one. The teams are not failing because they lack effort or discipline. They are failing because the process architecture has not been designed around the system. Ownership is fragmented. Metrics are siloed. No one has a clear mandate to manage the flow of work across the whole process. And so the system as a whole is, in effect, ungoverned — even when each of its parts is carefully managed.

Two Practical Levers for End-to-End Process Performance

There are many ways to address local optimization, but two structural changes consistently have the greatest impact. The first is to assign end-to-end process ownership. The second is to replace siloed metrics with system-level performance measures.

Assign End-to-End Process Ownership

The most direct way to counteract local optimization is to create a role or function that is explicitly accountable for the performance of the full process — not just one step within it.

In most organizations, this kind of ownership does not exist in practice. Individual teams own their piece of the workflow, and senior leaders own overall business outcomes. But the space in between — the design and performance of the cross-functional process that connects those teams — is often no one's explicit responsibility. It gets managed informally, through coordination meetings and escalation chains, but it does not have a designated owner who can make structural decisions about how the process works.

End-to-end process ownership changes this. It creates a function — often called a process owner, a value stream manager, or a cross-functional lead — whose primary accountability is the performance of the process from start to finish. This role does not replace team-level management. It adds a layer of governance that ensures the interfaces between teams are actively managed rather than left to chance.

What does this look like in practice? An end-to-end process owner has the authority and the visibility to identify where work is getting stuck across the process, not just within one step. They can redesign interfaces, resolve priority conflicts between teams, and ensure that local improvements are implemented in ways that strengthen rather than fragment overall flow. They can also serve as the organizational voice of the customer inside the process — keeping attention on whether the right outputs are being produced, not just whether each internal step is meeting its targets.

Organizations that implement genuine end-to-end process ownership often find that it resolves problems that had been cycling through escalation channels for years. The issues were not new. What was new was the existence of someone with both the authority and the cross-functional view to address them structurally rather than reactively.

A useful check: if no one in your organization can answer, without hesitation, who is accountable for the performance of the full process end to end — that is the gap.

Align Metrics to the System, Not the Silo

The second lever is to change what gets measured and how. Local optimization is, at its core, a measurement problem. Teams optimize what they are measured on. If measurement stops at the team boundary, optimization stops there too.

Addressing this requires introducing system-level metrics that reflect the performance of the process as a whole — metrics that no single team can improve unilaterally, because they depend on how well all the parts work together. End-to-end lead time is one example. Customer outcome measures are another. So are first-time-right rates across the full process, or throughput measured at the system output rather than at each individual step.
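In practice, these system-level metrics can be computed directly from case-level timestamps, for instance from a process mining event log. The sketch below is illustrative only: the case records, timestamps, and rework flags are invented for the example, and a real implementation would read them from your own process data.

```python
from datetime import datetime

# Hypothetical case records (invented for illustration). Each case spans
# the full process, regardless of which teams held the work in between.
cases = {
    "C1": {"start": datetime(2026, 5, 1, 9), "end": datetime(2026, 5, 6, 17), "rework": False},
    "C2": {"start": datetime(2026, 5, 2, 9), "end": datetime(2026, 5, 12, 17), "rework": True},
    "C3": {"start": datetime(2026, 5, 3, 9), "end": datetime(2026, 5, 7, 17), "rework": False},
}

# End-to-end lead time: elapsed clock time from first to last event per
# case. No single team's KPI captures this number.
lead_days = [(c["end"] - c["start"]).days for c in cases.values()]
avg_lead = sum(lead_days) / len(lead_days)

# First-time-right rate across the full process: share of cases that
# completed without a rework loop at any step.
ftr = sum(not c["rework"] for c in cases.values()) / len(cases)

print(f"avg end-to-end lead time: {avg_lead:.1f} days")
print(f"first-time-right rate: {ftr:.0%}")
```

Note that both measures are defined on the case, not on the team: no single unit can improve them unilaterally, which is exactly the property that makes them useful as system-level metrics.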

The key is that these metrics must be visible to all the teams in the process, not just to senior leadership. When teams can see how their local decisions affect overall performance, the nature of the conversation changes. Trade-offs that were previously invisible — my team gets faster, but the next step gets overwhelmed — become explicit and can be discussed and resolved before they create problems.

This does not mean abandoning local metrics. Team-level KPIs remain useful for operational management. But they need to be complemented by system-level measures that give everyone a shared picture of whether the process as a whole is working. Without that shared picture, coordination is based on opinion and negotiation. With it, coordination can be based on evidence.

A useful principle: if your organization does not have a metric that measures the performance of the process as a whole — one that no team can improve by gaming its own numbers — you do not have a system-level performance view. You have a collection of team-level performance views. They are not the same thing, and treating them as equivalent is one of the most reliable ways to sustain local optimization indefinitely.

Food for Thought

When organizations begin to examine local optimization honestly, a few questions tend to surface that are worth sitting with.

Does your organization have a clear, shared definition of what the end-to-end process is supposed to deliver — and is that definition held by anyone other than senior leadership? Who in your organization is explicitly accountable for the performance of the full cross-functional process, and what authority do they have to make structural changes? When a team improves its own metrics, how does the organization verify that the overall process also improves? Are the KPIs your teams are measured on designed to promote local efficiency, system effectiveness, or both? And when process problems are escalated, do they tend to get resolved at the structural level, or do they cycle back as the same issues in slightly different form?

These questions matter because they reveal whether the organization is managing a process or managing a collection of teams that happen to be connected. The distinction sounds minor, but in practice it determines whether improvement efforts translate into system-level results or simply rearrange where the problems show up.

Conclusion: Optimize the System, Not Just the Parts

When every team performs well but the system keeps underperforming, the root cause is rarely effort or talent. More often, it is that the system has never been formally governed — only its parts have.

Two structural levers are particularly effective. The first is to assign genuine end-to-end process ownership, creating a role with the authority and visibility to manage the full process rather than just one step within it. The second is to introduce system-level metrics that give all teams a shared view of overall performance, so that local decisions can be made with awareness of their effect on the whole.

The goal is not to eliminate team-level performance management. It is to add a layer of governance that ensures local performance and system performance are actually connected — so that when teams improve, the process improves with them.

That is the deeper promise of business process management in complex, cross-functional environments. It is not about controlling each step more tightly. It is about designing the system so that each step's success contributes to the outcome the whole process was built to deliver.

FAQ

What is local optimization and why is it a problem in business processes?

Local optimization means each team or department improves its own performance metrics without considering the effects on the overall process. It becomes a problem because individual improvements can create bottlenecks, misaligned priorities, and wasted capacity elsewhere, leaving end-to-end process performance unchanged or worse.

Why can a team perform well while the overall process still fails?

Teams are measured on their own outputs and optimize accordingly. But a process is a connected system — efficiency at one step does not guarantee effectiveness at the system level. When handoffs are poorly designed, when metrics are siloed, or when no one owns the full flow, local performance gains are absorbed before they can benefit the customer or the overall outcome.

What is end-to-end process ownership and why does it matter?

End-to-end process ownership assigns explicit accountability for the performance of the full cross-functional process to a single role or function — not just one step. It matters because without this accountability, the interfaces between teams go unmanaged, and structural problems persist regardless of how well individual teams perform.

How do system-level metrics differ from team-level KPIs?

Team-level KPIs measure how well a single unit performs its specific tasks. System-level metrics measure the performance of the full process — things like end-to-end lead time, first-time-right rates, or customer outcomes — that no individual team can improve unilaterally. Both are needed, but organizations that rely only on team KPIs have no reliable view of whether the overall process is working.

How can organizations tell if local optimization is causing system-level failure?

Key signals include: teams reporting improvements while end-to-end lead times stay flat or grow, rising coordination costs and shadow processes at team interfaces, escalations that keep recurring without structural resolution, and the absence of any metric that reflects the performance of the process as a whole. When these patterns appear together, local optimization is almost always a contributing factor.