The hidden cost of fragmentation
A mid-size manufacturer we spoke with recently was running their business across seven different systems — an ERP for finance and procurement, a separate MES for production, a CRM for sales orders, a PLM for engineering changes, a standalone WMS, a logistics platform, and a quality management system. None of them were connected.
Every morning, the operations team spent two hours manually re-keying data between systems. A sales order confirmed in the CRM didn't automatically create a production order in the MES. A bill-of-materials change approved in PLM didn't update the ERP until someone manually exported and re-imported it — sometimes days later. Decisions about whether to accept a rush order were made on gut feel because there was no real-time view of production capacity.
This isn't unusual. Industry surveys consistently put data re-entry and manual reconciliation between systems at 15–30% of administrative overhead in manufacturing operations. The cost isn't just the hours — it's the quality of decisions made on data that's already out of date by the time it's acted on.
"Integration is the #1 reason ERP implementations fail to deliver their promised ROI. The system works. It just doesn't talk to anything else."
Why integration projects stall
Most integration failures aren't technical. They're architectural and organisational. Here are the four patterns we see most often:
1. Point-to-point sprawl
The first integration is always simple: connect System A to System B. Someone writes a script or uses a vendor connector. It works. Then someone connects A to C, B to D, C to E. Within two years you have 20 custom integrations, each maintained by a different person (or no one), each with slightly different data models. When System B upgrades its API, six integrations break simultaneously — and no one knows which ones until something stops working in production.
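The sprawl isn't bad luck; it's arithmetic. Fully connecting n systems pairwise needs n(n−1)/2 links, while routing everything through a hub needs only n. A quick sketch of how fast that diverges:

```python
# Point-to-point links grow quadratically: every new system must be
# wired to every existing one. A hub (or event bus) grows linearly.
def point_to_point_links(n: int) -> int:
    """Number of links needed to fully connect n systems pairwise."""
    return n * (n - 1) // 2

def hub_links(n: int) -> int:
    """Number of links when every system connects only to a central hub."""
    return n

for n in (3, 5, 7, 10):
    print(n, point_to_point_links(n), hub_links(n))
```

Seven systems, like the manufacturer above, already allow 21 pairwise links — which is roughly where "20 custom integrations within two years" comes from.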
2. No single source of truth
When the same data lives in multiple systems without a clear owner, it diverges. The ERP says inventory is 840 units. The WMS says 812. The MES says 791. Who's right? Usually the answer is "it depends on when you last synced" — which means no one can trust any of them without checking the others. Decision-making slows to the pace of the slowest system.
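A minimal sketch of the fix, assuming the WMS has been declared the system of record for stock levels (the specific owner here is hypothetical; the point is that one must be chosen, after which the other systems become caches to reconcile rather than competing truths):

```python
from dataclasses import dataclass

@dataclass
class Reading:
    system: str
    quantity: int

# Assumption: the WMS owns stock levels. Any system may be chosen,
# but exactly one must be.
SYSTEM_OF_RECORD = "WMS"

def reconcile(readings: list[Reading]) -> tuple[int, list[str]]:
    """Return the authoritative quantity and the systems that disagree with it."""
    truth = next(r for r in readings if r.system == SYSTEM_OF_RECORD)
    stale = [r.system for r in readings if r.quantity != truth.quantity]
    return truth.quantity, stale

qty, stale = reconcile([
    Reading("ERP", 840),
    Reading("WMS", 812),
    Reading("MES", 791),
])
# qty is the WMS figure; stale lists the systems due for a re-sync
```

"Who's right?" becomes a trivial lookup, and the interesting question becomes the operational one: how stale are the others allowed to get before an alert fires?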
3. Integration that bypasses the core
Some systems — particularly large ERP platforms — have strict rules about how they can be extended. Going around those rules (directly querying the database, writing to tables outside the sanctioned API) creates integrations that appear to work but break silently on every upgrade. We see this frequently with SAP environments where custom ABAP modifications have made the system impossible to patch.
4. Treating integration as an afterthought
Integration is often scoped at the end of a project — after the main system is live, as a line item in the "phase 2" plan that never gets funded. By then, the new system is already operating in isolation, and the business has adapted its processes around the gap. The integration becomes harder to build because now it has to accommodate workarounds that wouldn't exist if it had been designed from the start.
The right architecture
There's no single "right" integration architecture — it depends on the systems involved, the data volumes, the latency requirements, and the team that will maintain it. But three patterns are worth understanding: direct point-to-point connectors, workable for two or three systems but unmanageable beyond that; a hub-and-spoke middleware layer that centralises mapping and routing; and event-driven publish/subscribe, where producers broadcast changes and any number of consumers react without the producer knowing they exist.
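The event-driven pattern is worth seeing concretely, because it fixes the exact gap described earlier: the CRM announces "sales order confirmed" once, and any system that cares subscribes — adding a new consumer never touches the producer. A toy in-process sketch (topic and payload names are hypothetical; in production the bus is a message broker, not a Python object):

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Minimal in-process publish/subscribe bus for illustration."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()
# The MES reacts to confirmed sales orders without the CRM knowing it exists.
bus.subscribe(
    "sales_order.confirmed",
    lambda e: print("MES: create production order for", e["order_id"]),
)
bus.publish("sales_order.confirmed", {"order_id": "SO-1001", "qty": 500})
```

The design choice that matters is the decoupling: the WMS, the quality system, or an analytics pipeline can each subscribe to the same topic later without anyone revisiting the CRM side.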
For SAP environments specifically, the answer is usually SAP Business Technology Platform (BTP). BTP provides the sanctioned, non-invasive path to expose SAP data and transactions to external systems — without modifying core, without direct database access, and without creating brittle custom code that breaks on every upgrade.
We used exactly this approach in a recent engagement where a manufacturer needed to mobilise SAP workflows for executives and plant floor supervisors. Rather than building a custom connector, we built a BTP Extension API layer that surfaces the right data to the right role — approval queues for executives, production orders for plant managers, work orders for floor supervisors — all live from SAP, all through the supported API. Read that case study →
Practical rule
Standardise your data models before writing a single line of connector code. In our experience, a large share of integration bugs come from the same field having different names, formats, or units in different systems.
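What "standardise first" means in practice is a canonical model that every connector maps into before any business logic runs. A minimal sketch, with illustrative field names (the SAP-style names are shown only as an example of how differently systems label the same quantity):

```python
# Hypothetical mappings: the same two fields as three systems name them.
# Normalising into one canonical shape means every downstream mapping is
# written once, against one model, instead of once per system pair.
CANONICAL_FIELDS = {
    "erp": {"MATNR": "sku", "LABST": "qty_on_hand"},
    "wms": {"item_code": "sku", "available_units": "qty_on_hand"},
    "mes": {"material": "sku", "stock": "qty_on_hand"},
}

def to_canonical(source: str, record: dict) -> dict:
    """Rename source-specific fields into the canonical data model."""
    mapping = CANONICAL_FIELDS[source]
    return {mapping[key]: value for key, value in record.items() if key in mapping}

canonical = to_canonical("erp", {"MATNR": "A-100", "LABST": 840})
# Same shape regardless of source system:
same_shape = to_canonical("wms", {"item_code": "A-100", "available_units": 812})
```

With this layer in place, a unit or format mismatch is fixed in one mapping table rather than hunted down across twenty connectors.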
What good integration looks like in practice
Here's how the four most common manufacturing integration patterns should work when they're done right:
Build for observability from day one
The most underrated requirement in any integration project is observability. Integration failures are silent by nature — data stops flowing, but nothing crashes, and the users just quietly start doing things manually again. By the time someone notices, the systems are weeks out of sync.
Every integration layer needs three things:
- A structured log of every message or API call — what was sent, what was received, whether it succeeded
- An alert that fires when messages stop flowing or when error rates exceed a threshold
- A dead-letter queue or retry mechanism so failed messages don't disappear silently
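All three requirements can live in one small delivery wrapper. A minimal sketch, assuming a callable `send` that raises on failure (names, retry policy, and log fields are illustrative):

```python
import json
import logging
from collections import deque

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("integration")

# Failed messages are parked for inspection, never silently dropped.
dead_letter: deque = deque()

def deliver(message: dict, send, max_retries: int = 3) -> bool:
    """Send a message with structured logging, retries, and a dead-letter fallback."""
    for attempt in range(1, max_retries + 1):
        try:
            send(message)
            log.info(json.dumps(
                {"event": "delivered", "msg_id": message["id"], "attempt": attempt}))
            return True
        except Exception as exc:
            log.warning(json.dumps(
                {"event": "retry", "msg_id": message["id"],
                 "attempt": attempt, "error": str(exc)}))
    # Alerting can watch this queue's depth and the error-rate in the logs.
    dead_letter.append(message)
    log.error(json.dumps({"event": "dead_lettered", "msg_id": message["id"]}))
    return False
```

The structured (JSON) log lines are what make the alerting rule possible: a monitoring tool can count `retry` and `dead_lettered` events per minute instead of grepping free text.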
This sounds obvious, but fewer than half of the integration layers we've inherited had all three. The result is an integration that works — until it doesn't, and no one knows why or when it stopped.
Where to start
If you're looking at a fragmented system landscape and wondering where to begin, we'd suggest the following:
See how this looks in a real manufacturing environment
If system integration is the issue, the next step usually isn't a generic contact form. It's seeing how operational flow, visibility, and the execution layer come together in a manufacturing context.