AI Doesn’t Know What Your Planners Know

The standard narrative about why artificial intelligence fails in supply chains goes something like this: The data is a mess, the systems don’t talk to each other, and companies need to fix their infrastructure before anything can work. There is enough truth in that story to make it credible, and so many technology vendors benefit from it that it has become the default explanation. But it is not the right diagnosis. And the organizations that accept it without question will keep making the same expensive mistake.

The real reason most supply chain AI fails is that the AI doesn’t know how the operation actually makes decisions. That is not the same problem, and it has a different solution.

Ask any experienced supply chain planner whether their operation runs the way it’s documented. The answer is usually no, and the gap is almost never small. Documented processes describe how a supply chain was designed to work, not how it currently works. In most manufacturing and distribution environments, significant divergence between the two has accumulated over the years — not because anyone failed, but because operations adapt to reality. Systems get added. Customer mix shifts. Supplier relationships evolve. The team learns what works and adjusts accordingly, informally and without updating the policy manual.

That accumulated knowledge — the things the team knows that no system has ever recorded — is what I call operational context. It includes a particular vendor that reliably over-promises lead times in the fourth quarter, so experienced planners quietly order at a higher safety threshold in October regardless of what the system recommends. It includes a major retail account that inflates initial orders by roughly 20% every year and then cancels the difference in week six, so treating their forecasts at face value produces chronic overstock. It includes a production scheduler who releases orders 48 hours before the system-recommended date because a specific machine runs consistently behind due to a maintenance backlog that has never been formally resolved. It includes an S&OP output that the team treats as directional rather than binding, because the real decisions happen in a Monday morning call that isn’t logged anywhere.
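Rules like these are simple enough to make explicit once someone writes them down. As a purely illustrative sketch (every name, threshold and percentage here is hypothetical, not drawn from any real deployment), the fourth-quarter vendor rule might look like a small adjustment layered on top of the system's recommendation:

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A system-generated replenishment recommendation (illustrative fields only)."""
    sku: str
    vendor: str
    order_qty: int
    order_month: int  # 1-12

def apply_q4_vendor_buffer(rec: Recommendation,
                           unreliable_q4_vendors: set[str],
                           buffer: float = 0.15) -> Recommendation:
    """Encode the planners' informal rule: certain vendors under-deliver
    on fourth-quarter lead times, so pad October-December order quantities.
    The 15% buffer is an assumed placeholder value."""
    if rec.vendor in unreliable_q4_vendors and rec.order_month >= 10:
        rec.order_qty = round(rec.order_qty * (1 + buffer))
    return rec
```

The point is not that operational context reduces to a few if-statements — much of it is tacit and conditional — but that none of it is exotic once it is surfaced. The hard part is the surfacing, not the encoding.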

None of this is irrational. All of it reflects how operations survive and compete in the real world. And all of it is completely invisible to an AI system that reads only the structured data from your ERP, WMS, CRM or TMS.

This is why technically correct AI recommendations get ignored within weeks of deployment. A skilled planner evaluates a new system’s output against everything they know about the operation — supplier behavior, customer patterns, machine quirks, institutional rules that were never written down. When the AI doesn’t account for any of that context, the planner overrides it. No one captures why. The system learns nothing. The same recommendation appears next month. Eventually the planner stops looking.
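The "no one captures why" step is the cheapest part of that loop to fix. A minimal sketch, assuming a simple append-only JSONL log (the field names and function are hypothetical, not any vendor's API), shows how little machinery is needed to turn an override into a recorded data point rather than a silent rejection:

```python
import datetime
import json

def log_override(log_path: str, rec_id: str, system_qty: int,
                 planner_qty: int, reason: str) -> None:
    """Append one override event to a JSONL file: what the system
    recommended, what the planner actually did, and the free-text
    reason -- the operational context the model never had."""
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "recommendation_id": rec_id,
        "system_qty": system_qty,
        "planner_qty": planner_qty,
        "reason": reason,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Even an unstructured reason field, reviewed periodically, would break the cycle described above: the same override stops being invisible, and the pattern behind it becomes something the team, and eventually the system, can act on.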

The 95% failure rate of AI pilots that MIT’s NANDA research documented in 2025 is a contextual knowledge problem. The pilots were technically functional. They just never knew enough about the specific operation to earn trust from the people running it.

Fixing this requires a step that most AI vendors skip entirely because it doesn’t fit a productized implementation methodology: genuine operational discovery before a single agent touches a live decision. That means sitting with the planners, buyers and schedulers who actually make the decisions — not the project sponsor, not IT, and not a consultant facilitating a two-hour process-mapping workshop. It means asking what information those practitioners use that isn’t in the formal system; what rules they apply that were never documented; and what a trustworthy recommendation would actually look like to them, for this product, with this supplier, in this season. It means treating the first planner override not as a system failure but as the most valuable data point the deployment has produced — information about how this operation specifically makes decisions, which no generic model already knows.

The organizations generating durable results with supply chain AI are the ones that started with the smallest, most specific, most operationally understood deployment, engaged the practitioners who carry the knowledge before building anything, and expanded from a foundation of trust rather than a mandate from above.

Infrastructure still matters. A fragmented data landscape creates genuine friction, and an AI platform that assumes clean structured inputs will struggle in the messy reality of most midmarket supply chain environments. But infrastructure is the enabling layer, not the starting point. The question that has to be answered first — before the data architecture, before the agent selection, before anything — is whether the AI will know enough about how this operation actually works to be trusted by the people who run it. The technology is ready. That part was never the hard problem.

Mike Romeri is CEO of A2go.ai.