What is “Explainability” and What Does AI Have to do With it?
Manufacturing is a complex environment. From the equipment that facilitates production to the processes that govern the workflow, a lot of moving parts have to come together for a factory to function. And they're getting more complex. Every Industrial Internet of Things (IIoT) expansion and every new technology introduced to the factory adds another layer of complexity. With each new addition, it becomes important to ask yourself, "Do I know how it all works?"
This is especially important when artificial intelligence (AI) is at play. Many manufacturers are beginning to deploy AI in their facilities to help with everything from condition monitoring to value stream optimization. The problem is that they often take the AI at face value without understanding how it works. Programmed incorrectly, or fed poor data, AI can disrupt the workflow instead of improving it. To ensure it's helping and not hindering, manufacturers need a foundation in explainability.
What is explainability?
Explainability is the ability to explain, at a core level, how something works. It's about understanding the mechanics behind an action. This is in contrast to interpretability, which means understanding the theory behind why and how something works. In simpler terms, it's the difference between saying "an engine uses combustion to produce thrust" and being able to explain the physics of combustion propulsion.
Why is explainability in factories important?
Explainability is the foundation for setting expectations when it comes to AI. To make sure AI has the right data and can form the right conclusions, manufacturers need to understand the underlying process they're trying to improve. More importantly, technicians need to program AI with an understanding of the process it's charged with improving. That means being able to explain, in detail, which factors govern the process and why they're so vital.
The concept of explainability works in reverse, too. Manufacturers need to understand the capabilities of AI and the way it functions to apply it effectively to real-world systems. Forcing a square peg into a round hole is fundamentally flawed, and asking the AI how to do it better is futile. Programmers and manufacturers need a core understanding of the application and the wherewithal to explain it.
Efficiency comes from understanding
There is no shortage of out-of-the-box solutions for AI in the manufacturing environment. It’s easy to connect a vibration sensor to a program and teach AI to recognize patterns, for example. But we’re rapidly reaching the point where stock applications aren’t enough. Producers that want to bring efficiency and reliability to nuanced areas of their value stream need to rely on custom solutions. And while the AI and machine learning tools are out there, there’s still a disconnect in applying them.
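To make that concrete, here is a minimal sketch of the kind of "stock" vibration-monitoring application described above, written so that every decision it makes can be explained. The feature choices, thresholds, and synthetic data are illustrative assumptions, not a vendor's actual method: the point is only that each alert traces back to a named feature and a stated deviation from a healthy baseline.

```python
# A minimal, illustrative sketch of explainable vibration monitoring.
# Feature names, thresholds, and data here are assumptions for demonstration.
import numpy as np

def vibration_features(signal: np.ndarray) -> dict:
    """Reduce a raw vibration window to a few physically meaningful features."""
    rms = float(np.sqrt(np.mean(signal ** 2)))   # overall vibration energy
    peak = float(np.max(np.abs(signal)))         # largest single excursion
    crest = peak / rms if rms > 0 else 0.0       # spikiness relative to energy
    return {"rms": rms, "peak": peak, "crest_factor": crest}

def explain_anomaly(window: np.ndarray, baseline_mean: dict, baseline_std: dict,
                    z_threshold: float = 3.0) -> list[str]:
    """Flag features that deviate from the healthy baseline and say why."""
    reasons = []
    feats = vibration_features(window)
    for name, value in feats.items():
        z = (value - baseline_mean[name]) / baseline_std[name]
        if abs(z) > z_threshold:
            reasons.append(f"{name} = {value:.3f} is {z:+.1f} std devs from normal")
    return reasons

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Baseline statistics would normally come from healthy-machine history;
    # here they are synthesized so the example runs on its own.
    healthy = [vibration_features(rng.normal(0, 1.0, 2048)) for _ in range(50)]
    mean = {k: float(np.mean([h[k] for h in healthy])) for k in healthy[0]}
    std = {k: float(np.std([h[k] for h in healthy])) for k in healthy[0]}

    # Simulate a fault as occasional high-amplitude impacts in the signal.
    faulty = rng.normal(0, 1.0, 2048)
    faulty[::256] += 15.0
    for reason in explain_anomaly(faulty, mean, std):
        print(reason)
```

Because every flag names the feature and the size of its deviation, a technician can explain the alert in plain terms. A black-box model could detect the same fault, but it would need an added explanation layer before anyone could say why it fired.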
Manufacturers that treat explainability as the missing component in AI technology will quickly bridge the gap between application and solution. Getting there means understanding both sides of the coin: the process, the technology, and how they fit together.