How essential is measuring the efficiency and efficacy of industrial digital transformation (Dx) solutions across industries today?
KPIs and performance measurements are not new, but a culture of measurement becomes essential for Dx solutions. In many cases, Dx involves transforming an existing industrial system that already has its established processes. A proposed upgrade is in the challenger position: there will be uncertainty about its value, concerns about risks, skepticism, and resistance. A consensus among the different stakeholders involved in a Dx solution can only be reached on the basis of measurable, agreed-upon objectives.
Assessing “efficiency” has several dimensions: while improving the performance of a particular operation or the quality of its output (speed, productivity, throughput, error and defect rates, etc.) may be at the forefront, other aspects need to be measured as well. These aspects have their own objectives: non-functional properties such as security, safety, robustness, and flexibility. These properties are harder to evaluate and take more time. Finally, the impact (or “side effects”) of a solution on other operations must be assessed as well. Only then can we assess, in a well-rounded (as well as consensual) manner, the value and viability of a solution.
There is an investigative side to deploying Dx. The case for digital transformation is rarely clear from the start. Digital solutions rely on recent or emerging technologies that may or may not work for the situation at hand. The problem to be resolved may be agreed upon, but its causes may remain obscure.
An investigation may be necessary to understand these causes (akin to the Kaizen process in Japan). Collecting IoT data on the various parameters of a process or product to be improved, including its context (e.g., who is operating the machine, what is the inventory of parts to be used), then correlating these with actual variations in performance or quality, has been observed to speed up this research phase.
In several instances, a solution or technology has been used with success elsewhere by other users. But even when a solution is replicated, many parameters and requirements differ from one implementation to another. We have observed at the IIC that one size does not fit all: every case is different and needs to be adjusted, based on measurements and targets specific to that case.
How can one objectively assess what works and what does not?
That is where metrics and targets need to be precisely defined. These are two different but complementary things. You may set a target of 98% service uptime per month, but for this to be meaningful you need to define precisely what downtime is: do we count as downtime a case where the cause of the disruption is external and not the responsibility of the provider?
Does a service degraded beyond some threshold (e.g., an acceptable response-time limit) qualify as downtime? Should downtime be excluded during scheduled maintenance periods? Significant differences exist between providers of the same type of service based on such details.
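How much these definitional details matter can be made concrete with a small sketch. The outage log, the cause labels, and the exclusion set below are all hypothetical; the point is that the same log yields different uptime figures depending on which causes the stakeholders agree to exclude.

```python
from datetime import datetime, timedelta

# Hypothetical outage log for one month: (start, end, cause) tuples.
outages = [
    (datetime(2024, 5, 3, 2, 0), datetime(2024, 5, 3, 4, 0), "scheduled maintenance"),
    (datetime(2024, 5, 10, 9, 0), datetime(2024, 5, 10, 14, 0), "provider fault"),
    (datetime(2024, 5, 21, 0, 0), datetime(2024, 5, 21, 6, 0), "external power cut"),
]

# Which causes do NOT count as downtime, per the agreed definition.
EXCLUDED_CAUSES = {"scheduled maintenance", "external power cut"}

def uptime_pct(outages, month_start, month_end, excluded=EXCLUDED_CAUSES):
    """Uptime as a percentage of the month, counting only non-excluded outages."""
    total = month_end - month_start
    down = sum((end - start for start, end, cause in outages
                if cause not in excluded), timedelta())
    return 100 * (1 - down / total)

month_start = datetime(2024, 5, 1)
month_end = datetime(2024, 6, 1)
print(f"{uptime_pct(outages, month_start, month_end):.2f}%")  # prints 99.33%
```

With the exclusions, only the 5-hour provider fault counts and the target is met; count every outage (13 hours) and uptime drops to about 98.25%, right at the edge of a 98% target. The metric is the same; the definition decides the outcome.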