Top 5 Enterprise AI Costs in Manufacturing
For most manufacturers, the goal of implementing a data-driven AI strategy is to harness large, heterogeneous datasets in order to improve manufacturing processes. For some, the end goal may be optimising manufacturing scheduling and asset utilisation in order to understand and predict scheduling issues and root causes.
For others, AI may be the answer to quality management: by using data to optimise settings and production parameters, they can reduce the cost of quality checks, failures, and reworks. For others still, AI is intended to drive real-time reporting and analysis, as it can unveil patterns in manufacturing processes and supply chains, allowing manufacturers to predict and act in real time.
As with most manufacturing use cases, the tenth or even twentieth applied AI use case will generally still have a positive impact on the balance sheet. But what most manufacturers find is that while the marginal cost of additional use cases does not decrease, the marginal value of those use cases does. Most manufacturers in this position quickly reach a point where the marginal profit of new AI use cases turns negative.
AI needs to drive efficiency for manufacturers and ideally, in today’s challenging economy, it needs to be a revenue centre, not a cost centre. It’s not enough for organisations to leverage AI at any price. If AI is to be truly sustainable, we need to deal with some of the less tangible costs that add up over time and, therefore, hinder manufacturers from scaling and profiting with AI.
The 5 Costs Hindering Enterprise AI and What to Do About Them
1. Data Cleaning and Preparation: Most data teams will agree that data cleaning and wrangling is the most difficult or time-consuming part of data processes at their organisations. The real problem with data prep is that organisations are doing it separately for every single use case and project.
Instead of having repeated (and therefore costly) work across people, teams, and the wider organisation, manufacturers need to prioritise data prep efficiency. This means putting systems in place that allow data to be found, cleaned, and prepared once, then used a maximum number of times across different use cases.
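One way to picture "prepare once, use many times" is a shared registry of cleaning steps. The sketch below is purely illustrative (the step names, record fields, and `prep_step` decorator are hypothetical): each step is defined once, and any use case composes the steps it needs rather than rewriting them.

```python
# Hypothetical sketch: a shared registry of data-preparation steps.
# Each step is written once; every project reuses it by name.
from typing import Callable, Dict, List

PREP_STEPS: Dict[str, Callable[[dict], dict]] = {}

def prep_step(name: str):
    """Register a cleaning step under a name so any project can reuse it."""
    def register(fn):
        PREP_STEPS[name] = fn
        return fn
    return register

@prep_step("strip_units")
def strip_units(record: dict) -> dict:
    # e.g. a sensor reading stored as "42.5 mm" becomes the float 42.5
    record = dict(record)
    record["measurement"] = float(str(record["measurement"]).split()[0])
    return record

@prep_step("fill_missing_line")
def fill_missing_line(record: dict) -> dict:
    # default a missing production-line identifier instead of dropping the row
    record = dict(record)
    record.setdefault("line_id", "unknown")
    return record

def prepare(records: List[dict], steps: List[str]) -> List[dict]:
    """A use case just names the shared steps it needs, in order."""
    for name in steps:
        records = [PREP_STEPS[name](r) for r in records]
    return records
```

A new project would then call `prepare(raw_records, ["strip_units", "fill_missing_line"])` instead of re-implementing the cleaning logic, so fixes to a step propagate to every use case that names it.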
2. Operationalisation: It’s not uncommon in a manufacturing data science lab for the first version of a machine learning model to take six months to move from prototype to production. The problem here is also one of scale: consider the revenue lost for every month a given machine learning model is not in production and benefiting the business. When an organisation wants to scale to hundreds of models, the cost is debilitating.
Manufacturers need to invest in establishing consistent processes to manage the packaging of code, release, and operationalisation. By incorporating reuse from design to production, manufacturers will be able to scale without recoding models and pipelines from scratch in order to operationalise them.
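Consistent packaging might look like the following sketch, in which every model ships with the same manifest so release and deployment follow one repeatable process. The `ModelPackage` class, its fields, and the file layout are assumptions for illustration, not a reference to any particular tool.

```python
# Hypothetical sketch: one packaging convention for every model, so
# deployment reads the same manifest regardless of who built the model.
import json
import pickle
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class ModelPackage:
    name: str
    version: str
    feature_names: list   # inputs the model expects, checked at deploy time
    prep_pipeline: str    # reference to the shared data-prep steps used

    def save(self, model, directory: str) -> Path:
        """Write the manifest and serialised model into one versioned folder."""
        out = Path(directory) / f"{self.name}-{self.version}"
        out.mkdir(parents=True, exist_ok=True)
        (out / "manifest.json").write_text(json.dumps(asdict(self)))
        (out / "model.pkl").write_bytes(pickle.dumps(model))
        return out

def load_package(path) -> tuple:
    """Deployment reads the same two files for every model, no special cases."""
    p = Path(path)
    manifest = json.loads((p / "manifest.json").read_text())
    model = pickle.loads((p / "model.pkl").read_bytes())
    return manifest, model
```

Because every package carries its feature names and a pointer to the prep pipeline it was trained against, operationalisation becomes a routine step rather than a per-project rewrite.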
3. Data Scientist Cost and Retention: There are also people costs to deal with while data scientists spend time in the lab getting prototypes to work in the real world. Data scientists, by nature, want to be building and deploying models. Many organisations find they lose talented data scientists along the way because operationalisation and production were never streamlined.
Don’t make data scientists do things twice. Reducing costs associated with the inefficient use of data scientist time is usually a matter of introducing proper tooling along with the right operational processes to improve efficiency. Companies need to provide the resources for data scientists to reuse work and lessons learned from past projects.
4. Model Maintenance: Even once models make it into production, many manufacturers struggle to maintain them, because data is always changing. The more use cases a manufacturer takes on, the more challenging maintenance becomes, driving costs up even further.
The Solution: MLOps has emerged as a way of controlling the cost of maintenance, shifting it from a one-off task handled separately for each model (usually by the original data scientist who worked on the project) into a systematised, centralised task.
Part of maintenance is also ensuring easy reuse across the organisation, which means that people can readily find and build on work done by others, including data transformations and models.
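A systematised maintenance task can be as simple as a scheduled drift check run against every production model. The sketch below is a minimal illustration (the feature values and the three-standard-deviation threshold are assumed, not prescribed): it compares a live feature's distribution against its training baseline and flags when retraining should be considered.

```python
# Hypothetical sketch: a routine drift check that turns model maintenance
# into a scheduled, centralised task rather than ad-hoc per-model effort.
from statistics import mean, stdev

def drift_score(baseline: list, live: list) -> float:
    """Shift of the live mean, measured in baseline standard deviations."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def needs_retraining(baseline: list, live: list, threshold: float = 3.0) -> bool:
    # A threshold of 3 baseline standard deviations is an illustrative
    # choice; a real deployment would tune this per feature and model.
    return drift_score(baseline, live) > threshold
```

Run centrally over all models on a schedule, a check like this replaces the "original data scientist notices something is off" pattern with an explicit, auditable trigger for retraining.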
5. Complex Technological Stacks: Model maintenance itself is complex and can be costly if not properly addressed, but the same is often true of the infrastructure and AI technologies used across different parts of an organisation, especially when teams are geographically dispersed.
The Solution: Organisations need to be able to stitch together their broader technology stacks into a coherent whole; without this ability, they won’t be able to reuse and share knowledge across teams, which usually leads to additional costs.
Shifting to AI as a Revenue Centre
In the coming years, the ability of manufacturers to pivot their activities around Enterprise AI will fundamentally determine their fate. Those able to efficiently leverage data science and machine learning techniques to improve business operations and processes will find new business opportunities and get ahead of the competition, while those unable to shift will fall behind, perhaps swept away with the tide of rising costs and diminishing revenue. To achieve this, we need to move AI from a cost to a revenue centre.
If manufacturers can address the costs outlined above and decrease both marginal costs and incremental maintenance costs, they will be in a strong position to reuse models and pipelines at scale, and will find scaling a data-driven AI approach much easier in the future.