As a decision-maker in the water utility sector, you manage high-stakes responsibilities: keeping assets reliable, protecting public safety, and making tough financial decisions where mistakes are costly. So, when vendors talk about AI predicting pipe failures or optimizing networks, skepticism is not only normal – it’s healthy.
You want to know whether the predictions are trustworthy. You want to know how decisions are made. You want evidence that this approach reduces risk, not adds to it.
AI and Machine Learning can be highly accurate tools for identifying assets with a high likelihood of failure and deprioritizing low-risk ones, but the technology comes with its own set of questions. This article breaks down the most common concerns utility teams raise about AI/ML risk prediction – and what you need to know to move forward confidently.
- “Our Data Isn’t Ready:” The GIGO Problem
The concern of “garbage in, garbage out” (GIGO) applies to any digital decision-making approach. AI is no different. Poor-quality data will always result in poor decisions, whether you rely on advanced analytics, spreadsheets, or engineering judgment.
If data is incomplete, inconsistent, or inaccurate, AI will be handicapped, and the results will suffer: models are only as good as the data they are trained on. That reality is not unique to AI, but it does force organizations to confront data issues that already affect everyday decisions. AI does not introduce data quality problems, but it makes it harder to ignore them.
| The Concern | What You Should Know |
| --- | --- |
| Data Scarcity | “Do we have enough signal?” Pipe failures are rare events. Many utilities do not have long, well-documented digital histories of failures, which raises a reasonable concern about whether there is enough information to support reliable predictions. |
| Data Quality & Consistency | “Can we trust what we have?” Missing records, inconsistent failure logging, limited digitalization, and differing standards across districts can weaken any decision framework, not just AI-based ones. |
**The Reality:** Data readiness is part of the project. Any successful effort to improve asset decisions begins with auditing, cleaning, and organizing data. AI does not remove that discipline: it reinforces it.

Where AI adds value is in making better use of the information utilities already have. By combining domain knowledge about how water infrastructure behaves with statistical learning techniques, AI can extract patterns from sparse and imperfect data more consistently than manual statistical analysis alone. AI does not fix bad data, and it does require data in electronic form. But when implemented correctly, it can identify anomalies, handle gaps systematically, and – most importantly – support more defensible, repeatable decisions about which assets truly deserve attention and funding. Finally, AI can serve as a forcing function and provide useful guidelines for improving utility data. In the abstract, the question “when is our data good enough?” is hard to answer. Adopting a machine-learning approach, on the other hand, yields clear definitions of what data needs to look like to be useful for predicting failure.
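To make “handle gaps systematically” concrete, the sketch below shows one way an ML pipeline can learn from incomplete records. It is a minimal illustration in Python with scikit-learn; the column names, toy values, and choice of model are assumptions for demonstration, not a description of any specific vendor’s pipeline.

```python
# Minimal sketch: fitting a failure-risk model on an imperfect asset register,
# with gaps handled systematically rather than rows being discarded.
# Column names, values, and the model are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier

# Toy asset register with realistic gaps: unknown install years and materials.
pipes = pd.DataFrame({
    "install_year": [1952, np.nan, 1988, 1964, np.nan, 2001],
    "diameter_mm":  [150, 200, np.nan, 100, 300, 150],
    "material":     ["cast_iron", "pvc", None, "cast_iron", "ductile", "pvc"],
    "failed":       [1, 0, 0, 1, 0, 0],
})

# Encode material as category codes; missing entries become -1.
X = pipes[["install_year", "diameter_mm"]].copy()
X["material_code"] = pipes["material"].astype("category").cat.codes

# This gradient-boosting model accepts NaN inputs natively, so sparse
# records still contribute to the fit instead of being thrown away.
model = HistGradientBoostingClassifier(random_state=0)
model.fit(X, pipes["failed"])
print(model.predict_proba(X)[:, 1])  # per-pipe failure probabilities
```

Note that a traditional spreadsheet analysis would typically drop the rows with missing values; here they still inform the fit.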
- “It’s a Black Box:” The Explainability Challenge
When an AI model suggests replacing a main line that seems fine to an engineer, the immediate and necessary question is: why? That reaction appears any time a model incorporates more variables, more interactions, and more complexity than a human can easily track. Better decision quality comes with a tradeoff: as models become more powerful and consider more inputs, explaining exactly how those inputs interact becomes harder.
In reality, this challenge exists for any sufficiently advanced analytical or statistical approach – not just AI. Anyone who has worked with a large dataset and a complicated statistical failure model will have felt the cognitive burden of understanding its inner workings and how it arrives at its outputs. The benefit of an AI approach lies in providing a standardized, rigorous toolkit for assessing decision quality, which helps scale reasoning about model behavior to much larger datasets and more complex decisions.
| The Concern | What You Should Know |
| --- | --- |
| Lack of Explainability | Complex models can be difficult to interpret. AI models, in particular, can combine many variables in non-obvious ways, making it harder for operators to intuitively understand how a result was produced – especially in mission-critical environments. |
| Trust and Accountability | When a major failure occurs, utilities need to understand not just what happened, but why. That includes the factors behind a prediction – or a missed prediction – for regulatory review, auditing, and long-term planning. |
**The Reality:** Explainability becomes more challenging with machine learning precisely because ML is more capable. ML models can evaluate far more variables and interactions than traditional approaches, which is why they can surface risks that simpler methods miss.

This is not a new problem, and it is not an afterthought. ML practitioners have long invested in ways to reason about model behavior, because without that capability, the models would be unusable in real-world decision-making. In practice, explanations tend to fall into two categories. One is reviewing risk rankings across the network and applying basic analytics and engineering expertise to understand why certain assets rise to the top. The other is applying techniques developed specifically for reasoning about ML models, such as assessing input feature importance. These are most useful when paired with domain knowledge. For a high-risk pipe, an explanation might point to factors such as material (for example, cast iron), age, and external conditions like stress from proximity to a high-traffic area. AI does not replace engineering judgment – it strengthens it by making complex risk patterns visible and defensible.
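As an illustration of the feature-importance technique mentioned above, the sketch below computes permutation importance: shuffle each input in turn and measure how much prediction quality degrades. The features, synthetic data, and model are assumptions made for demonstration only.

```python
# Minimal sketch: permutation feature importance for a hypothetical
# pipe-failure model. Features, data, and model are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "age_years":         rng.uniform(0, 100, n),
    "is_cast_iron":      rng.integers(0, 2, n),
    "diameter_mm":       rng.choice([100.0, 150.0, 200.0, 300.0], n),
    "near_high_traffic": rng.integers(0, 2, n),
})
# Synthetic ground truth: older cast-iron pipes under traffic stress fail more.
risk = 0.02 * X["age_years"] + 1.0 * X["is_cast_iron"] + 0.8 * X["near_high_traffic"]
y = (risk + rng.normal(0, 0.5, n)) > 2.0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffling an important feature should noticeably hurt held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name:>18}: {score:.3f}")
```

In a real deployment the same analysis would run on the utility’s own records, and the top-ranked factors can be sanity-checked against engineering knowledge – exactly the pairing with domain expertise described above.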
- “We Can’t Afford Major Mistakes:” False Alarms, Misallocations
In high-stakes infrastructure decisions, mistakes are costly – whether they lead to unnecessary work or missed failures.
| The Concern | What You Should Know |
| --- | --- |
| False Positives | If a model predicts a failure that never occurs, crews may spend time and money inspecting or replacing assets that did not need attention. If this happens frequently, trust erodes, and genuine warnings risk being ignored. |
| False Negatives | If a model misses an imminent failure, the consequences are immediate and visible: unplanned outages, emergency repairs, higher costs, and public disruption. |
**The Reality:** False positives and false negatives are not unique to AI. Every decision framework – experience-driven, rules-based, or analytical – produces both. The real question is not whether mistakes happen, but how often they happen and what they cost over time.

This is where machine learning changes the equation. Quantifying mistakes lies at the heart of machine learning. Any AI development includes defining formal ways to measure decision errors consistently and to adjust models to minimize those errors. Instead of relying on assumptions or intuition, utilities can quantify how well their approach is working and improve it iteratively. Ask AI vendors to demonstrate prediction accuracy against your historical baseline and to show how their approach reduces water loss and unplanned outages.
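To show what “measuring decision errors consistently” can look like, the sketch below computes false positives, false negatives, precision, and recall with scikit-learn. The prediction and outcome arrays are made-up placeholders standing in for a backtest against historical failure records.

```python
# Minimal sketch: quantifying false positives and false negatives against a
# historical baseline. The arrays below are illustrative placeholders; in
# practice they would come from a backtest on past failure records.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

# 1 = "flagged as high risk" / "actually failed", 0 otherwise.
predicted = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
actual    = [1, 0, 0, 1, 0, 1, 1, 0, 0, 0]

tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"False positives (unneeded inspections): {fp}")
print(f"False negatives (missed failures):      {fn}")
# Precision: of the pipes we flagged, how many actually failed?
print(f"Precision: {precision_score(actual, predicted):.2f}")
# Recall: of the pipes that failed, how many did we flag in advance?
print(f"Recall:    {recall_score(actual, predicted):.2f}")
```

Tracking these numbers over successive model versions is what turns “how often are we wrong, and what does it cost?” into an answerable question.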
- “We Don’t Have the Staff:” Skills and Governance Risk
Adopting any new planning tool raises a practical concern: who owns it, who uses it, and who is accountable for the decisions it informs? For many utilities, the worry is not whether AI works, but whether they have the staff capacity and governance structure to use it responsibly.
| The Concern | What You Should Know |
| --- | --- |
| Talent Gap | Utility teams are built around engineering and operations expertise, not data science or model development. There is a concern that AI would require new roles, new skills, or ongoing technical maintenance that existing teams cannot absorb. |
| Governance and Oversight | Decisions that affect capital planning, field work, and public safety require clear ownership. Utilities need to understand who reviews model outputs, how recommendations are challenged, and how decisions are ultimately made and documented. |
**The Reality:** AI is not a replacement for engineers, nor does it require them to become data scientists. In practice, AI functions as another analytical tool in the decision-maker’s toolbox – one that engineers use, review, and apply within existing planning and governance processes.

Well-designed AI systems are built to fit utility workflows, not disrupt them. Model development and maintenance sit with the provider, while utilities retain control over how outputs are interpreted, validated, and acted upon. Engineering judgment remains central, supported by more comprehensive analysis than manual methods can provide. As infrastructure systems become more complex and data-rich, relying solely on simplified proxies like age or material becomes increasingly limiting. AI does not make decisions on behalf of utilities, but it does expand what engineering teams can reasonably evaluate, helping them make more informed, defensible choices without adding operational burden.
Artificial Intelligence offers water utilities a powerful tool for assessing risk and prioritizing action. It does not remove uncertainty, but it makes uncertainty measurable, explainable, and easier to manage. Beyond more accurate predictions, it provides a clear framework for leveraging existing data, a way to surface advanced insights into patterns of failure, and robust measurement of how well it performs.
The real challenge is rarely technology itself. It is understanding what questions AI can answer, how its outputs should be interpreted, and how those insights fit into existing engineering judgment and governance. When used that way, AI stops being a black box and becomes a practical, defensible tool for asset management decisions.
This article is part of The VODA.ai Lens series – where we share how our platform evolves, the thinking behind our technology, and the real-world impact behind every release. We believe transparency builds trust – and that smarter infrastructure starts with better tools.
Subscribe to our blog for product insights, methodology breakdowns, and stories behind the model.



