Top Questions Utilities Are Asking About AI for Risk Prediction (Part 2)
Cost, ROI, and Adoption

This is part 2 of the series highlighting questions in the water industry about the readiness to adopt and deploy AI, a technology with the potential to dramatically transform processes and entire industries.

1. How much will it cost to source and augment our planning and operations with AI? 

The Answer: In most cases, less than people expect. Pricing typically scales with the size of the system, the use case, and the modules selected, so utilities are not paying a one-size-fits-all price. AI risk prediction represents a small investment compared with the cost of field investigations, emergency repairs, unnecessary replacement, or adding headcount. Replacement or condition assessment can cost hundreds of thousands of dollars per mile – or even millions – while AI risk models can assess an entire network, or a critical subsection, for just tens to hundreds of dollars per mile. 

What drives cost: 

  • System size: Pricing usually scales with factors like miles of pipe, asset count, and whether the utility is starting with one use case or multiple.  
  • Scope and modules selected: A utility evaluating one application will have a different cost profile than one deploying risk across mains and service lines or adding lead prediction as well. 
  • Staff time: There is some up-front time spent collecting and organizing the data the AI needs, typically a few hours of GIS staff time per week during the first few weeks of implementation. Once the model is built, however, maintenance and updates are generally minimal and can be handled as needed. In fact, utilities are usually able to spend less time on risk prediction than before while delivering more accurate results. The real staff time goes into using the tool for actual decision-making: it shifts how utilities prioritize and plan, rather than adding a separate layer of effort. Because the model is digital, it generates risk insights much faster and makes scenario planning and project development far more efficient than the manual spreadsheet methods many utilities still rely on today. 

Compare this against the cost of getting it wrong: One catastrophic main break can cost hundreds of thousands of dollars – and in severe cases, millions – while avoidable capital replacement can waste far more over time. Against that, AI risk prediction is modest in cost, frequently less than the cost of a single employee, and routinely pays for itself in under a year (see The true Return on Investment of AI-driven Risk Prediction). 
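As a rough illustration of that payback math, here is a back-of-envelope sketch. The per-mile price, network size, and break costs below are placeholder assumptions drawn from the ranges above, not quoted pricing:

```python
# Back-of-envelope payback sketch using illustrative figures.
# All numbers are assumptions for demonstration, not actual pricing.

def annual_payback(miles, ai_cost_per_mile, breaks_avoided, cost_per_break):
    """Return (annual AI cost, avoided-break savings, net benefit)."""
    ai_cost = miles * ai_cost_per_mile
    savings = breaks_avoided * cost_per_break
    return ai_cost, savings, savings - ai_cost

# A hypothetical 500-mile network assessed at $100/mile,
# avoiding two $150,000 emergency breaks in the first year:
cost, savings, net = annual_payback(500, 100, 2, 150_000)
print(f"AI cost: ${cost:,}  Savings: ${savings:,}  Net: ${net:,}")
# AI cost: $50,000  Savings: $300,000  Net: $250,000
```

Even with conservative inputs, avoiding a single emergency repair can cover the annual cost several times over.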

See the Top AI Questions utilities are asking in 2026    

2. How long until we see measurable results?

The Answer: Results often start earlier than many utilities expect. Some value emerges in the first few weeks through data cleansing and risk accuracy validation. Measurable operational results usually follow once the model is built, validated, and put into use, and should begin to appear well within the first year. 

0-3 months: Utilities typically begin seeing early value through improved data visibility and clearer understanding of what data gaps need to be addressed. This stage often helps teams organize asset information, strengthen data quality, and prepare for more efficient planning once model outputs are available. 

3-6 months: Initial model outputs, risk scores, and prioritized watch lists are often available. Utilities can begin reviewing high-risk areas, validating results, and identifying where inspections or interventions should be focused. 

6-12 months: Measurable results emerge. This may include better-targeted inspections, stronger confidence in replacement priorities, clearer support for capital decisions, and growing evidence that model predictions align with field conditions. 

12+ months: Broader operational and financial benefits (ROI) often become easier to quantify, including avoided emergency work, more efficient capital allocation, and stronger integration of risk insights into planning workflows. 

Quick wins you can show leadership within 6 months: 

  • “Our model identified 47 high-risk pipes; we inspected 15 and found critical deterioration in 12.” 
  • “We deferred replacement of 200 pipe segments flagged as low-risk, saving $2.3M in capital.” 
  • “We avoided 8 emergency repairs through targeted intervention, saving $180,000.” 
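The hit-rate evidence in the first quick win can be computed directly from inspection outcomes. A minimal sketch, using the numbers from the example quote:

```python
def inspection_hit_rate(inspected, confirmed):
    """Fraction of inspected high-risk pipes where deterioration was confirmed."""
    if inspected == 0:
        raise ValueError("no inspections recorded")
    return confirmed / inspected

# "We inspected 15 and found critical deterioration in 12":
rate = inspection_hit_rate(15, 12)
print(f"Hit rate: {rate:.0%}")  # Hit rate: 80%
```

A simple metric like this, tracked from the first round of inspections, gives leadership a concrete number rather than an anecdote.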

The temptation: Expecting transformation in 90 days. AI is powerful but not magic. Plan for meaningful impact in year one and optimization in year two. 

3. Can we start with a pilot, or should we go all-in?

The Answer: Some utilities want to start with a pilot, but fewer see it as a required first step. Because the technology is mature and proven, more utilities are comfortable moving directly to deployment – or using a pilot only to validate fit and define a path forward. Pilot programs can make sense, but beware of the “pilot trap” mentioned in our ROI article. Here’s how to pilot effectively: 

Good pilot approach: 

  • Evaluate the software’s prediction accuracy against your broader network, not just a small or unusually clean section. The real test should be whether the model performs accurately enough to support decisions across the full system. 
  • Define success criteria upfront, including the thresholds that would justify full deployment if the pilot meets them. 
  • Set a clear timeline and assign dedicated staff time to manage and learn from the pilot. 
  • Use the pilot to build confidence in adoption, not to keep the project in a holding pattern. 

Pilot trap warning signs: 

  • Choosing an unrepresentative area that does not reflect real system conditions (all new pipes or all old). 
  • Treating the pilot as an open-ended experiment with no defined decision point. 
  • Treating it as an IT project instead of an operational decision. 

When to skip the pilot and deploy system-wide: 

  • You have strong executive sponsorship. 
  • You’ve seen proven results from peer utilities. 
  • Your organization is ready to act on the insights and embed them into planning workflows. 

The reality: Pilots are sometimes part of the process. The key is to ensure your pilot is designed to confirm value, inform deployment, and support a decision – not to postpone one. Set a hard deadline: “We’ll evaluate results in X months and decide to scale, modify, or discontinue.” 
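The “hard deadline and decision” advice can be made concrete as a simple scorecard evaluated at the deadline. The criteria names and thresholds below are hypothetical examples, not a standard:

```python
# Hypothetical pilot scorecard: success criteria defined up front,
# then evaluated at the deadline to force a scale/modify/discontinue call.

CRITERIA = {
    "hit_rate": 0.70,           # min fraction of flagged pipes confirmed bad
    "staff_hours_per_week": 5,  # max ongoing staff time required
}

def pilot_decision(results):
    """Return 'scale', 'modify', or 'discontinue' based on criteria met."""
    met = [
        results["hit_rate"] >= CRITERIA["hit_rate"],
        results["staff_hours_per_week"] <= CRITERIA["staff_hours_per_week"],
    ]
    if all(met):
        return "scale"
    if any(met):
        return "modify"
    return "discontinue"

print(pilot_decision({"hit_rate": 0.80, "staff_hours_per_week": 3}))  # scale
```

Writing the criteria down before the pilot starts is what prevents the open-ended experiment the “pilot trap” describes.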

VODA.ai offers a risk-free subscription: if the risk prediction model does not meet the agreed-upon success criteria, you pay nothing. 

4. What about vendor lock-in and proprietary models?

The Answer: This is a legitimate concern that deserves careful contract negotiation. 

Questions and considerations for vendors: 

Results and data ownership: “If we terminate the contract, do we retain access to all risk scores, predictions, and historical analyses?” The answer must be yes: this is your data. 

Model transparency: “Can you explain how the model weighs different risk factors?” You don’t need to see proprietary algorithms, but you should understand the logic. Beware of pure “black box” solutions. 

Integration flexibility: “Can your predictions export to our GIS/CMMS in standard formats (CSV, API, etc.)?” Avoid solutions that trap data in proprietary dashboards. 

Migration path: “What happens if we want to switch vendors or bring this in-house?” Look for vendors offering data portability and documentation. 

Best practice: Negotiate data export rights into your contract from day one. Require quarterly exports of all predictions and model outputs in usable formats. This protects your investment even if you change vendors. 
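A quarterly export in an open format can be as simple as writing risk scores to CSV. A minimal sketch; the field names are illustrative, not a specific vendor’s schema:

```python
import csv

# Illustrative export of model outputs to a vendor-neutral CSV that a
# GIS/CMMS can ingest. Field names are hypothetical, not a real schema.
predictions = [
    {"pipe_id": "P-001", "risk_score": 0.91, "rank": 1},
    {"pipe_id": "P-002", "risk_score": 0.42, "rank": 2},
]

with open("risk_export_q1.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["pipe_id", "risk_score", "rank"])
    writer.writeheader()
    writer.writerows(predictions)
```

If a vendor cannot produce something this simple on a regular schedule, that is a warning sign about portability.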

Reality check: Some degree of vendor relationship is normal for specialized AI services. The key is ensuring you’re buying a tool, not creating a dependency. 

5. How do we get buy-in from field crews who’ve been doing this for 30 years? 

The Answer: This is often the hardest challenge, and the most critical for success. Field crews can make or break AI implementation. 

What doesn’t work: 

  • Presenting AI as a replacement for human expertise. 
  • Implementing top-down without field input. 
  • Ignoring crews when predictions prove wrong. 
  • Treating AI as an IT initiative disconnected from operations. 

Utilities should not treat AI as a workforce replacement strategy, because accurate risk prediction still depends on human expertise to interpret results, validate findings, and act on them effectively. 


What works: 

  • Frame it as a tool, not a replacement: “You know these systems better than anyone. AI helps you focus your expertise on the highest-risk areas instead of spreading it thin across everything. AI isn’t about replacing human expertise – it’s about amplifying it.” 
  • Involve crews early: Include field supervisors in implementation design. Ask them to identify the areas they’re worried about, and see if AI agrees. When it does, you’ve got credibility. When it doesn’t, you’ve got valuable learning. 
  • Create feedback loops: When crews inspect a high-risk pipe and find problems, celebrate it. When they find a false positive, log it as model improvement. Make them partners in refinement, not subjects of evaluation. 
  • Show respect for institutional knowledge: “AI is learning from years of break data, including breaks you’ve responded to. This is your knowledge, codified.” 
  • Highlight wins: Track emergency repairs avoided through proactive intervention. Share these stories in team meetings. Field crews take pride in preventing problems, not just fixing them. 
  • Address the retirement reality: Many veteran operators understand the “silver tsunami” problem. Frame AI as a way to preserve their knowledge for the next generation rather than a threat to their value. 
  • Start with volunteers: Find one or two field supervisors or engineers who are curious and tech friendly. Let them be ambassadors. Peer influence is more powerful than management mandates. 
  • The ultimate validation: When a crew chief says, “AI flagged this pipe last month, and I’m glad we replaced it before winter,” you’ve won. Until then, expect healthy skepticism. You will earn trust through results. 

This has been a two-part series exploring some of the questions utilities raise as they plan to take advantage of the transformative potential of AI to augment decision-making. Read the first article here 👉 Top Questions Utilities Are Asking About AI for Risk Prediction (Part 1): Data, Accuracy, and Integration 

 

This article is part of our Utility Voices series – where we share real stories, field-tested insights, and trusted perspectives from across the water sector. From frontline engineers to leading consultants, from early questions to proven outcomes, these are the voices shaping the future of water.    

🔔 Subscribe to our blog so you don’t miss the next chapter. 

Cory Sides

Cory Sides is Senior Vice President of Sales at VODA.ai. A mechanical engineer with over 20 years of water utility experience, he helps utilities leverage digital solutions – including machine learning and system optimization – to better manage and protect their infrastructure.
