From Reactive Casework to Predictive Risk: Building Explainable AI for Damp and Mould
A shorter version of this story was featured in Housing Technology. This is the fuller write-up, with more of the thinking behind the journey.
Damp and mould rarely arrives as one dramatic event.
More often, it builds quietly: a repair here, a repeat visit there, a property type that keeps appearing, a pattern that only becomes obvious once someone has finally joined the dots.
By the time a case is formally logged, whether through a resident report, an inspection, or a sequence of reactive visits, the opportunity for earlier intervention may already have narrowed.
That is one of the uncomfortable realities in social housing. You can have committed people, responsive teams and good intent, and still find yourself asking a harder question:
Are we seeing risk early enough to act before conditions escalate?
At Moat, this question became increasingly important. The challenge was not a lack of effort. It was that the early signals of damp and mould risk were spread across different systems, teams and moments in time.
The information existed. The pattern was harder to see.
Slowing down before speeding up
When people talk about AI, there is often a temptation to start with the model.
What algorithm should we use?
Can we predict which homes are at risk?
How quickly can we build something?
Those are useful questions, but they were not the first questions we asked.
Before getting excited about predictive analytics, we had to ask something more basic:
Are our processes and data reliable enough to support prediction in the first place?
Because if they are not, AI simply gives you a faster and more confident way of being wrong.
So we deliberately slowed down before speeding up.
In 2023, our first step was not machine learning. It was not a dashboard. It was not a shiny AI pilot.
It was process discipline.
We introduced a structured, low-code workflow so that damp and mould cases could be logged consistently, triaged in a standard way, and followed through with clearer audit trails.
This kind of work rarely makes headlines. It is the plumbing behind the insight. But without it, everything else becomes fragile.
Only once that baseline was in place did it make sense to move forward.
From hindsight to visibility
In 2024, with the workflow embedded, we focused on visibility.
We developed a Power BI reporting suite that brought together case volumes, repeat visits, property types, geographic clustering and other operational indicators.
This changed the conversation.
Teams could see where damp and mould cases were concentrating, how long cases were staying open, and which homes were experiencing repeat interventions.
The discussion shifted from anecdote to evidence.
That was valuable in its own right. Business intelligence delivered value before any predictive model existed.
But it also exposed the next limitation.
The reporting showed us where problems had already happened.
It did not show us where risk might be quietly building next.
That gap became the catalyst for the next phase.
AI as decision support, not decision authority
In 2025, working closely with Property Services, we explored whether historical patterns could help identify homes at higher risk of damp and mould before issues were reported.
The framing mattered.
This was not about automated decision-making. It was not about replacing surveyors, professional judgement or local knowledge.
It was about decision support: using data to help prioritise inspections, conversations and earlier intervention.
We combined data from multiple sources, including repairs history, property characteristics, EPC data, voids information, CRM records and selected environmental indicators.
We then engineered features such as construction era, property type, repeat repair patterns and previous damp-related activity as potential risk indicators.
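The shape of that feature engineering can be sketched in pandas. The column names, categories and era bandings below are hypothetical stand-ins, not our actual schema; the point is how repairs history and property attributes combine into per-property risk indicators.

```python
import pandas as pd

def engineer_risk_features(repairs: pd.DataFrame, properties: pd.DataFrame) -> pd.DataFrame:
    """Derive illustrative damp-and-mould risk indicators per property.

    Column names (property_id, repair_category, construction_year,
    property_type) are hypothetical stand-ins for the real schema.
    """
    # Repeat repair pattern: number of repairs logged per property
    repair_counts = repairs.groupby("property_id").size().rename("repair_count")

    # Previous damp-related activity: any repair tagged as damp or mould
    damp_flag = (
        repairs.assign(is_damp=repairs["repair_category"].str.contains("damp|mould", case=False))
        .groupby("property_id")["is_damp"]
        .any()
        .rename("prior_damp_activity")
    )

    features = properties.set_index("property_id").join([repair_counts, damp_flag])
    features["repair_count"] = features["repair_count"].fillna(0).astype(int)
    features["prior_damp_activity"] = features["prior_damp_activity"].fillna(False)

    # Construction era as a coarse banding of build year (bands are illustrative)
    features["construction_era"] = pd.cut(
        features["construction_year"],
        bins=[0, 1944, 1979, 2005, 2100],
        labels=["pre_1945", "1945_1979", "1980_2005", "post_2005"],
    )
    return features.reset_index()
```

Features like these stay legible to surveyors, which matters later when the model has to explain itself.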
Several machine learning approaches were tested using Python. We selected XGBoost because it offered a strong balance between performance and interpretability. Model development and evaluation were tracked using MLflow, giving us a transparent record of experiments, metrics and versions.
But from the start, explainability was non-negotiable.
Damp and mould is not a low-stakes analytical exercise. It touches resident wellbeing, property condition, safety, trust and regulatory expectations.
So we embedded SHAP explainability into the model, allowing staff to see which factors most influenced each risk score.
In practice, this means a surveyor can understand why a home has been flagged, and challenge the output where professional judgement suggests otherwise.
The model is advisory.
The decision remains human.
The accountability remains clear.
The harder part is operational
The model now highlights properties with elevated risk indicators, helping teams identify where earlier inspection or proactive engagement may be appropriate.
But surfacing risk is only half the battle.
The harder question is:
What do you do with the signal once you have it?
Every organisation has finite capacity. Surveyors are already balancing reactive demand, planned work, resident expectations and compliance requirements.
So the challenge is not simply building a model that can identify risk. The challenge is integrating that model into real operational decision-making in a way that is useful, proportionate and sustainable.
A risk score by itself does not fix damp and mould.
It has to connect to triage, inspection capacity, case management, resident communication and clear ownership of the next action.
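A minimal sketch of that connection, with a hypothetical threshold and ordering rules, shows the shape of the operational step: scores become a capacity-limited queue for human review, not an automated decision.

```python
from dataclasses import dataclass

@dataclass
class ScoredHome:
    property_id: str
    risk_score: float   # model output, 0..1 (advisory)
    open_case: bool     # already in active case management?

def build_inspection_list(homes: list[ScoredHome],
                          capacity: int,
                          threshold: float = 0.7) -> list[str]:
    """Turn advisory risk scores into a capacity-limited inspection list.

    The threshold and ordering rules here are illustrative. Homes already
    in an open case are excluded (they have an owner), the rest are ranked
    by score, and the list is capped at available surveyor capacity.
    """
    candidates = [h for h in homes
                  if h.risk_score >= threshold and not h.open_case]
    candidates.sort(key=lambda h: h.risk_score, reverse=True)
    return [h.property_id for h in candidates[:capacity]]
```

For example, with five scored homes and capacity for two inspections:

```python
homes = [
    ScoredHome("A", 0.92, False),
    ScoredHome("B", 0.88, True),   # already owned by an open case
    ScoredHome("C", 0.75, False),
    ScoredHome("D", 0.40, False),  # below threshold
    ScoredHome("E", 0.81, False),
]
build_inspection_list(homes, capacity=2)  # → ["A", "E"]
```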
That is where responsible AI becomes less about technology and more about service design.
What we learned
Three lessons stand out.
First, trust the foundation before trusting the model. Structured workflows, consistent triage, audit trails and Power BI reporting were not just stepping stones. They were essential foundations.
Second, explainability is not optional. In resident-impacting contexts, opaque predictions are not good enough. Staff need to understand why something has been flagged, and leaders need confidence that the organisation can explain how the model is being used.
Third, maturity is a journey, not a leap. We did not jump from fragmented case handling straight into machine learning. We moved from workflow, to reporting, to predictive analytics. Each stage created value before the next one began.
That reduced risk.
It improved adoption.
It made the AI work feel like an evolution, not a stunt.
What responsible AI looks like
Predictive analytics will never replace professional judgement in social housing.
Nor should it.
But when built on strong foundations, it can help organisations have better conversations about risk, timing and prioritisation.
For us, the most important outcome is not the model itself.
It is the confidence that comes from knowing why a home has been flagged, what evidence sits behind that signal, who reviews it, what action follows, and who remains accountable for the decision.
That is where AI becomes useful.
Not as a shortcut around professional judgement, but as a way to support it earlier, with better evidence and clearer focus.
In damp and mould, timing matters.
The earlier we can see risk, the better chance we have of acting before the problem escalates.
That, to me, is what responsible AI looks like in practice.