Defining heat losses as percentages misses the point of network optimisation. Casey Cole explains why.
Losses that go unchecked can easily double the cost of heat on the network, but while everyone agrees it's hugely important to limit losses, the way we currently define and measure heat loss is damaging the performance of our networks. Let me explain.
Industry players and government policies talk about losses from networks as percentages. Open almost any project specification or government consultation relating to district heat and you'll find losses defined as the percentage of heat that leaves the plant room, but isn't consumed as useful heat by the customers.
There are two big problems with defining losses in this way.
First, because it's calculated as a percentage of plant room output, the losses figure will depend on heat demand - if customers require more heat, output from the plant room will be higher. If they require less, it will be lower. So even if the amount of heat that's lost stays the same, the percentage changes depending on demand.
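The arithmetic above can be sketched in a few lines. This is an illustration with made-up figures, not data from a real network: the same absolute loss in kWh produces very different percentage figures depending on how much the customers happen to consume.

```python
def percentage_loss(plant_output_kwh, consumed_kwh):
    """Percentage of plant-room output that never reaches customers as useful heat."""
    return 100 * (plant_output_kwh - consumed_kwh) / plant_output_kwh

# Hypothetical network: 200,000 kWh/year lost from the pipework, unchanged
# whatever the customers do.
loss_kwh = 200_000

for demand_kwh in (400_000, 800_000):   # thrifty year vs heat-hungry year
    output_kwh = demand_kwh + loss_kwh  # the plant room must cover demand plus losses
    pct = percentage_loss(output_kwh, demand_kwh)
    print(f"demand {demand_kwh:,} kWh -> {pct:.1f}% loss")
```

The physical loss never moves, yet the reported figure swings from 33.3% in the thrifty year to 20.0% in the heat-hungry one.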
Second, because percentage heat losses can only be defined relative to demand, it's impossible to measure them until you've got customers. And their presence isn't enough - you can't calculate a percentage loss on move-in day. Instead, you've got to measure consumption over time, say across several seasons, to ensure you capture an accurate picture of how much people use.
Framing heat loss as a percentage leads to perverse outcomes that are doing real damage to heat network performance.
Because percentages are dependent on consumption, the measured losses can vary even though the actual (kWh) heat loss is unchanged. As an illustration, consider two identical heat networks, one with heat-hungry customers and the other with thrifty customers. The percentage losses on the heat-hungry network will appear lower than its neighbour's, even though the two networks are losing exactly the same amount of heat.
Imagine I'm a maintenance contractor or facilities manager tasked with keeping percentage losses low. I'll want customers to consume as much as possible to keep the percentage down. The last thing I want to see is energy efficiency improvements like solid wall insulation - they'll cause the percentage loss figure to rise dramatically, potentially making me miss my contract KPIs.
Arguably the most dangerous side-effect of using percentages is that they're impossible to measure before practical completion (when the building is handed over by the contractor to the client). As a result, it's impossible to prove whether the contractor and project team have done their jobs properly and should get paid.
Let that sink in for a second: using percentage heat loss in a contract makes it impossible to determine whether you're getting what you're paying for. Who on earth would define their contract requirements in a way that's impossible to verify? The answer: almost everyone.
In this situation, clients have little option but to write a cheque, cross their fingers and let the network go into operation without knowing whether it's been commissioned properly. Usually it hasn't. And the customers bear the brunt of poor performance, high heat cost and repeat visits from engineers trying to retrospectively sort out the root causes.
Instead of using percentages, we should define heat loss from networks in absolute terms, such as watts per dwelling or kWh per customer per year.
This would have several important advantages over the current practice, for example:
- Designers and project teams would know exactly what performance targets they'd be held to.
- Losses would be independent of consumption, so two identical networks would appear as such, regardless of how much the customers use. An energy performance contract defined in this way sits happily alongside energy efficiency works.
- Most importantly, performance could be verified at commissioning and before practical completion. With an absolute metric, the project team can be held to account before they collect the cheque.
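To see how an absolute metric works in practice, here's a minimal sketch with hypothetical figures. A target expressed as watts per dwelling is a continuous standing loss, so it converts directly into kWh per dwelling per year and can be checked by metering plant-room output while the network circulates with no customer load - no consumption data required.

```python
HOURS_PER_YEAR = 8760

def annual_loss_kwh_per_dwelling(standing_loss_watts):
    """Convert a continuous standing loss target (W per dwelling) into kWh/year."""
    return standing_loss_watts * HOURS_PER_YEAR / 1000

# Hypothetical target of 100 W per dwelling:
print(annual_loss_kwh_per_dwelling(100))  # 876.0 kWh per dwelling per year
```

Either form of the target - 100 W per dwelling or 876 kWh per dwelling per year - is fixed in advance, independent of demand, and verifiable at commissioning.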
In my opinion the way the industry currently defines losses is all wrong. I'd go so far as to say that we won't consistently deliver high efficiency (low loss) networks until we change how we talk about heat loss.