Across Victorian public health services, EMR investments are being approved on the basis of business cases that promise substantial benefits — reduced length of stay, fewer adverse drug events, improved surgical throughput, better clinical coding cycles, lower readmission rates. The Department of Health publishes a Compendium of EMR Benefit Measures precisely so that health services can frame these benefits consistently and reference the evidence base.

And yet, when the Victorian Auditor-General reviewed clinical ICT systems across the public health system, the finding was uncomfortable: “there has been limited assessment to date of the benefits and outcomes of the various clinical ICT systems put in place.” That report was published in 2013. The Compendium that responded to it was published in 2020. The pattern it was designed to break has, in many places, persisted.
This article is a practitioner’s view of why benefits realisation continues to fail in Victorian EMR programs — and a framework for getting it right. It is written for IT Program Directors, CIOs, and digital health leaders preparing for, or recovering from, an EMR implementation.
The Compendium of EMR Benefit Measures sets out 19 benefit profiles, each with a defined description, baseline measure, ongoing benefit measure, target, calculation, reference literature, and required EMR modules. The benefits range from clinical and safety outcomes (reduced adverse drug events, reduced hospital-acquired complications, reduced length of stay, reduced readmissions) through productivity and cost benefits (medical and nursing productivity, reduced clinical coding time, reduced medication costs, improved billing accuracy) to operational throughput (improved surgical throughput, reduced ED length of stay).
The Compendium does not exist in isolation. It is the measurement framework for the Department’s strategy Improving patient care by lifting digital maturity in Victoria’s public health services 2021–2025 — the state’s digital health roadmap. The roadmap sets the direction; the Compendium translates direction into measurable benefit. Health services that ignore the Compendium are not just missing a useful reference document — they are stepping outside the framework against which their digital investments will eventually be assessed.
Two features of the Compendium are worth pausing on, because they shape everything that follows:
First, every benefit profile requires a baseline measure. Without a documented pre-EMR baseline, the post-EMR measure is meaningless. You cannot claim a 6% reduction in inpatient length of stay if you cannot defend what your length of stay was before the EMR went live. This is obvious in principle and routinely overlooked in practice.
Second, almost every benefit is derived from comparison. Pre-EMR versus post-EMR. Baseline versus measured. The Compendium is a comparison framework, not a snapshot framework. This means benefits realisation is a longitudinal exercise, not an event — and longitudinal measurement requires a layer of capability that EMRs themselves do not provide.
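The comparison logic the Compendium encodes is simple to state. As a hypothetical sketch (the benefit name, baseline, and measured values below are invented for illustration), every claimed benefit reduces to a locked pre-EMR baseline, a post-EMR measurement, and the change between them:

```python
# Hypothetical illustration of the Compendium's comparison logic.
# The benefit name and the numbers are invented, not drawn from the Compendium.
from dataclasses import dataclass

@dataclass(frozen=True)
class BenefitMeasure:
    name: str
    baseline: float   # pre-EMR value, locked at the endorsed baseline report
    measured: float   # post-EMR value for the current reporting period

    def change_pct(self) -> float:
        """Percent change relative to the locked baseline.
        Negative means a reduction, which is the desired direction
        for measures such as length of stay or readmissions."""
        return (self.measured - self.baseline) / self.baseline * 100

los = BenefitMeasure("Inpatient length of stay (days)", baseline=5.2, measured=4.9)
print(f"{los.name}: {los.change_pct():+.1f}% vs baseline")
```

The point of the sketch is the `frozen=True`: once the baseline report is endorsed and dated, the baseline value never changes, and every subsequent reporting period is compared against that same fixed reference.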
The single most consequential mistake in Victorian EMR programs is treating benefits realisation as a post-implementation activity.
It cannot be. The window to capture defensible baselines is finite, and it closes the day the EMR goes live.
Consider what the Compendium actually asks for in baseline form. To evidence a reduction in inpatient length of stay, you need pre-EMR length of stay segmented by ward, DRG, and clinician. To evidence a reduction in unplanned readmissions, you need pre-EMR 28-day readmission rates segmented by index diagnosis and discharge destination. To evidence improved surgical throughput, you need theatre utilisation, on-time first-case start rates, and turnaround times under the legacy operating model. To evidence reduced ED length of stay, you need timestamped ED patient movement under the existing PAS. To evidence a reduction in coding cycle time, you need coding throughput data from before the EMR was switched on.
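To make one of these baselines concrete, here is a minimal, hypothetical sketch of a 28-day unplanned readmission rate computed from a flat episode extract of the kind a legacy PAS can produce. The field layout and records are invented; a real Compendium measure would also segment by index diagnosis and discharge destination and apply the Department's exclusion rules:

```python
# Hypothetical sketch only: field layout and episode records are invented.
from datetime import date

episodes = [
    # (patient_id, admit_date, discharge_date, planned_readmission)
    ("P1", date(2019, 3, 1),  date(2019, 3, 5), False),
    ("P1", date(2019, 3, 20), date(2019, 3, 22), False),  # 15 days later: counts
    ("P2", date(2019, 3, 2),  date(2019, 3, 4), False),
    ("P2", date(2019, 6, 1),  date(2019, 6, 3), False),   # 89 days later: excluded
    ("P3", date(2019, 3, 3),  date(2019, 3, 6), False),
]

def readmission_rate_28d(episodes):
    """Share of discharges followed by an unplanned readmission
    of the same patient within 28 days."""
    episodes = sorted(episodes, key=lambda e: (e[0], e[1]))
    index_count, readmit_count = 0, 0
    for i, (pid, _admit, discharge, _planned) in enumerate(episodes):
        index_count += 1
        for pid2, admit2, _d2, planned2 in episodes[i + 1:]:
            if pid2 != pid:
                break  # sorted by patient, so no further episodes for this one
            if not planned2 and (admit2 - discharge).days <= 28:
                readmit_count += 1
                break
    return readmit_count / index_count

print(f"28-day readmission rate: {readmission_rate_28d(episodes):.1%}")
```

Even this toy version makes the dependency obvious: the computation needs patient-linked, timestamped episode data from the legacy system. If that extract is not taken and validated before decommissioning, the baseline cannot be rebuilt later.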
None of this can be reconstructed retrospectively to a defensible standard once the EMR is live and the legacy systems are decommissioned, archived, or made read-only. The data exists, but the analytical infrastructure to compute these measures, validate them, and produce a baseline report endorsed by the project steering committee — that infrastructure either exists before go-live or it doesn’t exist at all.
The result is a recurring pattern. The EMR goes live. Six months pass. Someone — the CFO, an internal auditor, the Department, the board — asks for evidence that the benefits projected in the business case are being realised. The project team reaches for the Compendium and discovers that the baseline measures it requires were never captured. What follows is a scramble: manual extracts from archived systems, best-effort estimates, retrospective claims that cannot be defended under scrutiny. The benefits case quietly transitions from measured outcome to retrospective narrative. This is exactly the failure pattern the Compendium was created to prevent.
Health services that successfully evidence benefits against the Compendium follow a recognisable pattern. It has four phases, each with a distinct purpose, and the phases cannot be collapsed or reordered without breaking the framework.
In the months leading up to EMR go-live, the analytical infrastructure is deployed against the incumbent systems — typically the existing PAS, supplemented by relevant clinical and operational sources. Baseline values are computed for every Compendium benefit the health service intends to claim. The output is a baseline report endorsed by the EMR Project Steering Committee, dated, and locked. This report becomes the reference point against which all subsequent measurement is compared.
The work in this phase also produces something equally valuable: a longitudinal historical dataset that survives the system transition. Years of operational and clinical history that would otherwise become inaccessible the day the legacy PAS is retired remain available for trend analysis, demand forecasting, and casemix analysis well into the future.
Benefits measurement is deliberately suspended in the first 90 days after go-live, because performance almost always dips during EMR adoption. Clinicians are learning the system, workflows are being adjusted, data quality is unstable, and any measurement of benefits in this period will mislead leadership and undermine confidence in the program.
What is tracked instead are leading indicators: EMR adoption rates by user group, data completeness and quality, dis-benefit signals such as length of stay temporarily lengthening or theatre throughput temporarily falling. These indicators allow leadership to intervene early where adoption is lagging or workflows need reconfiguration. Setting this expectation explicitly in the business case — that benefits will not be claimed in the first 90 days — is one of the most important governance moves a Program Director can make. It protects the program from premature judgement and preserves the integrity of the eventual benefits report.
Measurable benefits begin to appear from roughly the fourth month post go-live, and a measurement cadence is activated. Operational benefits — discharge predictability, ED length of stay, theatre utilisation — are typically reported monthly. Outcome benefits — hospital-acquired complications, readmissions, adverse drug events — are typically reported quarterly, because the cohort sizes required for statistically meaningful comparison are larger.
Each benefit needs a named executive owner. The Director of Operations or COO owns theatre throughput. The Director of Nursing owns hospital-acquired complications. The Chief Medical Officer owns adverse drug events. The Director of Clinical Services owns length of stay. Without named ownership, benefits reporting becomes a reporting exercise rather than a management exercise, and the data fails to drive behaviour change.
By twelve months post go-live, full-year comparisons against baseline become defensible. The orientation shifts from proving benefits to optimising them — identifying which wards, clinicians, or service lines are lagging and where targeted intervention will lift performance further. This is also where benefits reporting feeds into Statement of Priorities (SOP) reporting and the Department’s state-wide digital health monitoring — fulfilling one of the four stated purposes of the Compendium itself.
The framework above only works if a health service can answer a concrete question: which analytical capabilities do we actually need to evidence each benefit? The table below provides a working answer to that question, drawn from practical engagement with Victorian public health services across multiple EMR programs.
The table covers 13 of the 19 Compendium benefits. The remaining six — reduced pathology test duplication, reduced imaging test duplication, reduced medication costs, reduced cost of paper records, reduced scanning costs, and reduced software costs — can be measured analytically in principle, given access to the right source systems, but the data they rely on (pathology orders, imaging orders, pharmacy dispensing, procurement, consumables, IT licensing) is unlikely to flow through the PAS, which is the natural anchor point for pre-EMR baseline capture. They sit outside the scope of a PAS-fed baseline because the upstream data simply isn't there, not because analytics cannot reach them. Each of the six is set out in the final table:
Calling these benefits out explicitly is not a weakness in a benefits realisation strategy — it is a strength. A benefits register that claims all 19 measures while quietly assuming the data will appear from somewhere is far less credible than one that claims 13, evidences them rigorously from the PAS-fed analytical baseline, and identifies clearly which additional source systems would need to be brought into scope to evidence the remaining six. That second-order question — which other systems do we need to integrate, and when? — is a separate and entirely legitimate workstream for a health service that wants full Compendium coverage.
If you are leading the digital health program at a Victorian public health service, the practical implications of this framework are direct.
The benefits realisation conversation needs to begin before EMR procurement closes, not after go-live. The analytical capability required to capture baselines is a workstream of the EMR program, not a downstream activity. It needs to be funded, scoped, and resourced as part of the program plan, with deliverables that include a dated baseline report endorsed by the Project Steering Committee before go-live.
The benefits register needs named owners. A benefits register without named executive accountability for each measure is a reporting exercise, not a management exercise. The Compendium specifies what to measure; it does not specify who is responsible. That decision is yours.
The first 90 days post go-live need explicit governance. Performance will dip; this is normal and expected. Setting the expectation in the business case — that benefits measurement begins at month four, not month one — shields the program from premature judgement and keeps the eventual benefits report credible.
And the longitudinal continuity question deserves attention separate from benefits realisation. When the legacy PAS is decommissioned, years of operational and clinical history risk becoming inaccessible. Whatever analytical infrastructure is established for benefits measurement should also be designed to preserve that historical record — because it will be needed for trend analysis, demand forecasting, and strategic planning long after the EMR program is closed.
The Compendium of EMR Benefit Measures is a more demanding document than it first appears. It does not simply list things to measure; it establishes a comparison framework that requires deliberate analytical infrastructure, time-bound baseline capture, and sustained governance. Victorian health services that treat it as a checklist will struggle to evidence their EMR investments. Health services that treat it as a discipline — one that begins before procurement closes and continues for years after go-live — will find that the EMR investment delivers what the business case promised, and that the digital maturity strategy the Department has set out is something they can credibly contribute to rather than be measured against.
The work is not difficult in principle. It is difficult in timing. The window to do it well is open during EMR procurement and EMR implementation. After go-live, the window narrows considerably. After legacy system decommissioning, it closes.
Written by: Bernard Herrok, proofed by AI.
