A common theme in e-health, particularly in Australia, is the often conflicting perspectives of different participants in the healthcare landscape. I’d like to highlight a couple of these in the diagnostic testing arena. The first is a ‘business’ issue – one of cost/benefit discrepancies; the second is a ‘technical’ issue – conflicting perspectives on terminologies. I’m sure there are many more examples. I contend that a lack of progress in sharing diagnostic test results electronically can, in large part, be attributed to these kinds of conflicting perspectives. And until we have proper e-health governance, progress will continue to be slow and painful.
The ‘business’ issue is probably familiar to many; it is a recurring theme across much of healthcare. The people who are ‘expected’ to bear the cost are not the ones who reap the benefit. What value is there for a pathology lab in spending more to improve its results messages? What benefit is there for a vendor of GP practice management software in improving its processing of pathology results? Likewise for a vendor of a hospital clinical system. These vendors can often easily claim sufficient conformance to some HL7 messaging standard that their message-processing capabilities are factored out of purchasing decisions. Could, or would, a single GP practice realistically change its practice software just on the basis of how pathology results are handled? Ergo, the status quo prevails.
The second example is the terminology used for reporting results. A laboratory’s unit of reporting is an atomic test. In the absence of a better standard, it is sensible and justifiable for laboratories to adopt the LOINC coding system to name each discrete test. LOINC uses a single code to differentiate test descriptions on the basis of six characteristics, including the system being investigated (e.g. blood serum vs blood plasma), the property being measured (e.g. mass concentration vs concentration by volume), and the test methodology (e.g. a simple strip test vs electrophoresis). These parameters are important to testing laboratories, but of little interest to downstream users of test results, who are more interested in standardised names of observables like “urine albumin” or “HbA1c”. Unfortunately, LOINC is poor at providing these names in any consistent fashion. Its inconsistent use of abbreviations, synonyms, specialisation/generalisation, case sensitivity and so on is inadequate for good clinical practice, let alone electronic decision support. For downstream users, the association of reference ranges with tests is important, but LOINC doesn’t provide such a link; it is left to the laboratories to supply reference range information on a result-by-result basis.
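To make the mismatch concrete, here is a minimal Python sketch of the idea. The codes and field values below are invented for illustration, not real LOINC entries: two records that a laboratory must keep distinct (they differ on the ‘system’ axis, serum vs plasma) but that a downstream clinician would happily read as one observable.

```python
# Illustrative sketch only: hypothetical codes and values, not real LOINC data.
from dataclasses import dataclass

@dataclass(frozen=True)
class LoincStyleRecord:
    code: str        # hypothetical code for illustration
    component: str   # the analyte being measured
    prop: str        # kind of property, e.g. mass concentration
    timing: str      # point-in-time vs timed collection
    system: str      # specimen investigated, e.g. serum vs plasma
    scale: str       # quantitative, ordinal, nominal...
    method: str      # test methodology (often unspecified)

# The lab view: two distinct tests, differing only in specimen (system).
records = [
    LoincStyleRecord("XXXX-1", "Creatinine", "MCnc", "Pt", "Ser", "Qn", ""),
    LoincStyleRecord("XXXX-2", "Creatinine", "MCnc", "Pt", "Plas", "Qn", ""),
]

# The downstream view: a clinical name that ignores the specimen distinction.
def clinical_key(r: LoincStyleRecord) -> tuple:
    return (r.component, r.prop, r.scale)

grouped: dict = {}
for r in records:
    grouped.setdefault(clinical_key(r), []).append(r.code)

# Both laboratory codes collapse into a single clinically named observable.
print(grouped)
```

The point of the sketch is that the grouping key has to be invented by the downstream system; nothing in the six-axis code itself supplies the consistent clinical name (or the reference range) that decision support needs.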
So whilst one terminology might meet most of the needs of the suppliers of test information, the same terminology fails the needs of other parts of the healthcare sector. The technical issue thus becomes a business issue. Such conflicts of viewpoint often militate against the ‘right’ decisions being made in e-health – perhaps even against any good decision being made at all.
I believe there are two primary tests of value in healthcare. One is “does this innovation/treatment/practice improve outcomes for patients?”. The other is “does this innovation/treatment/practice improve the efficiency of healthcare delivery?”. If the answer to one or both is yes, then the costs need to be examined. If the costs of implementation are to be borne by parties other than those who reap the benefits, then we need a governance framework to intervene.
Sadly, in e-health in Australia, we don’t have one worthy of the name.