Healthbase Blog

Critical Safety Issue for the PCEHR

Australia is poised to produce a system of Personally Controlled Electronic Health Records (PCEHRs) from July 2012, some 6 months from now. According to the recently published Concept of Operations, each person’s PCEHR will comprise a set of electronic documents, the majority of which are to be based on HL7’s Clinical Document Architecture (CDA), in the form of “health summaries”, discharge summaries, referrals, and the like. Having studied both the HL7 specifications in detail and dozens, if not hundreds, of examples of CDA documents from around the world over the past 5 years, I have come to the conclusion that there are significant safety and quality risks associated with relying on the structured clinical data in many of these electronic documents.

So concerned am I by this issue that I am notifying key stakeholders and urging all individuals and organisations who take the safety and quality of clinical data seriously to investigate this issue thoroughly before committing to any further involvement with the PCEHR system being rushed through by the federal Department of Health and Ageing.

One major problem with HL7 CDA, as currently specified for the PCEHR, is that data can be supplied simultaneously in two distinct, yet disconnected forms – one is “human-readable” narrative text, displayable to a patient or clinician in a browser panel; the other comprises highly structured and coded clinical “entries” destined for later computer processing. The latter is supposed to underpin clinical decision support, data aggregation, etc., which form much of the justification for introducing the PCEHR system in the first place. The narrative text may appear structured on the screen, but it is not designed for machine processing beyond mere display for human consumption.
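To make the disconnect concrete, here is a toy sketch in Python. The element names and code value are deliberately simplified and hypothetical – this is not the real CDA schema – but the essential point carries over: the narrative text and the coded entry each state a clinical fact independently, and nothing in the format obliges them to agree.

```python
import xml.etree.ElementTree as ET

# A heavily simplified, hypothetical CDA-like section. The narrative <text>
# and the coded <entry> each carry a clinical fact independently -- nothing
# in the structure links them or forces them to be consistent.
section = """
<section>
  <text>Allergy: penicillin</text>
  <entry>
    <observation code="ALLERGY-123" displayName="Amoxicillin allergy"/>
  </entry>
</section>
"""

root = ET.fromstring(section)
narrative = root.findtext("text")
coded = root.find("entry/observation").get("displayName")

print(narrative)  # what the clinician sees and attests to
print(coded)      # what downstream decision support would actually use
```

The document parses without complaint even though the two halves disagree: no schema validation or standard tooling will flag the mismatch.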

Each clinician is expected to attest the validity of any document prior to sharing it with other healthcare providers, consumers or systems, and she can do so by viewing the HTML rendition of the “human-readable” part of the document (see the example discharge summary at ). However, the critical part of the document containing the structured, computer-processable data upon which decision support is to be based is totally opaque to clinicians, and cannot be readily viewed or checked in any meaningful way. Moreover, I know of no software anywhere in the world that can compare the two distinct parts of these electronic documents to reassure the clinician that what is being sent in the highly structured and coded part matches the simple, narrative part of the document to which they attest. This is due almost entirely to the excessive complexity and design of the current HL7 CDA standard.

It seems to me that we are in grave danger of setting in train a collection of safety and quality time bombs, spread around Australia in a system of repositories, with no understanding of the clinical safety, quality and medico-legal issues that might be unleashed in the future.

As an illustration of the sort of problems we might see arising, I proffer the following. I looked at 6 sample discharge summary CDA documents provided by the National E-Health Transition Authority recently. Each discharge summary looked fine when the human-readable part was displayed in a browser, yet unbeknownst to any clinician who might do the same, buried in the computer-processable part, I found that each patient was dead at the time of discharge. One patient had been flagged as having died on the day they had been born – 25 years prior to the date that they were purportedly discharged from hospital! Fortunately this was just test data, not “live” data.

A second example is the sample electronic prescription document that has been provided with the package of NEHTA specifications currently being “fast-tracked” through Standards Australia to become standards for electronic transfer of prescriptions in Australia. Again, this Level 3 HL7 CDA document contains separate “human-readable” and coded, structured sections, with no connection between the two. The former looks somewhat like a computer-generated printed prescription (as shown at ). The computer-processable, coded entries in this sample both contradict the human-viewable part and contain additional information not shown in it, yet these coded entries are opaque to clinicians. Again, this was just example data, but the principle remains the same. Clinicians cannot see what is in the coded parts of the document.

I contend that it is nigh on impossible, with the current HL7 CDA design, to build sufficient checks into the e-health system to ensure these sorts of errors won’t occur with real data, or to detect mismatch errors between the two parts of the documents once they have been sent to other providers or lodged in PCEHR repositories.

This situation must also be of potential concern to patients considering opting in to the PCEHR system. Consumers have been led to believe that they will be able to control what data is sent to the PCEHR and who can see it. But if neither they, nor their healthcare provider can view all the data to be sent and stored in the new system, then how can they possibly have confidence that they will be “in control” of their data?

Surely this must ring alarm bells for all involved!

To allay the concerns raised here, NEHTA should provide an application, or an algorithm, that allows users to decode and view all the hidden, coded clinical contents of any of the PCEHR electronic document types, so that those contents can be compared with the human-readable part of the document.
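Even a crude version of such a tool would be better than nothing. The sketch below is a deliberately naive Python illustration of the idea – flag any coded display name that never appears in the narrative – using simplified, hypothetical element names rather than the real CDA schema. A genuine narrative/entry comparison would need terminology services and far deeper structural awareness, which is precisely why no such tool exists today.

```python
import xml.etree.ElementTree as ET

def naive_consistency_check(section_xml: str) -> list[str]:
    """Flag coded displayName values that never appear in the narrative.

    A deliberately crude sketch over simplified, hypothetical markup:
    real CDA comparison would need terminology mapping, synonym handling
    and structure awareness far beyond plain substring matching.
    """
    root = ET.fromstring(section_xml)
    narrative = " ".join(root.find("text").itertext()).lower()
    problems = []
    for obs in root.iter("observation"):
        name = obs.get("displayName", "")
        if name and name.lower() not in narrative:
            problems.append(name)
    return problems

sample = """
<section>
  <text>Discharged on aspirin 100 mg daily.</text>
  <entry><observation displayName="Aspirin"/></entry>
  <entry><observation displayName="Warfarin"/></entry>
</section>
"""
print(naive_consistency_check(sample))  # ['Warfarin'] -- coded but never narrated
```

Substring matching like this would drown in false positives and negatives on real clinical text, but it shows how small the bar currently is: today not even this level of checking is applied to the documents clinicians are asked to attest.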


9 Responses to Critical Safety Issue for the PCEHR

  1. Agreed Eric, the narrative/structured split is a huge problem area for CDA. I think it shows a lack of core IT competence within HL7 – I can’t imagine going into any true IT organisation and proposing storing the same information constructed in two different ways in the same document, and with no computable requirement to have them synchronised.

    (I’m not saying that there is no one at HL7 with IT smarts, clearly that is not true, but if you gave 100 computer scientists a vote on whether the dual narrative/structure is a good idea from an information accuracy perspective I can’t imagine you’d get many yes votes)

    At the very least it should be a requirement that where there are both narrative and structured data in a section, one is a direct computerised transform of the other. This is quite a hard problem – and presumably one they were trying to avoid in CDA – but the current alternative is a non-solution.

  2. Anonymous says:

    This comes as no surprise to me. The majority of CDA documents will come from pathology or radiology systems. 90% of the pathology systems in Australia were built with HL7 as a tack-on to their paper reporting, with no real linkage between what is in the paper report and what is in the HL7 atomic section. Until there is a generational change in RIS/LIS, there is always going to be a risk that the atomic and full text reports will get out of sync.

  3. if you gave 100 computer scientists a vote on whether the dual narrative/structure is a good idea from an information accuracy perspective I can’t imagine you’d get many yes votes

    That’s perhaps the problem. Who designed these reports? Computer scientists/business IT personnel, or clinicians (especially those skilled in Medical Informatics and biomedical information science)?

    Often the latter’s role is not strong enough when pitted against the IT cognoscenti.

  4. Andrew McIntyre says:

    This is essentially a “dumb it down” problem, where supporting CDA becomes as simple as displaying an XSLT transform to HTML, rather than any deep semantic understanding of the format.

    The same thing happens with HL7 V2 in Australia, where there is an insistence on a display segment, and often this is all that is processed.

    Until we start placing value on high-quality implementations and the quality of the computer science, rather than the buzzword XML-itis that so afflicts the “management” of organisations like NEHTA, we will not advance much. Doing health IT well is hard, but there are no short cuts to quality. CDA is not a solution to our problems, it’s just a distraction. We need a focus on quality, and we should start with improving the quality of what we are currently doing rather than rushing off doing something new and trendy.

  5. Murfomurf says:

    Having been involved in the early stages of constructing large databases of health information, I’ve always been wary of the IT professionals’ input as well as the clinicians’! The IT professionals seemed to be in favour of getting as much information as possible into the minimum amount of space. However, when I asked how easy it was to get a row x column output suitable for data checking and analysis, the answer was always “we’ll have to write another program for that”. Indeed this is what happens every time I ask for a data extract from the state’s department of health database: it takes so much time and effort by one staff member to correctly code the output “program” [or extraction algorithm], that I only get my data long after the rest of a project is finished, or not at all. This impenetrable part of the PCEHR written in HL7 sounds like a recipe for exactly the same tossed data salad. It therefore becomes impossible to do checksums and similar tests to make sure the new data being entered is sensible given the possible ranges of values and percentage tolerance in each data element.
    Since the IT professionals writing the code for another database [which has since become part of the health department’s record-keeping] took my concerns about ease of checkable output on board, that db can output every element into an r x c table quickly and accurately. Each input value has its own dedicated “spot” or spots in the output table. Then checksums etc. can be performed soon after the data is entered at the treating location, giving the capacity to correct any elements that are out of range or magnitude for the parameters they represent. It doesn’t sound as though output format [and aggregation] has been taken into account during the design of the Australian PCEHR. I’m not looking forward to dealing with data from that system.

  6. eric says:

    Over on Woland’s Cat, Thomas Beale has elaborated on the CDA dual content conundrum and suggested some improvements to the current HL7 CDA standard. Thanks Thomas.

  7. anon says:

    What puzzles me is why this “dual content conundrum” is suddenly a revelation and a show stopper in January 2012!

    It isn’t like this was just added to CDA or NEHTA specifications recently. We all know it’s been there long before the PCEHR was a twinkle in…well, let’s just leave it at twinkle.

    So, naysayers, I’m not disagreeing with you on the issue but why all the fuss now? Why not earlier when the PCEHR was announced or earlier when NEHTA started publishing CDA based specifications?

    Why bring this up now, at what amounts to the 11th hour for the PCEHR?

    • eric says:

      I think that’s a pretty fair question, anon!

      It is still not clear how much of a safety issue it might be. The design of the PCEHR was not available in May 2010 when the PCEHR was announced – or if it was, then NEHTA and DoHA have been very deceptive going through some charade of public consultation over the past 18 months or so. There was no concept of operations to indicate the design back then. A draft concept of operations was released in May 2011, which suggested that the contributions to the PCEHR might mainly be CDA documents. It was not clear that the PCEHR would comprise some unknown quantity of repositories of CDA documents, nor to what extent Level 3 CDA documents would prevail for what types of documents. It still isn’t clear some 5 months out from the launch date! It is not clear what level of conformance testing will be undertaken. The whole implementation program is shrouded in secrecy.

      I have consistently said there are significant problems with HL7 CDA. I gave a paper and presentation at HIC2008 advising people not to use RIM-derived structures for CDA documents. I have written articles that describe the mismatch between narrative blocks and coded entries (e.g. ).
      In 2010 I established the Global CDA Challenge ( ) and offered $1000 to any person or organisation able to demonstrate that narrative blocks and coded entries could be programmatically matched. I have issued two time extensions on this challenge, yet no-one has ever taken it up. The challenge still stands. If I could afford to offer a larger prize, I would.

      It was not until late last year, when some sample NEHTA documents became available, and the lack of progress on terminology infrastructure became abundantly clear, that I realised just what potential safety issues the proposed NEHTA PCEHR implementations actually pose.

      I do feel, anon, some pejorative tone in your comment, perhaps even an implied accusation of malevolence on the part of ‘naysayers’. I can’t speak for any other ‘naysayers’, but the safety revelation is genuinely something new to me. I’d much rather err on the side of safety and quality than join some bandwagon of yea-sayers for who knows what reason.

  8. Thomas Beale says:

    It might be worth noting that the CDA was only ever designed as a document for transmission – logically a message, whose contents look like a ‘document’ mainly from the point of view of US hospital clinical notes. It was never designed to be persisted as any kind of EHR, and it won’t work properly for that purpose. The original designers always knew this, but failed to make it understood sufficiently in the minds of what I would call naive (but numerous) implementers, who really don’t understand basic principles of health information. This doesn’t really excuse design faults, but it might explain its architecture being more casual than EHR architects expect.
