
Friday, March 29, 2019

Health and Social Care Essays: The Red Dot System

Introduction

In the frequently frantic and universally pressurised world of the A&E departments of this country's hospitals, mistakes get made. This is a fact of life; in any human endeavour this is sadly true. Until recently, the blame culture that was prevalent within the NHS made certain protective behaviour patterns amongst staff more or less endemic (Vincent, 1994). It is one of the characteristics of professional life that you have to take responsibility for your actions: if you take the wrong action, you will be criticised. This defensive attitude was, to a large extent, fostered by the professional health insurers who, upset about paying out large quantities of their funds, demanded secrecy, no apology and a defensive stance from those that they insured (Clinical Services Committee).

It became apparent to those who were in a position to gain an overview of the situation that such a state of affairs was genuinely in nobody's interest (Barley, 2000). Healthcare professionals were practising defensive medicine, patients were being kept in the dark when mistakes were made and, most important of all, because problems were not examined in an open and constructive way, valuable lessons were not learnt. All that was happening was that defensive stances were becoming entrenched.

The advent of the no-blame culture is helping to erode these stances and attitudes (Aldridge, 2000). It is allowing the development of practices which may improve the efficiency of our hospitals and provide the patient with a better service.

The red dot system arose as a product of both of these factors. The pressure on A&E department staff is often relentless and great.
The structure of the system is such that many decisions are taken by comparatively inexperienced staff members who are often not the most appropriate people for the decision that needs to be taken. Huge numbers of X-rays are seen by junior doctors, and decisions regarding treatment are initially made before a senior specialist has a chance to review them. It would follow, by any common-sense analysis of the situation, that any measure that could help in the decision-making process should be welcomed.

This argument is taken further by the article by Vincent et al. (1988). In the days before the red dot system was seriously considered, Vincent and his colleagues carried out a study of the radiological errors made by junior hospital doctors. They found an error rate of 35% when the X-ray was assessed by the SHO alone. For errors with a clinically significant impact the figure was 39% (of abnormal films).

The red dot system represents a mechanism to try to address this gap. It involves the radiographer (usually, but not always, the one who has taken the film) giving the clinician some feedback. Radiographers see many thousands of films and are usually very familiar with the structures that they show. Quite apart from their formal training, simply by everyday familiarity and experience they get to know what is normal and what is not. The radiographer is therefore well placed to recognise an abnormality, even though they may not fully appreciate the full clinical significance of what is on the film. The same argument can be applied to the clinician, who can generally recognise pathology in a patient yet may not be so familiar with the X-ray appearances.

The red dot system requires the radiographer to examine the film after it has been ordered by the clinician.
If they feel that there is an abnormality on it, they place a self-adhesive red dot on it to denote that they believe it contains an abnormality. Clearly this does not relieve the clinician of the responsibility of examining the film, as the legal responsibility for interpreting the film must rest with him. This is only reasonable since, even though the most experienced radiologist can give a report on what he can see on the film, the full significance of the changes seen can only be assessed by a healthcare professional who has also seen and assessed the patient. As we will discuss later, the converse also applies: the absence of a red dot does not imply that there isn't an abnormality; it only denotes that the radiographer hasn't seen one.

The red dot system

In a letter to the BMJ, Keith Piper (2003) outlined the case for the red dot system and the radiographer reporting system (see below). It was initially suggested by the Audit Commission in 1993 that radiographers could be trained to interpret certain images, and this was found to be of particular interest in view of the difficulties that some departments currently experience with the reporting service. The first accredited course was completed in 1994, and many radiographers have since been reporting on plain skeletal X-rays in A&E departments.

Piper points out that the system is designed to reduce errors in reporting X-rays. It is ultimately reliant on the radiographs being finally reported by a senior radiologist in a timely fashion.
Unfortunately, this is not always the case, as Beggs pointed out in 1990 when it was found that over 20% of UK teaching hospitals did not report on all accident and emergency films.

With specific reference to the red dot system, the letter by Aldridge and Freeland (2000) offers commentary on the system in use in their hospital and, having audited it, they present their results. The system in use conforms to that currently outlined by the British Association for Accident and Emergency Medicine's guidelines (1983). The important facets of their system include:

- the rapid return of X-rays to the requesting clinician;
- reporting of X-rays by a consultant radiologist within 24 hours;
- telephone recall of patients who have had mistakes picked up;
- the use of the red dot system by the radiographers;
- the use of such X-rays for teaching purposes for staff.

As far as the audit of the red dot system was concerned, they report that the last audit showed a 1.5% false positive rate and a 2.0% false negative rate, with the rest categorised as true positive or negative results. The authors felt that this represented an excellent approach to what they described as an error-prone activity, reducing mistakes by accident and emergency staff (often junior), increasing patient satisfaction, and reducing long-term patient morbidity and litigation. This letter is a significant piece of evidence, as it is written by two clinicians who are clearly keen to assess the system and to make it work. They appreciate the problems, quantify them, and address them by putting safeguards in place to minimise problems.
Significantly, they suggest that the cases where the red dot system has picked up omissions by the clinical staff should be used as the basis of teaching for junior staff, in an attempt to further reduce potential problems.

These results should be seen in the context of a study by de Lacey et al. (client to supply date), who considered the accuracy of casualty officers' interpretation of X-rays in their departments. They found that, when the casualty officer's interpretation was compared with that of a radiologist, it compared favourably in only 83% of cases. The 17% discrepancy clearly represents a major burden in terms of clinical implications for the patient, financial implications for the hospital, and possibly litigation implications for the casualty officer. The study also examined the implications of a restricted reporting system (by the radiologist): it was found to reduce their workload by 25% by confining their reporting to those films about which the casualty officer was unsure or thought might show an abnormality. It clearly follows from this that any measure that is likely to increase the efficiency and accuracy of reporting is likely to yield benefits in terms of both economy and reduced patient suffering. We therefore need to examine the premise that the red dot system does exactly that.

These figures are clearly worrying insofar as the 17% discrepancy is a wide margin. The figures still need to be viewed in context in that, although they represent the interpretation of a specialist (the radiologist) compared with that of a non-specialist (the clinician), the paper does not draw any distinction between the experience levels of the two groups. The clinicians may be comparatively inexperienced casualty officers, while the radiologists are probably of consultant grade. If that is the case, then the figures are much less alarming.
This point is discussed in detail further on in this piece (Williams et al. 2000), where radiologists in training are compared to radiologists of consultant grade. The point is brought into sharper focus by consideration of the next two papers.

Before we consider this aspect, however, we need to measure the accuracy of reporting in the A&E department environment. Benger and Lyburn (2003) attempted to investigate exactly that. They scrutinised the X-ray output of an A&E department over a period of months (nearly 12,000 films) and identified the films which had discrepancies in reporting between the X-ray staff and the A&E department staff. From the 12,000 films they found (only) 175 discrepancies. In clinical terms, this equated to a rate of 0.3% of patients who needed a change of management as a result. In all our deliberations on the subject, perhaps it is this that is really the operative criterion for whether a system works within tolerable limits or not. Different studies may give different discrepancy rates in the interpretation of X-ray films, but what is of practical value is the actual number of patients who require a change of management as a result. If a minor degree of subluxation of a proximal interphalangeal joint is missed by a casualty officer and subsequently picked up by a radiologist, it will appear in inventories of discrepancies such as the ones discussed above; in terms of patient care or treatment, it will not make a scrap of difference. This point is made, rather more eloquently and in a different context, by Fineberg (1977) and the Institute of Medicine (1977).

This point should not be taken lightly; indeed, it goes to the core of this piece. Academic studies may show different abnormality detection rates between the different professional groups. While recognising that these are clearly important, they are not the yardstick by which we must judge the red dot system.
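The distinction drawn here between film-level discrepancies and patient-level consequences can be checked with a few lines of arithmetic. The sketch below simply restates the Benger and Lyburn figures quoted above (the 12,000-film total is approximate, as in the original study description):

```python
# Sketch: film-level discrepancy rate vs patient-level impact,
# using the approximate Benger and Lyburn figures quoted in the text.
films_reviewed = 12000            # "nearly 12,000 films" (approximate)
discrepancies = 175               # reporting discrepancies identified
management_change_rate = 0.003    # 0.3% of patients needed changed management

discrepancy_rate = discrepancies / films_reviewed
print(f"discrepancy rate: {discrepancy_rate:.1%} of films")
print(f"management changes: {management_change_rate:.1%} of patients")
```

In other words, discrepancies appeared on roughly 1.5% of films, yet only about a fifth of those discrepancies translated into an actual change of patient management, which is the practical yardstick the text argues for.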
We have already examined two papers on the subject that have reported differences in abnormality detection at each end of the spectrum: one of 17% and one of 1.5%. We should not be blinded by these figures themselves. What actually matters is the number of patients who have a change of management decision as a result of this discrepancy. The paper quoted above (Benger and Lyburn 2003) is one of the few which actually gives us this information. They quote an observed change of management in only 0.3% of patients which, for any system, is a very tolerable level of error. This is clearly a very fundamental point and one that we need to examine further. The next paper that we should consider looks at exactly this point and examines it in great detail.

Taking a more academic approach, Brealey and Scally (2001) tackle the difficult issue of just how to interpret the findings of a study that purports to evaluate the reading of X-rays by two or more different professional groups. This is a very technical paper and is included here for the sake of completeness. It examines all of the possible margins for error and bias when reporting a trial. It throws little direct light onto our deliberations here because of its very technical nature, but it would be of considerable importance to anyone who wished to interpret the findings of a major trial independently. The point needs making that the trial design can determine the outcome of the trial (and therefore its usefulness) to a great extent. As we have noted above, the actual figures produced at the end of the trial must be interpreted in the light of the trial design.
Actual observed differences in readings between two groups of professionals may be of academic interest but, in the context of our examination of the red dot system, they are not nearly as important as a critical examination of the discrepancies which resulted in a change of patient management.

On the direct issue of the red dot system, an almost immediate precursor to the system was reported in the BMJ in 1991 by Renwick et al. They discussed a system that was tried out of getting radiographers to indicate their diagnoses on the pre-reported X-rays, in order to guide the casualty officers in their decisions. The conclusions of the study were that, because of the high rate of false positive reporting (7%) and the higher rate of false negatives (14%), it was appropriate for radiographers to offer useful advice but to take no more responsibility than that. We shall discuss the issues of false positives and false negatives further on in this piece; clearly they are an inherent problem with the system. It follows that we should, perhaps, address the reasons for these discrepancies and use them as a learning exercise to try to reduce the gap.

In the excellent and concise article written by Touquet et al. (1995), the authors address the ten commandments of A&E department radiology. They discuss the red dot system in the following terms:

Inexperienced doctors will inevitably come across injuries that they have never seen before. In these cases it may not be possible to make a diagnosis, but you will notice that the films do not look quite right. Good examples of this are lunate and perilunate dislocations of the hand. It is important to seek senior advice and also to listen to the radiographer. Many departments operate a red dot system, in which the radiographer flags up an abnormality.
An experienced radiographer may be as good as, or even better than, a junior doctor at interpreting films. The problem with this system is that the absence of a red dot does not necessarily mean that there is no abnormality. This is important to remember because the final responsibility lies with the doctor, not the radiographer. Therefore never accept poor quality or inadequate films.

The most salient point of this article is in the last paragraph: the absence of a red dot does not mean the absence of an abnormality, and the responsibility lies with the doctor, not the radiographer. This is clearly proper since, as any experienced healthcare professional will state, any investigation (particularly an X-ray) is only an adjunct to diagnosis; it is the person who is clinically in charge of the patient who has to assimilate all the available evidence to make a diagnosis. The radiographer has not seen or examined the patient, and certainly will not have to hand all of the other potential diagnostic aids that are available in a modern A&E department. It is entirely reasonable to ask for his judgement on an X-ray film, but it is not reasonable to hold him responsible for its definitive interpretation when he has not seen it in the context of the patient.

This statement lies behind the reasoning for the legal responsibility for X-ray interpretation. It would clearly be inappropriate to ask a radiographer for his opinion on a film and then make him responsible for any subsequent management decisions that were based on that opinion. Some commentators have criticised the red dot system for its lack of clear apportionment of responsibility to the radiographer. We would suggest that this shows a fundamental lack of appreciation of the problems involved. The radiographers are trained to be experts in taking X-ray films. They are not, and do not pretend to be, trained in the biological sciences and their applications to pathology and the human disease processes.
It is quite appropriate to ask their opinion in an area of their expertise (the interpretation of the X-ray film), but it is quite inappropriate to ask them to make clinical management decisions. For this reason, all questions of liability rest with the clinician in charge of the patient, and it is only right that this should be the case.

It is fair to say that some of the views reviewed so far have been rather old school; necessarily so, as the intention was to document the development of the red dot system. It is equally fair to state that we have only considered the use of the system in the A&E department. The truth of the matter is that in the recent past the status of the radiographer has increased in professionalism, both within their own speciality and within the NHS as a whole. Many of the comments made in some of the earlier papers quoted will therefore now seem rather dated and not consistent with the modern experience of working in the NHS.

To redress the balance, we shall look at an article from Papworth Hospital by Sonnex et al. (2001). The authors describe a system currently in use at an acute cardiothoracic unit. Radiographers were asked to assess all the X-rays taken over a six month trial period. Those that were assessed as showing acute changes had a red dot placed on them to denote an abnormality, and these were then assessed by a radiologist. The success or failure rate was then measured against this standard.

The figures are rather different from those quoted in the studies that looked at plain X-rays in A&E departments. The reason for this is almost certainly that a chest X-ray is notoriously hard to interpret, even more so when it is a post-operative X-ray. The results were reported as a total sample of 8614 films, of which 464 (5%) had red dots applied. Over 100 of these were considered inappropriate, and 38 X-rays which were abnormal were not picked up.
It would appear that radiographers tend to err on the side of caution when reviewing an abnormal chest X-ray, even more so when previous films were not available for comparison. This particular study had a high false positive rate.

One should not lose sight of the fact that the radiographers concerned were dealing with a different population from the one that we were considering earlier. The patients were generally very ill and often in a post-operative state, making assessment far more critical than perhaps the colder X-rays of the A&E department, where decisions could generally be delayed safely for 24-48 hours. There was therefore perhaps far more pressure on them to report any possible abnormality. It is also appropriate to comment that this was the first stage of a study which then went on to review the radiographers' performance after a further period of training. One would reasonably anticipate a higher agreement rate after appropriate training.

As we have already seen, the red dot system has evolved in several different variants. The basic premise is the same in each case: how is it possible to minimise the potential sources of error caused by inexperience? A further variant is outlined by Williams et al. (2000), whose paper title specifically refers to the cost effectiveness of the scheme as well as its overall impact on patient management. In this scheme (which was running at the Radcliffe Hospital in Oxford) the original A&E department films were reviewed by radiologists-in-training. They identified 684 incorrect diagnoses over a one year period. These were then called red reports and reviewed by a consultant radiologist. During this process 351 missed fractures were detected, with ankle, finger and elbow fractures being the main areas where pathology was missed. Williams also reported 11 instances of pathology on a chest X-ray being missed.
This amplifies the point made earlier that the radiologists-in-training tended to produce false positives at a rate of about 18% when compared to the subsequent, more expert opinion. In this particular study, further action was taken by the A&E department staff in 42% of those cases, although no operative intervention was required in any patient as a result of the missed diagnosis. Despite these figures, it must be noted that these cases form a very small percentage of the X-rays taken in a busy A&E department.

False positives and false negatives

We have looked at a number of studies that have compared radiographers' interpretations of X-ray films against those of a consultant radiologist, who has generally been used as the gold standard. The difference between the two sets of interpretations is then subdivided into false positives and false negatives. This group is actually the most important, as it is firstly an indication of the usefulness of the whole system of red dot reporting, and secondly an indication of how much more training any particular reader of the films (radiographer or casualty officer) has to undergo in order to make fully competent assessments.

The false positive is the situation where the radiographer has identified a problem that is not there. Conversely, the false negative is where they have missed pathology that is there. In most of the assessments that we have seen, there are more false positives than negatives. This implies that the radiographers are being over-cautious when confronted with an equivocal film.

Several of the papers that we have seen so far have stated (either explicitly or otherwise) that the absence of a red dot does not imply the absence of any pathology. Any common-sense analysis of the situation would suggest that this is clearly self-evident.
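These two error categories are the familiar cells of a two-by-two screening table, and the standard summary measures can be worked out from any of the audits quoted above. As an illustrative sketch only, the rough Papworth figures (8614 films, 464 red dots, of which roughly 100 were judged inappropriate, and 38 missed abnormalities) give:

```python
# Sketch: confusion-matrix metrics for a red dot audit.
# Figures are the approximate Papworth numbers quoted in the text;
# the exact false-positive count ("over 100") is an assumption here.
total_films = 8614
red_dots = 464           # films flagged by radiographers
false_positives = 100    # flagged films judged inappropriate (approximate)
false_negatives = 38     # abnormal films not flagged

true_positives = red_dots - false_positives                 # correctly flagged
true_negatives = total_films - red_dots - false_negatives   # correctly unflagged

sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)
ppv = true_positives / (true_positives + false_positives)   # value of a red dot

print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, PPV {ppv:.1%}")
```

On these approximate figures the radiographers pick up roughly nine out of ten true abnormalities, and roughly four out of five red dots are justified, which is consistent with the over-cautious pattern described in the text.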
It must be the case that where two highly trained, but clearly not expert, healthcare professionals are looking at a film for pathology, they are probably more likely to arrive at the right answer than one alone.

Brealey (2005) produced a meta-analysis of studies involving radiographers' input in interpreting films and found that radiographers involved in the red dot system of X-ray reading improved with experience and, with training, acquired an accuracy approaching that of radiologists when dealing with skeletal X-rays.

The red dot system is designed to utilise the expertise of specially trained radiographers to interpret plain X-rays. From the evidence presented above, we can say that radiographers are clearly more expert in interpreting plain skeletal X-rays than chest X-rays or visceral radiographs. The red dot system appears to be a growing movement within the profession. A paper by Brealey (2003) pointed out that between 1968 and 1991 the radiologists' workload increased by 322%, but the number of posts increased by only 213%. As a result, the number of films successfully reported within 48 hours fell to 60%. It was because of this pressure that the Royal College of Radiologists decided to endorse the trend of radiographers giving indications of pathology on X-rays. Brealey's paper examines the initial cohort of radiographers who were trained under this scheme and found that, statistically, there was no significant difference between the reading of an X-ray by a radiographer or a radiologist (in the case of plain skeletal X-rays), which supports the view that the red dot system is viable.

Any examination of this issue would be incomplete without a consideration of the detailed and analytical paper by Friedenberg (2000), which he provocatively entitled The Role of the Supertechnologist.
It is particularly relevant to our consideration of the red dot system and the role of the radiographer, as it looks at the background to the whole issue. Friedenberg uses "skill mix" as a specific term to define the current trend in medicine away from specialisation and departmentalisation and towards the pooling of expertise from different individuals in related fields, to complement or increase the expertise available to patients. He points out that this is not actually a new concept, and cites the optician who relieves the workload of the ophthalmologist and the nurse anaesthetist who relieves the anaesthesiologist by performing uncomplicated procedures. He quotes a whole host of paramedical providers who now assist the physician, in most cases without problems.

Loughran et al. (1996a, 1996b, 1992) have specifically looked at the practicality of utilising the skills of the radiographer to better advantage than just taking the films.

Friedenberg contrasts the difference in practice between the UK and the USA, citing the cause of the complete separation of the roles of radiographer and radiologist in the USA as being the fact that there radiologists still operate largely on a fee-per-service basis, whereas in the UK the pressure is primarily on clinicians to become more efficient and to keep costs down.

Friedenberg, interestingly, also examines the development of the legal status of the roles of radiographer and radiologist. Between 1900 and 1920 there was competition between radiographers and radiologists with regard to the performance of radiography and the interpretation of radiographs.
In the mid-1920s in England, radiographers were prohibited from accepting patients for radiography except under the direction of a qualified medical practitioner (quoting Larkin 1983). After this the professions came closer, and by 1971 Swinburne (1971) was suggesting that radiographers could perfectly well separate normal from abnormal films, which after all is the basis of the red dot system. As we have discussed earlier, this move then progressed into the first formal appearance of the red dot system in North Park Hospital in 1985. The first trials of the system found that approximately half of the abnormalities that were not picked up by the junior casualty officers were detected by the radiographers. The early safeguards were outlined by Loughran (1996) as follows:

1. It is made clear to the referring physician that the report is a technologist's report. The physician is advised to consult the radiologist if there is a lack of clinical correlation.
2. The technologist must consult the radiologist if he or she is in doubt.
3. The physicians, radiologists and technologists have devised a set of guidelines to create a safe environment for this practice.
4. Initially, the technologist's practice is monitored on a regular basis. After the technologist is experienced, however, monitoring is no longer performed. Such monitoring should be performed if a new technologist enters this practice.

Interestingly, Loughran also subsequently produced a set of guidelines for the radiographer:

1. The technologist should be confident in his or her report.
2. In cases of doubt, a radiologist's opinion should be obtained.
3. In such cases, although the report may be issued by the reporting technologist, the consultant's name should be appended to the report.
4. All reports by a technologist should be clearly designated as a technologist's report.
5. If the patient re-presents for radiography of the same body part within 2 months, this should be reported by a radiologist.
6. Non-trauma examination findings should be reported by the radiologist.
7. All accident department images in patients who are subsequently admitted as inpatients should be reported by the radiologist.
8. Clinicians are to be advised to consult the radiologist if clinical findings do not match those in the technologist's report.
9. Regular combined reporting sessions are to be held with the consultant radiologist.

Robinson (1999) defines the ideal areas for radiographers and radiologists with the following distinction between cognitive and procedural tasks:

Procedural tasks can be described, defined, taught, and subjected to performance standards that make them transferable to other staff with appropriate training. Cognitive tasks, which relate not only to the interpretation of images but also to decisions about differential diagnosis and the appropriate choice of further investigations, are more difficult.

We have examined the development of the red dot system, and there have been moves towards the logical progression beyond the radiographer simply indicating that there may be a problem, to the situation where radiographers who have undertaken further training have developed their skills in other ways as well; but this is beyond the scope of this piece. Perhaps we should leave the last thought to Friedenberg, who envisages the future as being the era of the supertechnologist, with the specialist left to perform a small number of very highly specialised procedures.

References

1. Jonathan Aldridge, Peter Freeland (2000) Safety of systems can often be improved. BMJ 2000;321:505 (19 August).
2. The Audit Commission (1995) Improving Your Image: How to Manage Radiology Services More Effectively. London: HMSO, 1995.
3. Victor Barley, Graham Neale, Christopher Burns-Cox, Paul Savage, Sam Machin, Adel El-Sobky, Anne Savage (2000) Reducing error, improving safety. BMJ 2000;321:505 (19 August).
4. Beggs I, Davidson JK (1990) A&E reporting in UK teaching departments.
Clinical Radiology 1990;41:264-267.
5. J R Benger, I D Lyburn (2003) What is the effect of reporting all emergency department radiographs? Emerg Med J 2003;20:40-43.
6. Benger JR (2002) Can nurses working in remote units accurately request and interpret radiographs? Emerg Med J 2002 Jan;19(1):68-70.
7. S Brealey, A J Scally (2001) Bias in plain film reading performance studies. British Journal of Radiology 2001;74:307-316.
8. S Brealey, D G King, M T I Crowe, I Crawshaw, L Ford, N G Warnock, R A J Mannion, S Ethell (2003) Accident and Emergency and General Practitioner plain radiograph reporting by radiographers and radiologists: a quasi-randomised controlled trial. British Journal of Radiology 2003;76:57-61.
9. Brealey S, Scally A, Hahn S, Thomas N, Godfrey C, Coomarasamy A (2005) Accuracy of radiographer plain radiograph reporting in clinical practice: a meta-analysis. Clin Radiol 2005 Feb;60(2):232-241.
10. Brennan TA, Leape LL, Laird NM, Herbert L, Localio AR, Lawthers AG (1991) Incidence of adverse events and negligence in hospitalised patients: results of the Harvard Medical Practice study. N Engl J Med 1991;324:370-376.
11. Clinical Services Committee, British Association for Accident and Emergency Medicine. X-ray reporting for accident and emergency departments. London: BAEM, 1983. (Currently under revision.)
12. C K Connolly (2000) Relation between reported mishaps and safety is unclear. BMJ 2000;321:505 (19 August).
13. Fineberg HV, Bauman R, Sosman M (1977) Computerised cranial tomography: effect on diagnostic and therapeutic plans. JAMA 1977;238:224-227; Institute of Medicine. Policy statement: computed tomographic scanning. Washington DC: National Academy of Sciences, 1977.
14. Richard M. Friedenberg (2000) The Role of the Supertechnologist. Radiology 2000;215:630-633.
15. Johansson H, Råf L (1997) A compilation of diagnostic errors in Swedish health care: missed diagnosis is most often a fracture. Läkartidningen 1997;94:3848-3850.
16. Pia Maria Jonsson, Göran Tomson, Lars Råf (2000) No fault compensation protects patients in Nordic countries. BMJ 2000;321:505 (19 August).
17. G de Lacey, A Barker, J Harper and B Wignall. An assessment of the clinical effects of reporting accident and emergency radiographs.
18. Larkin G (1983) Occupational monopoly and modern medicine. London: Tavistock, 1983.
19. Loughran CF, Alltree J, Raynor RB (1996) Skill mix changes in departments of radiology: impact on radiologists' workload: reports of a scientific session.
