
NATIONAL AUDIT OF CONTINENCE CARE: LAYING THE FOUNDATION

Sarah Mian BSc PhD, Project Manager; Adrian Wagg MB BS FRCP, Associate Director; Penny Irwin BA RGN SCM MSc; Derek Lowe MSc CStat, Medical Statistician; Jonathan Potter MD FRCP, Programme Director; Michael Pearson MD FRCP, Unit Director

Clinical Effectiveness and Evaluation Unit, Royal College of Physicians, London

Correspondence: Dr Jonathan Potter, Clinical Effectiveness and Evaluation Unit, Royal College of Physicians, 11 St Andrews Place, Regent's Park, London. [email protected]


ABSTRACT

Introduction: National audit provides a basis for establishing performance against national standards, benchmarking against other service providers and improving standards of care. For effective audit, clinical indicators are required that are valid, feasible to apply and reliable. This study describes the methods used to develop clinical indicators of continence care in preparation for a national audit.

Aim: To describe the methods used to develop and test clinical indicators of continence care with regard to validity, feasibility and reliability.

Method: A multi-disciplinary working group developed clinical indicators that measured the structure, process and outcome of care, as well as case mix variables. Literature searching, consensus workshops and a Delphi process were used to develop the indicators. The indicators were tested in 15 secondary care sites, 15 primary care sites and 15 long-term care settings.

Results: The process of development produced indicators that received a high degree of consensus within the Delphi process. Testing of the indicators demonstrated an internal reliability of 0.7 and an external reliability of 0.6. Data collection required significant investment in terms of staff time and training.

Conclusion: The method used produced indicators that achieved a high degree of acceptance from health care professionals. The reliability of data collection was high for this audit and was similar to the level seen in other successful national audits. Data collection for the indicators was feasible, although issues of time and staffing were identified as limitations. The study has described a systematic method for developing clinical indicators for national audit. The indicators proved robust and reliable in primary and secondary care as well as long-term care settings.


BACKGROUND

Audit is a key component of the national strategy to improve the quality of care in the NHS (Department of Health 1998). It provides a basis for measuring the quality of care against national standards, benchmarking against other service providers and improving standards of care. Great care must be taken in preparing for an audit and in ensuring that the audit tool used is robust (Pruce et al 2000, NICE 2002). Specifically, agreed standards must be defined and data collection must be valid, reliable and feasible (Potter et al 2000). Where possible, standards should be derived from evidence-based research. Where such evidence is not available, formal consensus methods should be used to develop standards based on expert opinion (Murphy et al 1998). Previous work from the Clinical Effectiveness and Evaluation Unit in carrying out national audits that have had a significant impact on clinical practice has demonstrated the importance of careful development and piloting of audit indicators (Rudd et al 1999, Grant et al 2002, Roberts et al 2002). These audits have been carried out in secondary care settings and the standards have been derived from sound evidence-based research.

This study describes the method used for developing and testing indicators for a national audit of continence care for older people, commissioned by the National Institute for Clinical Excellence. This national audit differs in two respects from other established clinical audit programmes: it will be carried out in primary and long-term care settings as well as secondary care, and the clinical indicators have been derived in large part from consensus, as there is limited evidence for the appropriate management of continence problems.

AIMS

· To define clinical indicators for monitoring the quality of care for older people with urinary and faecal incontinence in primary care, hospitals and long-term care settings.
· To test the validity, feasibility and reliability of the clinical indicators.

METHOD

The process for developing and testing the clinical indicators was overseen by a multi-professional working party (Table 1). Each member had specific expertise either in continence care or in clinical effectiveness. The clinical indicators were developed in the following stages:


1. Literature search to identify previous work on clinical standards, criteria and indicators
2. Consensus workshops
3. Delphi process
4. Pre-pilot
5. Pilot

1. Literature search to identify clinical criteria

A detailed analysis of the evidence-based literature on audit of urinary and/or faecal incontinence was undertaken (Abrams et al 1999, Burgio et al 1999, Cheater et al 1998, Hopkins et al 1992, The Continence Foundation 2000, Georgiou et al 2001, Peet et al 1995, Royal College of Physicians 1995, 1998, 1999, US Department of Health and Human Services 1996, Potter et al 2002). In addition, three recent policy documents were used: "Good practice in continence services" (Department of Health 2000), the National Service Framework for Older People (Department of Health 2001a) and The Essence of Care (Department of Health 2001b).

Indicators covering aspects of "structure", "process" and "outcome" reflecting the quality of care provided to older people with urinary and/or faecal incontinence were identified. "Structure" indicators included the resources required, staff training, the provision of equipment and the environment for continence care. "Process" indicators included communication, assessment, education, investigation, surgical and other therapeutic interventions, evaluation, and documentation. "Outcome" indicators included measures of the physical or behavioural response to an intervention, reported health status, and level of knowledge and satisfaction. Many of the outcomes were derived from work carried out by the National Centre for Health Outcomes Development on urinary incontinence (National Centre for Health Outcomes Development 1997). Clinical indicators were drafted from these sources for discussion, development and agreement at the consensus workshops.

2. Consensus workshops

2.1 Background

Two consensus development workshops were held (one for bladder problems, one for bowel problems) during which a group of approximately 20 specialists in the area of continence care met to discuss the proposed indicators. Participants were asked to complete a questionnaire prior to the workshop in which they commented on the validity, feasibility and acceptability of the proposed indicators, and these were then discussed according to nominal group theory (Murphy et al 1998) (Table 2). Each workshop was divided into three sessions during which case mix and outcomes, process, and structure were discussed. Participants were asked to highlight any key areas that had been omitted and to comment on the proposed indicators.

2.2 Results

Key issues highlighted included:
· Ethnicity needed to be addressed in case mix as well as in service evaluation and person-centred care.
· The terms 'continence specialist' and 'continence training' needed to be clearly defined, with specific examples to aid understanding for auditors.
· The inclusion of a patient- or carer-completed, validated, condition-specific quality of life measure was considered important, and this was included as a quality standard.
· The supply and demand of all types of continence products needed to be carefully assessed in order to gauge the impact of rationing.
· The inclusion of indicators relating to the environment and patient dignity was imperative.

From the workshops the clinical indicators were refined for testing by a Delphi process. In addition, a "Help Booklet" was developed which provided a rationale for each indicator, definitions of the terms used and information on how data should be collected. Workshop participants were invited to attend a second consensus workshop following the completion of the Delphi exercise.

3. The Delphi consensus process

The Delphi technique has been defined as "a method of systematic solicitation and collation of ... informed judgment on a particular topic" (Turoff 1970). The indicators developed by the consensus workshops were analysed and incorporated into a Delphi questionnaire. The questionnaire was completed by 63 of the 95 invited clinicians with special expertise in the management of incontinence in the elderly. The sample included continence advisers, nurses, hospital doctors, general practitioners and physiotherapists. The scale used to rate the level of consensus ranged from 1 (strongly disagree) to 5 (strongly agree), as used in recent UK studies (Gallagher et al 1996). The results of the Delphi exercise demonstrated a high level of consensus amongst the group (Figures 1-3). The percentage of Delphi participants agreeing or strongly agreeing with individual indicators ranged from 63% to 100% (Table 3). Comments returned by participants were tabulated and fed back to the workshop participants. Each indicator was reviewed and revised in the light of the degree of consensus from the Delphi exercise and feedback from the Delphi participants.
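The summary statistics reported from the Delphi exercise (the mean agreement plotted in Figures 1-3 and the percentages in Table 3) can be reproduced directly from the raw 1-5 ratings. The following Python sketch is illustrative only; the indicator names and panel ratings are hypothetical, not the study data.

# Illustrative sketch (not the study's analysis code): summarising Delphi
# responses given on the 1 (strongly disagree) to 5 (strongly agree) scale.
# Indicator names and rating values below are hypothetical.
from statistics import mean

# Each indicator maps to the list of ratings returned by the Delphi panel.
responses = {
    "Written continence policy in place": [5, 5, 4, 4, 5, 3, 4, 5],
    "Access to integrated continence services": [4, 5, 5, 4, 2, 5, 4, 4],
}

for indicator, ratings in responses.items():
    mean_agreement = mean(ratings)               # bar height as in Figures 1-3
    agree = sum(1 for r in ratings if r >= 4)    # "agree" or "strongly agree"
    pct_agree = 100 * agree / len(ratings)       # percentages as reported in Table 3
    print(f"{indicator}: mean {mean_agreement:.2f}, {pct_agree:.0f}% agree/strongly agree")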


4. Pre-pilot

Prior to the full pilot, a pre-pilot was carried out to test the clinical indicators and the help booklet. Two sites were identified in secondary care, primary care and long-term care settings. Each site was asked to complete each of the audit questionnaires (organizational structure, process and case mix, outcomes) and to provide feedback with regard to the practical usage of the paperwork.

5. Pilot

5.1 Background

The full pilot was designed to recruit 15 primary care, 15 long-term care and 15 hospital settings. Each site audited patient records using a cross-sectional sample. The pilot was designed to test the robustness of the clinical indicators in terms of validity, feasibility of data collection and reliability. In addition, as reported elsewhere (Mian et al 2004a), the pilot tested the methods of selecting and recruiting a representative sample of patients from each setting and provided an initial indication as to the quality of continence care.

With a minimum of 20 cases per institution, the pilot sought to recruit at least 300 patients in each type of care setting. This sample size was selected to allow first estimates of the distribution of continence problems, accurate to within about +/- 5% (95% confidence interval). Each site was required to return completed questionnaires relating to: 1) organizational structure; 2) process and case mix data for 20 returns in consecutive patients with urinary incontinence; 3) process and case mix data for 10 returns in consecutive patients with faecal or double incontinence; and 4) outcome. All sites were asked to collect data on older people aged 65 and over.

To allow comparison between centres of the same type, an individual and confidential code number was given to each centre. Individual patient records were irreversibly anonymised before any data left each centre. Auditors were mostly employees of the centre concerned and were from varying disciplines (Table 4). The project co-ordinator (SM) acted as an external auditor at some centres for the inter-rater reliability study. Completed forms were returned to the CEEU. Document Data Capture Ltd then scanned the data into SPSS databases. Analyses were performed within the CEEU using SPSS v11.5.
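The stated precision of the planned sample can be checked with the usual normal-approximation confidence interval for a proportion. The Python sketch below is illustrative rather than the authors' own calculation, and assumes the worst-case proportion of 0.5 for the widest interval.

# Rough check of the stated precision (a sketch, not the authors' calculation):
# half-width of a 95% confidence interval for a proportion, using the normal
# approximation, at the planned sample size of about 300 patients per setting.
from math import sqrt

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    """95% CI half-width for an observed proportion p with sample size n."""
    return z * sqrt(p * (1 - p) / n)

n = 300
for p in (0.5, 0.3, 0.1):   # 0.5 is the worst case (widest interval)
    print(f"p={p:.1f}, n={n}: +/- {100 * ci_half_width(p, n):.1f} percentage points")
# The worst case gives about +/- 5.7%, i.e. roughly the +/- 5% precision cited.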

5.2 Results

A total of 30 hospitals were approached, with reminders as necessary, in order to recruit 15 hospital sites (one site per hospital trust), of which 13 participated. An initial 30 Primary Care Trusts (PCTs) were approached, followed by a second wave of a further 18, to obtain 15 interested sites, of which 11 participated. A total of 90 care homes were approached, of which 8 participated; three further homes were recruited via their PCTs. The total number completing the audit was 13 hospitals, 11 GP practices and 11 care homes.


5.2.1 Validity of data collected, and of the audit tool for use in each sector

Sites were asked about their procedures for checking the accuracy of data and for correcting invalid data. Various methods were used (Table 5), some more rigorous than others, though most sites undertook some form of checking. Primary care sites (7/7) considered the audit clinical questionnaires suitable for their sector, though the views from secondary care (6/11) and the care homes (4/8) were more mixed. Various suggestions were made to improve the forms, in particular combining the two separate bladder and bowel questionnaires into one. Most sites felt that a national audit of this type would be useful to them in terms of measuring change in performance, with less enthusiasm coming from the care home sector (Table 6).

5.2.2 Feasibility

The whole audit took about 18 hours on average to complete, with considerable variation between sites (Table 7). A minority of sites found the questionnaires difficult to complete (Table 8), with the clinical audit being the most difficult and the organisational audit the least difficult.

5.2.3 Difficulties encountered in the audit

Sites were asked to rate various statements on a 1 to 5 scale of difficulty and to state what they did to resolve any problems. The statements were grouped according to audit development (5 statements), audit implementation (6), audit management (5) and audit collation (2). A summary of the difficulties encountered for each of these aspects is given below.

Audit development: Most general practices found it difficult to co-ordinate the various people required to collect data. Care homes had fewer people collecting data and consequently found co-ordination easier. Establishing a working balance between the continence lead and clinical audit was generally easy. A few sites found it difficult to get people to do the reliability study, but most had little difficulty. Hospitals had difficulty defining a data collection start date.

Audit implementation: Responsibilities for completing the questions were clear. Questionnaire completion was seen as time-consuming and there was a learning curve. There were problems in identifying cases and in completing questionnaires because of a lack of information in case notes.

Audit management: Patient management was little different from what it would have been without the audit. Some leads lacked the time or resources to manage the audit, though most did manage it. Some implied that problems could go unresolved for some time. For a few there was difficulty with the sense of ownership.

Audit collation: There were concerns about ambiguity in data recording from primary care and, to a lesser extent, from care homes. Two general practices found it very difficult to trace the staff who had completed the questionnaire in order to ask questions.

Sites were asked what additional resources would be required if the audit were rolled out nationally (Table 9); virtually all of their answers concerned extra staff, time or resources.

5.2.4 Reliability

Sites were asked to audit internally the first 5 cases of urinary incontinence and the first 5 cases of non-urinary incontinence twice, using different auditors. It was suggested that the second auditor be of a clinical background with an interest in continence. If the local plan involved auditors working in pairs, it was also acceptable for the repeat audit to use (different) pairs. In a subset of sites an auditor external to the site (SM) visited the hospital to re-audit cases.

The kappa statistic was used to measure agreement between first-time and duplicate auditors. Kappa values of 0.41 to 0.60 are said to indicate moderate agreement, values of 0.61 to 0.80 good agreement, and values over 0.80 very good agreement (Altman 1991). Table 10 indicates the amount of reliability data available for analysis; given these numbers, the main assessment of the reliability of each audit item combined data for the three care sectors. The external reliability assessment was more limited in scope and data were also combined across the three sectors.

Urinary incontinence: Internal reliability kappa statistics of agreement were computed for 94 items. The median kappa was 0.70 (range -0.01 to 1.00). For 73/94 items kappa was 0.60 or above, whilst for 10/94 items kappa was below 0.50. Comparison with the external auditor suggested the data were generally less reliable than the internal comparisons, the median kappa being 0.58 (range -0.11 to 1.00).

Faecal incontinence: Internal reliability kappa statistics of agreement were computed for 76 items. The median kappa was 0.73 (range 0.37 to 1.00). For 64/76 items kappa was 0.60 or above, whilst for 8/76 items kappa was below 0.50. Comparison with the external auditor suggested the data were generally less reliable than the internal comparisons, the median kappa being 0.61 (range -0.06 to 1.00).
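For readers unfamiliar with the statistic, the sketch below shows, in Python, how unweighted Cohen's kappa can be computed for a single audit item rated by two auditors, and how per-item kappas can be summarised into the median, range and counts quoted above. The ratings and kappa values in the example are hypothetical, not data from the pilot.

# Minimal sketch (hypothetical data) of the reliability analysis: unweighted
# Cohen's kappa for one audit item rated by two auditors, followed by the
# summary statistics (median, range, counts against Altman's bands).
from collections import Counter
from statistics import median

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of categorical ratings."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (observed - expected) / (1 - expected)

# First and duplicate audits of one yes/no item across ten hypothetical cases.
first  = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes", "no", "yes"]
repeat = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes", "no", "yes"]
print(f"kappa = {cohens_kappa(first, repeat):.2f}")

# Summarising kappas across many items, as in the results reported above.
item_kappas = [0.70, 0.55, 0.82, 0.61, 0.48, 0.91, 0.66]   # hypothetical values
print(f"median {median(item_kappas):.2f}, "
      f"range {min(item_kappas):.2f} to {max(item_kappas):.2f}, "
      f"{sum(k >= 0.60 for k in item_kappas)}/{len(item_kappas)} items >= 0.60")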

DISCUSSION

The study has outlined the method for producing clinical indicators for use in national audit. The indicators that have been derived cover all aspects of continence care in older people, derived in part from evidence-based research and in part from clinical consensus. The indicators have been successfully piloted in primary and secondary care settings as well as care homes. The work done suggests the indicators have face and content validity, and that the data are practical to collect.

Quality of care may be perceived differently by different groups, i.e. clinicians, managers, patients and carers. This study, while including representatives of patient groups on the working party, did not seek to include older patients in the consensus process. It is recognized that it can be difficult for patients and their carers to contribute within groups of specialist clinicians. A separate study is being conducted to determine the views of older people with regard to what constitutes quality in continence care (Mian et al 2004b).

The workshops used to develop the indicators specifically required contributors to consider validity in terms of suitability for implementation. The face validity of the indicators was confirmed by the Delphi consensus. In the development of previous audit tools for other conditions at the Clinical Effectiveness and Evaluation Unit (CEEU), one Delphi round has been sufficient to obtain consensus, with the backup of expert consensus workshops (drawing on a mix of disciplines) before and after the survey. The disadvantages of repeating several rounds of a Delphi survey include respondent fatigue, with the danger that, in trying to produce consensus over several rounds, dissent is ignored and opinion becomes centralised to the lowest common denominator. One round achieves external validation of the items whilst highlighting the areas of disagreement; the second consensus workshop can then make decisions accordingly. The process demonstrated a high degree of agreement between clinicians from differing professional backgrounds. The finding in the pilot that most sites regarded the tool as suitable for roll-out into a national audit adds further support to the face and content validity of the tool.

The validity of the data collection within the pilot was supported by the checks done by the sites, though these were of variable thoroughness. We do not know for sure how accurate or how variable in quality the case-note extraction has been, nor how accurate the information documented in local record systems was in the first place. Data reliability is a necessary prerequisite for data accuracy but does not guarantee it; this is an issue that faces all retrospective audits. The pilot experience has led to changes in the questionnaires and the guidance notes, and these should further enhance the accuracy of data to be collected in the national audit.

Considerable attention was paid to feasibility, both of conducting the audit and of data collection. Recruitment of sites in the first place was a problem in primary care and care home settings. Feedback from sites indicated that in secondary care settings the mechanism for audit is well established and operational, with both dedicated audit staff and clinicians involved. In primary care, structures are in place for audit but there is not a well-established link between the clinicians and auditors; hence in primary care no general practitioners participated in data collection. In long-term care settings the structures are generally not in place to permit audit as part of routine daily practice, so the introduction of an audit is a significant extra workload for staff with little or no experience in the activity. Recruitment for the national audit will require addressing these underlying issues. In all health care settings audit must be seen as part of routine practice, staffing establishments must be designed to allow time for such activity, and expertise must be built into establishments both amongst clinicians and specialist audit staff.

The feasibility of data collection has also been explored in detail. The time taken to complete the audit (median time 18 hours) is similar to that of other national audits. The "Help Booklet" is a key component of the audit tool that will help simplify and speed up the process for data collectors in the national audit. Different problems arise in the various settings. In secondary care the required data are found in the case notes on the wards. In primary care relevant data might be held in differing places, including the records of the general practitioner, the district nurses and the continence advice team; hence gathering the data becomes difficult. However, a continence specialist who had knowledge of the patients and an understanding of the information required usually carried out the data gathering. In long-term care settings, data again were stored in different places and were often not within the immediate patient records, being in the general practitioner's records or the continence advisor's records. Furthermore, staff often did not have the experience to provide the information required. For example, an assessment of functional disability using the Barthel Index was one required measure, and many homes felt unqualified to determine individual Barthel Index scores. As a result of the feasibility feedback, some indicators were discarded and others were refined. The importance of the "Help Booklet" became apparent, and this was developed so that the terms were more precise, the source of data was more clearly indicated and other issues presenting difficulty were addressed.

Data collection is an additional task for staff, and in some care settings local audit resources are either limited or difficult to identify, or both. Our experience is that primary care and care home settings were not able to generate local support and that participation depended on the personal commitment of individuals over and above their other duties. This is an issue that must be addressed, both to ensure that complete and accurate data are collected and to remove any "excuse" for non-participation. Consideration should be given to the provision of an external researcher to visit sites that experience difficulty in data collection, or to the provision of incentive payments – as happens in primary care – to ensure that audit occurs.

Reliability of data collection is a great challenge. Interpretation of data within case records can vary greatly between individual data collectors. The results of this pilot demonstrated inter-rater reliability compatible with other national audits (stroke, COPD). The external reliability was lower, which may relate to factors such as the small numbers of sites and cases in the external reliability assessment, the differing sector and site mix for the internal and external reliability assessments, and the fact that having only one external auditor increased the possibility of systematic bias.

In conclusion, the method used produced indicators which achieved a high degree of acceptance from health care professionals. The reliability of data collection was high and was similar to the level seen in other successful national audits. Data collection for the indicators was feasible; however, issues of time and staffing were identified as limitations. The study has described a systematic method for clinical indicator development that proved robust and reliable in practice and lays the foundation for a national audit.


ACKNOWLEDGEMENT

The authors acknowledge the support and commitment of the multi-professional working group, who provided invaluable supervision for the project. The project was funded by a grant from the Health Foundation (previously the PPP Foundation) and was jointly commissioned by the Health Foundation and the National Institute for Clinical Excellence.


References

Abrams P., Khoury S., Wein A. (eds) (1999) Incontinence: 1st International Consultation on Incontinence. World Health Organisation and International Union Against Cancer. Plymbridge Distributors, Plymouth.

Altman D.G. (1991) Practical statistics for medical research. Chapman & Hall.

Burgio K., Ouslander J.G. (1999) Effects of urge urinary incontinence on quality of life in older people. J Am Geriatr Soc 47: 1032-1033.

Cheater F., Lakhani M., Cawood C. (1998) Audit protocol: assessment of patients with urinary incontinence. Eli Lilly National Clinical Audit Centre, Leicester.

Continence Foundation (2000) Making the case for investment in an integrated continence service. The Continence Foundation, London.

Department of Health (1998) A first class service: quality in the new NHS. HMSO, London.

Department of Health (2000) Good practice in continence services. Department of Health, London.

Department of Health (2001a) National Service Framework for Older People. Department of Health, Leeds.

Department of Health (2001b) The Essence of Care: patient-focused benchmarking for health care practitioners. Department of Health, London.

Gallagher M., Bradshaw C., Nattress H. (1996) Policy priorities in diabetes care: a Delphi study. Qual Health Care 5: 3-8.

Georgiou A., Potter J.M., Brocklehurst J., Lowe D., Pearson M. (2001) Measuring the quality of continence care in long-term care facilities: an analysis of outcome indicators. Age Ageing 30(1): 63-66.

Grant R., Batty G., Aggarwal R., Lowe D., Potter J., Pearson M. (2002) National sentinel clinical audit of evidence based prescribing for older people: methodology and development. J Eval Clin Pract 8: 189-198.


Hopkins A., Brocklehurst J., Dickinson E. (1992) The CARE (Continuous Assessment Review and Evaluation) Scheme: clinical audit of long-term care for elderly people. Royal College of Physicians, London.

Mian S., Lowe D., Potter J., Wagg A., Pearson M. (2004a) The national audit of continence care for older people: results of a pilot study. Neurourol Urodyn 2004;

Mian S., Wagg A., Billings J. (2004b) Older people's views of continence services. Proceedings of the ICS Annual Meeting, Paris 2004.

Murphy M.K., Black N.A., Lamping D.L., McKee C.M., Sanderson C.F.B., Askham J. et al. (1998) Consensus development methods and their use in clinical guideline development. Health Technology Assessment, NHS R&D HTA Programme 2(3). Department of Health, London.

National Centre for Health Outcomes Development (1997) Working group on outcome indicators for urinary incontinence. Report to the Department of Health.

NICE (National Institute for Clinical Excellence) (2002) Principles for best practice in clinical audit. Radcliffe Medical Press, Oxford.

Peet S.M., Castleden S.M., McGrother C.W. (1995) Prevalence of urinary and faecal incontinence in hospitals and residential and nursing homes for older people. BMJ 311: 1063-1064.

Potter J.M., Georgiou A., Pearson M. (2000) Measuring the quality of care for older people. Royal College of Physicians, London.

Potter J.M., Norton C., Cottenden A. (eds) (2002) Bowel Care - Research and Practice. Royal College of Physicians, London.

Pruce D., Aggarwal R. (2000) National clinical audits: a handbook for good practice. Royal College of Physicians, London.

Roberts C.M., Ryland I., Lowe D., Kelly Y., Bucknall C.E., Pearson M.G. (2002) Clinical audit indicators of outcome following admission to hospital with acute exacerbation of chronic obstructive pulmonary disease. Thorax 57: 137-141.

Royal College of Physicians (1995) Incontinence: causes, management and provision of services. Royal College of Physicians, London.

Royal College of Physicians (1998) Promoting continence: clinical audit scheme for the management of urinary and faecal incontinence. Royal College of Physicians, London.


Royal College of Physicians (1999) The CARE Scheme: clinical audit of long-term care of elderly people. Royal College of Physicians, London.

Rudd A.G., Lowe D., Irwin P., Rutledge Z., Pearson M. (2001) National stroke audit: a tool for change? Quality in Health Care 6: 194-198.

Turoff M. (1970) The design of a policy Delphi. Technological Forecasting and Social Change 2: 149-171.

US Department of Health and Human Services (1996) Urinary incontinence in adults: acute and chronic management. Clinical Practice Guideline No 2, Update. US Government Printing Office, Washington.


Table 1: Members of the multi-professional working party

Background                                              Number
Continence Nurse Specialist - Bladder                   1
Continence Nurse Specialist - Bowel                     1
Patient Support Group - lay representative              1
Patient Support Group - professional representatives    2
Geriatric Medicine                                      1
General Practice                                        1
Physiotherapist                                         1
Occupational Therapist                                  1
Clinical Effectiveness staff (methodological advice)    2


Table 2: Example of questionnaire completed by experts at the consensus development workshops for urinary incontinence

For each urinary incontinence structure indicator, participants were asked:
· Is this a good thing to measure? (Yes/No)
· Is it measurable? (Yes/No; caveats)
· Is this an acceptable measure of service? (Yes/No)
· What is being measured? (a. service effectiveness; b. patient's perspective; c. cost-effectiveness; caveats)

Structure indicators rated:

POLICIES & PROCEDURES
Facility should have a written policy concerning the promotion of urinary continence.

STAFF
There should be access to integrated continence services.
There should be a structured and comprehensive programme of staff training on promoting urinary continence.

ENVIRONMENT
All bladder assessment and care is given in an environment that is private and promotes patient dignity.

SERVICE EVALUATION
There should be a mechanism for assessing patient and carer satisfaction with the treatment and continence services.


Table 3: The range of Delphi participants agreeing or strongly agreeing with individual indicators

Indicator group                                                    Participants agreeing/strongly agreeing (range)
Structure indicators (18 indicators)                               84% - 100%
Urinary incontinence: process/outcome/case mix (34 indicators)     59% - 96%
Faecal incontinence: process/outcome/case mix (34 indicators)      63% - 96%

Table 4: Discipline of data collectors

Data collector disciplines: Doctor, Nurse, Therapist, Manager

Primary care (10): 8, 2
Secondary care (13): 8, 8, 2
Care homes (11): 7, 1, 5


Table 5: Checks on data accuracy in different settings (methods reported by primary care, secondary care and care home sites)

· All data collected from clinical records, with some cross-checks, especially where there was uncertainty around data requirements.
· Checking hospital information and care plans.
· Audit completed with a staff member who knew the patient group; in addition, care pathways were used to help complete the "bladder" questionnaires.
· Clarification with a colleague.
· Correlation of forms with patients' notes, clarified with a colleague and liaised with the GP.
· Checked cases with ward leads and checked completed forms.
· Clarification of data with the continence advisor from the PCT.
· Checked nursing documents against medical notes; used fluid balance and drug charts and checked with colleagues.
· First look through district nursing notes, then the practice clinical computer (Microtest).
· Correlation with clinical notes and clarification of data with a colleague.
· Hand-searched patient records, continence service records and computer records.
· Double-checking of notes with a colleague.
· Completed forms blind, then discussed with staff.
· Each questionnaire completed independently and checked against notes.
· Systematic check against original notes.
· Correcting nursing and medical notes.
· Discussed issues with the stroke specialist nurse/SpR to ensure a standard approach to data collection.
· Two people double-checking notes.
· Forms double-checked and completed by the continence advisor.
· Worked with a colleague and the practice manager.
· Phoned the project co-ordinator and spoke with a colleague for clarification.
· Looked through patients' notes, care plans and evaluation sheets.
· Studied notes carefully; checked details in nurses' records and notes with the help of a senior nurse on the ward; verified by a consultant.
· Used the RCP help booklet; the first five cases were double-checked.


Table 6: Suitability of audit tool for roll-out in a national audit

Suitability   Primary care   Secondary care   Care homes
Yes           6              10               5
No            1              -                3

Table 7: Time taken to complete the audit, in hours: median (range)

Healthcare setting   Organisational structure   Case-mix & process    Outcomes
Primary care         2.00 (1.00-10.00)          11.00 (2.00-16.00)    2.00 (1.00-8.00)
Secondary care       2.00 (0.33-10.00)          18.00 (2.00-40.00)    2.00 (0.25-10.00)
Care homes           2.25 (0.50-10.00)          12.00 (2.50-27.00)    2.00 (0.50-6.00)

Table 8: Ease of data collection for the organisational, process & case-mix and outcomes audit tools (number of sites giving each rating)

Health care setting   Very easy (1)   Quite easy (2)   Neither easy nor difficult (3)   Quite difficult (4)   Very difficult (5)

Organisational
Primary care          1               4                3                                -                     -
Secondary care        -               4                3                                1                     -
Care homes            5               4                2                                -                     -

Process & case-mix
Primary care          4               1                2                                1                     -
Secondary care        2               3                6                                -                     -
Care homes            2               3                3                                1                     -

Outcomes
Primary care          2               3                2                                1                     -
Secondary care        4               3                2                                -                     -
Care homes            4               3                -                                -                     -


Table 9: Additional resources required for national roll-out

Primary care:
· Additional clerical hours.
· Dedicated time required for a nurse to complete questionnaires; reduce the number of cases required for the audit.
· Internal auditor.
· Need training around incontinence and bladder training; need a bladder scanner so problems can be picked up in the surgery; need new algorithms/pathways for faecal (bowel) incontinence.
· Resource for education and training in primary care; involvement from GPs and practice nurses.
· Specified timelines for data completion.
· The audit tool needs refining; perhaps it should form part of falls management/medicines management.
· Very little - a few hours dedicated to audit.

Secondary care:
· Audit support and improved documentation.
· Continence nurse to join the team on the rehabilitation ward.
· Dedicated audit, administrative and clinical time.
· If management of urinary/faecal incontinence were to improve, more nurse time would be required; regular voiding is labour intensive.
· Increase in continence advisor time.
· More support from the audit department and training in audit.
· Need planning in advance; need a longer notice period.
· More input from nursing (especially continence) - continence advisor just appointed.
· Require considerable manpower.
· Time.
· Time - the audit will have to be built into an annual programme of national audits, like stroke.

Care homes:
· Additional staff.
· Extra man-hours.
· More information from hospital/social workers regarding patients' continence is required.
· None (x2).
· Time.


Table 10: Reliability study - number of sites and cases

Audit                      Primary care           Secondary care          Care homes

Urinary incontinence
Internal reliability       8 sites, 39 patients   12 sites, 57 patients   10 sites, 50 patients
External reliability       2 sites, 20 patients   3 sites, 14 patients    3 sites, 10 patients

Faecal incontinence
Internal reliability       6 sites, 17 patients   12 sites, 51 patients   10 sites, 47 patients
External reliability       1 site, 2 patients     4 sites, 14 patients    3 sites, 13 patients

Figure 1: Mean level of agreement for structure indicators for urinary and faecal incontinence. 5 = strongly agree an indicator should be included; 1 = strongly disagree. Each bar represents the mean from 63 Delphi respondents.


Figure 2: Mean level of agreement for process, outcome and case mix indicators for urinary incontinence. 5 = strongly agree an indicator should be included; 1 = strongly disagree. Each bar represents the mean from 63 Delphi respondents.

Figure 3: Mean level of agreement for process, outcome and case mix indicators for faecal incontinence. 5 = strongly agree an indicator should be included; 1 = strongly disagree. Each bar represents the mean from 63 Delphi respondents.

