601 results for Medical Monitoring
at Queensland University of Technology - ePrints Archive
Abstract:
Objective: The emergency medical system (EMS) can be defined as a comprehensive, coordinated and integrated system of care for patients suffering acute illness and injury. The aim of the present paper is to describe the evolution of the Queensland Emergency Medical System (QEMS) and to recommend a strategic national approach to EMS development. Methods: Following the formation of the Queensland Ambulance Service in 1991, a state EMS committee was established. This committee led the development and approval of the cross-portfolio QEMS policy framework, which has resulted in dynamic policy development, system monitoring and evaluation. This framework is led by the Queensland Emergency Medical Services Advisory Committee. Results: There has been considerable progress in the development of all aspects of the EMS in Queensland. These developments have derived from the improved coordination and leadership that QEMS provides and have resulted in widespread satisfaction among both patients and stakeholders. Conclusions: The strategic approach outlined in the present paper offers a model for EMS arrangements throughout Australia. We propose that the Council of Australian Governments should require each state and territory to maintain an EMS committee. These state EMS committees should have a broad portfolio of responsibilities. They should provide leadership and direction to the development of the EMS and ensure coordination and quality of outcomes. A national EMS committee with broad representation and broad scope should be established to coordinate the national development of Australia's EMS.
Abstract:
Background: Although the potential to reduce hospitalisation and mortality in chronic heart failure (CHF) is well reported, the feasibility of receiving healthcare by structured telephone support or telemonitoring is not. Aims: To determine adherence, adaptation and acceptability to a national nurse-coordinated telephone-monitoring CHF management strategy: the Chronic Heart Failure Assistance by Telephone Study (CHAT). Methods: Triangulation of descriptive statistics, feedback surveys and qualitative analysis of clinical notes. The cohort comprised standard care plus intervention (SC + I) participants who completed the first year of the study. Results: 30 GPs (70% rural) randomised to SC + I recruited 79 eligible participants, of whom 60 (76%) completed the full 12-month follow-up period. During this time 3619 calls were made into the CHAT system (mean 45.81, SD 79.26, range 0–369). Overall, adherence to the study protocol was 65.8% (95% CI 0.54–0.75; p = 0.001); however, among the 60 participants who completed the 12-month follow-up period, adherence was significantly higher at 92.3% (95% CI 0.82–0.97, p ≤ 0.001). Only 3% of this elderly group (mean age 74.7 ± 9.3 years) were unable to learn or competently use the technology. Participants rated CHAT with a total acceptability rate of 76.45%. Conclusion: This study shows that elderly CHF patients can adapt quickly, find telephone-monitoring an acceptable part of their healthcare routine, and are able to maintain good adherence for at least 12 months.
Abstract:
Background Despite its efficacy and cost-effectiveness, exercise-based cardiac rehabilitation is undertaken by less than one-third of clinically eligible cardiac patients in every country for which data are available. Reasons for non-participation include the unavailability of hospital-based rehabilitation programs or excessive travel time and distance. For this reason, there have been calls for the development of more flexible alternatives. Methodology and Principal Findings We developed a system to enable walking-based cardiac rehabilitation in which the patient's single-lead ECG, heart rate, GPS-based speed and location are transmitted by a programmed smartphone to a secure server for real-time monitoring by a qualified exercise scientist. The feasibility of this approach was evaluated in 134 remotely monitored exercise assessment and exercise sessions in cardiac patients unable to undertake hospital-based rehabilitation. Completion rates, rates of technical problems, detection of ECG changes, pre- and post-intervention six-minute walk test (6MWT), cardiac depression and Quality of Life (QOL) were key measures. The system was rated as easy and quick to use. It allowed participants to complete six weeks of exercise-based rehabilitation near their homes, worksites, or when travelling. The majority of sessions were completed without any technical problems, although periodic signal loss in areas of poor coverage was an occasional limitation. Several exercise and post-exercise ECG changes were detected. Participants showed improvements comparable to those reported for hospital-based programs, walking significantly further on the post-intervention 6MWT, 637 m (95% CI: 565–726), than on the pre-test, 524 m (95% CI: 420–655), and reporting significantly reduced levels of cardiac depression and significantly improved physical health-related QOL. Conclusions and Significance The system provided a feasible and very flexible alternative form of supervised cardiac rehabilitation for those unable to access hospital-based programs, with the potential to address a well-recognised deficiency in health care provision in many countries. Future research should assess its longer-term efficacy, cost-effectiveness and safety in larger samples representing the spectrum of cardiac morbidity and severity.
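The abstract does not describe the software behind this system, so the following is only a minimal sketch, assuming a hypothetical JSON payload, field names and endpoint URL, of the kind of data flow it reports: single-lead ECG, heart rate and GPS-based speed and location bundled on the smartphone and posted to a secure server for real-time review.

```python
# Hypothetical sketch only: the study does not publish its software, so the
# payload fields, endpoint URL and field names below are assumptions that
# illustrate the kind of data flow the abstract describes.
import json
import time
import urllib.request

SERVER_URL = "https://example-rehab-server.invalid/api/telemetry"  # placeholder

def build_payload(patient_id, ecg_samples, heart_rate, speed_m_s, lat, lon):
    """Bundle one monitoring interval of sensor data into a JSON-serialisable dict."""
    return {
        "patient_id": patient_id,
        "timestamp": time.time(),
        "ecg_mv": ecg_samples,        # single-lead ECG samples (millivolts)
        "heart_rate_bpm": heart_rate, # derived heart rate
        "speed_m_s": speed_m_s,       # GPS-based walking speed
        "location": {"lat": lat, "lon": lon},
    }

def send_payload(payload):
    """POST the payload to the monitoring server over HTTPS."""
    req = urllib.request.Request(
        SERVER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    demo = build_payload("P001", [0.12, 0.15, 0.11], 92, 1.4, -27.47, 153.03)
    print(json.dumps(demo, indent=2))  # network transmission omitted in this demo
```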
Abstract:
Objective Although several validated nutritional screening tools have been developed to “triage” inpatients for malnutrition diagnosis and intervention, there continues to be debate in the literature as to which tool or tools clinicians should use in practice. This study compared the accuracy of seven validated screening tools in older medical inpatients against two validated nutritional assessment methods. Methods This was a prospective cohort study of medical inpatients at least 65 y old. Malnutrition screening was conducted using seven tools recommended in evidence-based guidelines. Nutritional status was assessed by an accredited practicing dietitian using the Subjective Global Assessment (SGA) and the Mini-Nutritional Assessment (MNA). Energy intake was observed on a single day during the first week of hospitalization. Results In this sample of 134 participants (80 ± 8 y old, 50% women), there was fair agreement between the SGA and MNA (κ = 0.53), with the MNA identifying more “at-risk” patients and the SGA better identifying existing malnutrition. Most tools were accurate in identifying patients with malnutrition as determined by the SGA, in particular the Malnutrition Screening Tool and the Nutritional Risk Screening 2002. The MNA Short Form was most accurate at identifying nutritional risk according to the MNA. No tool accurately predicted patients with inadequate energy intake in the hospital. Conclusion Because all tools generally performed well, clinicians should consider choosing a screening tool that best aligns with their chosen nutritional assessment and is easiest to implement in practice. This study confirmed the importance of rescreening and monitoring food intake to allow the early identification and prevention of nutritional decline in patients with a poor intake during hospitalization.
Abstract:
STUDY QUESTION: What is the self-reported use of in vitro fertilization (IVF) and ovulation induction (OI) in comparison with insurance claims by Australian women aged 28–36 years? SUMMARY ANSWER: The self-reported use of IVF is quite likely to be valid; however, the use of OI is less well reported. WHAT IS KNOWN AND WHAT THIS PAPER ADDS: Population-based research often relies on the self-reported use of IVF and OI because access to medical records can be difficult and the data need to include sufficient personal identifying information for linkage to other data sources. There have been few attempts to explore the reliability of the self-reported use of IVF and OI using the linkage to medical insurance claims for either treatment. STUDY DESIGN: This prospective, population-based, longitudinal study included the cohort of women born during 1973–1978 and participating in the Australian Longitudinal Study on Women's Health (ALSWH) (n = 14247). From 1996 to 2009, participants were surveyed up to five times. PARTICIPANTS AND SETTING: Participants self-reported their use of IVF or OI in two mailed surveys when aged 28–33 and 31–36 years (n = 7280), respectively. This study links self-report survey responses and claims for treatment or medication from the universal national health insurance scheme (i.e. Medicare Australia). MAIN RESULTS AND THE ROLE OF CHANCE: Comparisons between self-reports and claims data were undertaken for all women consenting to the linkage (n = 3375). The self-reported use of IVF was compared with claims for OI for IVF (Kappa, K = 0.83), oocyte collection (K = 0.82), sperm preparation (K = 0.83), intracytoplasmic sperm injection (K = 0.40), fresh embryo transfers (K = 0.82), frozen embryo transfers (K = 0.64) and OI for IVF medication (K = 0.17). The self-reported use of OI was compared with ovulation monitoring (K = 0.52) and OI medication (K = 0.71). BIAS, CONFOUNDING AND OTHER REASONS FOR CAUTION: There is a possibility of selection bias due to the inclusion criteria for participants in this study: (1) completion of the last two surveys in a series of five and (2) consent to the linkage of their responses with Medicare data. GENERALIZABILITY TO OTHER POPULATIONS: The results are relevant to questionnaire-based research studies with infertile women in developed countries. STUDY FUNDING/COMPETING INTEREST(S): ALSWH is funded by the Australian Government Department of Health and Ageing. This research is funded by a National Health and Medical Research Council Centre of Research Excellence grant.
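The agreement figures above are Cohen's kappa values comparing self-report with Medicare claims. As a minimal illustration of how such a statistic is computed from a 2×2 agreement table (the counts below are invented for illustration, not the ALSWH data):

```python
# Illustrative only: the counts below are invented, not the ALSWH data.
# Cohen's kappa compares observed agreement with the agreement expected by chance.

def cohens_kappa(both_yes, yes_no, no_yes, both_no):
    """Kappa for a 2x2 table of self-report (rows) vs insurance claims (columns)."""
    n = both_yes + yes_no + no_yes + both_no
    p_observed = (both_yes + both_no) / n
    # Chance agreement from the marginal proportions of each data source
    p_yes_self = (both_yes + yes_no) / n
    p_yes_claims = (both_yes + no_yes) / n
    p_expected = p_yes_self * p_yes_claims + (1 - p_yes_self) * (1 - p_yes_claims)
    return (p_observed - p_expected) / (1 - p_expected)

# Example: 100 women report IVF and have a matching claim, 10 report it without
# a claim, 5 have a claim but no self-report, and 885 have neither.
print(round(cohens_kappa(100, 10, 5, 885), 2))
```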
Abstract:
Background: Nurses routinely use pulse oximetry (SpO2) monitoring equipment in acute care. Interpretation of the reading involves physical assessment and awareness of parameters including temperature, haemoglobin, and peripheral perfusion. However, there is little information on whether these clinical signs are routinely measured or used in pulse oximetry interpretation by nurses. Aim: The aim of this study was to review current practice of SpO2 measurement and the associated documentation of the physiological data that are required for accurate interpretation of the readings. The study reviewed the documentation practices relevant to SpO2 in five medical wards of a tertiary level metropolitan hospital. Method: A prospective casenote audit was conducted on random days over a three-month period. The audit tool had been validated in a previous study. Results: One hundred and seventy-seven episodes of oxygen saturation monitoring were reviewed. Our study revealed a lack of documented parameters to validate the SpO2 readings. Only 10% of the casenotes reviewed had sufficient physiological data to meaningfully interpret the SpO2 reading and only 38% had an arterial blood gas as a comparator. Nursing notes rarely documented clinical interpretation of the results. Conclusion: The audits suggest that medical and nursing staff are not interpreting pulse oximetry results in context and that the majority of the results were normal, with no clinical indication for performing this observation. This reduces the usefulness of such readings and calls into question the appropriateness of performing “routine” SpO2 monitoring in this context.
Abstract:
Aim To provide an overview of key governance matters relating to medical device trials and practical advice for nurses wishing to initiate or lead them. Background Medical device trials, which are formal research studies that examine the benefits and risks of therapeutic, non-drug treatment medical devices, have traditionally been the purview of physicians and scientists. The role of nurses in medical device trials has historically been as data collectors or co-ordinators rather than as principal investigators. More recently, nurses have played an increasing role in initiating and leading medical device trials. Review Methods A review article of nurse-led trials of medical devices. Discussion Central to the quality and safety of all clinical trials is adherence to the International Conference on Harmonization Guidelines for Good Clinical Practice, the internationally agreed standard for the ethically and scientifically sound design, conduct and monitoring of a medical device trial, as well as the analysis, reporting and verification of the data derived from that trial. Key considerations include the class of the medical device, type of medical device trial, regulatory status of the device, implementation of standard operating procedures, obligations of the trial sponsor, indemnity of relevant parties, scrutiny of the trial conduct, trial registration, and reporting and publication of the results. Conclusion Nurse-led trials of medical devices are demanding but rewarding research enterprises. As nursing practice and research increasingly embrace technical interventions, it is vital that nurse researchers contemplating such trials understand and implement the principles of Good Clinical Practice to protect both study participants and the research team.
Abstract:
Advances in mobile telephone technology and available dermoscopic attachments for mobile telephones have created a unique opportunity for consumer-initiated mobile teledermoscopy. At least 2 companies market a dermoscope attachment for an iPhone (Apple), forming a mobile teledermoscope. These devices and the corresponding software applications (apps) enable (1) lesion magnification (at least ×20) and visualization with polarized light; (2) photographic documentation using the telephone camera; (3) lesion measurement (ruler); (4) adding of image and lesion details; and (5) e-mailing of data to a teledermatologist for review. For lesion assessment, the asymmetry-color (AC) rule has 94% sensitivity and 62% specificity for melanoma identification by consumers [1]. Thus, consumers can be educated to recognize asymmetry and color patterns in suspect lesions. However, we know little about consumers' use of mobile teledermoscopy for lesion assessment.
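The quoted 94% sensitivity and 62% specificity for the AC rule follow from a standard confusion-matrix calculation; the sketch below uses invented counts chosen only to reproduce those two figures, not the study's data.

```python
# Illustrative only: counts are invented to show how sensitivity and
# specificity figures such as those quoted for the AC rule are derived.

def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Example: 47 of 50 melanomas correctly flagged (sensitivity 0.94),
# 62 of 100 benign lesions correctly passed (specificity 0.62).
sens, spec = sensitivity_specificity(47, 3, 62, 38)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```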
Abstract:
Background: Procedural sedation and analgesia (PSA) administered by nurses in the cardiac catheterisation laboratory (CCL) is unlikely to yield serious complications. However, the safety of this practice is dependent on timely identification and treatment of depressed respiratory function. Aim: To describe respiratory monitoring practice in the CCL. Methods: Retrospective medical record audit of adult patients who underwent a procedure in the CCLs of one private hospital in Brisbane during May and June 2010. An electronic database was used to identify subjects and an audit tool ensured data collection was standardised. Results: Nurses administered PSA during 172/473 (37%) procedures including coronary angiographies, percutaneous coronary interventions, electrophysiology studies, radiofrequency ablations, cardiac pacemakers, implantable cardioverter defibrillators, temporary pacing leads and peripheral vascular interventions. Oxygen saturation was recorded during 160/172 (93%) procedures, respiration rate was recorded during 17/172 (10%) procedures, use of oxygen supplementation was recorded during 40/172 (23%) procedures and 13/172 (7.5%; 95% CI=3.59–11.41%) patients experienced oxygen desaturation. Conclusion: Although oxygen saturation was routinely documented, nurses did not regularly record respiration observations. It is likely that surgical draping and the requirement to minimise radiation exposure interfered with nurses' ability to observe respiration. Capnography could overcome these barriers to respiration assessment: its accurate measurement of exhaled carbon dioxide, coupled with an easily interpretable waveform output that displays a breath-by-breath account of ventilation, enables identification of respiratory depression in real time. Results of this audit emphasise the need to ascertain the clinical benefits associated with using capnography to assess ventilation during PSA in the CCL.
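The reported desaturation rate of 13/172 (7.5%; 95% CI 3.59–11.41%) is consistent with a normal-approximation (Wald) confidence interval for a binomial proportion; the audit does not state which interval method was used, so the following minimal sketch is an assumption that approximately reproduces the reported range.

```python
# Sketch of a normal-approximation (Wald) 95% CI for a proportion. The audit
# does not state its interval method, so this is an assumption that roughly
# reproduces the reported 3.59-11.41% range for 13 desaturations in 172 cases.
import math

def wald_ci(events, n, z=1.96):
    """Return the point estimate and Wald 95% CI bounds for a binomial proportion."""
    p = events / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(p - half_width, 0.0), p + half_width

p, lo, hi = wald_ci(13, 172)
print(f"{p:.1%} (95% CI {lo:.2%}-{hi:.2%})")  # ~7.6% (95% CI ~3.6%-11.5%)
```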
Abstract:
Airborne particles have been shown to be associated with a wide range of adverse health effects, which has led to a recent increase in medical research aimed at gaining better insight into these effects. However, accurate evaluation of the exposure-dose-response relationship is highly dependent on the ability to track the actual exposure levels of people to airborne particles. This is quite a complex task, particularly in relation to submicrometer and ultrafine particles, which can vary quite significantly in terms of particle surface area and number concentrations. Therefore, suitable monitors that can be worn for measuring personal exposure to these particles are needed. This paper presents an evaluation of the metrological performance of six diffusion charger sensors, NanoTracer (Philips Aerasense) monitors, when measuring particle number and surface area concentrations, as well as particle number distribution mean, compared with reference instruments. Tests in the laboratory (by generating monodisperse and polydisperse aerosols) and in the field (using natural ambient particles) were designed to evaluate the response of these devices under both steady-state and dynamic conditions. Results showed that the NanoTracers performed well when measuring steady-state aerosols; however, they strongly underestimated actual concentrations during dynamic response testing. The field experiments also showed that, when the majority of particles were smaller than 20 nm, which occurs during particle formation events in the atmosphere, the NanoTracer underestimated number concentration quite significantly. Even though the NanoTracer can be used for personal monitoring of exposure to ultrafine particles, it has limitations which need to be considered in order to provide meaningful results.
Abstract:
Food prices and food affordability are important determinants of food choices, obesity and non-communicable diseases. As governments around the world consider policies to promote the consumption of healthier foods, data on the relative price and affordability of foods, with a particular focus on the difference between ‘less healthy’ and ‘healthy’ foods and diets, are urgently needed. This paper briefly reviews past and current approaches to monitoring food prices, and identifies key issues affecting the development of practical tools and methods for food price data collection, analysis and reporting. A step-wise monitoring framework, including measurement indicators, is proposed. ‘Minimal’ data collection will assess the differential price of ‘healthy’ and ‘less healthy’ foods; ‘expanded’ monitoring will assess the differential price of ‘healthy’ and ‘less healthy’ diets; and the ‘optimal’ approach will also monitor food affordability, by taking into account household income. The monitoring of the price and affordability of ‘healthy’ and ‘less healthy’ foods and diets globally will provide robust data and benchmarks to inform economic and fiscal policy responses. Given the range of methodological, cultural and logistical challenges in this area, it is imperative that all aspects of the proposed monitoring framework are tested rigorously before implementation.
Abstract:
Non-communicable diseases (NCDs) dominate disease burdens globally and poor nutrition increasingly contributes to this global burden. Comprehensive monitoring of food environments, and evaluation of the impact of public and private sector policies on food environments, is needed to strengthen accountability systems to reduce NCDs. The International Network for Food and Obesity/NCDs Research, Monitoring and Action Support (INFORMAS) is a global network of public-interest organizations and researchers that aims to monitor, benchmark and support public and private sector actions to create healthy food environments and reduce obesity, NCDs and their related inequalities. The INFORMAS framework includes two ‘process’ modules that monitor the policies and actions of the public and private sectors, seven ‘impact’ modules that monitor the key characteristics of food environments and three ‘outcome’ modules that monitor dietary quality, risk factors and NCD morbidity and mortality. Monitoring frameworks and indicators have been developed for 10 modules to provide consistency, while allowing for stepwise approaches (‘minimal’, ‘expanded’, ‘optimal’) to data collection and analysis. INFORMAS data will enable benchmarking of food environments between countries and monitoring of progress over time within countries. Through monitoring and benchmarking, INFORMAS will strengthen the accountability systems needed to help reduce the burden of obesity, NCDs and their related inequalities.