762 results for patient-centred care
Abstract:
Little is known about the types of incidents that occur to aged care clients in the community. This limits the development of effective strategies to improve client safety. The objective of the study was to present a profile of incidents reported in Australian community aged care settings. All incident reports made by community care workers employed by one of the largest community aged care provider organizations in Australia during the period November 1, 2012, to August 8, 2013, were analyzed. A total of 356 reports were analyzed, corresponding to a 7.5% incidence rate per client year. Falls and medication incidents were the most prevalent incident types. Clients receiving high-level care and those who attended day therapy centers had the highest rate of incidents with 14% to 20% of these clients having a reported incident. The incident profile indicates that clients on higher levels of care had higher incident rates. Incident data represent an opportunity to improve client safety in community aged care.
Abstract:
Background Prevention of foot ulcers in patients with diabetes is extremely important to help reduce the enormous burden of foot ulceration on both the patient and health resources. A comprehensive analysis of reported interventions is not currently available, but is needed to better inform caregivers about effective prevention. The aim of this systematic review is to investigate the effectiveness of interventions to prevent first and recurrent foot ulcers in persons with diabetes who are at risk for ulceration. Methods The available medical scientific literature in PubMed, EMBASE, CINAHL and the Cochrane database was searched for original research studies on preventative interventions. Both controlled and non-controlled studies were selected. Data from controlled studies were assessed for methodological quality by two independent reviewers. Results From the identified records, a total of 30 controlled studies (19 of which were RCTs) and another 44 non-controlled studies were assessed and described. Few controlled studies, of generally low to moderate quality, were identified on the prevention of a first foot ulcer. For the prevention of recurrent plantar foot ulcers, multiple RCTs with low risk of bias show the benefit of daily foot skin temperature measurements with consequent preventative actions, as well as of therapeutic footwear demonstrated to relieve plantar pressure, provided it is worn by the patient. To prevent recurrence, some evidence exists for integrated foot care when it includes a combination of professional foot treatment, therapeutic footwear and patient education; for a single session of patient education alone, no evidence exists. Surgical interventions can be effective in selected patients, but the evidence base is small.
Conclusion The evidence base to support the use of specific self-management and footwear interventions for the prevention of recurrent plantar foot ulcers is quite strong, but is small for the use of other, sometimes widely applied, interventions and is practically nonexistent for the prevention of a first foot ulcer and non-plantar foot ulcer.
Abstract:
Diabetic foot ulceration poses a heavy burden on the patient and the healthcare system, but prevention thereof receives little attention. For every euro spent on ulcer prevention, ten are spent on ulcer healing, and for every randomized controlled trial conducted on prevention, ten are conducted on healing. In this article, we argue that a shift in priorities is needed. For the prevention of a first foot ulcer, we need more insight into the effect of interventions and practices already applied globally in many settings. This requires systematic recording of interventions and outcomes, and well-designed randomized controlled trials that include analysis of cost-effectiveness. After healing of a foot ulcer, the risk of recurrence is high. For the prevention of a recurrent foot ulcer, home monitoring of foot temperature, pressure-relieving therapeutic footwear, and certain surgical interventions have proven effective. The median effect size found in a total of 23 studies on these interventions is large, over 60%, and increases further when patients are adherent to treatment. These interventions should be investigated for efficacy as a state-of-the-art integrated foot care approach, in which attempts are made to ensure treatment adherence. Effect sizes of 75-80% may be expected. If such state-of-the-art integrated foot care is implemented, the majority of problems with foot ulcer recurrence in diabetes can be resolved. It is therefore time to act and to set a new target in diabetic foot care. This target is to reduce foot ulcer incidence by at least 75%.
Abstract:
OBJECTIVE To report the cost-effectiveness of a tailored handheld computerized procedural preparation and distraction intervention (Ditto) used during pediatric burn wound care in comparison to standard practice. METHODS An economic evaluation was performed alongside a randomized controlled trial of 75 children aged 4 to 13 years who presented with a burn to the Royal Children's Hospital, Brisbane, Australia. Participants were randomized to either the Ditto intervention (n = 35) or standard practice (n = 40) to measure the effect of the intervention on days taken for burns to re-epithelialize. Direct medical, direct nonmedical, and indirect cost data during burn re-epithelialization were extracted from the randomized controlled trial data and combined with scar management cost data obtained retrospectively from medical charts. Nonparametric bootstrapping was used to estimate statistical uncertainty in cost and effect differences and cost-effectiveness ratios. RESULTS On average, the Ditto intervention reduced the time to re-epithelialize by 3 days at AU$194 less cost for each patient compared with standard practice. The incremental cost-effectiveness plane showed that 78% of the simulated results were within the more effective and less costly quadrant and 22% were in the more effective and more costly quadrant, suggesting a 78% probability that the Ditto intervention dominates standard practice (i.e., cost-saving). At a willingness-to-pay threshold of AU$120, there is a 95% probability that the Ditto intervention is cost-effective (or cost-saving) against standard care. CONCLUSIONS This economic evaluation showed the Ditto intervention to be highly cost-effective against standard practice at a minimal cost for the significant benefits gained, supporting the implementation of the Ditto intervention during burn wound care.
Abstract:
Background Poor clinical handover has been associated with inaccurate clinical assessment and diagnosis, delays in diagnosis and test ordering, medication errors and decreased patient satisfaction in the acute care setting. Research on the handover process in the residential aged care sector is very limited. Purpose The aims of this study were to: (i) Develop an in-depth understanding of the handover process in aged care by mapping all the key activities and their information dynamics, (ii) Identify gaps in information exchange in the handover process and analyze implications for resident safety, (iii) Develop practical recommendations on how information communication technology (ICT) can improve the process and resident safety. Methods The study was undertaken at a large metropolitan facility in NSW with more than 300 residents and a staff including 55 registered nurses (RNs) and 146 assistants in nursing (AINs). A total of 3 focus groups, 12 interviews and 3 observation sessions were conducted over a period from July to October 2010. Process mapping was undertaken by translating the qualitative data via a five-category code book that was developed prior to the analysis. Results Three major sub-processes were identified and mapped. The three major stages are Handover process (HOP) I “Information gathering by RN”, HOP II “Preparation of preliminary handover sheet” and HOP III “Execution of handover meeting”. Inefficient processes were identified in relation to the handover including duplication of information, utilization of multiple communication modes and information sources, and lack of standardization. Conclusion By providing a robust process model of handover this study has made two critical contributions to research in aged care: (i) a means to identify important, possibly suboptimal practices; and (ii) valuable evidence to plan and improve ICT implementation in residential aged care. 
The mapping of this process enabled analysis of gaps in information flow and their potential impact on resident safety. In addition, it offers a basis for further studies into a process that, despite its importance for resident safety and continuity of care, remains under-researched.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization's (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by a two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings in participants, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common for both sexes (OR 1.20, 95% CI 0.82 – 1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001).
Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26 – 1.91 and OR 1.57, 95% CI 1.24 – 1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). More than half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, receiving patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1 – 0.25) were less likely to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids than blind people (VA < 0.1). Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001). Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44 – 7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38 – 9.76 and OR 5.37, 95% CI 2.44 – 7.78, respectively) and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67 – 9.63) as in persons with good VA.
The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
Abstract:
The adequacy of anesthesia has been studied since the introduction of balanced general anesthesia. Commercial monitors based on electroencephalographic (EEG) signal analysis have been available for monitoring the hypnotic component of anesthesia since the beginning of the 1990s. Monitors measuring the depth of anesthesia assess the cortical function of the brain, and have gained acceptance during surgical anesthesia with most of the anesthetic agents used. However, due to frequent artifacts, they are considered unsuitable for monitoring consciousness in intensive care patients. The assessment of analgesia is one of the cornerstones of general anesthesia. Prolonged surgical stress may lead to increased morbidity and delayed postoperative recovery. However, no validated monitoring method is currently available for evaluating analgesia during general anesthesia. Awareness during anesthesia is caused by an inadequate level of hypnosis. This rare but severe complication of general anesthesia may lead to marked emotional stress and possibly posttraumatic stress disorder. In the present series of studies, the incidence of awareness and recall during outpatient anesthesia was evaluated and compared with that in inpatient anesthesia. A total of 1500 outpatients and 2343 inpatients underwent a structured interview. Clear intraoperative recollections were rare, with an incidence of 0.07% in outpatients and 0.13% in inpatients. No significant differences emerged between outpatients and inpatients. However, significantly smaller doses of sevoflurane were administered to outpatients with awareness than to those without recollections (p<0.05). EEG artifacts in 16 brain-dead organ donors were evaluated during organ harvest surgery in a prospective, open, nonselective study. The source of the frontotemporal biosignals in brain-dead subjects was studied, and the resistance of bispectral index (BIS) and Entropy to the signal artifacts was compared.
The hypothesis was that in brain-dead subjects, most of the biosignals recorded from the forehead would consist of artifacts. The original EEG was recorded and State Entropy (SE), Response Entropy (RE), and BIS were calculated and monitored during solid organ harvest. SE differed from zero (inactive EEG) in 28%, RE in 29%, and BIS in 68% of the total recording time (p<0.0001 for all). The median values during the operation were SE 0.0, RE 0.0, and BIS 3.0. In four of the 16 organ donors, EEG was not inactive, and unphysiologically distributed, nonreactive rhythmic theta activity was present in the original EEG signal. After the results from subjects with persistent residual EEG activity were excluded, SE, RE, and BIS differed from zero in 17%, 18%, and 62% of the recorded time, respectively (p<0.0001 for all). Due to various artifacts, the highest readings in all indices were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, electromyography (EMG), 50-Hz artifact, handling of the donor, ballistocardiography, and electrocardiography. In a prospective, randomized study of 26 patients, the ability of Surgical Stress Index (SSI) to differentiate patients with two clinically different analgesic levels during shoulder surgery was evaluated. SSI values were lower in patients with an interscalene brachial plexus block than in patients without an additional plexus block. In all patients, anesthesia was maintained with desflurane, the concentration of which was targeted to maintain SE at 50. Increased blood pressure or heart rate (HR), movement, and coughing were considered signs of intraoperative nociception and treated with alfentanil. Photoplethysmographic waveforms were collected from the contralateral arm to the operated side, and SSI was calculated offline. Two minutes after skin incision, SSI was not increased in the brachial plexus block group and was lower (38 ± 13) than in the control group (58 ± 13, p<0.005). 
Among the controls, one minute prior to alfentanil administration, the SSI value was higher than during periods of adequate antinociception, 59 ± 11 vs. 39 ± 12 (p<0.01). The total cumulative need for alfentanil was higher in controls (2.7 ± 1.2 mg) than in the brachial plexus block group (1.6 ± 0.5 mg, p=0.008). Tetanic stimulation to the ulnar region of the hand increased SSI significantly only among patients with a brachial plexus block not covering the site of stimulation. The prognostic value of EEG-derived indices was evaluated and compared with Transcranial Doppler Ultrasonography (TCD), serum neuron-specific enolase (NSE) and S-100B after cardiac arrest. Thirty patients resuscitated from out-of-hospital cardiac arrest and treated with induced mild hypothermia for 24 h were included. The original EEG signal was recorded, and burst suppression ratio (BSR), RE, SE, and wavelet subband entropy (WSE) were calculated. Neurological outcome during the six-month period after arrest was assessed with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). Twenty patients had a CPC of 1-2, one patient had a CPC of 3, and nine patients died (CPC 5). BSR, RE, and SE differed between good (CPC 1-2) and poor (CPC 3-5) outcome groups (p=0.011, p=0.011, p=0.008, respectively) during the first 24 h after arrest. WSE was borderline higher in the good outcome group between 24 and 48 h after arrest (p=0.050). All patients with status epilepticus died, and their WSE values were lower (p=0.022). S-100B was lower in the good outcome group upon arrival at the intensive care unit (p=0.010). After hypothermia treatment, NSE and S-100B values were lower (p=0.002 for both) in the good outcome group. The pulsatile index was also lower in the good outcome group (p=0.004). In conclusion, the incidence of awareness in outpatient anesthesia did not differ from that in inpatient anesthesia.
Outpatients are not at increased risk for intraoperative awareness relative to inpatients undergoing general anesthesia. SE, RE, and BIS showed non-zero values that would normally indicate cortical neuronal function, but in these subjects they were mostly due to artifacts recorded after the clinical diagnosis of brain death. Entropy was more resistant to artifacts than BIS. During general anesthesia and surgery, SSI values were lower in patients with an interscalene brachial plexus block covering the sites of nociceptive stimuli. In detecting nociceptive stimuli, SSI performed better than HR, blood pressure, or RE. BSR, RE, and SE differed between the good and poor neurological outcome groups during the first 24 h after cardiac arrest, and they may be an aid in differentiating patients with good neurological outcomes from those with poor outcomes after out-of-hospital cardiac arrest.
Abstract:
The Vantaa Primary Care Depression Study (PC-VDS) is a naturalistic and prospective cohort study concerning primary care patients with depressive disorders. It forms a collaborative research project between the Department of Mental and Alcohol Research of the National Public Health Institute, and the Primary Health Care Organization of the City of Vantaa. The aim is to obtain a comprehensive view on clinically significant depression in primary care, and to compare depressive patients in primary care and in secondary level psychiatric care in terms of clinical characteristics. Consecutive patients (N=1111) in three primary care health centres were screened for depression with the PRIME-MD, and positive cases interviewed by telephone. Cases with current depressive symptoms were diagnosed face-to-face with the Structured Clinical Interview for DSM-IV Axis I Disorders (SCID-I/P). A cohort of 137 patients with unipolar depressive disorders, comprising all patients with at least two depressive symptoms and clinically significant distress or disability, was recruited. The Structured Clinical Interview for DSM-IV Axis II Disorders (SCID-II), medical records, rating scales, interview and a retrospective life-chart were used to obtain comprehensive cross-sectional and retrospective longitudinal information. For investigation of suicidal behaviour the Scale for Suicidal Ideation (SSI), patient records and the interview were used. The methodology was designed to be comparable to The Vantaa Depression Study (VDS) conducted in secondary level psychiatric care. Comparison of major depressive disorder (MDD) patients aged 20-59 from primary care in PC-VDS (N=79) was conducted with new psychiatric outpatients (N=223) and inpatients (N=46) in VDS. The PC-VDS cohort was prospectively followed up at 3, 6 and 18 months. Altogether 123 patients (90%) completed the follow-up. Duration of the index episode and the timing of relapses or recurrences were examined using a life-chart.
The retrospective investigation revealed current MDD in most (66%), and lifetime MDD in nearly all (90%) cases of clinically significant depressive syndromes. Two thirds of the “subsyndromal” cases had a history of major depressive episode (MDE), although they were currently either in partial remission or a potential prodromal phase. Recurrences and chronicity were common. The picture of depression was complicated by Axis I co-morbidity in 59%, Axis II in 52% and chronic Axis III disorders in 47%; only 12% had no co-morbidity. Within their lifetimes, one third (37%) had seriously considered suicide, and one sixth (17%) had attempted it. Suicidal behaviour clustered in patients with moderate to severe MDD, co-morbidity with personality disorders, and a history of treatment in psychiatric care. The majority had received treatment for depression, but suicidal ideation had mostly remained unrecognised. The comparison of patients with MDD in primary care to those in psychiatric care revealed that the majority of suicidal or psychotic patients were receiving psychiatric treatment, and the patients with the most severe symptoms and functional limitations were hospitalized. In other clinical aspects, patients with MDD in primary care were surprisingly similar to psychiatric outpatients. Mental health contacts earlier in the current MDE were common among primary care patients. The 18-month prospective investigation with a life-chart methodology verified the chronic and recurrent nature of depression in primary care. Only one-quarter of patients with MDD achieved and maintained full remission during the follow-up, while another quarter failed to remit at all. The remaining patients suffered either from residual symptoms or recurrences. While severity of depression was the strongest predictor of recovery, presence of co-morbid substance use disorders, chronic medical illness and cluster C personality disorders all contributed to an adverse outcome. 
In clinical decision making, besides severity of depression and co-morbidity, a history of previous MDD should not be ignored by primary care doctors: depression in primary care is usually severe enough to indicate at least follow-up and, for those with residual symptoms, evaluation of their current treatment. Moreover, recognition of suicidal behaviour among depressed patients should also be improved. In order to improve the outcome of depression in primary care, its often chronic and recurrent nature should be taken into account in organizing the care. According to the literature, chronic disease management programmes, with an enhanced role for case managers and greater integration of primary and specialist care, have been successful. Optimal ways of allocating resources between treatment providers as well as within health centres should be found.
Abstract:
Purpose In the oncology population, where malnutrition prevalence is high, more descriptive screening tools can provide further information to assist triaging and capture acute change. The Patient-Generated Subjective Global Assessment Short Form (PG-SGA SF) is a component of a nutritional assessment tool which could be used for descriptive nutrition screening. The purpose of this study was to conduct a secondary analysis of nutrition screening and assessment data to identify the most relevant information contributing to the PG-SGA SF to identify malnutrition risk with high sensitivity and specificity. Methods This was an observational, cross-sectional study of 300 consecutive adult patients receiving ambulatory anti-cancer treatment at an Australian tertiary hospital. Anthropometric and patient descriptive data were collected. The scored PG-SGA generated a score for nutritional risk (PG-SGA SF) and a global rating for nutrition status. Receiver operating characteristic (ROC) curves were generated to determine optimal cut-off scores for combinations of the PG-SGA SF boxes with the greatest sensitivity and specificity for predicting malnutrition according to the scored PG-SGA global rating. Results The additive scores of boxes 1–3 had the highest sensitivity (90.2%) while maintaining satisfactory specificity (67.5%) and demonstrating high diagnostic value (AUC = 0.85, 95% CI = 0.81–0.89). The inclusion of box 4 (PG-SGA SF) did not add further value as a screening tool (AUC = 0.85, 95% CI = 0.80–0.89; sensitivity 80.4%; specificity 72.3%). Conclusions The validity of the PG-SGA SF in chemotherapy outpatients was confirmed. The present study, however, demonstrated that the functional capacity question (box 4) does not improve the overall discriminatory value of the PG-SGA SF.
Abstract:
Background: The national resuscitation guidelines were published in Finland in 2002 and are based on the international guidelines published in 2000. The main goal of the national guidelines, available on the Internet free of charge, is early defibrillation by nurses in an institutional setting. Aim: To study possible changes in cardiopulmonary resuscitation (CPR) practices, especially concerning early defibrillation; nurses' and students' attitudes toward guideline implementation; and their ability to implement the guideline recommendations in clinical practice after publication of the Current Care (CC) guidelines for CPR in 2002. Material and methods: CPR practices in Finnish health centres, especially rapid defibrillation programmes, as well as the implementation of the CC guidelines for CPR, were studied in a mail survey of the chief physicians of every health centre in Finland (Study I). CPR skills using an automated external defibrillator (AED) were compared in a study comprising an Objective Structured Clinical Examination (OSCE) of the resuscitation skills of nurses and nursing students in a Finnish and a Swedish hospital and institution (Studies II, III). Attitudes towards CPR-D and CPR guidelines among medical and nursing students and secondary hospital nurses were studied in surveys (Studies IV, V). Nurses receiving different types of CPR training were compared in a randomized trial including an OSCE of the CPR skills of nurses in a Finnish hospital (Study VI). Results: Two years after publication, 40.7% of Finnish health centres used the national resuscitation guidelines. The proportion of health centres having at least one AED (66%) and a policy of nurse-performed defibrillation without the presence of a physician (42%) had increased. CPR-D training was estimated to be insufficient regarding both basic life support and advanced life support in the majority of health centres (Study I).
CPR-D skills of nurses and nursing students in two specific Swedish and Finnish hospitals and institutions (Studies II and III) were generally inadequate. The nurses performed better than the students, and the Swedish nurses surpassed the Finnish ones. Geriatric nurses receiving traditional CPR-D training performed better than those receiving an Internet-based course, but both groups failed to defibrillate within 60 s. Thus, performance was not satisfactory even two weeks after traditional training (Study VI). Unlike the medical students, the nursing students did not feel competent to perform the procedures recommended in the cardiopulmonary resuscitation guidelines, including defibrillation. However, the majority of nursing students felt confident about their ability to perform basic life support. The perceived ability to defibrillate correlated significantly with a positive attitude towards nurse-performed defibrillation and negatively with fear of damaging the patient's heart by defibrillation (Study IV). After the educational intervention, the nurses rated their level of CPR-D capability as more sufficient than before and felt more confident about their ability to perform defibrillation themselves. A negative attitude toward defibrillation correlated with perceived negative organisational attitudes toward the cardiopulmonary resuscitation guidelines. After CPR-D education in the hospital, the majority (64%) of nurses hesitated to perform defibrillation because of anxiety, and 27% hesitated because of fear of injuring the patient. Negative personal attitudes towards the guidelines also increased markedly after education (Study V). Conclusions: Although a significant change had occurred in resuscitation practices in primary health care after publication of the national cardiopulmonary resuscitation guidelines, the participants' CPR-D skills were not adequate according to the CPR guidelines.
The current way of teaching is unlikely to result in participants being able to perform adequate and rapid CPR-D. More information and more frequent training are needed to diminish anxiety concerning defibrillation. Negative beliefs and attitudes toward defibrillation affect nursing students' and nurses' attitudes toward cardiopulmonary resuscitation guidelines. CPR-D education increased the participants' self-confidence in their CPR-D skills, but it did not reduce their anxiety. AEDs have replaced manual defibrillators in most institutions, but despite the modern devices the anxiety persists. Basic education does not provide nursing students with adequate CPR-D skills; thus, frequent training in the workplace is of vital importance. A multi-professional training programme supported by the administration might provide better CPR-D skills. Distance learning alone cannot substitute for traditional small-group learning; tutored hands-on training is needed to learn practical CPR-D skills. Standardized testing would probably help control the quality of learning. Training in group-working skills might improve CPR performance.