942 results for woman centered care
Abstract:
Background Poor clinical handover has been associated with inaccurate clinical assessment and diagnosis, delays in diagnosis and test ordering, medication errors and decreased patient satisfaction in the acute care setting. Research on the handover process in the residential aged care sector is very limited. Purpose The aims of this study were to: (i) develop an in-depth understanding of the handover process in aged care by mapping all the key activities and their information dynamics; (ii) identify gaps in information exchange in the handover process and analyze their implications for resident safety; and (iii) develop practical recommendations on how information and communication technology (ICT) can improve the process and resident safety. Methods The study was undertaken at a large metropolitan facility in NSW with more than 300 residents and a staff of 55 registered nurses (RNs) and 146 assistants in nursing (AINs). A total of 3 focus groups, 12 interviews and 3 observation sessions were conducted from July to October 2010. Process mapping was undertaken by translating the qualitative data via a five-category code book developed prior to the analysis. Results Three major sub-processes were identified and mapped: handover process (HOP) I “Information gathering by RN”, HOP II “Preparation of preliminary handover sheet” and HOP III “Execution of handover meeting”. Inefficiencies were identified in the handover process, including duplication of information, use of multiple communication modes and information sources, and lack of standardization. Conclusion By providing a robust process model of handover, this study makes two critical contributions to research in aged care: (i) a means to identify important, possibly suboptimal practices; and (ii) valuable evidence to plan and improve ICT implementation in residential aged care.
The mapping of this process enabled analysis of gaps in information flow and their potential impact on resident safety. In addition, it offers a basis for further studies into a process that, despite its importance for securing resident safety and continuity of care, remains under-researched.
Abstract:
Background Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution underlying RACF medication ordering and delivery, in order to identify the gaps in medication-related information exchange that lead to medication errors in RACFs. Methods The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic fieldwork over a period of five months (May–September 2011). Triangulated analysis of the data focused primarily on examining the transformation and exchange of information between different media across the process. Results The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions that potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs.
Conclusions Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding the dynamics of the cognitive process can inform the design of interventions to manage errors and improve residents’ safety.
Abstract:
This thesis establishes a framework to guide the development of a simulated, multimedia-enriched, immersive learning environment (SMILE). The framework models the essential media components used to describe a healthcare scenario (in a dementia context), demonstrates the interactions between the components, and enables scalability of simulation implementation. The thesis outcomes also include a simulation system developed in accordance with the guidance framework and a preliminary evaluation through a user study involving ten nursing students and practitioners. The results show that the proposed framework is feasible and effective for designing a simulation system for dementia healthcare training.
Abstract:
The unknown future is a challenge to educators preparing young people for life after school. While history can be said to repeat itself, the reality is that each generation faces new challenges and threats. The challenge for contemporary schooling, therefore, is to prepare students to live in a fast-paced, complex world where threats such as terrorism, cyberbullying and depleted resources are juggled with high-stakes testing and curriculum accountability. This presentation draws on the notion of a future of supercomplexity while critically examining current pastoral care delivery in schools, in order to develop a new model of practice for preparing students for an unknown future.
Abstract:
Adult day care centers provide a means whereby frail or disabled older people can remain living at home, particularly when their family care-givers engage in waged work. In Taiwan, adult day care services appear to meet the cultural needs of both older people and their families, for whom filial care is vital. Little research attention has been paid to the use of day care services in Taiwan, the uptake rate of which is low. This grounded theory study explored the ways in which older people and family care-givers construct meanings around the use of day care services in Taiwan. The research methodology drew on the theoretical tenets of symbolic interactionism, and methods were informed by grounded theory. In-depth interviews with 30 participants were undertaken. Reconstructing identity in a shifting world is the core category of the study and reflects a process of reframing whereby older people came to new definitions of social responsibility and independence within the context of the day care center. The implication of the findings is that the older people, rather than seeking to be relieved of social responsibilities, worked very hard to frame and reframe a social role. Rather than letting the institutions undermine or disrupt their identity, the older people worked to actively negotiate and redefine the meaning of self. Thus, although reluctant to use the services at the outset, they found a way to manage their lives independently. Social roles and responsibilities as older parents were retained. This study explored the process of meaning construction around day care use and the ways in which this process entailed a reconstruction of the participants’ identities. The evidence from this study underlines the importance of recognizing and acknowledging subjectively conceived identities as work that older people undertake, when in care, to render their lives meaningful.
Abstract:
The study deals with the breakup behavior of swirling liquid sheets discharging from gas-centered swirl coaxial atomizers, with attention focused on understanding the role of the central gas jet in the liquid sheet breakup. Cold-flow experiments on liquid sheet breakup were carried out with custom-fabricated gas-centered swirl coaxial atomizers using water and air as experimental fluids. Photographic techniques were employed to capture the flow behavior of the liquid sheets at different flow conditions. Quantitative variations in the breakup length of the liquid sheet and the spray width were obtained from measurements deduced from the images of the liquid sheets. The sheet breakup process is significantly influenced by the central air jet. It is observed that low-inertia liquid sheets are more vulnerable to the presence of the central air jet and develop shorter breakup lengths at smaller values of the air jet Reynolds number Re_g. High-inertia liquid sheets ignore the presence of the central air jet at smaller values of Re_g and eventually develop shorter breakup lengths at higher values of Re_g. The experimental evidence suggests that the central air jet causes corrugations on the liquid sheet surface, which may promote the production of thick liquid ligaments from the sheet surface. The level of surface corrugation on the liquid sheet increases with increasing Re_g. Qualitative analysis of the experimental observations reveals that the air entrainment process established between the inner surface of the liquid sheet and the central air jet is the primary trigger for sheet breakup.
Abstract:
Much of what we know regarding the long-term course and outcome of major depressive disorder (MDD) is based on studies of mostly inpatient tertiary-level cohorts and samples predating the era of current antidepressants and the use of maintenance therapies. In addition, there is a lack of studies investigating the comprehensive significance of comorbid axis I and II disorders for the outcome of MDD. The present study forms a part of the Vantaa Depression Study (VDS), a regionally representative, prospective and naturalistic cohort study of 269 secondary-level care psychiatric out- and inpatients (aged 20-59) with a new episode of DSM-IV MDD, followed up for up to five years (n=182) with a life chart and semistructured interviews. The aim was to investigate the long-term outcome of MDD and risk factors for poor recovery, recurrences, suicide attempts and diagnostic switch to bipolar disorder, and the association of a family history of different psychiatric disorders with the outcome. The effects of comorbid disorders, together with various other predictors from different domains, on the outcome were comprehensively investigated. According to this study, the long-term outcome of MDD appears to be more variable when investigated among modern, community-treated, secondary-care outpatients than in previous, mostly inpatient, studies. MDD was also highly recurrent in these settings, but the recurrent episodes seemed shorter, and the outcome was unlikely to be uniformly chronic. Higher severity of MDD significantly predicted the number of recurrences and longer time spent ill. In addition, longer episode duration, comorbid dysthymic disorder, cluster C personality disorders and social phobia predicted a worse outcome. The incidence rate of suicide attempts varied robustly depending on the level of depression, being 21-fold during major depressive episodes (MDEs) and 4-fold during partial remission compared with periods of full remission.
Although a history of previous attempts and poor social support also indicated risk, time spent depressed was the central factor determining overall long-term risk. Diagnostic switch to bipolar disorder occurred mainly to type II; switches to type I occurred earlier, while switches to type II accumulated more gradually over time. Higher severity of MDD, comorbid social phobia, obsessive-compulsive disorder, and cluster B personality disorder features predicted the diagnostic switch. The majority of patients were also likely to have positive family histories not only of mood disorders but also of other mental disorders. A positive family history of severe mental disorders was likely to be clinically associated with a significantly more adverse outcome.
Abstract:
This study is part of an ongoing collaborative bipolar research project, the Jorvi Bipolar Study (JoBS). The JoBS is run by the Department of Mental Health and Alcohol Research of the National Public Health Institute, Helsinki, and the Department of Psychiatry, Jorvi Hospital, Helsinki University Central Hospital (HUCH), Espoo, Finland. It is a prospective, naturalistic cohort study of secondary-level care psychiatric in- and outpatients with a new episode of bipolar disorder (BD). The second report also included 269 major depressive disorder (MDD) patients from the Vantaa Depression Study (VDS). The VDS was carried out in collaboration with the Department of Psychiatry of the Peijas Medical Care District. Using the Mood Disorder Questionnaire (MDQ), all in- and outpatients at the Department of Psychiatry at Jorvi Hospital who currently had a possible new phase of DSM-IV BD were sought. Altogether, 1630 psychiatric patients were screened, and 490 were interviewed using a semistructured interview (SCID-I/P). The patients included in the cohort (n=191) had a current phase of BD at intake. The patients were evaluated at intake and at 6- and 18-month interviews. Based on this study, BD is poorly recognized even in psychiatric settings. Of the BD patients with acute worsening of illness, 39% had never been correctly diagnosed. The classic presentations of BD, with hospitalizations, manic episodes, and psychotic symptoms, led clinicians to the correct diagnosis of BD I in psychiatric care. Time elapsed in psychiatric care follow-up, but none of the clinical features, seemed to explain correct diagnosis of BD II, suggesting reliance on the cross-sectional presentation of illness. Even though BD II was clearly less often correctly diagnosed than BD I, few other differences between the two types of BD were detected.
BD I and II patients appeared to differ little in terms of clinical picture or comorbidity, and the prevalence of psychiatric comorbidity was strongly related to the current illness phase in both types. At the same time, the difference in outcome was clear. BD II patients spent about 40% more time depressed than BD I patients. Patterns of psychiatric comorbidity of BD and MDD differed somewhat qualitatively. Overall, MDD patients were likely to have more anxiety disorders and cluster A personality disorders, and bipolar patients to have more cluster B personality disorders. The adverse consequences of missing or delayed diagnosis are potentially serious. Thus, these findings strongly support the value of screening for BD in psychiatric settings, especially among the major depressive patients. Nevertheless, the diagnosis must be based on a clinical interview and follow-up of mood. Comorbidity, present in 59% of bipolar patients in a current phase, needs concomitant evaluation, follow-up, and treatment. To improve outcome in BD, treatment of bipolar depression is a major challenge for clinicians.
Abstract:
The purpose of this study was to estimate the prevalence and distribution of reduced visual acuity, major chronic eye diseases, and the subsequent need for eye care services in the Finnish adult population comprising persons aged 30 years and older. In addition, we analyzed the effect of decreased vision on functioning and need for assistance using the World Health Organization’s (WHO) International Classification of Functioning, Disability, and Health (ICF) as a framework. The study was based on the Health 2000 health examination survey, a nationally representative, population-based comprehensive survey of health and functional capacity carried out in 2000 to 2001 in Finland. The study sample representing the Finnish population aged 30 years and older was drawn by two-stage stratified cluster sampling. The Health 2000 survey included a home interview and a comprehensive health examination conducted at a nearby screening center. If the invited participants did not attend, an abridged examination was conducted at home or in an institution. Based on our findings, the great majority (96%) of Finnish adults had at least moderate visual acuity (VA ≥ 0.5) with current refraction correction, if any. However, in the age group 75–84 years the prevalence decreased to 81%, and after 85 years to 46%. In the population aged 30 years and older, the prevalence of habitual visual impairment (VA ≤ 0.25) was 1.6%, and 0.5% were blind (VA < 0.1). The prevalence of visual impairment increased significantly with age (p < 0.001), and after the age of 65 years the increase was sharp. Visual impairment was equally common in both sexes (OR 1.20, 95% CI 0.82–1.74). Based on self-reported and/or register-based data, the estimated total prevalences of cataract, glaucoma, age-related maculopathy (ARM), and diabetic retinopathy (DR) in the study population were 10%, 5%, 4%, and 1%, respectively. The prevalence of all of these chronic eye diseases increased with age (p < 0.001).
Cataract and glaucoma were more common in women than in men (OR 1.55, 95% CI 1.26–1.91 and OR 1.57, 95% CI 1.24–1.98, respectively). The most prevalent eye diseases in people with visual impairment (VA ≤ 0.25) were ARM (37%), unoperated cataract (27%), glaucoma (22%), and DR (7%). Over half (58%) of visually impaired people had had a vision examination during the past five years, and 79% had received some vision rehabilitation services, mainly in the form of spectacles (70%). Only one-third (31%) had received formal low vision rehabilitation (i.e., fitting of low vision aids, patient education, training for orientation and mobility, training for activities of daily living (ADL), or consultation with a social worker). People with low vision (VA 0.1–0.25) were less likely than blind people (VA < 0.1) to have received formal low vision rehabilitation, magnifying glasses, or other low vision aids. Furthermore, low cognitive capacity and living in an institution were associated with limited use of vision rehabilitation services. Of the visually impaired living in the community, 71% reported a need for assistance and 24% had an unmet need for assistance in everyday activities. The prevalence of limitations in ADL, instrumental activities of daily living (IADL), and mobility increased with decreasing VA (p < 0.001). Visually impaired persons (VA ≤ 0.25) were four times more likely to have ADL disabilities than those with good VA (VA ≥ 0.8) after adjustment for sociodemographic and behavioral factors and chronic conditions (OR 4.36, 95% CI 2.44–7.78). Limitations in IADL and measured mobility were five times as likely (OR 4.82, 95% CI 2.38–9.76 and OR 5.37, 95% CI 2.44–7.78, respectively), and self-reported mobility limitations were three times as likely (OR 3.07, 95% CI 1.67–9.63), as in persons with good VA.
The high prevalence of age-related eye diseases and subsequent visual impairment in the fastest growing segment of the population will result in a substantial increase in the demand for eye care services in the future. Many of the visually impaired, especially older persons with decreased cognitive capacity or living in an institution, have not had a recent vision examination and lack adequate low vision rehabilitation. This highlights the need for regular evaluation of visual function in the elderly and an active dissemination of information about rehabilitation services. Decreased VA is strongly associated with functional limitations, and even a slight decrease in VA was found to be associated with limited functioning. Thus, continuous efforts are needed to identify and treat eye diseases to maintain patients’ quality of life and to alleviate the social and economic burden of serious eye diseases.
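The odds ratios with 95% confidence intervals reported in the abstract above (e.g., OR 4.36, 95% CI 2.44–7.78) follow the standard Wald construction on the log-odds scale. As a minimal illustrative sketch (the function name and the example counts below are assumptions for demonstration, not data from the survey):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Wald odds ratio and 95% CI from a 2x2 table:
                  outcome+   outcome-
    exposed          a          b
    unexposed        c          d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Illustrative counts only: 20/100 impaired vs. 10/100 well-sighted
# with an ADL disability.
print(odds_ratio_ci(20, 80, 10, 90))
```

In practice such estimates are adjusted for covariates via logistic regression, as the abstract's adjustment for sociodemographic factors and chronic conditions implies; the unadjusted 2×2 calculation above only illustrates the measure itself.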
Abstract:
The artists at Studio REV-, along with their allies in the broader non-profit sector, address domestic workers’ rights in the United States. As a social practice art project, NannyVan works to improve how information about domestic rights is disseminated to these workers, whether nannies, elder caregivers or others. As part of a larger project named CareForce, the NannyVan project shows an ethics of care by using design traces as tactics and transversal methods as strategies.
Abstract:
The adequacy of anesthesia has been studied since the introduction of balanced general anesthesia. Commercial monitors based on electroencephalographic (EEG) signal analysis have been available for monitoring the hypnotic component of anesthesia since the beginning of the 1990s. Monitors measuring the depth of anesthesia assess the cortical function of the brain and have gained acceptance during surgical anesthesia with most of the anesthetic agents used. However, due to frequent artifacts, they are considered unsuitable for monitoring consciousness in intensive care patients. The assessment of analgesia is one of the cornerstones of general anesthesia. Prolonged surgical stress may lead to increased morbidity and delayed postoperative recovery. However, no validated monitoring method is currently available for evaluating analgesia during general anesthesia. Awareness during anesthesia is caused by an inadequate level of hypnosis. This rare but severe complication of general anesthesia may lead to marked emotional stress and possibly posttraumatic stress disorder. In the present series of studies, the incidence of awareness and recall during outpatient anesthesia was evaluated and compared with that in inpatient anesthesia. A total of 1500 outpatients and 2343 inpatients underwent a structured interview. Clear intraoperative recollections were rare, with an incidence of 0.07% in outpatients and 0.13% in inpatients. No significant differences emerged between outpatients and inpatients. However, significantly smaller doses of sevoflurane were administered to outpatients with awareness than to those without recollections (p<0.05). EEG artifacts in 16 brain-dead organ donors were evaluated during organ harvest surgery in a prospective, open, nonselective study. The source of the frontotemporal biosignals in brain-dead subjects was studied, and the resistance of the bispectral index (BIS) and Entropy to signal artifacts was compared.
The hypothesis was that in brain-dead subjects, most of the biosignals recorded from the forehead would consist of artifacts. The original EEG was recorded and State Entropy (SE), Response Entropy (RE), and BIS were calculated and monitored during solid organ harvest. SE differed from zero (inactive EEG) in 28%, RE in 29%, and BIS in 68% of the total recording time (p<0.0001 for all). The median values during the operation were SE 0.0, RE 0.0, and BIS 3.0. In four of the 16 organ donors, EEG was not inactive, and unphysiologically distributed, nonreactive rhythmic theta activity was present in the original EEG signal. After the results from subjects with persistent residual EEG activity were excluded, SE, RE, and BIS differed from zero in 17%, 18%, and 62% of the recorded time, respectively (p<0.0001 for all). Due to various artifacts, the highest readings in all indices were recorded without neuromuscular blockade. The main sources of artifacts were electrocauterization, electromyography (EMG), 50-Hz artifact, handling of the donor, ballistocardiography, and electrocardiography. In a prospective, randomized study of 26 patients, the ability of Surgical Stress Index (SSI) to differentiate patients with two clinically different analgesic levels during shoulder surgery was evaluated. SSI values were lower in patients with an interscalene brachial plexus block than in patients without an additional plexus block. In all patients, anesthesia was maintained with desflurane, the concentration of which was targeted to maintain SE at 50. Increased blood pressure or heart rate (HR), movement, and coughing were considered signs of intraoperative nociception and treated with alfentanil. Photoplethysmographic waveforms were collected from the contralateral arm to the operated side, and SSI was calculated offline. Two minutes after skin incision, SSI was not increased in the brachial plexus block group and was lower (38 ± 13) than in the control group (58 ± 13, p<0.005). 
Among the controls, one minute prior to alfentanil administration, the SSI value was higher than during periods of adequate antinociception, 59 ± 11 vs. 39 ± 12 (p<0.01). The total cumulative need for alfentanil was higher in controls (2.7 ± 1.2 mg) than in the brachial plexus block group (1.6 ± 0.5 mg, p=0.008). Tetanic stimulation to the ulnar region of the hand increased SSI significantly only among patients with a brachial plexus block not covering the site of stimulation. The prognostic value of EEG-derived indices was evaluated and compared with Transcranial Doppler Ultrasonography (TCD), serum neuron-specific enolase (NSE) and S-100B after cardiac arrest. Thirty patients resuscitated from out-of-hospital cardiac arrest and treated with induced mild hypothermia for 24 h were included. The original EEG signal was recorded, and the burst suppression ratio (BSR), RE, SE, and wavelet subband entropy (WSE) were calculated. Neurological outcome during the six-month period after arrest was assessed with the Glasgow-Pittsburgh Cerebral Performance Categories (CPC). Twenty patients had a CPC of 1-2, one patient had a CPC of 3, and nine patients died (CPC 5). BSR, RE, and SE differed between the good (CPC 1-2) and poor (CPC 3-5) outcome groups (p=0.011, p=0.011, and p=0.008, respectively) during the first 24 h after arrest. WSE was borderline higher in the good outcome group between 24 and 48 h after arrest (p=0.050). All patients with status epilepticus died, and their WSE values were lower (p=0.022). S-100B was lower in the good outcome group upon arrival at the intensive care unit (p=0.010). After hypothermia treatment, NSE and S-100B values were lower (p=0.002 for both) in the good outcome group. The pulsatile index was also lower in the good outcome group (p=0.004). In conclusion, the incidence of awareness in outpatient anesthesia did not differ from that in inpatient anesthesia.
Outpatients are not at increased risk for intraoperative awareness relative to inpatients undergoing general anesthesia. SE, RE, and BIS showed non-zero values that would normally indicate cortical neuronal function, but in these subjects they were mostly due to artifacts occurring after the clinical diagnosis of brain death. Entropy was more resistant to artifacts than BIS. During general anesthesia and surgery, SSI values were lower in patients with an interscalene brachial plexus block covering the sites of nociceptive stimuli. In detecting nociceptive stimuli, SSI performed better than HR, blood pressure, or RE. BSR, RE, and SE differed between the good and poor neurological outcome groups during the first 24 h after cardiac arrest, and they may aid in differentiating patients with good neurological outcomes from those with poor outcomes after out-of-hospital cardiac arrest.
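The burst suppression ratio (BSR) used in the cardiac arrest study above is, at its core, the fraction of an EEG epoch spent in suppression. A rough per-sample sketch follows; note that clinical monitors apply minimum-duration and sliding-window criteria, and the threshold value and function name here are illustrative assumptions, not the study's actual algorithm:

```python
def burst_suppression_ratio(eeg_uv, threshold_uv=5.0):
    """Fraction of samples whose absolute amplitude (microvolts) falls
    below the suppression threshold. A per-sample simplification of the
    windowed, duration-based definition used by clinical monitors."""
    suppressed = sum(1 for v in eeg_uv if abs(v) < threshold_uv)
    return suppressed / len(eeg_uv)

# A mostly suppressed trace with two burst samples (illustrative values):
print(burst_suppression_ratio([0.5, -1.0, 40.0, 2.0, -55.0, 1.5]))
```

A BSR near 1.0 indicates a largely isoelectric recording, which is consistent with the low median SE/RE/BIS values reported for the brain-dead donors.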