801 results for exclusion criteria
Abstract:
In the last few years, reflection on knowledge building in the field of museology has increased considerably, allowing us to cast many gazes over our actions and, consequently, enabling a wider debate about our field of professional action, reducing our exclusion from the academic environment, where museologists have largely reproduced knowledge produced in other areas. In the present work, we approach some issues related to the museological process, taking as a reference several studies on the subject which, given the time allotted to us in this round table, cannot be presented here for discussion; we have also dedicated a chapter to this approach in our publication titled “Museological Process and Education: building a didactic-community museum”. We have therefore opted instead to reflect on exclusion, looking into the museum institution and into the application of museological processes; in other words, we carry out a self-criticism, in which I include myself, effecting an analysis to be debated here, bearing in mind that museums and museological practices are related to other global social practices and are therefore the result of human relations at each historical moment. Finally, based on our lived experience, we continue our process of reflection, highlighting the importance of knowledge production for the field of museology and the relevance of the theory-practice relation, and pointing out some aspects that we believe may contribute to the construction of a museological action that can serve as a historical elaboration in securing a space for self-determination.
Abstract:
The clinical validity of at-risk criteria of psychosis has been questioned based on epidemiological studies that have reported much higher prevalence and annual incidence rates of psychotic-like experiences (PLEs, as assessed by either self-rating questionnaires or layperson interviews) in the general population than of the clinical phenotype of psychotic disorders (van Os et al., 2009). Thus, it is unclear whether “current at-risk criteria reflect behaviors so common among adolescents and young adults that a valid distinction between ill and non-ill persons is difficult” (Carpenter, 2009). We therefore assessed the 3-month prevalence of at-risk criteria by means of telephone interviews in a randomly drawn general population sample from the at-risk age segment (age 16–35 years) in the Canton of Bern, Switzerland. Eighty-five of 102 subjects had valid phone numbers; 21 of these subjects refused (although 6 of them signaled willingness to participate at a later time) and 4 could not be contacted. Sixty subjects (71% of the enrollment fraction) participated. Two participants met exclusion criteria (one for being psychotic, one for lack of language skills). Twenty-two at-risk symptoms were assessed for their prevalence and severity within the 3 months prior to the interview by trained clinical raters using (i) the Structured Interview for Prodromal Syndromes (SIPS; Miller et al., 2002) for the evaluation of 5 attenuated psychotic and 3 brief limited intermittent psychotic symptoms (APS, BLIPS) as well as the state-trait criterion of the ultra-high-risk (UHR) criteria, and (ii) the Schizophrenia Proneness Instrument, Adult version (SPI-A; Schultze-Lutter et al., 2007) for the evaluation of the 14 basic symptoms included in COPER and COGDIS (Schultze-Lutter et al., 2008). Further, psychiatric axis I diagnoses were assessed by means of the Mini-International Neuropsychiatric Interview, M.I.N.I. (Sheehan et al., 1998), and psychosocial functioning by the Social and Occupational Functioning Assessment Scale (SOFAS; APA, 1994). All interviewees felt ‘rather’ or ‘very’ comfortable with the interview. Of the 58 included subjects, only 1 (2%) fulfilled APS criteria, reporting the attenuated, non-delusional idea of his mind being literally read by others at a frequency of 2–3 times a week that had first occurred 6 weeks earlier. BLIPS, COPER, COGDIS or state-trait UHR criteria were not reported. Yet, twelve subjects (21%) described sub-threshold at-risk symptoms: 7 (12%) reported APS-relevant symptoms but did not meet the time/frequency criteria of APS, and 9 (16%) reported COPER- and/or COGDIS-relevant basic symptoms but at an insufficient frequency or as a trait lacking an increase in severity; 4 of these 12 subjects reported both sub-threshold APS and sub-threshold basic symptoms. Table 1 displays type and frequency of the sub-threshold at-risk symptoms.
Abstract:
BACKGROUND AND PURPOSE Eligibility criteria are a key factor for the feasibility and validity of clinical trials. We aimed to develop an online tool to assess the potential effect of inclusion and exclusion criteria on the proportion of patients eligible for an acute stroke trial. METHODS We identified relevant inclusion and exclusion criteria of acute stroke trials. Based on these criteria, and using a cohort of 1537 consecutive patients with acute ischemic stroke from 3 stroke centers, we developed a web portal, the feasibility platform for stroke studies (FePASS), to estimate proportions of eligible patients for acute stroke trials. We applied the FePASS resource to calculate the proportion of patients eligible for 4 recent stroke studies. RESULTS Sixty-one eligibility criteria were derived from 30 trials on acute ischemic stroke. FePASS, publicly available at http://fepass.uni-muenster.de, displays the proportion of eligible patients (in percent) and shows how varying the values of relevant eligibility criteria, for example, age, symptom onset time, National Institutes of Health Stroke Scale, and prestroke modified Rankin Scale, affects this proportion. The proportion of eligible patients for 4 recent stroke studies ranged from 2.1% to 11.3%. Slight variations of the inclusion criteria could substantially increase the proportion of eligible patients. CONCLUSIONS FePASS is an open-access online resource to assess the effect of inclusion and exclusion criteria on the proportion of eligible patients for a stroke trial. FePASS can help to design stroke studies, optimize eligibility criteria, and estimate the potential recruitment rate.
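The core computation behind such a tool can be illustrated with a short, hedged sketch: given a cohort table and adjustable criteria, report the percentage of patients who satisfy all of them. This is not the FePASS implementation; the column names, default thresholds, and pandas usage below are illustrative assumptions only.

# Minimal sketch (not the actual FePASS code): estimate the percentage of a
# stroke cohort that satisfies a set of adjustable eligibility criteria.
# Column names and default thresholds are illustrative assumptions.
import pandas as pd

def eligible_proportion(cohort: pd.DataFrame,
                        max_age: int = 80,
                        max_onset_hours: float = 4.5,
                        nihss_range: tuple = (4, 25),
                        max_prestroke_mrs: int = 2) -> float:
    """Percentage of patients meeting all inclusion/exclusion criteria."""
    mask = (
        (cohort["age"] <= max_age)
        & (cohort["onset_to_door_hours"] <= max_onset_hours)
        & cohort["nihss"].between(*nihss_range)
        & (cohort["prestroke_mrs"] <= max_prestroke_mrs)
    )
    return 100.0 * mask.mean()

# Relaxing a single criterion (here the age limit) shows how eligibility changes:
# cohort = pd.read_csv("stroke_cohort.csv")
# print(eligible_proportion(cohort, max_age=80))
# print(eligible_proportion(cohort, max_age=90))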
Abstract:
Objective: To determine whether differences existed in lower-extremity joint biomechanics during self-selected walking cadence (SW) and fast walking cadence (FW) in overweight and normal-weight children.---------- Design: Survey.---------- Setting: Institutional gait study center.---------- Participants: Participants (N=20; mean age ± SD, 10.4±1.6y) from referred and volunteer samples were classified based on body mass index percentiles and stratified by age and sex. Exclusion criteria were a history of diabetes, a neuromuscular disorder, or recent lower-extremity injury.---------- Main Outcome Measures: Sagittal, frontal, and transverse plane angular displacements (degrees) and peak moments (newton meters) at the hip, knee, and ankle joints.---------- Results: The level of significance was set at P less than .008. Compared with normal-weight children, overweight children had greater absolute peak joint moments at the hip (flexor, extensor, abductor, external rotator), the knee (flexor, extensor, abductor, adductor, internal rotator), and the ankle (plantarflexor, inverter, external/internal rotators). After including body weight as a covariate, overweight children had greater peak ankle dorsiflexor moments than normal-weight children. No kinematic differences existed between groups. Greater peak hip extensor moments and lower peak ankle inverter moments occurred during FW than SW. There was greater angular displacement during hip flexion, as well as less angular displacement at the hip (extension, abduction), knee (flexion, extension), and ankle (plantarflexion, inversion), during FW than SW.---------- Conclusions: Overweight children experienced increased joint moments, which can have long-term orthopedic implications and suggest a need for more non-weight-bearing activities within exercise prescription. The percent increase in joint moments from SW to FW was not different for overweight and normal-weight children. These findings can be used in developing an exercise prescription that must involve weight-bearing activity.
Abstract:
Objective: With growing recognition of the role of inflammation in the development of chronic and acute disease, fish oil is increasingly used as a therapeutic agent, but the nature of the intervention may pose barriers to adherence in clinical populations. Our objective was to investigate the feasibility of using a fish oil supplement in hemodialysis patients.---------- Design: This was a nonrandomized intervention study.---------- Setting: Eligible patients were recruited at the Hemodialysis Unit of Wesley Hospital, Brisbane, Queensland, Australia.---------- Patients: The sample included 28 maintenance hemodialysis patients out of 43 eligible patients in the unit. Exclusion criteria included patients regularly taking a fish oil supplement at baseline, receiving hemodialysis for less than 3 months, or being unable to give informed consent.---------- Intervention: Eicosapentaenoic acid (EPA) was administered at 2000 mg/day (4 capsules) for 12 weeks. Adherence was measured at baseline and weekly throughout the study according to changes in plasma EPA, and was further measured subjectively by self-report.---------- Results: Twenty patients (74%) adhered to the prescription based on changes in plasma EPA, whereas an additional two patients self-reported good adherence. There was a positive relationship between fish oil intake and change in plasma EPA. Most patients did not report problems with taking the fish oil. Using the baseline data, it was not possible to characterize adherent patients.---------- Conclusions: Despite potential barriers, including the need to take a large number of prescribed medications already, 74% of hemodialysis patients adhered to the intervention. This study demonstrated the feasibility of using fish oil in a clinical population.
Abstract:
Objective: Diarrhoea in the enterally tube fed (ETF) intensive care unit (ICU) patient is a multifactorial problem. Diarrhoeal aetiologies in this patient cohort remain debatable; however, the consequences of diarrhoea have been well established and include electrolyte imbalance, dehydration, bacterial translocation, perianal wound contamination and sleep deprivation. This study examined the incidence of diarrhoea and explored factors contributing to the development of diarrhoea in the ETF, critically ill, adult patient. ---------- Method: After institutional ethical review and approval, a single-centre medical chart audit was undertaken to examine the incidence of diarrhoea in ETF, critically ill patients. Retrospective, non-probability sequential sampling was used to select all emergency-admission adult ICU patients who met the inclusion/exclusion criteria. ---------- Results: Fifty patients were audited. Faecal frequency, consistency and quantity were considered important criteria in defining ETF diarrhoea. The incidence of diarrhoea was 78%. Total patient diarrhoea days (r = 0.422; p = 0.02) and total diarrhoea frequency (r = 0.313; p = 0.027) increased when the patient was ETF for longer periods of time. Increased severity of illness, peripheral oxygen saturation (SpO2), glucose control, albumin and white cell count were found to be statistically significant factors for the development of diarrhoea. ---------- Conclusion: Diarrhoea in ETF critically ill patients is multifactorial. The early identification of diarrhoea risk factors and the development of a diarrhoea risk management algorithm are recommended.
Abstract:
Objective: The aim of this literature review is to identify the role of probiotics in the management of enteral tube feeding (ETF) diarrhoea in critically ill patients.---------- Background: Diarrhoea is a common gastrointestinal problem seen in ETF patients. The incidence of diarrhoea in tube-fed patients varies from 2% to 68% across all patients. Despite extensive investigation, the pathogenesis of ETF diarrhoea remains unclear. Evidence to support probiotics to manage ETF diarrhoea in critically ill patients remains sparse.---------- Method: Literature on ETF diarrhoea and probiotics in critically ill, adult patients was reviewed from 1980 to 2010. The Cochrane Library, Pubmed, Science Direct, Medline and the Cumulative Index of Nursing and Allied Health Literature (CINAHL) electronic databases were searched using specific inclusion/exclusion criteria. Key search terms used were: enteral nutrition, diarrhoea, critical illness, probiotics, probiotic species and randomised controlled trial (RCT).---------- Results: Four RCT papers were identified: two reporting full studies, one reporting a pilot RCT and one conference abstract reporting a pilot RCT. A trend towards a reduction in diarrhoea incidence was observed in the probiotic groups. However, mortality associated with probiotic use in some severely and critically ill patients must caution the clinician against its use.---------- Conclusion: Evidence to support probiotic use in the management of ETF diarrhoea in critically ill patients remains unclear. This paper argues that probiotics should not be administered to critically ill patients until further research has been conducted to examine the causal relationship between probiotics and mortality, irrespective of the patient's disease state or the projected prophylactic benefit of probiotic administration.
Abstract:
Objective: Adherence to continuous positive airway pressure (CPAP) therapy for obstructive sleep apnoea (OSA) is poor. We assessed the effectiveness of a motivational interviewing intervention (MINT), in addition to best practice standard care, in improving acceptance of and adherence to CPAP therapy in people with a new diagnosis of OSA. Method: 106 Australian adults (69% male) with a new diagnosis of obstructive sleep apnoea and a clinical recommendation for CPAP treatment were recruited from a tertiary sleep disorders centre. Participants were randomly assigned to receive either three sessions of a motivational interviewing intervention ‘MINT’ (n=53; mean age=55.4 years) or no intervention ‘Control’ (n=53; mean age=57.74 years). The primary outcome was the difference between the groups in objective CPAP adherence at 1 month, 2 months, 3 months and 12 months follow-up. Results: Fifty participants (94%) in the MINT group and 50 (94%) in the control group met all inclusion and exclusion criteria and were included in the primary analysis. Mean CPAP use per night at 3 months was 4.63 hours in the MINT group and 3.16 hours in the control group (p=0.005), representing almost 50% better adherence in the MINT group relative to the control group. Patients in the MINT group were substantially more likely to accept CPAP treatment. Conclusions: MINT is a brief, manualized, effective intervention which improves CPAP acceptance and objective adherence rates as compared to standard care alone.
Abstract:
Background Older people have higher rates of hospital admission than the general population and higher rates of readmission due to complications and falls. During hospitalisation, older people experience significant functional decline which impairs their future independence and quality of life. Acute hospital services comprise the largest section of health expenditure in Australia, and prevention or delay of disease is known to produce more effective use of services. Current models of discharge planning and follow-up care, however, do not address the need to prevent deconditioning or functional decline. This paper describes the protocol of a randomised controlled trial which aims to evaluate innovative transitional care strategies to reduce unplanned readmissions and improve the functional status, independence, and psycho-social well-being of community-based older people at risk of readmission. Methods/Design The study is a randomised controlled trial. Within 72 hours of hospital admission, a sample of older adults fitting the inclusion/exclusion criteria (aged 65 years and over, admitted with a medical diagnosis, able to walk independently for 3 metres, and at least one risk factor for readmission) are randomised into one of four groups: 1) the usual care control group, 2) the exercise and in-home/telephone follow-up intervention group, 3) the exercise only intervention group, or 4) the in-home/telephone follow-up only intervention group. The usual care control group receives usual discharge planning provided by the health service. In addition to usual care, the exercise and in-home/telephone follow-up intervention group receives an intervention consisting of a tailored exercise program, an in-home visit and 24-week telephone follow-up by a gerontic nurse. The exercise only and in-home/telephone follow-up only intervention groups, in addition to usual care, receive only the exercise or gerontic nurse components of the intervention, respectively. Data collection is undertaken at baseline within 72 hours of hospital admission, and at 4 weeks, 12 weeks and 24 weeks following hospital discharge. Outcome assessors are blinded to group allocation. Primary outcomes are emergency hospital readmissions and health service use, functional status, psychosocial well-being and cost-effectiveness. Discussion The acute hospital sector comprises the largest component of health care system expenditure in developed countries, and older adults are the most frequent consumers. There are few trials demonstrating effective models of transitional care that prevent emergency readmissions and loss of functional ability and independence in this population following an acute hospital admission. This study aims to address that gap and provide information for future health service planning which meets client needs and lowers the use of acute care services.
Abstract:
Background: Sleepiness is a direct contributor to a substantial proportion of fatal and severe road crashes. A number of technological solutions designed to detect sleepiness have been developed, but self-awareness of increasing sleepiness remains a critical component of on-road strategies for mitigating this risk. In order to take appropriate action when sleepy, drivers’ perceptions of their level of sleepiness must be accurate. Aims: This study aimed to assess capacity to accurately identify sleepiness and self-regulate driving cessation during a validated driving simulator task. Participants: Participants comprised 26 young adult drivers (20-28 years). The drivers held open licences; no other exclusion criteria were used. Methods: Participants woke at 5am and took part in a laboratory-based hazard perception driving simulation, either at mid-morning or mid-afternoon. Established physiological measures (including EEG) and subjective measures (sleepiness ratings) previously found sensitive to changes in sleepiness levels were utilised. Participants were instructed to ‘drive’ until they believed that sleepiness had impaired their ability to drive safely. They were then offered a nap opportunity. Results: The mean duration of the drive before cessation was 39 minutes (±18 minutes). Almost all (23/26) of the participants then achieved sleep during the nap opportunity. These data suggest that the participants’ perceptions of sleepiness were specific. However, EEG data from a number of participants suggested very high levels of sleepiness prior to driving cessation, suggesting poor sensitivity. Conclusions: Participants reported high levels of sleepiness while driving after very moderate sleep restriction. They were able to identify increasing sleepiness during the test period, could decide to cease driving and in most cases were sufficiently sleepy to achieve sleep during the daytime session. However, the levels of sleepiness reached prior to driving cessation suggest poor accuracy in self-perception and regulation. This presents practical issues for the implementation of fatigue and sleep-related strategies to improve driver safety.
Abstract:
Introduction: Sleepiness contributes to a substantial proportion of fatal and severe road crashes. Efforts to reduce the incidence of sleep-related crashes have largely focussed on driver education to promote self-regulation of driving behaviour. However, effective self-regulation requires accurate self-perception of sleepiness. The aim of this study was to assess capacity to accurately identify sleepiness, and self-regulate driving cessation, during a validated driving simulator task. Methods: Participants comprised 26 young adult drivers (20-28 years) who held open licences; no other exclusion criteria were used. Participants were partially sleep deprived (05:00 wake-up) and completed a laboratory-based hazard perception driving simulation, counterbalanced to either mid-morning or mid-afternoon. Established physiological measures (i.e., EEG, EOG) and subjective measures (Karolinska Sleepiness Scale), previously found sensitive to changes in sleepiness levels, were utilised. Participants were instructed to ‘drive’ on the simulator until they believed that sleepiness had impaired their ability to drive safely. They were then offered a nap opportunity. Results: The mean duration of the drive before cessation was 36.1 minutes (±17.7 minutes). Subjective sleepiness increased significantly from the beginning (KSS=6.6±0.7) to the end (KSS=8.2±0.5) of the driving period. No significant differences were found for EEG spectral power measures of sleepiness (i.e., theta or alpha spectral power) from the start of the driving task to the point of cessation of driving. During the nap opportunity, 88% of the participants (23/26) were able to reach sleep onset, with an average latency of 9.9 minutes (±7.5 minutes). The average nap duration was 15.1 minutes (±8.1 minutes). Sleep architecture during the nap was predominantly comprised of Stages I and II (combined 92%). Discussion: Participants reported high levels of sleepiness during daytime driving after very moderate sleep restriction. They were able to report increasing sleepiness during the test period despite no observed change in standard physiological indices of sleepiness. This increased subjective sleepiness had behavioural validity, as the participants had high ‘napability’ at the point of driving cessation, with most achieving some degree of subsequent sleep. This study suggests that the nature of a safety instruction (i.e., how to view sleepiness) can be a determinant of driver behaviour.
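For readers unfamiliar with the EEG measures mentioned above, the following is a minimal sketch of how theta and alpha spectral power are commonly estimated from a single EEG channel. This is not the study's analysis pipeline; the sampling rate, band limits and synthetic signal are assumptions for illustration only.

# Minimal sketch (assumed, not the study's pipeline): theta (4-8 Hz) and
# alpha (8-12 Hz) band power from one EEG channel using Welch's method.
import numpy as np
from scipy.signal import welch

fs = 256                          # sampling rate in Hz (assumed)
eeg = np.random.randn(fs * 60)    # stand-in for one minute of EEG data

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)

def band_power(freqs, psd, low, high):
    """Integrate the power spectral density over a frequency band."""
    idx = (freqs >= low) & (freqs < high)
    return np.trapz(psd[idx], freqs[idx])

theta_power = band_power(freqs, psd, 4, 8)
alpha_power = band_power(freqs, psd, 8, 12)
print(f"theta: {theta_power:.3f}, alpha: {alpha_power:.3f}")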
Abstract:
Background: Lower extremity amputation results in significant global morbidity and mortality. Australia appears to have a paucity of studies investigating lower extremity amputation. The primary aim of this retrospective study was to investigate key conditions associated with lower extremity amputations in an Australian population. Secondary objectives were to determine the influence of age and sex on lower extremity amputations, and the reliability of hospital-coded amputations. Methods: Lower extremity amputation cases performed at the Princess Alexandra Hospital (Brisbane, Australia) between July 2006 and June 2007 were identified through the relevant hospital discharge dataset (n = 197). All eligible clinical records were interrogated for age, sex, key condition associated with amputation, amputation site, first-ever amputation status and the accuracy of the original hospital coding. Exclusion criteria included records unavailable for audit and cases where the key condition could not be determined. Chi-squared tests, t-tests, ANOVA and post hoc tests were used to determine differences between groups. Kappa statistics were used to measure reliability between coded and audited amputations. A minimum significance level of p < 0.05 was used throughout. Results: One hundred and eighty-six cases were eligible and audited. Overall, 69% were male, 56% were first amputations, 54% were major amputations, and the mean age was 62 ± 16 years. Key associated conditions included type 2 diabetes (53%), peripheral arterial disease (non-diabetes) (18%), trauma (8%), type 1 diabetes (7%) and malignant tumours (5%). Mean age at amputation differed by key condition: trauma 36 ± 10 years, type 1 diabetes 52 ± 12 years and type 2 diabetes 67 ± 10 years (p < 0.01). Reliability of the original hospital coding was high, with Kappa values over 0.8 for all variables. Conclusions: This study, the first in over 20 years to report on all levels of lower extremity amputations in Australia, found that people undergoing amputation are more likely to be older, male and have diabetes. It is recommended that large prospective studies be implemented and national lower extremity amputation rates be established to address the large preventable burden of lower extremity amputation in Australia.
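As a rough illustration of the reliability analysis mentioned above, Cohen's kappa between hospital-coded and audited categories can be computed as in the sketch below. The labels and the scikit-learn usage are assumptions for illustration, not the study's actual code or data.

# Minimal sketch (assumed): agreement between hospital-coded and audited
# amputation categories measured with Cohen's kappa.
from sklearn.metrics import cohen_kappa_score

coded   = ["major", "minor", "major", "major", "minor", "major", "minor", "major"]
audited = ["major", "minor", "major", "minor", "minor", "major", "minor", "major"]

kappa = cohen_kappa_score(coded, audited)
print(f"Cohen's kappa: {kappa:.2f}")  # values above 0.8 are generally read as very good agreement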
Abstract:
Background/aims: Remote monitoring for heart failure has been evaluated not only in a large number of randomised controlled trials but also in many systematic reviews and meta-analyses. The aim of this meta-review was to identify, appraise and synthesise existing systematic reviews that have evaluated the effects of remote monitoring in heart failure. Methods: Using a Cochrane methodology, we electronically searched all relevant online databases and search engines, performed a forward citation search and hand-searched bibliographies. Only fully published systematic reviews of invasive and/or non-invasive remote monitoring interventions were included. Two reviewers independently extracted data. Results: Sixty-five publications were identified from 3333 citations. Seventeen fulfilled the inclusion and exclusion criteria. Quality varied, with A Measurement Tool to Assess Systematic Reviews (AMSTAR) scores ranging from 2 to 11 (mean 5.88). Seven reviews (41%) pooled results from individual studies for meta-analysis. Eight (47%) considered all non-invasive remote monitoring strategies, four (24%) focused specifically on telemonitoring, and four (24%) included studies investigating both non-invasive and invasive technologies. Population characteristics of the included studies were not reported consistently. Mortality and hospitalisations were the most frequently reported outcomes (12 reviews; 70%). Only five reviews (29%) reported healthcare costs and compliance. A high degree of heterogeneity was reported in many of the meta-analyses. Conclusions: These results should be considered in the context of two negative RCTs of remote monitoring for heart failure (TIM-HF and Tele-HF) that have been published since the meta-analyses. However, high-quality reviews demonstrated improved mortality and quality of life, and reductions in hospitalisations and healthcare costs.
Abstract:
BACKGROUND/OBJECTIVE: To investigate the extent of baseline psychosocial characterisation of subjects in published dietary randomised controlled trials (RCTs) for weight loss. SUBJECTS/METHODS: Adequately sized (n≥10) RCTs comprising ≥1 diet-alone arm for weight loss were included in this systematic review. More specifically, trials included overweight (body mass index ≥25 kg/m2) adults, were of duration ≥8 weeks and had body weight as the primary outcome. Exclusion criteria included specific psychological intervention (for example, Cognitive Behaviour Therapy (CBT)), use of web-based tools, use of supplements, liquid diets, replacement meals and very-low-calorie diets. Physical activity intervention was restricted to general exercise only (not supervised or prescribed, for example, to a VO2 maximum level). RESULTS: Of 176 weight-loss RCTs published during 2008–2010, 15 met the selection criteria and were assessed for reported psychological characterisation of subjects. All studies reported standard clinical and biochemical characteristics of subjects. Eleven studies reported no psychological attributes of subjects (three of these did exclude those taking psychoactive medication). Three studies collected data on particular aspects of psychology related to specific research objectives (figure rating scale, satiety and quality of life). Only one study provided a comprehensive background on the psychological attributes of subjects. CONCLUSION: Better characterisation in behaviour-change interventions will reduce potential confounding and enhance the generalisability of such studies.
Hepatitis C, mental health and equity of access to antiviral therapy: a systematic narrative review
Abstract:
Introduction Access to hepatitis C (hereafter HCV) antiviral therapy has commonly excluded populations with mental health and substance use disorders because they were considered to have contraindications to treatment, particularly the neuropsychiatric effects of interferon that can occur in some patients. In this review we examined access to HCV interferon antiviral therapy by populations with mental health and substance use problems to identify the evidence and reasons for exclusion. Methods We searched the following major electronic databases for relevant articles: PsycINFO, Medline, CINAHL, Scopus, Google Scholar. The inclusion criteria comprised studies of adults aged 18 years and older, peer-reviewed articles, a date range of 2002–2012 to include articles since the introduction of pegylated interferon with ribavirin, and English language. The exclusion criteria included articles about HCV populations with medical co-morbidities, such as hepatitis B (hereafter HBV) and human immunodeficiency virus (hereafter HIV), because the clinical treatment, pathways and psychosocial morbidity differ from populations with only HCV. We identified 182 articles, of which 13 met the eligibility criteria. Using a systematic narrative review approach, we identified major themes in the literature. Results Three main themes were identified: (1) pre-treatment and preparation for antiviral therapy, (2) adherence and treatment completion, and (3) clinical outcomes. Each of these themes was critically discussed in terms of access by patients with mental health and substance use co-morbidities; the current research evidence clearly demonstrates that people with HCV, mental health and substance use co-morbidities have clinical outcomes similar to those without these co-morbidities. Conclusions While the research evidence is largely supportive of increased access to interferon by people with HCV, mental health and substance use co-morbidities, substantial further work is required to translate evidence into clinical practice. We also conclude that the appropriateness of the tertiary health service model of care for interferon management needs to be reconsidered, and that the potential for increased HCV care in primary health care settings should be explored.