702 results for Residential Settings
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20% or 40% of patients in seven cohorts of patients starting ART in South Africa, and plotted cut-offs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia and the Asia-Pacific. FINDINGS 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African, from 64% to 93% in the Zambian and from 73% to 96% in the Asia-Pacific cohorts. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia and from 0.77 to 0.92 in the Asia-Pacific. INTERPRETATION CD4-based risk charts with optimal cut-offs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
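The decision-rule evaluation in this abstract can be illustrated with a short sketch. Everything below (the simulated CD4 counts, the failure model, and the rule of testing the highest-risk fraction of patients) is a hypothetical stand-in, not the authors' method; it only shows how PPV, sensitivity, and the area under the ROC curve are computed for such a targeted-testing rule.

```python
# Hypothetical sketch: evaluate a CD4-based targeted VL testing rule.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
cd4_current = rng.normal(350, 150, n).clip(10, 1200)   # cells/uL (simulated)
# Invented failure mechanism: risk rises as current CD4 falls.
p_fail = 1 / (1 + np.exp((cd4_current - 200) / 60))
failure = rng.random(n) < p_fail

risk_score = -cd4_current          # lower CD4 -> higher predicted risk
for frac in (0.10, 0.20, 0.40):    # target VL testing to the riskiest 10/20/40%
    tested = risk_score >= np.quantile(risk_score, 1 - frac)
    ppv = failure[tested].mean()                      # positive predictive value
    sens = (tested & failure).sum() / failure.sum()   # sensitivity
    print(f"{frac:.0%} tested: PPV={ppv:.2f}, sensitivity={sens:.2f}")
print("AUC:", round(roc_auc_score(failure, risk_score), 2))
```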
Abstract:
Background: There is evidence that drinking during residential treatment is related to various factors, such as patients' general control beliefs and self-efficacy, as well as to external control of alcohol use by program staff and to situations where there is temptation to drink. As alcohol use during treatment has been shown to be associated with the resumption of alcohol use after discharge from residential treatment, we aimed to investigate how these variables are related to alcohol use during abstinence-oriented residential treatment programs for alcohol use disorders (AUD). Methods: In total, 509 patients who entered 1 of 2 residential abstinence-oriented treatment programs for AUD were included in the study. After detoxification, patients completed a standardized diagnostic procedure including interviews and questionnaires. Drinking was assessed by patients' self-report of at least 1 standard drink or by positive breathalyzer testing. The 2 residential programs were categorized as high or low control according to the average number of tests per patient. Results: Regression analysis revealed a significant interaction effect between internal and external control, suggesting that patients with a high internal locus of control and a high frequency of control by staff demonstrated the least alcohol use during treatment (16.7%), while patients with a low internal locus of control in programs with low external control were more likely to use alcohol during treatment (45.9%). No effects were found for self-efficacy or temptation. Conclusions: As alcohol use during treatment is most likely associated with poor treatment outcomes, external control may improve treatment outcomes and particularly support patients with a low internal locus of control, who show the highest risk of alcohol use during treatment. High external control may complement high internal control to improve alcohol use prevention while in treatment. Key Words: Alcohol Dependence, Alcohol Use, Locus of Control, Alcohol Testing.
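As a sketch of the interaction analysis described above (simulated data, hypothetical variable names, and invented coefficients rather than the study's records), a logistic regression with an internal-by-external control interaction might look like this:

```python
# Hypothetical sketch: logistic regression with an interaction term.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 509
df = pd.DataFrame({
    "internal_loc": rng.normal(0, 1, n),     # standardized locus-of-control score
    "high_external": rng.integers(0, 2, n),  # 1 = program with frequent testing
})
# Simulate drinking so high internal AND high external control is most protective.
logit = (-0.2 - 0.4 * df["internal_loc"] - 0.5 * df["high_external"]
         - 0.4 * df["internal_loc"] * df["high_external"])
df["drank"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = smf.logit("drank ~ internal_loc * high_external", data=df).fit()
print(model.summary())  # internal_loc:high_external is the interaction test
```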
Abstract:
Children living near highways are exposed to higher concentrations of traffic-related carcinogenic pollutants. Several studies have reported an increased risk of childhood cancer associated with traffic exposure, but the published evidence is inconclusive. We investigated whether cancer risk is associated with proximity of residence to highways in a nationwide cohort study including all children aged <16 years from the Swiss national censuses in 1990 and 2000. Cancer incidence was investigated in time-to-event analyses (1990-2008) using Cox proportional hazards models and in incidence density analyses (1985-2008) using Poisson regression. Adjustments were made for socio-economic factors, ionising background radiation and electromagnetic fields. In the time-to-event analysis, based on 532 cases, the adjusted hazard ratio for leukaemia comparing children living <100 m from a highway with unexposed children (≥500 m) was 1.43 (95% CI 0.79, 2.61). Results were similar in the incidence density analysis including 1367 leukaemia cases (incidence rate ratio (IRR) 1.57; 95% CI 1.09, 2.25). Associations were similar for acute lymphoblastic leukaemia (IRR 1.64; 95% CI 1.10, 2.43) and stronger for leukaemia in children aged <5 years (IRR 1.92; 95% CI 1.22, 3.04). Little evidence of association was found for other tumours. Our study suggests that young children living close to highways are at increased risk of developing leukaemia.
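To make the incidence density analysis concrete, here is a minimal sketch with invented case counts and person-years (not the Swiss census data) of a Poisson regression yielding incidence rate ratios against the ≥500 m reference group:

```python
# Hypothetical sketch: IRRs from Poisson regression with person-time exposure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "distance": ["<100m", "100-500m", ">=500m"],  # residence-to-highway category
    "cases": [40, 250, 1077],                     # invented leukaemia counts
    "person_years": [2.0e5, 1.5e6, 9.0e6],        # invented person-time
})
model = smf.poisson(
    "cases ~ C(distance, Treatment('>=500m'))",   # >=500 m as reference group
    data=df,
    exposure=df["person_years"],
).fit()
print(np.exp(model.params))  # exponentiated coefficients = rate ratios
```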
Abstract:
There is growing evidence indicating a positive effect of acute physical activity on cognitive performance in children. Most of the evidence originates, however, from studies in highly controlled laboratory settings. The aim of the present study was to investigate whether the same effects can be found in more real-world settings. We examined the effects of qualitatively different acute physical activity interventions on the three core dimensions of executive functions (updating, inhibition, shifting). In an experimental between-subject design, 219 ten- to twelve-year-olds were assigned to one of four conditions which varied systematically in physical activation and cognitive engagement. Executive functions were measured before and immediately after the intervention. Contrary to the hypothesis, no effects of acute physical activity, with or without cognitive engagement, were found on executive functions in the overall sample. Only children with higher fitness and/or higher academic achievement benefitted from the interventions in terms of their updating performance. Thus, the results indicate that it may be more difficult to attain positive effects through acute physical activity in real-world settings than in laboratory settings, and that physiological and cognitive requirements may have to be adjusted to individual capacity to make an intervention effective.
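The abstract does not state the authors' exact statistical model, but one common analysis for a pre/post between-subject design like this is an ANCOVA-style regression of post-test scores on condition, adjusted for pre-test. A minimal sketch with simulated scores and hypothetical condition labels:

```python
# Hypothetical sketch: ANCOVA-style test of condition effects on post-test scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 219
df = pd.DataFrame({
    "condition": rng.choice(["control", "PA", "PA+cognitive", "cognitive"], n),
    "pre": rng.normal(100, 15, n),               # pre-test updating score
})
df["post"] = df["pre"] + rng.normal(2, 5, n)     # simulated null condition effect

model = smf.ols("post ~ pre + C(condition)", data=df).fit()
print(model.summary())  # condition coefficients test intervention effects
```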
Abstract:
Twenty-five public supply wells throughout the hydrogeologically diverse region of Scania, southern Sweden, are subjected to environmental tracer analysis (³H–³He, ⁴He, CFCs, SF₆ and, for one well only, also ⁸⁵Kr and ³⁹Ar) to study well and aquifer vulnerability and to evaluate possibilities for groundwater age distribution assessment. We find CFC and SF₆ concentrations well above solubility equilibrium with the modern atmosphere, indicating local contamination, as well as indications of CFC degradation. These tracer-specific complications considerably constrain the possibilities for sound quantitative regional groundwater age distribution assessment and demonstrate the importance of an initial qualitative assessment of tracer-specific reliability, as well as the need for additional, complementary tracers (e.g. ⁸⁵Kr, ³⁹Ar and potentially also ¹⁴C). Lumped parameter modelling yields credible age distribution assessments for representative wells in four type aquifers. Pollution vulnerability of the aquifer types was assessed based on the selected LPM models and qualitative age characterisation. Most vulnerable are unconfined dual-porosity and fractured bedrock aquifers, due to a large component of very young groundwater. Unconfined sedimentary aquifers are vulnerable due to young groundwater and a small pre-modern component. Less vulnerable are semi-confined sedimentary or dual-porosity aquifers, due to the older age of the modern component and a larger pre-modern component. Confined aquifers appear least vulnerable, due to an entirely pre-modern groundwater age distribution (recharged before 1963). Tracer complications aside, environmental tracer analyses and lumped parameter modelling aid in the vulnerability assessment and protection of regional groundwater resources.
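As a sketch of the lumped parameter modelling step, the widely used exponential model treats a well's discharge as a flow-weighted mixture of recharge years. The tracer input history below is invented for illustration; only the convolution logic is the point.

```python
# Hypothetical sketch: exponential lumped parameter model for a tracer.
import numpy as np

years = np.arange(1950, 2021)
# Invented atmospheric input curve (loosely CFC-like, in pptv).
c_in = np.interp(years, [1950, 1995, 2020], [0.0, 270.0, 230.0])

def exponential_model_output(c_in, years, mean_age):
    """Mix recharge years with weights g(t) = exp(-t/T)/T (exponential model)."""
    ages = years[-1] - years
    weights = np.exp(-ages / mean_age) / mean_age
    weights /= weights.sum()        # normalise the discretised distribution
    return np.sum(weights * c_in)

for T in (5, 20, 60):               # candidate mean residence times, years
    print(f"mean age {T:>2} yr -> modelled concentration "
          f"{exponential_model_output(c_in, years, T):.1f}")
```

Fitting such a model to measured tracer concentrations is what yields the age distribution, and with it the share of very young versus pre-modern water that drives the vulnerability ranking above.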
Abstract:
BACKGROUND In resource-limited settings, clinical parameters, including body weight changes, are used to monitor clinical response. We therefore studied body weight changes in patients on antiretroviral treatment (ART) in different regions of the world. METHODS Data were extracted from the "International Epidemiologic Databases to Evaluate AIDS," a network of ART programmes that prospectively collects routine clinical data. Adults on ART from the Southern, East, West, and Central African and the Asia-Pacific regions were selected from the database if baseline data on body weight, gender, ART regimen, and CD4 count were available. Body weight change over the first 2 years and the probability of body weight loss in the second year were modeled using linear mixed models and logistic regression, respectively. RESULTS Data from 205,571 patients were analyzed. Mean adjusted body weight change in the first 12 months was higher in patients started on tenofovir and/or efavirenz, in patients from Central, West, and East Africa, in men, and in patients with a poorer clinical status. In the second year of ART, it was greater in patients initiated on tenofovir and/or nevirapine, in patients not on stavudine, in women, in Southern Africa, and in patients with a better clinical status at initiation. Stavudine in the initial regimen was associated with a lower mean adjusted body weight change and with weight loss in the second treatment year. CONCLUSIONS Different ART regimens have different effects on body weight change. Body weight loss after 1 year of treatment in patients on stavudine might be associated with lipoatrophy.
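A minimal sketch of the mixed-model step (simulated weights and a hypothetical regimen indicator, not IeDEA data): a random-intercept model for weight trajectories over the first two years of ART.

```python
# Hypothetical sketch: linear mixed model for body weight change on ART.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for pid in range(200):                 # simulated patients
    base = rng.normal(60, 8)           # baseline weight, kg
    tdf = int(rng.integers(0, 2))      # 1 = tenofovir-containing regimen
    for month in (0, 6, 12, 18, 24):
        weight = base + (0.25 + 0.10 * tdf) * month + rng.normal(0, 1.5)
        rows.append({"pid": pid, "month": month, "tdf": tdf, "weight": weight})
df = pd.DataFrame(rows)

model = smf.mixedlm("weight ~ month * tdf", df, groups=df["pid"]).fit()
print(model.summary())  # month:tdf captures regimen-specific weight gain
```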
Abstract:
BACKGROUND AND OBJECTIVES Multiple-breath washout (MBW) is an attractive test to assess ventilation inhomogeneity, a marker of peripheral lung disease. Standardization of MBW is hampered because little data exist on possible measurement bias. We aimed to identify potential sources of measurement bias arising from MBW software settings. METHODS We used unprocessed data from nitrogen (N2) MBW (Exhalyzer D, Eco Medics AG) applied in 30 children aged 5-18 years: 10 with CF, 10 formerly preterm, and 10 healthy controls. This setup calculates the tracer gas N2 mainly from measured O2 and CO2 concentrations. The following software settings for MBW signal processing were changed by at least 5 units or >10% in both directions, or completely switched off: (i) environmental conditions, (ii) apparatus dead space, (iii) O2 and CO2 signal correction, and (iv) signal alignment (delay time). The primary outcome was the change in lung clearance index (LCI) compared to LCI calculated with the settings as recommended. A change in LCI exceeding 10% was considered relevant. RESULTS Changes in both environmental and dead space settings resulted in uniform but modest LCI changes, exceeding 10% in only two measurements. Changes in signal alignment and O2 signal correction had the most relevant impact on LCI. Decreasing the O2 delay time by 40 ms (7%) led to a mean LCI increase of 12%, with an LCI change >10% in 60% of the children. Increasing the O2 delay time by 40 ms resulted in a mean LCI decrease of 9%, with LCI changing >10% in 43% of the children. CONCLUSIONS Accurate LCI results depend crucially on signal processing settings in MBW software. Incorrect signal delay times in particular are a possible source of erroneous LCI measurements. Algorithms for signal processing and signal alignment should thus be optimized to reduce the susceptibility of MBW measurements to this significant source of bias.
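To make concrete the quantity these settings perturb, here is a minimal sketch of the LCI calculation itself, using an idealised exponential washout rather than Exhalyzer D data: LCI is the cumulative expired volume required to wash end-tidal N2 down to 1/40 of its starting concentration, divided by the functional residual capacity (FRC).

```python
# Hypothetical sketch: lung clearance index from an idealised N2 washout.
import numpy as np

def lci(et_n2, breath_volumes, frc, start_n2=0.78):
    """LCI = CEV/FRC at the first breath with end-tidal N2 < start/40 (2.5%)."""
    threshold = start_n2 / 40
    for i, conc in enumerate(et_n2):
        if conc < threshold:
            return breath_volumes[:i + 1].sum() / frc
    return np.nan                                # washout never completed

breaths = np.arange(1, 40)
et_n2 = 0.78 * np.exp(-0.18 * breaths)           # idealised washout curve
volumes = np.full_like(breaths, 0.45, dtype=float)  # expired volume per breath, L
print("LCI:", round(lci(et_n2, volumes, frc=2.0), 2))
```

Because N2 is derived from the O2 and CO2 signals, a mis-set delay time shifts gas concentrations relative to flow, distorting the per-breath volumes and end-tidal values that enter this calculation.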
Abstract:
BACKGROUND Patients after primary hip or knee replacement surgery can benefit from postoperative treatment in terms of improved independence in ambulation, transfers, range of motion, and muscle strength. After discharge from hospital, patients are referred to different treatment destinations and modalities: intensive inpatient rehabilitation (IR), cure (a medically prescribed stay at a convalescence center), or ambulatory treatment (AT) at home. The purpose of this study was 1) to measure functional health (primary outcome) and function-relevant factors in patients with hip or knee arthroplasty and to compare them across the three postoperative management strategies (AT, cure, and IR), and 2) to compare the postoperative changes in patients' health status (between the preoperative assessment and the 6-month follow-up) across the three rehabilitation settings. METHODS Observational, prospective two-center study with follow-up. Sociodemographic data and functional mobility tests, the Timed Up and Go (TUG) and the Iowa Level of Assistance Scale (ILOAS), of 201 patients were analysed before arthroplasty and at the end of the acute hospital stay (mean duration of stay: 9.7 ± 3.9 days). Changes in health state were measured with the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) before and 6 months after arthroplasty. RESULTS Compared to patients referred for IR and cure, patients referred for AT were significantly younger and less comorbid. Patients admitted to IR had the highest functional disability before arthroplasty. Before rehabilitation, mean TUG was 40.0 s in the IR group, 33.9 s in the cure group, and 27.5 s in the AT group; the corresponding mean ILOAS scores were 16.0, 13.0, and 12.2 (50.0 = worst). At the 6-month follow-up, the corresponding effect sizes of the WOMAC global score were 1.32, 1.87, and 1.51 (>0 means improvement). CONCLUSIONS Age, comorbidity, and functional disability are associated with referral for intensive inpatient rehabilitation after hip or knee arthroplasty and partly affect health changes after rehabilitation.
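Effect size definitions vary; under one common convention (mean pre-to-post change divided by the standard deviation of the baseline scores), the WOMAC effect sizes reported above could be computed as in this sketch with invented scores:

```python
# Hypothetical sketch: pre/post effect size for WOMAC change.
import numpy as np

rng = np.random.default_rng(4)
womac_pre = rng.normal(55, 15, 70)               # invented baseline scores
womac_post = womac_pre - rng.normal(25, 12, 70)  # invented 6-month scores
effect_size = (womac_pre - womac_post).mean() / womac_pre.std(ddof=1)
print(f"effect size: {effect_size:.2f}")         # >0 means improvement
```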
Abstract:
BACKGROUND Survival after diagnosis is a fundamental concern in cancer epidemiology. In resource-rich settings, ambient clinical databases, municipal data, and cancer registries make survival estimation in real-world populations relatively straightforward. In resource-poor settings, given the deficiencies in a variety of health-related data systems, it is less clear how well we can determine cancer survival from ambient data. METHODS We addressed this issue in sub-Saharan Africa for Kaposi's sarcoma (KS), a cancer whose incidence has exploded with the HIV epidemic but whose survival in the region may be changing with the recent advent of antiretroviral therapy (ART). From 33 primary care HIV clinics in Kenya, Uganda, Malawi, Nigeria, and Cameroon participating in the International Epidemiologic Databases to Evaluate AIDS (IeDEA) Consortia in 2009-2012, we identified 1328 adults with newly diagnosed KS. Patients were evaluated from KS diagnosis until death, transfer to another facility, or database closure. RESULTS Nominally, 22% of patients were estimated to be dead by 2 years, but this estimate was clouded by a 45% cumulative loss to follow-up with unknown vital status by 2 years. After adjustment for site and CD4 count, age <30 years and male sex were independently associated with becoming lost. CONCLUSIONS In this community-based sample of patients diagnosed with KS in sub-Saharan Africa, almost half became lost to follow-up by 2 years. This precluded accurate estimation of survival. Until we either generally strengthen data systems or implement cancer-specific enhancements (e.g., tracking of patients lost to follow-up) in the region, insights from cancer epidemiology will be limited.
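A minimal sketch (simulated times, assuming the lifelines package is available; not the IeDEA records) of a Kaplan-Meier estimate in which the lost are censored, illustrating why heavy loss to follow-up clouds survival estimation:

```python
# Hypothetical sketch: Kaplan-Meier survival with heavy loss to follow-up.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(3)
n = 1328
death_time = rng.exponential(8.0, n)   # invented years to death
ltfu_time = rng.exponential(3.0, n)    # invented years to loss to follow-up
time = np.minimum(np.minimum(death_time, ltfu_time), 2.0)   # 2-year window
event = (death_time <= ltfu_time) & (death_time <= 2.0)     # death observed

kmf = KaplanMeierFitter().fit(time, event_observed=event)
print(kmf.survival_function_.tail())
# If the lost differ systematically from the retained (informative censoring),
# this estimate is biased -- exactly the concern the abstract raises.
```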