56 results for SETTINGS
Abstract:
Twenty-five public supply wells throughout the hydrogeologically diverse region of Scania, southern Sweden, are subjected to environmental tracer analysis (³H–³He, ⁴He, CFCs, SF₆ and, for one well, also ⁸⁵Kr and ³⁹Ar) to study well and aquifer vulnerability and to evaluate the possibilities of groundwater age distribution assessment. We find CFC and SF₆ concentrations well above solubility equilibrium with the modern atmosphere, indicating local contamination, as well as indications of CFC degradation. These tracer-specific complications considerably constrain the possibilities for sound quantitative regional groundwater age distribution assessment and demonstrate the importance of an initial qualitative assessment of tracer-specific reliability, as well as the need for additional, complementary tracers (e.g. ⁸⁵Kr, ³⁹Ar and potentially also ¹⁴C). Lumped parameter modelling (LPM) yields credible age distribution assessments for representative wells in four type aquifers. Pollution vulnerability of the aquifer types was assessed based on the selected LPM models and qualitative age characterisation. Most vulnerable are unconfined dual-porosity and fractured bedrock aquifers, due to a large component of very young groundwater. Unconfined sedimentary aquifers are vulnerable due to young groundwater and a small pre-modern component. Less vulnerable are semi-confined sedimentary or dual-porosity aquifers, due to the older age of the modern component and a larger pre-modern component. Confined aquifers appear least vulnerable, due to an entirely pre-modern groundwater age distribution (recharged before 1963). Tracer complications aside, environmental tracer analyses and lumped parameter modelling aid in the vulnerability assessment and protection of regional groundwater resources.
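The lumped parameter modelling referred to above can be illustrated with a minimal sketch: for the exponential mixing model, the tracer concentration expected in a sampled well is the convolution of the historical atmospheric input with an exponential transit time distribution of mean transit time τ. The input curve and the 25-year mean transit time below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def exponential_ttd(t, tau):
    """Transit time distribution of the exponential mixing model: (1/tau) * exp(-t/tau)."""
    return np.exp(-t / tau) / tau

def modelled_concentration(input_series, tau, dt=1.0):
    """Convolve a historical atmospheric tracer input (oldest value first) with the
    exponential transit time distribution to get the concentration expected today."""
    n = len(input_series)
    ages = np.arange(n) * dt                  # age of each input year relative to sampling
    weights = exponential_ttd(ages, tau) * dt
    weights /= weights.sum()                  # normalise the discretised distribution
    # the youngest water corresponds to the most recent input value
    return np.sum(weights * input_series[::-1])

# Hypothetical CFC-12-like atmospheric input (pptv), 1950 -> 2020, and a 25-year mean transit time
years = np.arange(1950, 2021)
atm_input = np.interp(years, [1950, 1990, 2005, 2020], [10, 480, 545, 500])
print(modelled_concentration(atm_input, tau=25.0))
```

In practice, the model parameter (here τ, or the parameters of other LPM variants) is adjusted until the modelled concentrations match the measured tracers, which is what underlies the age distribution assessments described above.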
Abstract:
BACKGROUND In resource-limited settings, clinical parameters, including body weight changes, are used to monitor clinical response. We therefore studied body weight changes in patients on antiretroviral treatment (ART) in different regions of the world. METHODS Data were extracted from the International Epidemiologic Databases to Evaluate AIDS (IeDEA), a network of ART programmes that prospectively collects routine clinical data. Adults on ART from the Southern, East, West, and Central African and the Asia-Pacific regions were selected from the database if baseline data on body weight, gender, ART regimen, and CD4 count were available. Body weight change over the first 2 years and the probability of body weight loss in the second year were modeled using linear mixed models and logistic regression, respectively. RESULTS Data from 205,571 patients were analyzed. Mean adjusted body weight change in the first 12 months was higher in patients started on tenofovir and/or efavirenz, in patients from Central, West, and East Africa, in men, and in patients with a poorer clinical status. In the second year of ART, it was greater in patients initiated on tenofovir and/or nevirapine, in patients not on stavudine, in women, in patients from Southern Africa, and in patients with a better clinical status at initiation. Stavudine in the initial regimen was associated with a lower mean adjusted body weight change and with weight loss in the second treatment year. CONCLUSIONS Different ART regimens have different effects on body weight change. Body weight loss after 1 year of treatment in patients on stavudine might be associated with lipoatrophy.
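A minimal sketch of the kind of random-intercept linear mixed model described in the methods is shown below, assuming a hypothetical long-format table with one row per patient visit; the column names, synthetic values and effect sizes are illustrative only and are not taken from the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_visits = 200, 5

# Synthetic long-format data: one row per patient-visit (all values are made up)
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    "months": np.tile(np.arange(0, 25, 6), n_patients),
    "regimen": np.repeat(rng.choice(["tenofovir", "stavudine"], n_patients), n_visits),
    "sex": np.repeat(rng.choice(["female", "male"], n_patients), n_visits),
})
df["weight"] = (60 + 0.2 * df["months"]
                - 0.15 * df["months"] * (df["regimen"] == "stavudine")
                + rng.normal(0, 2, len(df)))

# Random intercept per patient; fixed effects for time, regimen and their interaction
model = smf.mixedlm("weight ~ months * regimen + sex", data=df, groups=df["patient_id"])
print(model.fit().summary())
```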
Abstract:
BACKGROUND HIV-1 RNA viral load (VL) testing is recommended to monitor antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing. METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk chart-guided VL testing to detect virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific region. RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific region. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific region. CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
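The accuracy measures reported above (positive predictive value, sensitivity, area under the ROC curve) for a targeted-testing rule can be computed as in the sketch below. The risk scores, outcomes and the 20% testing threshold are synthetic stand-ins, not the study's model or data.

```python
import numpy as np

def targeted_testing_metrics(failure, flagged):
    """Positive predictive value and sensitivity of a 'test this patient' rule.

    failure : boolean array, true virologic failure (reference standard)
    flagged : boolean array, rule says the patient should get a VL test
    """
    failure, flagged = np.asarray(failure, bool), np.asarray(flagged, bool)
    tp = np.sum(flagged & failure)
    return tp / flagged.sum(), tp / failure.sum()

def auc_rank(score, outcome):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation; ties ignored."""
    score, outcome = np.asarray(score, float), np.asarray(outcome, bool)
    ranks = score.argsort().argsort() + 1
    n_pos, n_neg = outcome.sum(), (~outcome).sum()
    return (ranks[outcome].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Hypothetical example: flag the 20% of patients with the highest modelled failure risk
rng = np.random.default_rng(1)
risk = rng.uniform(0, 1, 1000)                   # stand-in for a model's failure probability
failure = rng.uniform(0, 1, 1000) < risk * 0.3   # synthetic outcomes correlated with risk
flagged = risk >= np.quantile(risk, 0.80)
print(targeted_testing_metrics(failure, flagged), auc_rank(risk, failure))
```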
Abstract:
BACKGROUND AND OBJECTIVES Multiple-breath washout (MBW) is an attractive test to assess ventilation inhomogeneity, a marker of peripheral lung disease. Standardization of MBW is hampered because little data exists on possible measurement bias. We aimed to identify potential sources of measurement bias arising from MBW software settings. METHODS We used unprocessed data from nitrogen (N2) MBW (Exhalyzer D, Eco Medics AG) applied in 30 children aged 5-18 years: 10 with cystic fibrosis (CF), 10 formerly preterm, and 10 healthy controls. This setup calculates the tracer gas N2 mainly from the measured O2 and CO2 concentrations. The following software settings for MBW signal processing were changed by at least 5 units or >10% in both directions, or completely switched off: (i) environmental conditions, (ii) apparatus dead space, (iii) O2 and CO2 signal correction, and (iv) signal alignment (delay time). The primary outcome was the change in lung clearance index (LCI) compared to the LCI calculated with the recommended settings. A change in LCI exceeding 10% was considered relevant. RESULTS Changes in both environmental and dead space settings resulted in uniform but modest LCI changes, exceeding 10% in only two measurements. Changes in signal alignment and O2 signal correction had the most relevant impact on LCI. A decrease of the O2 delay time by 40 ms (7%) led to a mean LCI increase of 12%, with a >10% LCI change in 60% of the children. An increase of the O2 delay time by 40 ms resulted in a mean LCI decrease of 9%, with LCI changing >10% in 43% of the children. CONCLUSIONS Accurate LCI results depend crucially on signal processing settings in MBW software. Incorrect signal delay times in particular are a possible source of erroneous LCI measurements. Signal processing and signal alignment algorithms should thus be optimized to reduce the susceptibility of MBW measurements to this significant measurement bias.
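A minimal sketch of why the O2 delay time matters is given below: with the indirect N2 calculation (N2 derived from measured O2 and CO2, here with a simplified argon correction), misaligning the O2 trace by a few tens of milliseconds changes the computed N2 signal. The expirogram shapes, sampling rate and sign convention for the delay are assumptions for illustration, not Exhalyzer D specifics.

```python
import numpy as np

FS = 200  # sampling rate in Hz (hypothetical)

def n2_from_o2_co2(o2, co2, argon_ratio=0.0093 / 0.7809):
    """Indirect N2 signal: N2 = 1 - O2 - CO2 - Ar, with Ar assumed proportional to N2."""
    return (1.0 - o2 - co2) / (1.0 + argon_ratio)

def shift_signal(x, delay_ms, fs=FS):
    """Shift a sampled signal by delay_ms (positive = later), padding with edge values."""
    n = int(round(delay_ms / 1000 * fs))
    if n == 0:
        return x
    if n > 0:
        return np.concatenate([np.full(n, x[0]), x[:-n]])
    return np.concatenate([x[-n:], np.full(-n, x[-1])])

# Synthetic single expiration during an early washout breath: expired O2 falls from the
# inspired 100% toward the alveolar level while CO2 rises, over 2 s
t = np.arange(0, 2, 1 / FS)
o2 = 1.0 - 0.30 * (1 - np.exp(-3 * t))
co2 = 0.05 * (1 - np.exp(-3 * t))

n2_ref = n2_from_o2_co2(o2, co2)
n2_shifted = n2_from_o2_co2(shift_signal(o2, -40), co2)  # O2 trace misaligned by 40 ms

# Relative change in the mean expired N2 fraction caused by the misalignment
print((n2_shifted.mean() - n2_ref.mean()) / n2_ref.mean())
```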
Abstract:
BACKGROUND Patients after primary hip or knee replacement surgery can benefit from postoperative treatment in terms of improved independence in ambulation, transfers, range of motion and muscle strength. After discharge from hospital, patients are referred to different treatment destinations and modalities: intensive inpatient rehabilitation (IR), cure (a medically prescribed stay at a convalescence center), or ambulatory treatment (AT) at home. The purpose of this study was to 1) measure functional health (primary outcome) and function-relevant factors in patients with hip or knee arthroplasty and compare them across the three postoperative management strategies (AT, Cure and IR), and 2) compare the postoperative changes in patients' health status between the preoperative assessment and the 6-month follow-up across the three rehabilitation settings. METHODS Natural observational, prospective two-center study with follow-up. Sociodemographic data and functional mobility tests, the Timed Up and Go (TUG) and the Iowa Level of Assistance Scale (ILOAS), of 201 patients were analysed before arthroplasty and at the end of the acute hospital stay (mean duration of stay: 9.7 ± 3.9 days). Changes in health state were measured with the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC) before and 6 months after arthroplasty. RESULTS Compared to patients referred for IR and Cure, patients referred for AT were significantly younger and less comorbid. Patients admitted to IR had the highest functional disability before arthroplasty. Before rehabilitation, mean TUG was 40.0 s in the IR group, 33.9 s in the Cure group and 27.5 s in the AT group; the corresponding mean ILOAS scores were 16.0, 13.0 and 12.2 (50.0 = worst). At the 6-month follow-up, the corresponding effect sizes of the WOMAC global score were 1.32, 1.87 and 1.51 (>0 indicates improvement). CONCLUSIONS Age, comorbidity and functional disability are associated with referral for intensive inpatient rehabilitation after hip or knee arthroplasty and partly affect health changes after rehabilitation.
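The WOMAC effect sizes quoted above can be computed as the mean pre-to-post change divided by the standard deviation of the baseline scores (a Kazis-type effect size); whether this exact definition was used in the study is an assumption, and the scores in the sketch below are synthetic.

```python
import numpy as np

def womac_effect_size(baseline, followup):
    """Effect size of change: mean improvement divided by the SD of the baseline scores.
    Positive values indicate improvement when a lower WOMAC score means better health."""
    baseline, followup = np.asarray(baseline, float), np.asarray(followup, float)
    return (baseline - followup).mean() / baseline.std(ddof=1)

# Hypothetical WOMAC global scores (higher = worse) before and 6 months after arthroplasty
rng = np.random.default_rng(2)
pre = rng.normal(55, 15, 80)
post = pre - rng.normal(25, 12, 80)
print(womac_effect_size(pre, post))
```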
Abstract:
BACKGROUND Survival after diagnosis is a fundamental concern in cancer epidemiology. In resource-rich settings, ambient clinical databases, municipal data and cancer registries make survival estimation in real-world populations relatively straightforward. In resource-poor settings, given the deficiencies in a variety of health-related data systems, it is less clear how well we can determine cancer survival from ambient data. METHODS We addressed this issue in sub-Saharan Africa for Kaposi's sarcoma (KS), a cancer whose incidence has exploded with the HIV epidemic but whose survival in the region may be changing with the recent advent of antiretroviral therapy (ART). From 33 primary care HIV clinics in Kenya, Uganda, Malawi, Nigeria and Cameroon participating in the International Epidemiologic Databases to Evaluate AIDS (IeDEA) Consortia in 2009-2012, we identified 1328 adults with newly diagnosed KS. Patients were evaluated from KS diagnosis until death, transfer to another facility or database closure. RESULTS Nominally, 22% of patients were estimated to have died by 2 years, but this estimate was clouded by a 45% cumulative loss to follow-up, with unknown vital status, by 2 years. After adjustment for site and CD4 count, age <30 years and male sex were independently associated with becoming lost to follow-up. CONCLUSIONS In this community-based sample of patients diagnosed with KS in sub-Saharan Africa, almost half were lost to follow-up by 2 years, which precluded accurate estimation of survival. Until we either strengthen data systems in general or implement cancer-specific enhancements (e.g., tracking of patients lost to follow-up) in the region, insights from cancer epidemiology will be limited.
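The survival estimation problem described above can be made concrete with a small Kaplan-Meier sketch in which patients lost to follow-up or transferred are censored at their last visit; the follow-up times below are synthetic. The key caveat raised in the abstract is that such censoring assumes the lost patients die at the same rate as those retained, which heavy and possibly informative loss to follow-up makes doubtful.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate; lost or transferred patients are censored
    at their last visit (event = 0), deaths are events (event = 1)."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk, s = len(time), 1.0
    times_out, surv = [], []
    for t in np.unique(time):
        mask = time == t
        deaths = event[mask].sum()
        if deaths:
            s *= 1 - deaths / at_risk
            times_out.append(t)
            surv.append(s)
        at_risk -= mask.sum()
    return np.array(times_out), np.array(surv)

# Hypothetical follow-up times (months) and vital status after KS diagnosis
rng = np.random.default_rng(3)
t_death = rng.exponential(40, 500)
t_censor = rng.uniform(0, 24, 500)        # heavy loss to follow-up within 2 years
time = np.minimum(t_death, t_censor)
event = (t_death <= t_censor).astype(int)
times, surv = kaplan_meier(time, event)
print(1 - surv[-1])                       # estimated cumulative mortality by 2 years
```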
Abstract:
AIMS Polypharmacy is associated with adverse events and multimorbidity, but data are limited on its association with specific comorbidities in primary care settings. We measured the prevalence of polypharmacy and inappropriate prescribing, and assessed the association of polypharmacy with specific comorbidities. METHODS We did a cross-sectional analysis of 1002 patients aged 50-80 years followed in Swiss university primary care settings. We defined polypharmacy as ≥5 long-term prescribed drugs and multimorbidity as ≥2 comorbidities. We used logistic mixed-effects regression to assess the association of polypharmacy with the number of comorbidities, multimorbidity, specific sets of comorbidities, potentially inappropriate prescribing (PIP) and potential prescribing omission (PPO). We used multilevel mixed-effects Poisson regression to assess the association of the number of drugs with the same parameters. RESULTS Patients (mean age 63.5 years, 67.5% with ≥2 comorbidities, 37.0% with ≥5 drugs) had a mean of 3.9 (range 0-17) drugs. Age, BMI, multimorbidity, hypertension, diabetes mellitus, chronic kidney disease, and cardiovascular diseases were independently associated with polypharmacy. The association was particularly strong for hypertension (OR 8.49, 95% CI 5.25-13.73), multimorbidity (OR 6.14, 95% CI 4.16-9.08), and oldest age (75-80 years: OR 4.73, 95% CI 2.46-9.10 vs. 50-54 years). The prevalence of PPO was 32.2%, and PIP was more frequent among participants with polypharmacy (9.3% vs. 3.2%, p<0.006). CONCLUSIONS Polypharmacy is common in university primary care settings, is strongly associated with hypertension, diabetes mellitus, chronic kidney disease and cardiovascular diseases, and increases potentially inappropriate prescribing. Multimorbid patients should be included in further trials for developing adapted guidelines and avoiding inappropriate prescribing.
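The odds ratios reported above come from logistic regression of polypharmacy on patient characteristics; the sketch below shows the basic computation (OR = exp(coefficient) with its confidence interval) using a plain logistic model on synthetic data, whereas the study additionally used mixed effects to account for clustering. The variable names and values are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic illustration: the covariates and their effects are invented
rng = np.random.default_rng(4)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(50, 81, n),
    "hypertension": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
})
logit_p = -6 + 0.05 * df["age"] + 1.8 * df["hypertension"] + 0.9 * df["diabetes"]
df["polypharmacy"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Plain logistic regression (the study additionally used a random effect for clustering)
fit = smf.logit("polypharmacy ~ age + hypertension + diabetes", data=df).fit(disp=False)
odds_ratios = pd.concat([np.exp(fit.params), np.exp(fit.conf_int())], axis=1)
odds_ratios.columns = ["OR", "2.5%", "97.5%"]
print(odds_ratios)
```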
Abstract:
BACKGROUND The Kato-Katz technique is recommended for the diagnosis of helminth infections in epidemiological surveys, drug efficacy studies and monitoring of control interventions. We assessed the comparability of the average amount of faeces generated by three Kato-Katz templates included in test kits from two different providers. METHODS Nine hundred Kato-Katz thick smear preparations were done, 300 per kit. Empty slides, slides plus Kato-Katz template filled with stool, and slides plus stool after careful removal of the template were weighed to the nearest 0.1 mg. The average amount of stool generated on the slide was calculated for each template, stratified by standard categories of stool consistency (i.e. mushy, soft, sausage-shaped, hard and clumpy). RESULTS The average amount of stool generated on slides was 40.7 mg (95% confidence interval (CI): 40.0-41.4 mg), 40.3 mg (95% CI: 39.7-40.9 mg) and 42.8 mg (95% CI: 42.2-43.3 mg) for the standard Vestergaard Frandsen template and for two different templates from the Chinese Center for Disease Control and Prevention (China CDC), respectively. Mushy stool resulted in considerably lower average weights when the Vestergaard Frandsen (37.0 mg; 95% CI: 34.9-39.0 mg) or new China CDC template (37.4 mg; 95% CI: 35.9-38.9 mg) was used, compared to the old China CDC template (42.2 mg; 95% CI: 40.7-43.7 mg) and to the other stool consistency categories. CONCLUSION The average amount of stool generated by the three Kato-Katz templates was similar (40.3-42.8 mg). Since the multiplication factor is somewhat arbitrary and small changes have little effect on infection intensity categories, it is suggested that the standard multiplication factor of 24 be kept for the calculation of eggs per gram of faeces for all investigated templates.
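The multiplication factor of 24 corresponds to a template delivering roughly 1/24 g (about 41.7 mg) of stool. The sketch below shows the eggs-per-gram conversion and the template-specific factors implied by the mean weights reported above; the example egg count is hypothetical.

```python
# Eggs per gram (EPG) from a Kato-Katz slide count
STANDARD_FACTOR = 24            # assumes ~41.7 mg of stool per template (1/24 g)

def epg(egg_count, factor=STANDARD_FACTOR):
    """Convert the egg count on one Kato-Katz slide to eggs per gram of faeces."""
    return egg_count * factor

def template_factor(mean_stool_mg):
    """Exact multiplication factor implied by a measured mean template weight."""
    return 1000 / mean_stool_mg

# Mean template weights reported above (mg) and the factors they would imply
for name, mg in [("Vestergaard Frandsen", 40.7), ("China CDC template 1", 40.3), ("China CDC template 2", 42.8)]:
    print(name, round(template_factor(mg), 1), "vs. standard", STANDARD_FACTOR)

print(epg(12))                  # e.g. 12 eggs on the slide -> 288 EPG with the standard factor
```

The implied factors (roughly 23-25) stay close to 24, which is the basis for the conclusion that the standard factor can be kept for all three templates.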
Abstract:
PURPOSE To evaluate the effect of image contrast and color settings on the assessment of retinal structures and morphology in spectral-domain optical coherence tomography. METHODS Two hundred and forty-eight Spectralis spectral-domain optical coherence tomography B-scans of 62 patients were analyzed by 4 readers. B-scans were extracted in 4 settings: W + N = white background with black image at normal contrast 9; W + H = white background with black image at maximum contrast 16; B + N = black background with white image at normal contrast 12; B + H = black background with white image at maximum contrast 16. Readers analyzed the images to identify morphologic features. Interreader correlation was calculated. Differences between Fleiss kappa correlation coefficients were examined using the bootstrap method. Any setting with a significantly higher correlation coefficient was deemed superior for evaluating specific features. RESULTS Correlation coefficients differed among settings. No single setting was superior for all spectral-domain optical coherence tomography parameters (P = 0.3773). Some variables showed no differences among settings. Hard exudates and subretinal fluid were best seen with B + H (κ = 0.46, P = 0.0237 and κ = 0.78, P = 0.002, respectively). Microaneurysms were best seen with W + N (κ = 0.56, P = 0.025). The vitreomacular interface, enhanced transmission signal, and epiretinal membrane were best identified using all color/contrast settings together (κ = 0.44, P = 0.042; κ = 0.57, P = 0.01; and κ = 0.62, P ≤ 0.0001). CONCLUSION Contrast and background affect the evaluation of retinal structures on spectral-domain optical coherence tomography images. No single setting was superior for all features, though certain changes were best seen with specific settings.
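Fleiss' kappa and a bootstrap comparison between settings can be computed as in the sketch below, which resamples B-scans with replacement; the rating tables are synthetic and the pairing of settings is illustrative, not the study's data.

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' kappa for an N x k table of category counts (N items, k categories),
    with the same number of raters per item."""
    ratings = np.asarray(ratings, float)
    n = ratings.sum(axis=1)[0]                     # raters per item (assumed constant)
    p_j = ratings.sum(axis=0) / ratings.sum()      # overall category proportions
    p_i = (np.sum(ratings ** 2, axis=1) - n) / (n * (n - 1))
    p_bar, p_e = p_i.mean(), np.sum(p_j ** 2)
    return (p_bar - p_e) / (1 - p_e)

def bootstrap_kappa_diff(table_a, table_b, n_boot=2000, seed=0):
    """Bootstrap the difference in Fleiss' kappa between two display settings by
    resampling items (B-scans) with replacement."""
    rng = np.random.default_rng(seed)
    n_items = len(table_a)
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n_items, n_items)
        diffs[b] = fleiss_kappa(table_a[idx]) - fleiss_kappa(table_b[idx])
    return np.quantile(diffs, [0.025, 0.975])

# Hypothetical data: 4 readers grading a binary feature (absent/present) on 248 B-scans
rng = np.random.default_rng(5)
present = rng.integers(0, 5, 248)                  # how many of the 4 readers marked "present"
table_bh = np.column_stack([4 - present, present]) # counts per category for one setting
table_wn = table_bh[rng.permutation(248)]          # a second, shuffled table for another setting
print(fleiss_kappa(table_bh), bootstrap_kappa_diff(table_bh, table_wn))
```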