65 results for "Field testing and monitoring"
Abstract:
Potential desiccation polygons (PDPs) are polygonal surface patterns that are a common feature in Noachian-to-Hesperian-aged phyllosilicate- and chloride-bearing terrains and have been observed at size scales ranging from centimeters wide (by current rovers) to tens of meters wide. The global distribution of PDPs shows that they share certain traits in morphology and geologic setting that can aid identification and distinction from fracturing patterns caused by other processes. They are mostly associated with sedimentary deposits that display spectral evidence for the presence of Fe/Mg smectites, Al-rich smectites, or, less commonly, kaolinites, carbonates, and sulfates. In addition, PDPs may indicate paleolacustrine environments, which are of high interest for planetary exploration, and their presence implies that the fractured units are rich in smectite minerals that may have been deposited in a standing body of water. A collective synthesis with new data, particularly from the HiRISE camera, suggests that desiccation cracks may be more common on the surface of Mars than previously thought. A review of terrestrial research on desiccation processes, with emphasis on the theoretical background, field studies, and modeling constraints, is also presented and shown to be consistent with and relevant to certain polygonal patterns on Mars.
Abstract:
BACKGROUND HIV infection is a known risk factor for cancer, but little is known about HIV testing patterns and the burden of HIV infection in cancer patients. We did a cross-sectional analysis to identify predictors of prior HIV testing and to quantify the burden of HIV in black cancer patients in Johannesburg, South Africa. METHODS The Johannesburg Cancer Case-control Study (JCCCS) recruits newly diagnosed black cancer patients attending public referral hospitals for oncology and radiation therapy in Johannesburg. All adult cancer patients enrolled into the JCCCS from November 2004 to December 2009 and interviewed on previous HIV testing were included in the analysis. Patients were independently tested for HIV-1 using a single ELISA test. The prevalence of prior HIV testing, of HIV infection, and of undiagnosed HIV infection was calculated. Multivariate logistic regression models were fitted to identify factors associated with prior HIV testing. RESULTS A total of 5436 cancer patients were tested for HIV, of whom 1833 [33.7% (95% CI=32.5-35.0)] were HIV-positive. Three-quarters of patients (4092 patients) had ever been tested for HIV. The total prevalence of undiagnosed HIV infection was 11.5% (10.7-12.4), with 34% (32.0-36.3) of the 1833 patients who tested HIV-positive unaware of their infection. Men >49 years [OR 0.49 (0.39-0.63)] and those residing in rural areas [OR 0.61 (0.39-0.97)] were less likely to have been previously tested for HIV. Men with at least a secondary education [OR 1.79 (1.11-2.90)] and those interviewed in recent years [OR 4.13 (2.62-6.52)] were more likely to have prior testing. Women >49 years [OR 0.33 (0.27-0.41)] were less likely to have been previously tested for HIV. In women, having children <5 years [OR 2.59 (2.04-3.29)], hormonal contraceptive use [OR 1.33 (1.09-1.62)], having at least a secondary education [OR 2.08 (1.45-2.97)], and recent year of interview [OR 6.04 (4.45-8.2)] were independently associated with previous HIV testing.
CONCLUSIONS In a study of newly diagnosed black cancer patients in Johannesburg, over a third of HIV-positive patients were unaware of their HIV status. In South Africa, black cancer patients should be targeted for opt-out HIV testing.
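The headline prevalence and its confidence interval can be reproduced from the counts given in the abstract (1833 HIV-positive of 5436 tested); a minimal sketch using a normal-approximation (Wald) interval, which matches the reported figures:

```python
import math

def prevalence_ci(positives, n, z=1.96):
    """Point prevalence with a normal-approximation (Wald) 95% CI."""
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)
    return p, p - z * se, p + z * se

p, lo, hi = prevalence_ci(1833, 5436)
print(f"{p*100:.1f}% (95% CI {lo*100:.1f}-{hi*100:.1f})")  # 33.7% (95% CI 32.5-35.0)
```

The Wald interval is an assumption here (the abstract does not state its CI method), but at this sample size it agrees with the published 32.5-35.0 range.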
Abstract:
BACKGROUND: We evaluated the feasibility of an augmented robotics-assisted tilt table (RATT) for incremental cardiopulmonary exercise testing (CPET) and exercise training in dependent-ambulatory stroke patients. METHODS: Stroke patients (Functional Ambulation Category ≤ 3) underwent familiarization, an incremental exercise test (IET) and a constant load test (CLT) on separate days. A RATT equipped with force sensors in the thigh cuffs, a work rate estimation algorithm and real-time visual feedback to guide the exercise work rate was used. Feasibility assessment considered technical feasibility, patient tolerability, and cardiopulmonary responsiveness. RESULTS: Eight patients (4 female) aged 58.3 ± 9.2 years (mean ± SD) were recruited and all completed the study. For IETs, peak oxygen uptake (V'O2peak), peak heart rate (HRpeak) and peak work rate (WRpeak) were 11.9 ± 4.0 ml/kg/min (45 % of predicted V'O2max), 117 ± 32 beats/min (72 % of predicted HRmax) and 22.5 ± 13.0 W, respectively. Peak ratings of perceived exertion (RPE) were in the range "hard" to "very hard". All 8 patients reached their limit of functional capacity in terms of either their cardiopulmonary or neuromuscular performance. A ventilatory threshold (VT) was identified in 7 patients and a respiratory compensation point (RCP) in 6 patients: mean V'O2 at VT and RCP was 8.9 and 10.7 ml/kg/min, respectively, which represent 75 % (VT) and 85 % (RCP) of mean V'O2peak. Incremental CPET provided sufficient information to satisfy the responsiveness criteria and identification of key outcomes in all 8 patients. For CLTs, mean steady-state V'O2 was 6.9 ml/kg/min (49 % of V'O2 reserve), mean HR was 90 beats/min (56 % of HRmax), RPEs were > 2, and all patients maintained the active work rate for 10 min: these values meet recommended intensity levels for bouts of training.
CONCLUSIONS: The augmented RATT is deemed feasible for incremental cardiopulmonary exercise testing and exercise training in dependent-ambulatory stroke patients: the approach was found to be technically implementable, acceptable to the patients, and it showed substantial cardiopulmonary responsiveness. This work has clinical implications for patients with severe disability who otherwise are not able to be tested.
Abstract:
We address ethical consumption using a natural field experiment on the actual purchase of Fair Trade (FT) coffee in three supermarkets in Germany. Based on a quasi-experimental before-and-after design, the effects of three different treatments – information, a 20% price reduction, and a moral appeal – are analyzed. Sales data cover actual ethical purchase behavior and avoid problems of social desirability, but they offer only limited insights into the motivations of individual consumers. We therefore complemented the field experiment with a customer survey that allows us to contrast observed (ethical) buying behavior with self-reported FT consumption. Results from the experiment suggest that only the price reduction had the expected positive and statistically significant effect on FT consumption.
Abstract:
OBJECTIVES Primary care physicians (PCPs) should prescribe faecal immunochemical testing (FIT) or colonoscopy for colorectal cancer screening based on their patient's values and preferences. However, there are wide variations between PCPs in the screening method prescribed. The objective was to assess the impact of an educational intervention on PCPs' intent to offer FIT or colonoscopy on an equal basis. DESIGN Survey before and after training seminars, with a parallel comparison through a mailed survey to PCPs not attending the training seminars. SETTING All PCPs in the canton of Vaud, Switzerland. PARTICIPANTS Of 592 eligible PCPs, 133 (22%) attended a seminar and 106 (80%) completed both surveys. 109 (24%) PCPs who did not attend the seminars returned the mailed survey. INTERVENTION A 2-hour interactive seminar targeting PCP knowledge, skills and attitudes regarding offering a choice of colorectal cancer (CRC) screening options. OUTCOME MEASURES The primary outcome was PCP intention of having their patients screened with FIT and colonoscopy in equal proportions (between 40% and 60% each). Secondary outcomes were the perceived role of PCPs in screening decisions (from paternalistic to informed decision-making) and correct answer to a clinical vignette. RESULTS Before the seminars, 8% of PCPs reported that they had equal proportions of their patients screened for CRC by FIT and colonoscopy; after the seminar, 33% foresaw having their patients screened in equal proportions (p<0.001). Among those not attending, there was no change (13% vs 14%, p=0.8). Of those attending, there was no change in their perceived role in screening decisions, while the proportion responding correctly to a clinical vignette increased (from 88% to 99%, p<0.001). CONCLUSIONS An interactive training seminar increased the proportion of physicians with the intention to prescribe FIT and colonoscopy in equal proportions.
Abstract:
Bovine spongiform encephalopathy (BSE) rapid tests and routine BSE-testing laboratories are subject to strict regulations for approval. Due to the lack of BSE-positive control samples, however, full assay validation at the level of individual test runs and continuous monitoring of test performance on-site is difficult. Most rapid tests use synthetic prion protein peptides, but it is not known to what extent they reflect the assay performance on field samples, and whether they are sufficient to indicate on-site assay quality problems. To address this question we compared the test scores of the provided kit peptide controls to those of standardized weak BSE-positive tissue samples in individual test runs, as well as continuously over time by quality control charts, in two widely used BSE rapid tests. Our results reveal only a weak correlation between the weak positive tissue control and the peptide control scores. We identified kit-lot-related shifts in the assay performances that were not reflected by the peptide control scores. Conversely, not all shifts indicated by the peptide control scores reflected a shift in the assay performance. In conclusion, these data highlight that the use of the kit peptide controls for continuous quality control purposes may result in unjustified rejection or acceptance of test runs. However, standardized weak positive tissue controls in combination with Shewhart-CUSUM control charts appear to be reliable in continuously monitoring assay performance on-site to identify undesired deviations.
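The CUSUM charting idea used for on-site monitoring can be illustrated with a minimal two-sided tabular CUSUM sketch; the control scores, target, and chart parameters below are hypothetical, not taken from the study:

```python
def tabular_cusum(scores, target, k, h):
    """Two-sided tabular CUSUM: return indices of test runs where the
    accumulated deviation from the target score exceeds the decision limit h.
    k is the allowance (slack) per run."""
    s_hi = s_lo = 0.0
    alarms = []
    for i, x in enumerate(scores):
        s_hi = max(0.0, s_hi + (x - target) - k)  # accumulates upward drift
        s_lo = max(0.0, s_lo + (target - x) - k)  # accumulates downward drift
        if s_hi > h or s_lo > h:
            alarms.append(i)
    return alarms

# Hypothetical weak-positive control scores with a downward shift after run 3
scores = [10.1, 9.9, 10.0, 10.2, 8.1, 7.9, 8.0, 8.2]
print(tabular_cusum(scores, target=10.0, k=0.5, h=4.0))  # [6, 7]
```

The small persistent shift goes unflagged for two runs and then triggers an alarm once the deviations have accumulated past h, which is exactly the kind of kit-lot-related drift a single-run control comparison can miss.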
Abstract:
Contemporary models of self-regulated learning emphasize the role of distal motivational factors for students' achievement, on the one hand, and the proximal role of metacognitive monitoring and control for learning and test outcomes, on the other. In the present study, two larger samples of elementary school children (9- and 11-year-olds) were included, and their mastery-oriented motivation, metacognitive monitoring and control skills were integrated into structural equation models testing and comparing the relative impact of these different constituents of self-regulated learning. First, results indicate that the factorial structure of monitoring, control and mastery motivation was invariant across the two age groups. Of specific interest was the finding that there were age-dependent structural links between monitoring, control, and test performance (closer links in the older compared to the younger children), with high confidence yielding a direct and positive effect on test performance and a direct and negative effect on adequate control behavior in the achievement test. Mastery-oriented motivation was not found to be substantially associated with monitoring (confidence), control (detection and correction of errors), or test performance, underscoring the importance of proximal, metacognitive factors for test performance in elementary school children.
Abstract:
Background Existing lower-limb, region-specific, patient-reported outcome measures have clinimetric limitations, including limitations in psychometric characteristics (eg, lack of internal consistency, lack of responsiveness, measurement error) and the lack of reported practical and general characteristics. A new patient-reported outcome measure, the Lower Limb Functional Index (LLFI), was developed to address these limitations. Objective The purpose of this study was to overcome recognized deficiencies in existing lower-limb, region-specific, patient-reported outcome measures through: (1) development of a new lower-extremity outcome scale (ie, the LLFI) and (2) evaluation of the clinimetric properties of the LLFI using the Lower Extremity Functional Scale (LEFS) as a criterion measure. Design This was a prospective observational study. Methods The LLFI was developed in a 3-stage process of: (1) item generation, (2) item reduction with an expert panel, and (3) pilot field testing (n=18) for reliability, responsiveness, and sample size requirements for a larger study. The main study used a convenience sample (n=127) from 10 physical therapy clinics. Participants completed the LLFI and LEFS every 2 weeks for 6 weeks and then every 4 weeks until discharge. Data were used to assess the psychometric, practical, and general characteristics of the LLFI and the LEFS. The characteristics also were evaluated for overall performance using the Measurement of Outcome Measures and Bot clinimetric assessment scales. Results The LLFI and LEFS demonstrated a single-factor structure, comparable reliability (intraclass correlation coefficient [2,1]=.97), scale width, and high criterion validity (Pearson r=.88, with 95% confidence interval [CI]). Clinimetric performance was higher for the LLFI compared with the LEFS on the Measurement of Outcome Measures scale (96% and 95%, respectively) and the Bot scale (100% and 83%, respectively). 
The LLFI, compared with the LEFS, had improved responsiveness (standardized response mean=1.75 and 1.64, respectively), minimal detectable change with 90% CI (6.6% and 8.1%, respectively), and internal consistency (α=.91 and .95, respectively), as well as readability with reduced user error and completion and scoring times. Limitations Limitations of the study were that only participants recruited from outpatient physical therapy clinics were included and that no specific conditions or diagnostic subgroups were investigated. Conclusion The LLFI demonstrated sound clinimetric properties. There was lower response error, efficient completion and scoring, and improved responsiveness and overall performance compared with the LEFS. The LLFI is suitable for assessment of lower-limb function.
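The responsiveness and error statistics reported above follow standard formulas: the standardized response mean (SRM) is the mean change score divided by the standard deviation of change, and minimal detectable change at 90% confidence is MDC90 = 1.645 × SEM × √2 with SEM = SD × √(1 − ICC). A minimal sketch with hypothetical scores (not the study data):

```python
import math
import statistics

def srm(change_scores):
    """Standardized response mean: mean change / SD of change scores."""
    return statistics.mean(change_scores) / statistics.stdev(change_scores)

def mdc90(baseline_sd, icc):
    """Minimal detectable change at 90% confidence, from test-retest
    reliability (ICC) and the baseline standard deviation."""
    sem = baseline_sd * math.sqrt(1 - icc)  # standard error of measurement
    return 1.645 * sem * math.sqrt(2)

# Hypothetical change scores and reliability figures
print(round(srm([8, 10, 12, 9, 11]), 2))                  # 6.32
print(round(mdc90(baseline_sd=10.0, icc=0.97), 1))        # 4.0
```

With the ICC of .97 reported for both scales, a baseline SD of 10 points would give an MDC90 of about 4 points; the study expresses its MDC90 values as percentages of scale width.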
Abstract:
This publication offers concrete suggestions for implementing an integrative and learning-oriented approach to agricultural extension with the goal of fostering sustainable development. It targets governmental and non-governmental organisations, development agencies, and extension staff working in the field of rural development. The book looks into the conditions and trends that influence extension today, and outlines new challenges and necessary adaptations. It offers a basic reflection on the goals, the criteria for success and the form of a state-of-the-art approach to extension. The core of the book consists of a presentation of Learning for Sustainability (LforS), an example of an integrative, learning-oriented approach that is based on three crucial elements: stakeholder dialogue, knowledge management, and organizational development. Awareness raising and capacity building, social mobilization, and monitoring & evaluation are additional building blocks. The structure and organisation of the LforS approach as well as a selection of appropriate methods and tools are presented. The authors also address key aspects of developing and managing a learning-oriented extension approach. The book illustrates how LforS can be implemented by presenting two case studies, one from Madagascar and one from Mongolia. It addresses conceptual questions and at the same time it is practice-oriented. In contrast to other extension approaches, LforS does not limit its focus to production-related aspects and the development of value chains: it also addresses livelihood issues in a broad sense. With its focus on learning processes LforS seeks to create a better understanding of the links between different spheres and different levels of decision-making; it also seeks to foster integration of the different actors’ perspectives.
Abstract:
Mapping and monitoring are believed to provide an early warning sign to determine when to stop tumor removal to avoid mechanical damage to the corticospinal tract (CST). The objective of this study was to systematically compare subcortical monopolar stimulation thresholds (1-20 mA) with direct cortical stimulation (DCS)-motor evoked potential (MEP) monitoring signal abnormalities and to correlate both with new postoperative motor deficits. The authors sought to define a mapping threshold and DCS-MEP monitoring signal changes indicating a minimal safe distance from the CST.
Abstract:
Determination of an 'anaerobic threshold' plays an important role in the interpretation of an incremental cardiopulmonary exercise test and describes prominent changes in blood lactate accumulation with increasing workload. Two lactate thresholds are discerned during cardiopulmonary exercise testing and used for physical fitness estimation or training prescription. A multitude of different terms are, however, found in the literature describing the two thresholds. Furthermore, the term 'anaerobic threshold' is used synonymously for both the 'first' and the 'second' lactate threshold, creating great potential for confusion. The aim of this review is therefore to order terms, present threshold concepts, and describe methods for lactate threshold determination using a three-phase model, with reference to the historical and physiological background, to facilitate the practical application of the term 'anaerobic threshold'.
Abstract:
BACKGROUND: The provision of highly active antiretroviral therapy (HAART) in resource-limited settings follows a public health approach, which is characterised by a limited number of regimens and the standardisation of clinical and laboratory monitoring. In industrialized countries, doctors prescribe from the full range of available antiretroviral drugs, supported by resistance testing and frequent laboratory monitoring. We compared virologic response, changes to first-line regimens, and mortality in HIV-infected patients starting HAART in South Africa and Switzerland. METHODS AND FINDINGS: We analysed data from the Swiss HIV Cohort Study and two HAART programmes in townships of Cape Town, South Africa. We included treatment-naïve patients aged 16 y or older who had started treatment with at least three drugs since 2001, and excluded intravenous drug users. Data from a total of 2,348 patients from South Africa and 1,016 patients from the Swiss HIV Cohort Study were analysed. Median baseline CD4+ T cell counts were 80 cells/µl in South Africa and 204 cells/µl in Switzerland. In South Africa, patients started with one of four first-line regimens, which was subsequently changed in 514 patients (22%). In Switzerland, 36 first-line regimens were used initially, and these were changed in 539 patients (53%). In most patients HIV-1 RNA was suppressed to 500 copies/ml or less within one year: 96% (95% confidence interval [CI] 95%-97%) in South Africa and 96% (94%-97%) in Switzerland, and 26% (22%-29%) and 27% (24%-31%), respectively, developed viral rebound within two years. Mortality was higher in South Africa than in Switzerland during the first months of HAART: adjusted hazard ratios were 5.90 (95% CI 1.81-19.2) during months 1-3 and 1.77 (0.90-3.50) during months 4-24. CONCLUSIONS: Compared to the highly individualised approach in Switzerland, programmatic HAART in South Africa resulted in similar virologic outcomes, with relatively few changes to initial regimens.
Further innovation and resources are required in South Africa to both achieve more timely access to HAART and improve the prognosis of patients who start HAART with advanced disease.