60 results for Monitoring methods
Abstract:
Most butterfly monitoring protocols rely on counts along transects (Pollard walks) to generate species abundance indices and track population trends. It is still too often ignored that a population count results from two processes: the biological process (true abundance) and the statistical process (our ability to properly quantify abundance). Because individual detectability tends to vary in space (e.g., among sites) and time (e.g., among years), it remains unclear whether index counts truly reflect population sizes and trends. This study compares capture-mark-recapture (absolute abundance) and count-index (relative abundance) monitoring methods in three species (Maculinea nausithous and Iolana iolas: Lycaenidae; Minois dryas: Satyridae) in contrasting habitat types. We demonstrate that intraspecific variability in individual detectability under standard monitoring conditions is probably the rule rather than the exception, which calls into question the reliability of count-based indices for estimating and comparing population abundance. Our results suggest that the accuracy of count-based methods depends heavily on the ecology and behavior of the target species, as well as on the type of habitat in which surveys take place. Monitoring programs designed to assess the abundance and trends of butterfly populations should incorporate a measure of detectability. We discuss the relative advantages and drawbacks of current monitoring methods and analytical approaches with respect to the characteristics of the species under scrutiny and the resources available.
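As a minimal illustration of why a raw count and a capture-mark-recapture (CMR) estimate can tell different stories, the sketch below applies the simple two-sample Chapman/Lincoln-Petersen estimator to two hypothetical sites; the study itself will have used more elaborate CMR models, and all numbers here are invented.

```python
# Minimal sketch: a raw transect count vs. a Lincoln-Petersen
# capture-mark-recapture (CMR) estimate of absolute abundance.
# Values are illustrative, not from the study.

def lincoln_petersen(n1: int, n2: int, m2: int) -> float:
    """Chapman's bias-corrected Lincoln-Petersen estimator.

    n1: individuals marked on the first visit
    n2: individuals captured on the second visit
    m2: marked individuals among the second-visit captures
    """
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

# Two hypothetical sites with the same raw count but different
# detectability: a count index alone would rank them as equal.
for site, (n1, n2, m2) in {"open meadow": (40, 40, 20),
                           "dense scrub": (40, 40, 8)}.items():
    n_hat = lincoln_petersen(n1, n2, m2)
    detectability = n1 / n_hat  # rough per-visit detection probability
    print(f"{site}: count = {n2}, CMR estimate = {n_hat:.0f}, "
          f"detection ≈ {detectability:.2f}")
```

Both sites yield a count of 40, yet the CMR estimates differ by more than a factor of two, which is exactly the detectability problem the abstract describes.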
Abstract:
BACKGROUND The cost-effectiveness of routine viral load (VL) monitoring of HIV-infected patients on antiretroviral therapy (ART) depends on various factors that differ between settings and across time. Low-cost point-of-care (POC) tests for VL are in development and may make routine VL monitoring affordable in resource-limited settings. We developed a software tool to study the cost-effectiveness of switching to second-line ART under different monitoring strategies, with a focus on POC-VL monitoring. METHODS We used a mathematical model to simulate cohorts of patients from the start of ART until death. We modeled 13 strategies (no second-line ART, clinical monitoring, CD4 monitoring (with or without targeted VL), POC-VL monitoring, and laboratory-based VL monitoring, at different frequencies). We included one scenario with identical failure rates across strategies and one in which routine VL monitoring reduces the risk of failure. We compared lifetime costs and averted disability-adjusted life-years (DALYs) and calculated incremental cost-effectiveness ratios (ICERs). We developed an Excel tool to update the results of the model for varying unit costs and cohort characteristics, and conducted several sensitivity analyses varying the input costs. RESULTS Introducing second-line ART had an ICER of US$1651-1766/DALY averted. Compared with clinical monitoring, the ICER of CD4 monitoring was US$1896-5488/DALY averted and that of VL monitoring US$951-5813/DALY averted. We found no difference between POC- and laboratory-based VL monitoring, except at the highest measurement frequency (every 6 months), where laboratory-based testing was more effective. Targeted VL monitoring was on the cost-effectiveness frontier only if the difference between first- and second-line costs remained large and if we assumed that routine VL monitoring does not prevent failure. CONCLUSION Compared with the less expensive strategies, the cost-effectiveness of routine VL monitoring depends essentially on the cost of second-line ART. Our Excel tool is useful for determining optimal monitoring strategies for specific settings with specific sex and age distributions and unit costs.
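The core ICER logic such a model relies on can be sketched in a few lines: order strategies by effectiveness, drop strongly dominated ones, and compute incremental ratios along the frontier. Strategy names echo the abstract, but all costs and DALY figures below are invented, and extended dominance is ignored for brevity.

```python
# Sketch of the incremental cost-effectiveness ratio (ICER) logic:
# ICER = (cost_A - cost_B) / (DALYs_averted_A - DALYs_averted_B),
# computed along the cost-effectiveness frontier. All numbers are
# made up for illustration.

strategies = [
    # (name, lifetime cost in US$, DALYs averted), per patient
    ("no 2nd-line",   4000, 0.0),
    ("clinical",      6500, 1.5),
    ("CD4",           8200, 2.3),
    ("lab-based VL", 11000, 3.1),
]

# Sort by effectiveness and keep only non-dominated strategies.
strategies.sort(key=lambda s: s[2])
frontier = []
for name, cost, dalys in strategies:
    # Drop predecessors that are strongly dominated: they cost at
    # least as much while averting fewer DALYs.
    while frontier and frontier[-1][1] >= cost:
        frontier.pop()
    frontier.append((name, cost, dalys))

# ICERs are computed between consecutive frontier strategies.
for (_, c0, d0), (name, c1, d1) in zip(frontier, frontier[1:]):
    icer = (c1 - c0) / (d1 - d0)
    print(f"{name}: ICER = US${icer:,.0f}/DALY averted")
```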
Abstract:
BACKGROUND AND AIMS The structured IBD Ahead 'Optimised Monitoring' programme was designed to obtain the opinion, insight and advice of gastroenterologists on optimising the monitoring of Crohn's disease activity in four settings: (1) assessment at diagnosis, (2) monitoring in symptomatic patients, (3) monitoring in asymptomatic patients, and (4) postoperative follow-up. For each of these settings, four monitoring methods were discussed: (a) symptom assessment, (b) endoscopy, (c) laboratory markers, and (d) imaging. Based on a literature search and expert opinion compiled during an international consensus meeting, recommendations were given to answer the question 'which diagnostic method, when, and how often'. The International IBD Ahead Expert Panel advised tailoring this guidance to the healthcare system and the specific prerequisites of each country. The IBD Ahead Swiss National Steering Committee proposes best-practice recommendations adapted for Switzerland. METHODS The IBD Ahead Steering Committee identified key questions and provided the Swiss Expert Panel with a structured literature search. The expert panel agreed on a set of statements. During an international expert meeting, the consolidated outcomes of the national meetings were merged into final statements agreed by the participating International and National Steering Committee members: the IBD Ahead 'Optimised Monitoring' Consensus. RESULTS A systematic assessment of symptoms, endoscopy findings and laboratory markers, with special emphasis on faecal calprotectin, is deemed necessary even in symptom-free patients. The choice of recommended imaging methods is adapted to the specific situation in Switzerland and highlights the importance of ultrasonography and magnetic resonance imaging alongside endoscopy. CONCLUSION The recommendations stress the importance of monitoring disease activity on a regular basis and by objective parameters, such as faecal calprotectin and endoscopy, with detailed documentation of findings. Physicians should not rely on symptoms alone and should adapt the monitoring schedule and choice of options to individual situations.
Abstract:
OBJECTIVES: Premature babies require supplementation with calcium and phosphorus to prevent metabolic bone disease of prematurity. To guide mineral supplementation, two methods of monitoring urinary excretion of calcium and phosphorus are used: urinary calcium or phosphorus concentration, and calcium/creatinine or phosphorus/creatinine ratios. We compare these two methods with regard to their agreement on the need for mineral supplementation. METHODS: Retrospective chart review of 230 premature babies with a birthweight <1500 g undergoing screening of urinary spot samples from day 21 of life and fortnightly thereafter. Hypothetical cut-off values for urine calcium or phosphorus concentration (1 mmol/l) and for the urine calcium/creatinine ratio (0.5 mol/mol) or phosphorus/creatinine ratio (4 mol/mol) were applied to the sample results. The two methods were compared for agreement on whether or not to supplement the respective minerals. Multivariate general linear models were used to identify patient characteristics predicting discordant results. RESULTS: The two methods disagreed on the indication for calcium supplementation in 24.8% of cases, and for phosphorus in 8.8%. Total daily calcium intake was the only patient characteristic associated with discordant results. CONCLUSIONS: Regarding the decision to supplement the respective mineral, agreement between urinary mineral concentration and the mineral/creatinine ratio is moderate for calcium and good for phosphorus. The results do not allow either method to be identified as superior for deciding which babies require calcium and/or phosphorus supplements.
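A minimal sketch of the comparison being made: apply both decision rules, at the cut-offs stated above, to the same spot-urine results and count how often they agree. The sample values are invented, and the sketch assumes supplementation is indicated when excretion falls below the cut-off.

```python
# Sketch: applying the two decision rules from the study to spot-urine
# results and measuring how often they agree on supplementation.
# Cut-offs are those stated in the abstract; the sample data are invented.

CA_CONC_CUTOFF = 1.0    # urine calcium concentration, mmol/l
CA_RATIO_CUTOFF = 0.5   # calcium/creatinine ratio, mol/mol

# (calcium concentration mmol/l, calcium/creatinine ratio mol/mol)
samples = [(0.4, 0.3), (0.8, 0.7), (1.5, 0.4), (1.2, 0.9), (0.3, 0.2)]

agree = 0
for conc, ratio in samples:
    # Assumption: low urinary excretion suggests the mineral is needed,
    # so supplementation is indicated below the cut-off.
    supplement_by_conc = conc < CA_CONC_CUTOFF
    supplement_by_ratio = ratio < CA_RATIO_CUTOFF
    agree += supplement_by_conc == supplement_by_ratio

print(f"Agreement on calcium supplementation: {agree}/{len(samples)} "
      f"({100 * agree / len(samples):.0f}%)")
```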
Abstract:
PURPOSE Therapeutic drug monitoring of patients receiving once-daily aminoglycoside therapy can be performed using pharmacokinetic (PK) formulas or Bayesian calculations. While these methods produce comparable results, their performance has never been checked against full PK profiles. We performed a PK study in order to compare both methods and to determine the best time-points for estimating AUC0-24 and peak concentrations (Cmax). METHODS We obtained full PK profiles in 14 patients receiving once-daily aminoglycoside therapy. PK parameters were calculated with PKSolver using non-compartmental methods. The calculated PK parameters were then compared with parameters estimated using an algorithm based on two serum concentrations (two-point method) or the software TCIWorks (Bayesian method). RESULTS For tobramycin and gentamicin, AUC0-24 and Cmax could be reliably estimated using a first serum concentration obtained at 1 h and a second one between 8 and 10 h after the start of the infusion. The two-point and the Bayesian methods produced similar results. For amikacin, AUC0-24 could be reliably estimated by both methods; Cmax was underestimated by 10-20% by the two-point method and by up to 30%, with large variation, by the Bayesian method. CONCLUSIONS The ideal time-points for therapeutic drug monitoring of once-daily administered aminoglycosides are 1 h after the start of a 30-min infusion for the first time-point and 8-10 h after the start of the infusion for the second. The duration of the infusion and accurate registration of the times of blood drawing are essential for obtaining precise predictions.
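A hedged sketch of a generic two-point method under a one-compartment, log-linear elimination assumption, using the sampling scheme recommended above (1 h and 8-10 h after the start of a 30-min infusion); the study's actual algorithm may differ, and the concentrations below are not from the paper.

```python
import math

# Generic two-point sketch (one-compartment, log-linear elimination).
# Sampling times follow the scheme from the study; the concentrations
# are invented for illustration.

t_inf = 0.5          # infusion duration, h
t1, c1 = 1.0, 18.0   # first sample: time (h), concentration (mg/l)
t2, c2 = 9.0, 2.5    # second sample: time (h), concentration (mg/l)

# Elimination rate constant from the two post-infusion levels.
k = math.log(c1 / c2) / (t2 - t1)

# Back-extrapolate to the end of the infusion to estimate Cmax.
c_max = c1 * math.exp(k * (t1 - t_inf))

# AUC0-24: linear trapezoid over the infusion + exponential decay after it.
auc_infusion = t_inf * c_max / 2
auc_decay = (c_max / k) * (1 - math.exp(-k * (24 - t_inf)))
auc_0_24 = auc_infusion + auc_decay

print(f"k = {k:.3f} /h, Cmax ≈ {c_max:.1f} mg/l, "
      f"AUC0-24 ≈ {auc_0_24:.0f} mg·h/l")
```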
Abstract:
Although there has been a significant decrease in caries prevalence in developed countries, the slower progression of dental caries requires methods capable of detecting and quantifying lesions at an early stage. The aim of this study was to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent 2095 laser fluorescence device [LF], DIAGNOdent 2190 pen [LFpen], and VistaProof fluorescence camera [FC]) in monitoring the progression of noncavitated caries-like lesions on smooth surfaces. Caries-like lesions were developed in 60 blocks of bovine enamel using a bacterial model of Streptococcus mutans and Lactobacillus acidophilus. Enamel blocks were evaluated by two independent examiners using the LF, LFpen, and FC at baseline (phase I), after the first cariogenic challenge (eight days) (phase II), and after the second cariogenic challenge (a further eight days) (phase III). Blocks were submitted to surface microhardness (SMH) and cross-sectional microhardness analyses. The intraclass correlation coefficient for intra- and interexaminer reproducibility ranged from 0.49 (FC) to 0.94 (LF/LFpen). SMH values decreased and fluorescence values increased significantly across the three phases. Higher values for sensitivity, specificity, and area under the receiver operating characteristic curve were observed for FC (phase II) and LFpen (phase III). A significant correlation was found between fluorescence values and SMH in all phases, and with integrated loss of surface hardness (ΔKHN) in phase III. In conclusion, fluorescence-based methods were effective in monitoring noncavitated caries-like lesions on smooth surfaces, with moderate correlation with SMH, allowing differentiation between sound and demineralized enamel.
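To make the reported validity metrics concrete, the sketch below computes sensitivity and specificity for a hypothetical fluorescence cut-off against a microhardness-based reference standard; the readings and the cut-off are invented, not the study's data.

```python
# Sketch: classifying enamel blocks as demineralized from a fluorescence
# reading at a chosen cut-off, validated against a microhardness-based
# reference standard. Readings and the cut-off are invented.

CUTOFF = 14.0  # fluorescence units; hypothetical decision threshold

# (fluorescence reading, truly demineralized per microhardness?)
blocks = [(8.0, False), (12.5, False), (15.0, True), (21.0, True),
          (13.0, True), (17.5, True), (9.5, False), (16.0, False)]

tp = sum(1 for v, dem in blocks if v >= CUTOFF and dem)
fn = sum(1 for v, dem in blocks if v < CUTOFF and dem)
tn = sum(1 for v, dem in blocks if v < CUTOFF and not dem)
fp = sum(1 for v, dem in blocks if v >= CUTOFF and not dem)

print(f"sensitivity = {tp / (tp + fn):.2f}, "
      f"specificity = {tn / (tn + fp):.2f}")
```

Sweeping the cut-off and plotting sensitivity against 1 - specificity would trace the receiver operating characteristic curve whose area the study reports.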
Abstract:
Acute liver failure (ALF) models in pigs have been widely used for evaluating newly developed liver support systems, but hardly any guidelines are available for the surgical methods and clinical management involved.
Abstract:
Introduction Acute hemodynamic instability increases morbidity and mortality. We investigated whether early non-invasive cardiac output monitoring enhances hemodynamic stabilization and improves outcome. Methods A multicenter, randomized controlled trial was conducted in three European university hospital intensive care units in 2006 and 2007. A total of 388 hemodynamically unstable patients identified during their first six hours in the intensive care unit (ICU) were randomized to receive either non-invasive cardiac output monitoring for 24 hrs (minimally invasive cardiac output/MICO group; n = 201) or usual care (control group; n = 187). The main outcome measure was the proportion of patients achieving hemodynamic stability within six hours of starting the study. Results The number of hemodynamic instability criteria at baseline (MICO group mean 2.0 (SD 1.0), control group 1.8 (1.0); P = .06) and severity of illness (SAPS II score; MICO group 48 (18), control group 48 (15); P = .86) were similar. At 6 hrs, 45 patients (22%) in the MICO group and 52 patients (28%) in the control group were hemodynamically stable (mean difference 5%; 95% confidence interval of the difference -3 to 14%; P = .24). Hemodynamic support with fluids and vasoactive drugs, and pulmonary artery catheter use (MICO group: 19%, control group: 26%; P = .11), were similar in the two groups. The median length of ICU stay was 2.0 (interquartile range 1.2 to 4.6) days in the MICO group and 2.5 (1.1 to 5.0) days in the control group (P = .38). Hospital mortality was 26% in the MICO group and 21% in the control group (P = .34). Conclusions Minimally invasive cardiac output monitoring added to usual care does not facilitate early hemodynamic stabilization in the ICU, nor does it alter hemodynamic support or outcome. Our results emphasize the need to evaluate technologies used to measure stroke volume and cardiac output, especially their impact on the process of care, before any large-scale outcome studies are attempted.
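The primary-outcome comparison can be reproduced from the numbers reported above (45/201 vs. 52/187 hemodynamically stable) with a plain Wald 95% confidence interval for a difference in proportions; the trial's own analysis may have used a different method, but the result matches the reported 5% (-3 to 14%).

```python
import math

# Reproducing the primary-outcome comparison from the numbers in the
# abstract with a plain Wald 95% CI for a difference in proportions.

stable_mico, n_mico = 45, 201
stable_ctrl, n_ctrl = 52, 187

p1, p2 = stable_mico / n_mico, stable_ctrl / n_ctrl
diff = p2 - p1
se = math.sqrt(p1 * (1 - p1) / n_mico + p2 * (1 - p2) / n_ctrl)
lo, hi = diff - 1.96 * se, diff + 1.96 * se

print(f"difference = {diff:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```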
Abstract:
Although sustainable land management (SLM) is widely promoted to prevent and mitigate land degradation and desertification, its monitoring and assessment (M&A) has received much less attention. This paper compiles methodological approaches that to date have been little reported in the literature. It draws lessons from these experiences and identifies common elements and future pathways as a basis for a global approach. The paper starts with local-level methods, where the World Overview of Conservation Approaches and Technologies (WOCAT) framework catalogues SLM case studies. This tool has been included in the local-level assessment of Land Degradation Assessment in Drylands (LADA) and in the EU-DESIRE project. Complementary site-based approaches can enhance an ecological, process-based understanding of SLM variation. At national and sub-national levels, a joint WOCAT/LADA/DESIRE spatial assessment based on land use systems identifies the status and trends of degradation and SLM, including causes, drivers and impacts on ecosystem services. Expert consultation is combined with scientific evidence and enhanced where necessary with secondary data and indicator databases. At the global level, the Global Environment Facility (GEF) Knowledge from the Land (KM:Land) initiative uses indicators to demonstrate the impacts of SLM investments. Key lessons learnt include the need for a multi-scale approach, making use of common indicators and a variety of information sources, including scientific data and local knowledge gathered through participatory methods. Methodological consistency allows cross-scale analyses, and findings are analysed and documented for use by decision-makers at various levels. Effective M&A of SLM [e.g. for the United Nations Convention to Combat Desertification (UNCCD)] requires a comprehensive methodological framework agreed by the major players.
Abstract:
Objectives: To compare outcomes of antiretroviral therapy (ART) in South Africa, where viral load monitoring is routine, with those in Malawi and Zambia, where monitoring is based on CD4 cell counts. Methods: We included 18 706 adult patients starting ART in South Africa and 80 937 patients in Zambia or Malawi. We examined CD4 responses in models for repeated measures, and the probability of switching to second-line regimens, mortality and loss to follow-up in multistate models, measuring time from 6 months. Results: In South Africa, 9.8% [95% confidence interval (CI) 9.1–10.5] had switched at 3 years, 1.3% (95% CI 0.9–1.6) remained on failing first-line regimens, 9.2% (95% CI 8.5–9.8) were lost to follow-up and 4.3% (95% CI 3.9–4.8) had died. In Malawi and Zambia, more patients were on a failing first-line regimen [3.7% (95% CI 3.6–3.9)], fewer patients had switched [2.1% (95% CI 2.0–2.3)] and more patients were lost to follow-up [15.3% (95% CI 15.0–15.6)] or had died [6.3% (95% CI 6.0–6.5)]. Median CD4 cell counts were lower in South Africa at the start of ART (93 vs. 132 cells/μl; P < 0.001) but higher after 3 years (425 vs. 383 cells/μl; P < 0.001). The hazard ratio comparing South Africa with Malawi and Zambia, after adjusting for age, sex, first-line regimen and CD4 cell count, was 0.58 (0.50–0.66) for death and 0.53 (0.48–0.58) for loss to follow-up. Conclusion: Over 3 years of ART, mortality was lower in South Africa than in Malawi or Zambia. The more favourable outcome in South Africa might be explained by viral load monitoring leading to earlier detection of treatment failure, adherence counselling and timelier switching to second-line ART.
Abstract:
Madagascar is currently developing a policy and strategies to enhance the sustainable management of its natural resources, encouraged by the United Nations Framework Convention on Climate Change (UNFCCC) and REDD. To set up a sustainable financing scheme, methodologies must be provided that estimate, prevent and mitigate leakage, develop national and regional baselines, and estimate carbon benefits. This study addressed that challenge by analysing a lowland rainforest in the Analanjirofo region, in the district of Soanierana Ivongo, in the north-east of Madagascar. Aboveground biomass and carbon stock were assessed for two distinct forest degradation stages, “low degraded forest” and “degraded forest”. The corresponding carbon rates within these two classes were calculated and linked to a multi-temporal set of SPOT satellite data acquired in 1991, 2004 and 2009. Deforestation, and particularly degradation, and the related carbon stock developments were analysed. With the data assessed for 1991, 2004 and 2009 it was possible to model a baseline and to develop a forest prediction for 2020 for the Analanjirofo region in the district of Soanierana Ivongo. These results, derived using robust methods, may provide important spatial information on priorities for planning and implementing future REDD+ activities in the area.
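The baseline idea described above can be sketched as fitting a trend to observations from the three image dates and extrapolating it to 2020. A least-squares line is the simplest possible choice; the study's actual baseline model may differ, and the cover figures below are invented.

```python
# Minimal sketch of the baseline idea: fit a least-squares trend to
# forest-cover observations from the three image dates and extrapolate
# to 2020. The cover values are invented; the study's actual baseline
# model may be more sophisticated.

years = [1991, 2004, 2009]
cover_ha = [52000, 44500, 41800]  # hypothetical forest cover, hectares

n = len(years)
mean_x = sum(years) / n
mean_y = sum(cover_ha) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, cover_ha))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

pred_2020 = intercept + slope * 2020
print(f"trend: {slope:.0f} ha/year; "
      f"projected cover in 2020: {pred_2020:.0f} ha")
```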
Abstract:
Background and Aims: Data on the influence of calibration on the accuracy of continuous glucose monitoring (CGM) are scarce. The aim of the present study was to investigate whether the time point of calibration has an influence on sensor accuracy and whether this effect differs according to glycemic level. Subjects and Methods: Two CGM sensors were inserted simultaneously, one on either side of the abdomen, in 20 individuals with type 1 diabetes. One sensor was calibrated predominantly using preprandial glucose (calibration(PRE)). The other sensor was calibrated predominantly using postprandial glucose (calibration(POST)). A minimum of three additional glucose values per day was obtained for the analysis of accuracy. Sensor readings were divided into four categories according to the glycemic range of the reference values (low, ≤4 mmol/L; euglycemic, 4.1-7 mmol/L; hyperglycemic I, 7.1-14 mmol/L; and hyperglycemic II, >14 mmol/L). Results: The overall mean±SEM absolute relative difference (MARD) between capillary reference values and sensor readings was 18.3±0.8% for calibration(PRE) and 21.9±1.2% for calibration(POST) (P<0.001). MARD by glycemic range was 47.4±6.5% (low), 17.4±1.3% (euglycemic), 15.0±0.8% (hyperglycemic I), and 17.7±1.9% (hyperglycemic II) for calibration(PRE), and 67.5±9.5% (low), 24.2±1.8% (euglycemic), 15.5±0.9% (hyperglycemic I), and 15.3±1.9% (hyperglycemic II) for calibration(POST). In the low and euglycemic ranges, MARD was significantly lower with calibration(PRE) than with calibration(POST) (P=0.007 and P<0.001, respectively). Conclusions: Sensor calibration based predominantly on preprandial glucose resulted in significantly higher overall sensor accuracy than predominantly postprandial calibration. The difference was most pronounced in the hypo- and euglycemic reference ranges, whereas the two calibration patterns were comparable in the hyperglycemic range.
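The accuracy metric used above, the mean absolute relative difference (MARD), is straightforward to compute; the sketch below buckets invented sensor/reference pairs into the glycemic ranges defined in the abstract.

```python
# Sketch of the accuracy metric used in the study: the mean absolute
# relative difference (MARD) between sensor readings and capillary
# reference values, bucketed by the glycemic ranges defined in the
# abstract. The paired values are invented.

RANGES = [("low", 0.0, 4.0), ("euglycemic", 4.0, 7.0),
          ("hyperglycemic I", 7.0, 14.0), ("hyperglycemic II", 14.0, 99.0)]

# (reference mmol/L, sensor mmol/L) pairs
pairs = [(3.5, 4.6), (5.2, 4.7), (6.0, 6.9), (8.4, 7.6),
         (10.1, 11.5), (15.2, 13.9), (3.8, 5.1), (6.5, 5.8)]

for name, lo, hi in RANGES:
    ards = [abs(sensor - ref) / ref * 100
            for ref, sensor in pairs if lo < ref <= hi]
    if ards:
        print(f"{name}: MARD = {sum(ards) / len(ards):.1f}% "
              f"(n={len(ards)})")
```

Note how the same absolute error inflates the relative difference at low glucose values, which is consistent with the much higher MARD the study reports in the low range.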
Abstract:
Introduction: Diagnosing arrhythmias by conventional Holter ECG can be cumbersome because of artifacts, skin irritation and poor P-waves. In contrast, esophageal electrocardiography (eECG) is promising owing to the anatomic relationship of the esophagus to the atria and its favorable bioelectric properties. Methods used: In an ambulant setting, we recorded eECGs from 10 volunteers with a novel, highly miniaturized eECG recorder that is worn discreetly behind the ear (1.5 × 1.8 × 5 cm, 22 g). The device continuously records two eECG leads for 3 days at a 500 Hz sampling frequency and 24-bit resolution. Results: Mean ± SD recording time was 21.7 ± 19.6 hours (max. 60 hours). Test persons were not limited in daily activities (e.g. eating, speaking) and reported only mild discomfort during probe insertion, which subsided later on. During 99.8% of the time, the recorder acquired signals appropriate for further analysis. In unfiltered data, QRS complexes and P-waves were identifiable during >98% of the time. P-waves had higher amplitudes than in the surface ECG (0.71 ± 0.42 mV vs. 0.16 ± 0.03 mV, p = 0.004). No complications occurred. Conclusion: Ambulatory eECG recording is safe, well tolerated and promising owing to excellent P-wave detection, overcoming some limitations of conventional Holter ECG.
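As a back-of-the-envelope check, the recorder's stated specifications (two leads, 500 Hz, 24-bit) imply a modest raw data volume even for a full 3-day recording; the calculation below assumes uncompressed storage with no packaging overhead.

```python
# Raw data volume implied by the recorder's stated specifications:
# 2 leads sampled at 500 Hz with 24-bit resolution over a 3-day
# recording (assuming uncompressed samples and no header overhead).

leads = 2
sample_rate_hz = 500
bytes_per_sample = 3          # 24-bit resolution
hours = 72                    # 3-day recording

total_bytes = leads * sample_rate_hz * bytes_per_sample * hours * 3600
print(f"raw eECG data over {hours} h: {total_bytes / 1e9:.2f} GB")
```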