951 results for Methods validation


Relevance: 30.00%

Abstract:

BACKGROUND The distribution of the enzymopathy glucose-6-phosphate dehydrogenase (G6PD) deficiency is linked to areas of high malaria endemicity because of its association with protection from disease. G6PD deficiency is also the cause of severe haemolysis following administration of the anti-malarial drug primaquine, and wider use of this drug will likely require identification of G6PD deficiency at the population level. Current conventional methods for G6PD screening have various disadvantages for field use.

METHODS The WST8/1-methoxy PMS method, recently adapted for field use, was validated against a gold-standard enzymatic assay (R&D Diagnostics Ltd®) in a study involving 235 children under five years of age, recruited by random selection from a cohort study in Tororo, Uganda. Blood spots were collected by finger-prick onto filter paper at routine visits, and G6PD activity was determined by both tests. Performance of the WST8/1-methoxy PMS test under various temperature, light, and storage conditions was evaluated.

RESULTS The WST8/1-methoxy PMS assay had 72% sensitivity and 98% specificity compared with the commercial enzymatic assay, and the AUC was 0.904, suggesting good agreement. Misclassifications occurred at borderline values of G6PD activity between mild and normal levels, or were related to outlier haemoglobin values (<8.0 g/dl or >14.0 g/dl) associated with ongoing anaemia or recent haemolytic crises. Although severe G6PD deficiency was not found in the area, the test enabled identification of low G6PD activity. The assay proved highly robust for field use, showing low sensitivity to light, good performance over a wide temperature range, and good capacity for medium- to long-term storage.

CONCLUSIONS The WST8/1-methoxy PMS assay was comparable to the currently used standard enzymatic test and offers advantages in cost, storage, portability, and use in resource-limited settings. These features make the test a potential key tool for point-of-care assessment in the field prior to primaquine administration in malaria-endemic areas. As with other G6PD tests, outlier haemoglobin levels may confound estimation of G6PD activity.
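To make the reported agreement statistics concrete, here is a minimal Python sketch that computes sensitivity, specificity, and AUC for a hypothetical field test read against gold-standard labels. All values are simulated, and the 0.6 activity cutoff is an assumption for illustration, not the study's threshold.

```python
# Sketch: agreement metrics for a field G6PD test vs a gold-standard assay.
# Synthetic data; the cutoff and distributions are illustrative assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 235                                        # children in the study
deficient = rng.random(n) < 0.12               # hypothetical gold standard
activity = np.where(deficient,
                    rng.normal(0.3, 0.15, n),  # low enzyme activity
                    rng.normal(1.0, 0.25, n))  # normal enzyme activity

positive = activity < 0.6                      # assumed field-test cutoff
tp = np.sum(positive & deficient)
fn = np.sum(~positive & deficient)
tn = np.sum(~positive & ~deficient)
fp = np.sum(positive & ~deficient)

print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
# Lower activity means higher probability of deficiency, hence the negation.
print(f"AUC = {roc_auc_score(deficient, -activity):.3f}")
```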

Relevance: 30.00%

Abstract:

BACKGROUND The Valve Academic Research Consortium (VARC) has proposed a standardized definition of bleeding in patients undergoing transcatheter aortic valve interventions (TAVI). The VARC bleeding definition has not yet been validated or compared with other established bleeding definitions. We therefore aimed to investigate the impact of bleeding and to compare the predictive value of VARC bleeding events with that of established bleeding definitions.

METHODS AND RESULTS Between August 2007 and April 2012, 489 consecutive patients with severe aortic stenosis were included in the Bern TAVI Registry. Every bleeding complication was adjudicated according to the VARC, BARC, TIMI, and GUSTO definitions. Periprocedural blood loss was added to the VARC definition, providing a modified VARC definition. A total of 152 bleeding events were observed during the index hospitalization. Bleeding severity according to VARC was associated with a gradual increase in mortality, comparable to the BARC, TIMI, GUSTO, and modified VARC classifications. The predictive precision of a multivariable model for mortality at 30 days was significantly improved by adding the most serious bleeding of VARC (area under the curve [AUC], 0.773; 95% confidence interval [CI], 0.706 to 0.839), BARC (AUC, 0.776; 95% CI, 0.694 to 0.857), TIMI (AUC, 0.768; 95% CI, 0.692 to 0.844), and GUSTO (AUC, 0.791; 95% CI, 0.714 to 0.869), with the modified VARC definition yielding the best predictive value (AUC, 0.814; 95% CI, 0.759 to 0.870).

CONCLUSIONS The VARC bleeding definition offers a severity stratification that is associated with a gradual increase in mortality and provides prognostic information comparable to established bleeding definitions. Adding periprocedural blood loss to VARC may increase the sensitivity and predictive power of this classification.
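The gain in predictive precision from adding a bleeding grade can be illustrated with a short Python sketch: a baseline logistic model for 30-day mortality is refitted with a graded bleeding variable and the two AUCs are compared. The covariates, effect sizes, and data are invented, and the AUCs are computed in-sample for brevity.

```python
# Sketch: AUC gain from adding bleeding severity to a mortality model.
# Synthetic cohort; variable names and coefficients are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 489
age = rng.normal(82, 6, n)
risk_score = rng.normal(20, 8, n)              # e.g. a surgical risk score
bleeding = rng.integers(0, 4, n)               # 0 none ... 3 life-threatening

logit = -9 + 0.05 * age + 0.06 * risk_score + 0.7 * bleeding
death = rng.random(n) < 1 / (1 + np.exp(-logit))

for label, X in [("baseline model  ", np.column_stack([age, risk_score])),
                 ("+ bleeding grade", np.column_stack([age, risk_score,
                                                       bleeding]))]:
    p = LogisticRegression().fit(X, death).predict_proba(X)[:, 1]
    print(label, f"AUC = {roc_auc_score(death, p):.3f}")
```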

Relevance: 30.00%

Abstract:

Trabecular bone is a porous mineralized tissue that plays a major load-bearing role in the human body. Predicting age-related and disease-related fractures and the behavior of bone-implant systems requires a thorough understanding of its structure-mechanical property relationships, which can be obtained using micro-computed tomography-based finite element (μFE) modeling. In this study, a nonlinear model of trabecular bone as a cohesive-frictional material was implemented in a large-scale computational framework and validated by comparing μFE simulations with experimental tests in uniaxial tension and compression. Good correspondence of stiffness and yield points between simulations and experiments was found over a wide range of bone volume fraction and degree of anisotropy, in both tension and compression, using a non-calibrated, average set of material parameters. These results demonstrate the ability of the model to capture the effects leading to failure of bone for three anatomical sites and several donors, and the model may be used in the future to determine the apparent behavior of trabecular bone and its evolution with age, disease, and treatment.
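Validation of this kind usually reduces to regressing predicted against measured quantities; the Python sketch below, on synthetic numbers, shows the slope/intercept/R² check one might run on paired stiffness values. It is not the study's pipeline, just the standard correspondence test.

```python
# Sketch: simulation-vs-experiment correspondence for apparent stiffness.
# Paired values are synthetic stand-ins for muFE predictions and tests.
import numpy as np

rng = np.random.default_rng(2)
measured = rng.uniform(500, 3500, 40)              # apparent stiffness [MPa]
predicted = measured * rng.normal(1.0, 0.1, 40)    # assumed unbiased model

slope, intercept = np.polyfit(measured, predicted, 1)
r2 = np.corrcoef(measured, predicted)[0, 1] ** 2
# slope ~ 1 and intercept ~ 0 indicate the non-calibrated model is unbiased.
print(f"slope = {slope:.2f}, intercept = {intercept:.0f} MPa, R^2 = {r2:.2f}")
```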

Relevance: 30.00%

Abstract:

OBJECTIVE Intraarticular gadolinium-enhanced magnetic resonance arthrography (MRA) is commonly used to characterize morphological disorders of the hip. However, the reproducibility of retrieving anatomic landmarks on MRA scans and their correlation with intraarticular pathologies is unknown, and a precise mapping system for the exact localization of hip pathomorphologies on radial MRA sequences is lacking. The purpose of this study was therefore to establish and validate a reproducible mapping system for radial sequences of hip MRA.

MATERIALS AND METHODS Sixty-nine consecutive intraarticular gadolinium-enhanced hip MRAs were evaluated. Radial sequencing consisted of 14 cuts oriented along the axis of the femoral neck. Three orthopedic surgeons read the radial sequences independently. Each MRA was read twice, with a minimum interval of 7 days between readings. The intra- and inter-observer reliability of the mapping procedure was determined.

RESULTS A clockwise system for hip MRA was established. The teardrop figure served to determine the 6 o'clock position of the acetabulum; the center of the greater trochanter served to determine the 12 o'clock position of the femoral head-neck junction. The intra- and inter-observer intraclass correlation coefficients (ICCs) for retrieving the correct 6/12 o'clock positions were 0.906-0.996 and 0.978-0.988, respectively.

CONCLUSIONS The established mapping system for radial sequences of hip joint MRA is reproducible and easy to perform.
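The reliability figures above are intraclass correlation coefficients; a minimal Python implementation of the two-way random-effects ICC(2,1) for absolute agreement is sketched below on simulated angle readings. The data and the choice of ICC form are assumptions for illustration.

```python
# Sketch: ICC(2,1), two-way random effects, absolute agreement
# (Shrout & Fleiss). Readings are simulated, not study data.
import numpy as np

def icc_2_1(x):
    """x: (n_targets, k_raters) matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    ms_r = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)   # targets
    ms_c = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)   # raters
    resid = (x - x.mean(axis=1, keepdims=True)
               - x.mean(axis=0, keepdims=True) + grand)
    ms_e = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

rng = np.random.default_rng(3)
true_pos = rng.uniform(30, 330, 69)                        # 69 hips, degrees
readings = true_pos[:, None] + rng.normal(0, 3, (69, 3))   # 3 observers
print(f"ICC(2,1) = {icc_2_1(readings):.3f}")
```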

Relevance: 30.00%

Abstract:

BACKGROUND HIV-1 RNA viral load (VL) testing is recommended for monitoring antiretroviral therapy (ART) but is not available in many resource-limited settings. We developed and validated CD4-based risk charts to guide targeted VL testing.

METHODS We modeled the probability of virologic failure up to 5 years of ART based on current and baseline CD4 counts, developed decision rules for targeted VL testing of 10%, 20%, or 40% of patients in 7 cohorts of patients starting ART in South Africa, and plotted cutoffs for VL testing on colour-coded risk charts. We assessed the accuracy of risk-chart-guided VL testing for detecting virologic failure in validation cohorts from South Africa, Zambia, and the Asia-Pacific.

RESULTS In total, 31,450 adult patients were included in the derivation cohorts and 25,294 patients in the validation cohorts. Positive predictive values increased with the percentage of patients tested: from 79% (10% tested) to 98% (40% tested) in the South African cohort, from 64% to 93% in the Zambian cohort, and from 73% to 96% in the Asia-Pacific cohort. Corresponding increases in sensitivity were from 35% to 68% in South Africa, from 55% to 82% in Zambia, and from 37% to 71% in the Asia-Pacific. The area under the receiver operating characteristic curve increased from 0.75 to 0.91 in South Africa, from 0.76 to 0.91 in Zambia, and from 0.77 to 0.92 in the Asia-Pacific.

CONCLUSIONS CD4-based risk charts with optimal cutoffs for targeted VL testing may be useful for monitoring ART in settings where VL capacity is limited.
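The accuracy of targeted testing can be made concrete with a few lines of Python: rank patients by predicted risk, send the top fraction for VL testing, and compute the PPV and sensitivity of that rule. The risk scores and failure labels below are simulated, so the numbers only illustrate the mechanics, not the reported pattern.

```python
# Sketch: PPV and sensitivity of a "test the top X% by risk" rule.
# Synthetic risk scores; the real charts derive risk from CD4 counts.
import numpy as np

rng = np.random.default_rng(4)
n = 20_000
risk = rng.random(n)                             # predicted failure risk
failure = rng.random(n) < 0.05 + 0.5 * risk**3   # true virologic failure

for frac in (0.10, 0.20, 0.40):
    cut = np.quantile(risk, 1 - frac)            # threshold for testing
    tested = risk >= cut
    ppv = failure[tested].mean()                 # failures among tested
    sens = failure[tested].sum() / failure.sum() # failures caught
    print(f"test {frac:.0%}: PPV = {ppv:.2f}, sensitivity = {sens:.2f}")
```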

Relevance: 30.00%

Abstract:

BACKGROUND Predicting long-term survival after admission to hospital is helpful for clinical, administrative and research purposes. The Hospital-patient One-year Mortality Risk (HOMR) model was derived and internally validated to predict the risk of death within 1 year after admission. We conducted an external validation of the model in a large multicentre study.

METHODS We used administrative data for all nonpsychiatric admissions of adult patients to hospitals in the provinces of Ontario (2003-2010) and Alberta (2011-2012), and to the Brigham and Women's Hospital in Boston (2010-2012), to calculate each patient's HOMR score at admission. The HOMR score is based on a set of parameters capturing patient demographics, health burden and severity of acute illness. We determined patient status (alive or dead) 1 year after admission using population-based registries.

RESULTS The 3 validation cohorts (n = 2,862,996 in Ontario, 210,595 in Alberta and 66,683 in Boston) were distinct from each other and from the derivation cohort. The overall risk of death within 1 year after admission was 8.7% (95% confidence interval [CI] 8.7% to 8.8%). The HOMR score was strongly and significantly associated with risk of death in all populations and was highly discriminative, with a C statistic ranging from 0.89 (95% CI 0.87 to 0.91) to 0.92 (95% CI 0.91 to 0.92). Observed and expected outcome risks were similar (median absolute difference in the percentage dying within 1 year 0.3%, interquartile range 0.05%-2.5%).

INTERPRETATION The HOMR score, calculated from routinely collected administrative data, accurately predicted the risk of death among adult patients within 1 year after admission to hospital for nonpsychiatric indications. Similar performance was seen when the score was used in geographically and temporally diverse populations. The HOMR model can be used for risk adjustment in analyses of health administrative data to predict long-term survival among hospital patients.
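External validation of a score like HOMR typically reports discrimination (C statistic) and calibration (observed vs expected risk); the Python sketch below computes both on simulated admissions. The score-to-risk mapping is an invented placeholder, not the HOMR formula.

```python
# Sketch: discrimination and calibration checks for a risk score.
# Synthetic data; the logistic mapping below is NOT the HOMR model.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 50_000
score = rng.normal(20, 8, n)                        # HOMR-like score
p_expected = 1 / (1 + np.exp(-(score - 35) / 5))    # assumed risk mapping
died = rng.random(n) < p_expected

print(f"C statistic = {roc_auc_score(died, score):.3f}")

bins = np.quantile(score, np.linspace(0, 1, 11))    # decile edges
idx = np.digitize(score, bins[1:-1])                # decile index 0..9
for d in range(10):
    m = idx == d
    print(f"decile {d}: expected = {p_expected[m].mean():.3f}, "
          f"observed = {died[m].mean():.3f}")
```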

Relevance: 30.00%

Abstract:

Over recent years, interest in proton radiotherapy has been increasing rapidly. Protons offer superior physical properties compared with the photons used in conventional radiotherapy: their depth-dose curves exhibit a large dose peak (the Bragg peak) at the end of the proton track, and the finite proton range allows sparing of the distally located healthy tissue. These properties offer increased flexibility in proton radiotherapy, but they also increase the demand for accurate dose calculation. Accurate dose calculation first requires an accurate and detailed characterization of the physical proton beam exiting the treatment head, for both currently available delivery techniques: scattered and scanned proton beams. Since Monte Carlo (MC) methods follow each particle track and simulate the interactions from first principles, the technique is well suited to modeling the treatment head accurately. Nevertheless, careful validation of these MC models is necessary. While pencil-beam algorithms provide the advantage of fast dose computations, they are limited in accuracy. MC dose calculation algorithms overcome these limitations, and thanks to recent improvements in efficiency they are expected to improve the accuracy of calculated dose distributions and to enter clinical routine in the near future.
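The Bragg peak that motivates all of this falls out of even a crude Monte Carlo. The Python toy below transports protons through water in 1D using the Bragg-Kleeman range-energy relation R = aE^p; the rough constants, assumed energy spread, and lack of scattering or nuclear interactions make it purely illustrative, nothing like a validated treatment-head model.

```python
# Toy sketch: 1D Monte Carlo proton transport in water with the
# Bragg-Kleeman relation R = a * E**p. Illustrative only -- no multiple
# scattering, nuclear interactions or treatment-head geometry.
import numpy as np

a, p = 0.0022, 1.77              # approx. water constants: R [cm], E [MeV]
dx = 0.05                        # step size [cm]
rng = np.random.default_rng(6)

depths = np.arange(0.0, 20.0, dx)
dose = np.zeros(depths.size)

for _ in range(5_000):                             # proton histories
    E = rng.normal(150.0, 1.5)                     # assumed energy spread
    for i in range(depths.size):
        if E <= 0:
            break
        dE = min(E, dx * E ** (1 - p) / (a * p))   # stopping power * step
        dose[i] += dE                              # deposit energy locally
        E -= dE

print(f"Bragg peak at ~{depths[dose.argmax()]:.1f} cm "
      f"(Bragg-Kleeman range for 150 MeV is ~{a * 150**p:.1f} cm)")
```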

Relevance: 30.00%

Abstract:

Governance of food systems is a poorly understood determinant of food security. Much scholarship on food systems governance is non-empirical, while existing empirical research is often case-study based and theoretically and methodologically incommensurable, which frustrates the aggregation of evidence and generalisation. We undertook a systematic review of methods used in food systems governance research with a view to identifying a core set of indicators for future research. We gathered literature through a structured consultation and by sampling from recent reviews. Indicators were identified and classified according to the levels and sectors they investigate. We found a concentration of indicators in food production at local to national levels and sparse coverage of distribution and consumption. Unsurprisingly, many indicators of institutional structure were found, while agency-related indicators are only moderately represented. We call for piloting and validation of these indicators and for methodological development to fill the gaps identified. These efforts are expected to support a more consolidated future evidence base and eventual meta-analysis.
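The level-by-sector classification is essentially a contingency table; a small pandas sketch of the idea follows, with invented indicator entries standing in for the review's data.

```python
# Sketch: cross-tabulating governance indicators by level and sector to
# expose coverage gaps. The rows are invented examples, not review data.
import pandas as pd

indicators = pd.DataFrame({
    "indicator": ["land tenure rules", "extension service access",
                  "market licensing", "cold-chain oversight",
                  "retail zoning", "dietary guideline adoption"],
    "level": ["national", "local", "national", "national",
              "local", "national"],
    "sector": ["production", "production", "distribution",
               "distribution", "consumption", "consumption"],
})

# Sparse cells flag under-studied level/sector combinations.
print(pd.crosstab(indicators["level"], indicators["sector"]))
```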

Relevance: 30.00%

Abstract:

OBJECTIVES Chewing efficiency may be evaluated using a cohesive specimen, especially in elderly or dysphagic patients. The aim of this study was to evaluate three two-coloured chewing gums for a colour-mixing ability test and to validate new purpose-built software (ViewGum©).

METHODS Dentate participants (dentate group) and edentulous patients with mandibular two-implant overdentures (IOD group) were recruited. First, the dentate group chewed three different types of two-coloured gum (gum1-gum3) for 5, 10, 20, 30 and 50 chewing cycles. The number of chewing cycles with the highest intra- and inter-rater agreement was then determined both visually, by applying a scale (SA), and opto-electronically (ViewGum©, Bland-Altman analysis). The ViewGum© software semi-automatically determines the variance of hue (VOH); inadequate mixing presents with a larger VOH than complete mixing. Second, the dentate group and the IOD group were compared.

RESULTS The dentate group comprised 20 participants (10 female, 30.3±6.7 years) and the IOD group 15 participants (10 female, 74.6±8.3 years). Intra-rater and inter-rater agreement (SA) was very high at 20 chewing cycles (95.00-98.75%). Gums 1-3 showed different colour-mixing characteristics as a function of chewing cycles: gum1 showed a logarithmic association, whereas gum2 and gum3 demonstrated more linear behaviour. However, the number of chewing cycles could be predicted from VOH in all specimens (all p<0.0001, mixed linear regression models). Both analyses proved discriminative with respect to dental state.

CONCLUSION ViewGum© proved to be a reliable and discriminative tool for opto-electronic assessment of chewing efficiency, provided an elastic specimen is chewed for 20 cycles, and can be recommended for the evaluation of chewing efficiency in clinical and research settings.

CLINICAL SIGNIFICANCE Chewing is a complex function of the oro-facial structures and the central nervous system. Applying the proposed assessments of chewing function in geriatrics or special-care dentistry could help visualise oro-functional or dental comorbidities in dysphagic patients or those suffering from protein-energy malnutrition.
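Since hue is an angle, a variance-of-hue measure is naturally computed with circular statistics. The Python sketch below is an assumed minimal analogue of VOH (ViewGum©'s exact algorithm is not given here): well-mixed gum concentrates around one hue and scores low, while unmixed gum retains two hue populations and scores high.

```python
# Sketch: a circular variance-of-hue (VOH) measure for a chewed
# two-colour specimen. Assumed analogue of ViewGum's metric.
import numpy as np

def variance_of_hue(hue_deg):
    """Circular variance of hue angles (degrees): ~0 = one uniform hue
    (well mixed), values toward 1 = distinct unmixed hue populations."""
    h = np.deg2rad(np.asarray(hue_deg).ravel())
    return 1.0 - np.hypot(np.cos(h).mean(), np.sin(h).mean())

rng = np.random.default_rng(7)
unmixed = np.concatenate([rng.normal(0, 5, 500),      # colour 1
                          rng.normal(180, 5, 500)])   # colour 2, opposite hue
mixed = rng.normal(90, 10, 1000)                      # blended intermediate
print(f"VOH unmixed = {variance_of_hue(unmixed):.2f}")
print(f"VOH mixed   = {variance_of_hue(mixed):.2f}")
```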

Relevance: 30.00%

Abstract:

BACKGROUND Canine S100 calcium-binding protein A12 (cS100A12) shows promise as a biomarker of inflammation in dogs. A previously developed cS100A12 radioimmunoassay (RIA) requires radioactive tracers and is not sensitive enough to measure fecal cS100A12 concentrations in 79% of tested healthy dogs. An ELISA may be more sensitive than the RIA and does not require radioactive tracers.

OBJECTIVE The purpose of the study was to establish a sandwich ELISA for serum and fecal cS100A12 and to establish reference intervals (RI) for serum and feces from healthy dogs.

METHODS Polyclonal rabbit anti-cS100A12 antibodies were generated and tested by Western blotting and immunohistochemistry. A sandwich ELISA was developed and validated, including accuracy, precision, and agreement with the cS100A12-RIA. The RI, stability, and biologic variation of fecal cS100A12, and the effect of corticosteroids on serum cS100A12, were evaluated.

RESULTS Lower detection limits were 5 μg/L (serum) and 1 ng/g (fecal). Intra- and inter-assay coefficients of variation were ≤4.4% and ≤10.9%, respectively. Observed-to-expected ratios for linearity and spiking recovery were 98.2 ± 9.8% (mean ± SD) and 93.0 ± 6.1%, respectively. There was a significant bias between the ELISA and the RIA. The RI was 49-320 μg/L for serum and 2-484 ng/g for fecal cS100A12. Fecal cS100A12 was stable for 7 days at 23, 4, -20, and -80°C; biologic variation was negligible, but variation within a single fecal sample was significant. Corticosteroid treatment had no clinically significant effect on serum cS100A12 concentrations.

CONCLUSIONS The cS100A12-ELISA is a precise and accurate assay for serum and fecal cS100A12 in dogs.
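The precision and recovery figures rest on simple arithmetic, sketched below in Python with invented numbers: the coefficient of variation from replicate wells and the observed-to-expected ratio of a spiking-recovery experiment.

```python
# Sketch: intra-assay CV and observed-to-expected spiking recovery.
# All concentrations are invented illustration values (ug/L).
import numpy as np

replicates = np.array([104.0, 98.0, 101.0, 99.0, 103.0])  # one sample, 5 wells
cv = replicates.std(ddof=1) / replicates.mean() * 100
print(f"intra-assay CV = {cv:.1f}%")

baseline, spike_added, measured = 60.0, 50.0, 107.0
recovery = (measured - baseline) / spike_added * 100      # observed/expected
print(f"spiking recovery = {recovery:.0f}%")
```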

Relevance: 30.00%

Abstract:

Background/significance. The scarcity of reliable and valid Spanish-language instruments for health-related research has hindered research with the Hispanic population. Research suggests that fatalistic attitudes are related to poor cancer-screening behaviors and may be one reason for the low participation of Mexican-Americans in cancer screening. This problem is of major concern because Mexican-Americans constitute the largest Hispanic subgroup in the U.S.

Purpose. The purposes of this study were: (1) to translate the Powe Fatalism Inventory (PFI) into Spanish and culturally adapt the instrument to the Mexican-American culture as found along the U.S.-Mexico border, and (2) to test the equivalence between the Spanish-translated, culturally adapted version of the PFI (SPFI) and the English version with respect to clarity, content validity, reading level and reliability.

Design. Descriptive, cross-sectional.

Methods. The Spanish-language translation used a translation model that incorporates a cultural adaptation process. The SPFI was administered to 175 bilingual participants residing in a midsize U.S.-Mexico border city. Data analysis included estimation of Cronbach's alpha, factor analysis, paired-samples t-test comparison and multiple regression analysis using SPSS software, as well as measurement of the content validity and reading level of the SPFI.

Findings. The reliability estimate using Cronbach's alpha coefficient was 0.81 for the SPFI, compared to 0.80 for the PFI in this study. Factor analysis extracted four factors, which explained 59% of the variance. Paired t-test comparison revealed no statistically significant differences between the SPFI and PFI total or individual item scores. The Content Validity Index was determined to be 1.0. The reading level was assessed to be below a 6th-grade reading level. The correlation coefficient between the SPFI and PFI was 0.95.

Conclusions. This study provided strong psychometric evidence that the Spanish-translated, culturally adapted SPFI is equivalent to the English version of the PFI in measuring cancer fatalism. This indicates that the two forms of the instrument can be used interchangeably in a single study to accommodate the reading and speaking abilities of respondents.
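For reference, Cronbach's alpha is a one-formula computation; the Python sketch below applies the standard formula to simulated item responses (the dimensions echo the study's 175 respondents, but the data are invented).

```python
# Sketch: Cronbach's alpha, the standard internal-consistency formula.
# Simulated responses; not the SPFI data.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(8)
latent = rng.normal(0, 1, (175, 1))              # one underlying attitude
items = latent + rng.normal(0, 1, (175, 15))     # 15 noisy indicator items
print(f"alpha = {cronbach_alpha(items):.2f}")
```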

Relevance: 30.00%

Abstract:

Background. This study validated the content of an instrument designed to assess the performance of the medicolegal death investigation system. The instrument was modified from Version 2.0 of the Local Public Health System Performance Assessment Instrument (CDC) and is based on the 10 Essential Public Health Services.

Aims. The aims were to employ a cognitive testing process to interview a randomized sample of medicolegal death investigation office leaders, qualitatively describe the results, and revise the instrument accordingly.

Methods. A cognitive testing process was used to validate the survey instrument's content in terms of how well participants could respond to and interpret the questions. Twelve randomly selected medicolegal death investigation chiefs (or equivalent), representing the seven types of medicolegal death investigation systems and six different state mandates, were interviewed by telephone. The respondents were also representative of the educational diversity within medicolegal death investigation leadership. Based on respondent comments, themes were identified that permitted improvement of the instrument toward collecting valid and reliable information when ultimately used in a field-survey format.

Results. Responses were coded and classified, which permitted the identification of themes related to Comprehension/Interpretation, Retrieval, Estimate/Judgment, and Response. The majority of respondent comments related to Comprehension/Interpretation of the questions. Respondents identified 67 questions and 6 section explanations that merited rephrasing, or adding or deleting examples or words. In addition, five questions were added based on respondent comments.

Conclusion. The content of the instrument was validated by a cognitive testing method design. The respondents agreed that the instrument would be a useful and relevant tool for assessing system performance.

Relevance: 30.00%

Abstract:

Accurate quantitative estimation of exposure using retrospective data has been one of the most challenging tasks in the exposure assessment field. To improve these estimates, models have been developed using published exposure databases together with their corresponding exposure determinants. These models are designed to be applied to reported exposure determinants obtained from study subjects, or to exposure levels assigned by an industrial hygienist, so that quantitative exposure estimates can be obtained.

In an effort to improve the prediction accuracy and generalizability of these models, and considering that the limitations encountered in previous studies might stem from limitations in the applicability of traditional statistical methods and concepts, this study proposed and explored the use of data analysis methods derived from computer science, predominantly machine learning approaches.

The goal of this study was to develop a set of models using decision tree/ensemble and neural network methods to predict occupational exposure outcomes based on literature-derived databases, and to compare, using cross-validation and data-splitting techniques, the resulting prediction capacity to that of traditional regression models. Two cases were addressed: the categorical case, where the exposure level was measured as an exposure rating following the American Industrial Hygiene Association guidelines, and the continuous case, where the result of the exposure is expressed as a concentration value. Previously developed literature-based exposure databases for 1,1,1-trichloroethane, methylene dichloride and trichloroethylene were used.

When compared with regression estimates, the results showed better accuracy of decision tree/ensemble techniques for the categorical case, while neural networks were better for the estimation of continuous exposure values. Overrepresentation of classes and overfitting were the main causes of poor neural network performance and accuracy. Estimations based on literature-based databases using machine learning techniques might provide an advantage when applied to methodologies that combine expert inputs with current exposure measurements, such as the Bayesian Decision Analysis tool. The use of machine learning techniques to more accurately estimate exposures from literature-based exposure databases might represent a starting point toward independence from expert judgment.
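The core comparison described here is readily sketched with scikit-learn: cross-validate an ensemble model and a linear regression on the same determinants and compare their scores. The data below are simulated with a deliberately nonlinear exposure-determinant relationship; the real study used literature-derived databases for the three solvents named above.

```python
# Sketch: cross-validated comparison of an ensemble method vs linear
# regression for continuous exposure estimation. Simulated data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 600
X = rng.random((n, 4))     # e.g. ventilation, process, year, duration
y = 5 * X[:, 0] * X[:, 1] + 2 * X[:, 2] + rng.normal(0, 0.5, n)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest    ", RandomForestRegressor(
                         n_estimators=200, random_state=0))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name} mean CV R^2 = {scores.mean():.2f}")
```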

Relevance: 30.00%

Abstract:

Appropriate field data are required to check the reliability of hydrodynamic models simulating the dispersion of soluble substances in the marine environment. This study deals with the collection of physical measurements and soluble tracer data intended specifically for this kind of validation. The intensity of the currents and the complexity of the topography and tides around the Cap de La Hague, in the center of the English Channel, make it one of the most difficult areas to represent in terms of hydrodynamics and dispersion. Controlled releases of tritium, in the form of HTO, are carried out in this area by the AREVA-NC plant, providing an excellent soluble tracer. A total of 14,493 measurements were acquired to track dispersion in the hours and days following a release. These data, supplementing previously gathered data and physical measurements (bathymetry, water-surface levels, Eulerian and Lagrangian current studies), allow dispersion models to be tested from the hour following a release to periods of several years, which are not accessible with dye experiments. The dispersion characteristics are described and methods are proposed for comparing models against measurements. An application is presented for a two-dimensional high-resolution numerical model, showing how an extensive dataset can be used to build, calibrate and validate several aspects of the model in a highly dynamic, macrotidal area: tidal-cycle timing, tidal amplitude, fixed-point current data, and hodographs. This study presents results concerning the model's ability to reproduce residual Lagrangian currents, along with a comparison between simulation and high-frequency measurements of tracer dispersion. Physical and tracer data are available from the SISMER database of IFREMER (www.ifremer.fr/sismer/catal). This tool for the validation of models in macrotidal seas is intended to be an open and evolving resource, which could provide a benchmark for dispersion model validation.
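Comparing a model against such measurements usually starts from a handful of skill metrics on co-located series; a minimal Python sketch with placeholder series follows.

```python
# Sketch: basic model-vs-measurement skill metrics for a tracer time
# series. Both series are synthetic placeholders.
import numpy as np

t = np.linspace(0, 48, 200)                          # hours after release
observed = np.exp(-t / 12) * (1 + 0.1 * np.sin(t))   # measured tracer signal
modelled = 1.05 * np.exp(-t / 13)                    # model with slight bias

bias = (modelled - observed).mean()
rmse = np.sqrt(((modelled - observed) ** 2).mean())
corr = np.corrcoef(observed, modelled)[0, 1]
print(f"bias = {bias:+.3f}, RMSE = {rmse:.3f}, r = {corr:.3f}")
```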

Relevance: 30.00%

Abstract:

Independent Component Analysis (ICA) is a Blind Source Separation method that aims to find the pure source signals mixed together in unknown proportions in the observed signals under study. It does this by searching for factors that are mutually statistically independent, and can thus be classified among the latent-variable-based methods. As with other methods based on latent variables, a careful investigation has to be carried out to determine which factors are significant and which are not. It is therefore important to have a validation procedure for deciding on the optimal number of independent components (ICs) to include in the final model. This can be complicated by the fact that two consecutive models may differ in the order and signs of similarly indexed ICs, and that the structure of the extracted sources can change as a function of the number of factors calculated. Two methods for determining the optimal number of ICs are proposed in this article and applied to simulated and real datasets to demonstrate their performance.
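The order/sign ambiguity is easy to reproduce: fit two FastICA models with different random seeds and match components by absolute correlation. The Python sketch below demonstrates the problem on simulated mixtures; it is not the article's proposed validation procedure.

```python
# Sketch: ICA order/sign ambiguity across repeated fits. Illustrates the
# matching problem only; not the validation methods proposed above.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(11)
t = np.linspace(0, 8, 2000)
sources = np.c_[np.sin(2 * t),                  # smooth source
                np.sign(np.cos(3 * t)),         # square-wave source
                rng.laplace(size=2000)]         # noisy source
X = sources @ rng.random((3, 3)).T              # mix in unknown proportions

S1 = FastICA(n_components=3, random_state=0).fit_transform(X)
S2 = FastICA(n_components=3, random_state=1).fit_transform(X)

# |r| near 1 off the diagonal means the same source was recovered in a
# different position and/or with a flipped sign.
C = np.corrcoef(S1.T, S2.T)[:3, 3:]
print(np.round(np.abs(C), 2))
```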