151 results for respondent validation


Relevance: 20.00%

Abstract:

Epidemiological studies show that elevated plasma levels of advanced glycation end products (AGEs) are associated with diabetes, kidney disease, and heart disease. Thus, AGEs have been used as disease progression markers. However, the effects of variations in biological sample processing procedures on the level of AGEs in plasma/serum samples have not been investigated. The objective of this investigation was to assess the effect of variations in blood sample collection on measured N(ε)-(carboxymethyl)lysine (CML), the best characterised AGE, and its homolog, N(ε)-(carboxyethyl)lysine (CEL). The investigation examined the effect on CML and CEL of different blood collection tubes, inclusion of a stabilising cocktail, freeze-thaw cycles, different storage times and temperatures, and delayed centrifugation, using a pooled sample from healthy volunteers. CML and CEL were measured in extracted samples by ultra-performance liquid chromatography-tandem mass spectrometry. Median CML and CEL ranged from 0.132 to 0.140 mM/M lys and from 0.053 to 0.060 mM/M lys, respectively. No significant difference in CML or CEL was found between plasma/serum samples handled in these different ways. Therefore, samples collected as part of epidemiological studies that do not undergo specific sample treatment at collection are suitable for measuring CML and CEL.
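
As a minimal illustration of the kind of comparison reported above (not the authors' actual analysis), the sketch below uses a Kruskal-Wallis test to ask whether median CML differs across blood-sample handling conditions; the condition names and values are invented for the example.

```python
# Minimal sketch with invented data: does median CML (mM/M lys) differ across
# hypothetical sample-handling conditions?
from scipy.stats import kruskal

cml_by_condition = {
    "EDTA tube":              [0.135, 0.138, 0.132, 0.140, 0.136],
    "serum tube":             [0.133, 0.139, 0.137, 0.134, 0.140],
    "three freeze-thaws":     [0.136, 0.132, 0.138, 0.135, 0.139],
    "delayed centrifugation": [0.134, 0.137, 0.133, 0.140, 0.138],
}

stat, p = kruskal(*cml_by_condition.values())
print(f"Kruskal-Wallis H = {stat:.3f}, p = {p:.3f}")
if p > 0.05:
    print("No significant difference in CML across handling conditions.")
```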

Relevance: 20.00%

Abstract:

BACKGROUND: The new generation of activity monitors allows users to upload their data to the internet and review progress. The aim of this study was to validate the Fitbit Zip as a measure of free-living physical activity.

FINDINGS: Participants wore a Fitbit Zip, an ActiGraph GT3X accelerometer and a Yamax CW700 pedometer for seven days and were asked their opinion on the utility of the Fitbit Zip. Validity was assessed by comparing the output using Spearman's rank correlation coefficients, Wilcoxon signed-rank tests and Bland-Altman plots. 59.5% (25/47) of the cohort were female. There was a high correlation in steps/day between the Fitbit Zip and the two reference devices (r = 0.91, p < 0.001). No statistically significant difference between the Fitbit Zip and Yamax steps/day was observed (median (IQR) 7477 (3597) vs 6774 (3851); p = 0.11). The Fitbit Zip measured significantly more steps/day than the ActiGraph (7477 (3597) vs 6774 (3851); p < 0.001). Bland-Altman plots revealed no systematic differences between the devices.

CONCLUSIONS: Given the high level of correlation and the absence of apparent systematic bias in the Bland-Altman plots, the use of the Fitbit Zip as a measure of free-living physical activity appears to be supported. However, the Fitbit Zip recorded a significantly higher number of steps per day than the ActiGraph.
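
As a sketch of the validity checks described above (Spearman's rank correlation, Wilcoxon signed-rank test and Bland-Altman limits of agreement), the example below compares two paired step-count series; the data are invented and this is not the study's code.

```python
# Minimal sketch with invented data: compare paired daily step counts from two devices.
import numpy as np
from scipy.stats import spearmanr, wilcoxon

fitbit    = np.array([7477, 8120, 6950, 9031, 5894, 7702, 8410], dtype=float)
actigraph = np.array([6774, 7850, 6620, 8590, 5510, 7320, 8050], dtype=float)

rho, p_rho = spearmanr(fitbit, actigraph)   # rank correlation between devices
w_stat, p_w = wilcoxon(fitbit, actigraph)   # paired test for a systematic difference

diff = fitbit - actigraph                   # Bland-Altman: bias and 95% limits of agreement
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
print(f"Wilcoxon signed-rank p = {p_w:.3f}")
print(f"Bland-Altman bias = {bias:.0f} steps/day, "
      f"limits of agreement = {bias - loa:.0f} to {bias + loa:.0f}")
```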

Relevance: 20.00%

Abstract:

Next-generation sequencing (NGS) is beginning to show its full potential for diagnostic and therapeutic applications. In particular, it is demonstrating its capacity to contribute to a molecular taxonomy of cancer, to be used as a standard approach for diagnostic mutation detection, and to open new treatment options that are not exclusively organ-specific. If this is the case, how much validation is necessary, and what should the validation strategy be, when bringing NGS into diagnostic/clinical practice? This validation strategy should address key issues such as: what is the overall extent of the validation? Should essential indicators of test performance such as sensitivity or specificity be calculated for every target or sample type? Should bioinformatic interpretation approaches be validated with the same rigour? What is a competitive clinical turnaround time for an NGS-based test, and when does it become a cost-effective testing proposition? While we address these and other related topics in this commentary, we also suggest that a single set of international guidelines for the validation and use of NGS technology in routine diagnostics may allow us all to make much more effective use of resources.
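
As a reminder of the test-performance indicators mentioned above, the sketch below computes sensitivity and specificity from confusion-matrix counts, as might be done per target or per sample type during validation of an NGS-based test; the counts are invented.

```python
# Minimal sketch with invented counts: sensitivity and specificity of a variant-detection
# test judged against a reference method.
def sensitivity_specificity(tp: int, fp: int, fn: int, tn: int) -> tuple[float, float]:
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # detected fraction of reference-positive variants
    specificity = tn / (tn + fp)   # correctly negative fraction of reference-negative positions
    return sensitivity, specificity

# Example: 95 of 100 known mutations detected, 2 false calls among 400 wild-type positions.
sens, spec = sensitivity_specificity(tp=95, fp=2, fn=5, tn=398)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```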

Relevance: 20.00%

Abstract:

Background: It has been suggested that inaccuracies in cancer registries are distorting UK survival statistics. This study compared the Northern Ireland Cancer Registry (NICR) database of living patients with independent data held by Northern Ireland's General Practitioners (GPs) in order to validate the diagnoses and dates recorded by the registry.

Methods: All 387 GP practice managers were invited to participate, and 100 practices (25.84%) responded. Comparisons were made for 17,102 patients, equivalent to 29.08% of the living patients (58,798) extracted from the NICR between 1993 and 2010.

Results: There were no significant differences (p > 0.05) between the responding and non-responding GP patient profiles for age, marital status or deprivation score. However, the responding practices included more female patients (p = 0.02). NICR data accuracy was high: 0.08% of GP cancer patients (n = 15) were not included in registry records, and 0.02% (n = 2) had a diagnosis date that differed from GP records by more than 2 weeks (by 3 weeks and 5 months, respectively). The NICR had also recorded two tumour types and three tumour statuses (benign vs. malignant) that differed from the GP records.

Conclusion: This comparison demonstrates a high level of accuracy within the NICR and indicates that survival statistics based on these data can be relied upon.
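
As a minimal sketch of the kind of registry-versus-GP comparison reported above (with invented record identifiers and dates), the example below flags patients missing from the registry and diagnosis dates that differ by more than two weeks.

```python
# Minimal sketch with invented records: compare registry and GP diagnosis dates.
from datetime import date

registry = {"p1": date(2005, 3, 1), "p2": date(2008, 7, 15)}
gp_records = {"p1": date(2005, 3, 3), "p2": date(2008, 12, 20), "p3": date(2010, 1, 5)}

missing_from_registry = [pid for pid in gp_records if pid not in registry]
date_discrepancies = [
    pid for pid in gp_records
    if pid in registry and abs((gp_records[pid] - registry[pid]).days) > 14
]

total = len(gp_records)
print(f"not in registry: {len(missing_from_registry)}/{total} "
      f"({100 * len(missing_from_registry) / total:.2f}%)")
print(f"diagnosis date differs by > 2 weeks: {len(date_discrepancies)}/{total} "
      f"({100 * len(date_discrepancies) / total:.2f}%)")
```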

Relevance: 20.00%

Abstract:

Statistical downscaling (SD) methods have become a popular, low-cost and accessible means of bridging the gap between the coarse spatial resolution at which climate models output climate scenarios and the finer spatial scale at which impact modellers require these scenarios, and various SD techniques are now used for a wide range of applications across the world. This paper compares the Generator for Point Climate Change (GPCC) model and the Statistical DownScaling Model (SDSM), two contrasting SD methods, in terms of their ability to generate precipitation series under non-stationary conditions across ten contrasting global climates. The mean, the maximum, a selection of distribution statistics and the cumulative frequencies of dry and wet spells at four different temporal resolutions were compared between the models and the observed series for a validation period. Results indicate that both methods can generate daily precipitation series that generally mirror observed series closely for a wide range of non-stationary climates. However, GPCC tends to overestimate higher precipitation amounts, whilst SDSM tends to underestimate them. This implies that GPCC is more likely to overestimate the effects of precipitation on a given impact sector, whilst SDSM is likely to underestimate them. GPCC performs better than SDSM in reproducing wet- and dry-day frequency, which is a key advantage for many impact sectors. Overall, the mixed performance of the two methods illustrates the importance of users performing a thorough validation in order to determine the influence of simulated precipitation on their chosen impact sector.
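
As an illustration of the kind of validation comparison described above (not the GPCC or SDSM code itself), the sketch below compares the mean, maximum and wet-day frequency of an observed and a downscaled daily precipitation series; both series are synthetic and the 1 mm/day wet-day threshold is an assumption.

```python
# Minimal sketch with synthetic data: compare simple validation statistics between
# observed and downscaled daily precipitation series over a validation period.
import numpy as np

rng = np.random.default_rng(42)
observed = rng.gamma(shape=0.4, scale=6.0, size=3650)     # mm/day, synthetic
downscaled = rng.gamma(shape=0.4, scale=6.5, size=3650)   # mm/day, synthetic

WET_THRESHOLD = 1.0  # mm/day (assumed wet-day definition)
for name, series in [("observed", observed), ("downscaled", downscaled)]:
    wet_fraction = np.mean(series >= WET_THRESHOLD)
    print(f"{name:>10}: mean = {series.mean():.2f} mm, max = {series.max():.1f} mm, "
          f"wet-day frequency = {wet_fraction:.2f}")
```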

Relevance: 20.00%

Abstract:

Despite the increasing availability of digital slide viewing and the numerous advantages associated with its application, a lack of quality validation studies is amongst the reasons for poor uptake in routine practice. This study evaluated primary digital pathology reporting in the setting of routine subspecialist gastrointestinal pathology, which is commonplace in tissue pathology laboratories and represents one of the highest-volume specialties in most laboratories. Individual digital and glass slide diagnoses were compared amongst three pathologists reporting in a gastrointestinal subspecialty team, in a prospective series of 100 consecutive diagnostic cases from routine practice in a large teaching hospital laboratory. The study included a washout period of at least 6 months. Discordant diagnoses were classified, and the study was evaluated against recent College of American Pathologists (CAP) recommendations for evaluating digital pathology systems for diagnostic use. The study design met all 12 of the CAP recommendations. The 100 study cases generated 300 pairs of diagnoses, comprising 100 glass slide diagnoses and 100 digital diagnoses from each of the three study pathologists. Of the 300 pairs of diagnoses, 286 were concordant, representing an intraobserver concordance of 95.3%, broadly comparable to rates previously published in this field. In ten of the 14 discordant pairs, the glass slide diagnosis was favoured; in four cases, the digital diagnosis was favoured; importantly, all 14 discordant intraobserver diagnoses were considered to be of minor clinical significance. Interobserver (viewing-modality-independent) concordance was found in 94 of the 100 study cases, providing a baseline discordance rate comparable to that expected in any second viewing of pathology material. These overall results support the safe use of digital pathology for primary diagnostic reporting in this setting.
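
The concordance figures quoted above reduce to simple proportions; a minimal sketch:

```python
# Minimal sketch: intraobserver concordance of paired digital vs glass-slide diagnoses.
concordant_pairs = 286
total_pairs = 300                       # 100 cases x 3 pathologists

concordance = concordant_pairs / total_pairs
print(f"intraobserver concordance = {concordance:.1%}")   # 95.3%
```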

Relevance: 20.00%

Abstract:

A single-step lateral flow immunoassay (LFIA) was developed and validated for the rapid screening of paralytic shellfish toxins (PSTs) in a variety of shellfish species, at concentrations relevant to the regulatory limit of 800 μg STX-diHCl equivalents/kg shellfish meat. A simple aqueous extraction protocol was performed on sample homogenate within several minutes. The qualitative result was generated after a 5 min run time using a portable reader, which removed subjectivity from data interpretation. The test was designed to generate non-compliant results for samples containing approximately 800 μg of STX-diHCl/kg. The cross-reactivities relative to STX, expressed as mean ± SD, were as follows: NEO: 128.9% ± 29%; GTX1&4: 5.7% ± 1.5%; GTX2&3: 23.4% ± 10.4%; dcSTX: 55.6% ± 10.9%; dcNEO: 28.0% ± 8.9%; dcGTX2&3: 8.3% ± 2.7%; C1&C2: 3.1% ± 1.2%; GTX5: 23.3% ± 14.4% (n = 5 LFIA lots). There were no indications of matrix effects from the different samples evaluated (mussels, scallops, oysters, clams, cockles), nor of interference from other shellfish toxins (domoic acid, okadaic acid group). Evaluations of naturally contaminated samples showed that no false negative results were generated across a variety of samples and toxin profiles (n = 23), in comparison to the reference methods (MBA method 959.08, LC-FD method 2005.06). External laboratory evaluations of naturally contaminated samples (n = 39) indicated good correlation with the reference methods (MBA, LC-FD). This is the first LFIA shown, through rigorous validation, to detect most major PSTs reliably, and it will be of major benefit to both industry and regulators, who need to perform rapid and reliable testing to ensure shellfish are safe to eat.
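
Cross-reactivity in immunoassays is commonly expressed as the concentration of the reference toxin (STX) producing a defined response divided by the concentration of the analogue producing the same response, given as a percentage; the sketch below applies that assumed definition to invented midpoint concentrations and reports the mean ± SD across hypothetical LFIA lots.

```python
# Minimal sketch with invented numbers: cross-reactivity of NEO relative to STX,
# assuming cross-reactivity = 100 * (STX midpoint conc.) / (analogue midpoint conc.).
import statistics

# Hypothetical midpoint (50% signal) concentrations in ng/mL for five LFIA lots.
stx_midpoint = [1.00, 1.10, 0.90, 1.00, 1.05]
neo_midpoint = [0.80, 0.90, 0.70, 0.75, 0.85]

cross_reactivity = [100 * s / n for s, n in zip(stx_midpoint, neo_midpoint)]
mean_cr = statistics.mean(cross_reactivity)
sd_cr = statistics.stdev(cross_reactivity)
print(f"NEO cross-reactivity = {mean_cr:.1f}% +/- {sd_cr:.1f}% (n = {len(cross_reactivity)})")
```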

Relevance: 20.00%

Abstract:

This article examines how the primary objective of validation, whether it is proving a model, a technology or a product, influences the engineering design process. Through the examination of a number of stiffened panel case studies, the relationships between simulation, validation, design and the final product are established and discussed. The work demonstrates the complex interactions between the original (or anticipated) design model, the analysis model, the validation activities and the product in service, and the outcome clearly shows some unintended consequences. High-fidelity validation test simulations require a different set of detailed parameters to accurately capture behaviour; in doing so, they diverge from the original computer-aided design model, intrinsically limiting the value of the validation with respect to the product. This work represents a shift from the traditional perspective of encapsulating and controlling errors between simulation and experimental test towards consideration of the wider design-test process. Specifically, it is a reflection on the implications of how models are built and validated, and the effect on results and on understanding of structural behaviour. The article then identifies key checkpoints in the design process and how these should be used to update the computer-aided design system parameters for a design. This work strikes at a fundamental challenge in understanding the interaction between design, certification and operation of any complex system.

Relevance: 20.00%

Abstract:

In Contingent Valuation studies, researchers often base their definition of the environmental good on scientific/expert consensus. However, respondents may not hold this same commodity definition prior to the transaction, which raises questions as to the potential for staging a satisfactory transaction based on Fischhoff and Furby's (1988) criteria. Some unresolved issues regarding the provision of information to respondents to facilitate such a transaction are highlighted. In this paper, we apply content analysis to focus group discussions and develop a set of rules, which take account of the non-independence of group data, to explore whether the researcher's and respondents' prior definitions are in any way similar. We use the results to guide information provision in a subsequent questionnaire.