908 results for Verification and validation technology
Abstract:
OBJECTIVE To construct statements of nursing diagnoses related to nursing practice for individuals with diabetes in Specialized Care, on the basis of the Database of Nursing Practice Terms related to diabetes, the International Classification for Nursing Practice (ICNP®), and the Theory of Basic Human Needs, and to validate them with specialist nurses in the area. METHOD Methodological research, structured into sequential stages of construction, cross-mapping, validation, and categorization of nursing diagnoses. RESULTS A list of 115 diagnostic statements was produced, including positive, negative, and improvement statements; 59 nursing diagnoses were present in and 56 absent from ICNP® Version 2011. Sixty-six diagnoses with CVI ≥ 0.50 were validated and categorized on the basis of human needs. CONCLUSION The use of ICNP® 2011 favored the specification of the concepts of professional practice in the care of individuals with diabetes.
Abstract:
OBJECTIVE To describe the stages of construction and validation of an instrument to analyze adherence to best care practices during labour and birth. METHOD Methodological research, carried out in three steps: construction of dimensions and items, face and content validation, and semantic analysis of the items. RESULTS Face and content validation was carried out by 10 judges working in healthcare, teaching, and research. Items with a Content Validity Index (CVI) ≥ 0.9 were kept in full or underwent revisions as suggested by the judges. Semantic analysis, performed twice, indicated no difficulty in understanding the items. CONCLUSION The instrument, with three dimensions (organization of the healthcare network for pregnancy and childbirth, evidence-based practices, and work processes), followed the steps recommended in the literature and was concluded with 50 items and a total CVI of 0.98.
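The item-level CVI used in validation studies like this one is conventionally computed as the proportion of judges who rate an item as relevant (3 or 4 on a 4-point scale). A minimal sketch of that calculation (illustrative only; not the authors' code):

```python
def item_cvi(ratings):
    """Item-level Content Validity Index: fraction of judges rating
    the item 3 or 4 on a 4-point relevance scale."""
    relevant = sum(1 for r in ratings if r >= 3)
    return relevant / len(ratings)

# 10 judges, 9 of whom rate the item relevant -> CVI = 0.9, so the
# item meets the >= 0.9 retention criterion used in the study above.
print(item_cvi([4, 4, 3, 4, 3, 3, 4, 4, 3, 2]))  # 0.9
```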
Development and validation of an instrument for evaluating the ludicity of games in health education
Abstract:
OBJECTIVE To develop and validate an instrument to evaluate the playfulness of games in health education contexts. METHODOLOGY A methodological, exploratory and descriptive study, developed in two stages: 1. Application of an open questionnaire to 50 graduate students, with content analysis of the answers and calculation of the Kappa coefficient to define items; 2. Construction of scales, with content validation by judges and analysis of the consensus estimate by the Content Validity Index (CVI). RESULTS 53 items regarding the ludic character of the games in the dimensions of playfulness, the formative components of learning, and the profiles of the players. CONCLUSION Ludicity can be assessed by validated items related to the degree of involvement, immersion, and reinvention of the subjects in the game, along with the dynamics and playability of the game.
Abstract:
Background: Although CD4 cell count monitoring is used to decide when to start antiretroviral therapy in patients with HIV-1 infection, there are no evidence-based recommendations regarding its optimal frequency. It is common practice to monitor every 3 to 6 months, often coupled with viral load monitoring. We developed rules to guide the frequency of CD4 cell count monitoring in HIV infection before starting antiretroviral therapy, and validated them retrospectively in patients from the Swiss HIV Cohort Study. Methodology/Principal Findings: We built two prediction rules ("Snap-shot rule" for a single sample and "Track-shot rule" for multiple determinations) based on a systematic review of published longitudinal analyses of CD4 cell count trajectories. We applied the rules to 2,608 untreated patients to classify their 18,061 CD4 counts as either justifiable or superfluous, according to their prior ≥5% or <5% chance of meeting predetermined thresholds for starting treatment. The percentage of measurements that both rules falsely deemed superfluous never exceeded 5%. Superfluous CD4 determinations represented 4%, 11%, and 39% of all actual determinations for treatment thresholds of 500, 350, and 200×10⁶/L, respectively. The Track-shot rule was only marginally superior to the Snap-shot rule. Both rules lose usefulness as CD4 counts approach the treatment threshold. Conclusions/Significance: Frequent CD4 count monitoring of patients with CD4 counts well above the threshold for initiating therapy is unlikely to identify patients who require therapy. It appears sufficient to measure the CD4 cell count 1 year after a count >650 for a threshold of 200, >900 for 350, or >1150 for 500×10⁶/L. When CD4 counts fall below these limits, more frequent monitoring becomes advisable. These rules offer guidance for efficient CD4 monitoring, particularly in resource-limited settings.
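The retest-interval logic stated in the conclusions can be sketched as follows. The 1-year cut-offs (650, 900, and 1150 ×10⁶/L for treatment thresholds of 200, 350, and 500) come from the abstract; the shorter fallback interval and the function name are assumptions made for illustration, not part of the published rules:

```python
# Illustrative sketch of the retest logic in the conclusions above.
# The 1-year cut-offs are from the abstract; the 3-month fallback
# interval is an assumed default, not part of the published rule.
SAFE_CUTOFF = {200: 650, 350: 900, 500: 1150}  # CD4, x10^6 cells/L

def months_until_next_cd4(cd4_count, treatment_threshold):
    """Suggested months until the next CD4 measurement, given the
    current count and the chosen threshold for starting therapy."""
    cutoff = SAFE_CUTOFF[treatment_threshold]
    return 12 if cd4_count > cutoff else 3  # 3 months is an assumption

print(months_until_next_cd4(700, 200))  # 12
print(months_until_next_cd4(600, 350))  # 3
```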
Abstract:
INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
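The scoring rule described in the results can be sketched directly. Function and argument names are invented; the point weights and risk strata are those reported above:

```python
# Sketch of the influenza decision rule described above (names are mine).
def flu_score(fever_and_cough, myalgias, onset_under_48h, chills_or_sweats):
    """Total score: 2 points each for fever+cough and myalgias,
    1 point each for onset <48 h and chills or sweats (range 0-6)."""
    score = 2 if fever_and_cough else 0
    score += 2 if myalgias else 0
    score += 1 if onset_under_48h else 0
    score += 1 if chills_or_sweats else 0
    return score

def flu_risk(score):
    """Risk stratum and approximate influenza risk reported in the study."""
    if score <= 2:
        return "low (~8%)"
    if score == 3:
        return "moderate (~30%)"
    return "high (~59%)"

print(flu_risk(flu_score(True, True, True, True)))  # high (~59%)
```

Patients in the low- and high-risk strata (about two-thirds of the cohort) would not require further diagnostic testing under the rule.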
Abstract:
BACKGROUND: Cardiovascular magnetic resonance (CMR) has become an important diagnostic imaging modality in cardiovascular medicine. However, insufficient image quality may compromise its diagnostic accuracy. We aimed to describe and validate standardized criteria to evaluate a) cine steady-state free precession (SSFP), b) late gadolinium enhancement (LGE), and c) stress first-pass perfusion images. These criteria will serve for quality assessment in the setting of the Euro-CMR registry. METHODS: Thirty-five qualitative criteria were defined (scores 0-3) with lower scores indicating better image quality. In addition, quantitative parameters were measured yielding 2 additional quality criteria, i.e. signal-to-noise ratio (SNR) of non-infarcted myocardium (as a measure of correct signal nulling of healthy myocardium) for LGE and % signal increase during contrast medium first-pass for perfusion images. These qualitative and quantitative criteria were assessed in a total of 90 patients (60 patients scanned at our own institution at 1.5T (n=30) and 3T (n=30) and in 30 patients randomly chosen from the Euro-CMR registry examined at 1.5T). Analyses were performed by 2 SCMR level-3 experts, 1 trained study nurse, and 1 trained medical student. RESULTS: The global quality score was 6.7±4.6 (n=90, mean of 4 observers, maximum possible score 64), range 6.4-6.9 (p=0.76 between observers). It ranged from 4.0-4.3 for 1.5T (p=0.96 between observers), from 5.9-6.9 for 3T (p=0.33 between observers), and from 8.6-10.3 for the Euro-CMR cases (p=0.40 between observers). The inter- (n=4) and intra-observer (n=2) agreement for the global quality score, i.e. the percentage of assignments to the same quality tertile ranged from 80% to 88% and from 90% to 98%, respectively. 
The agreement for the quantitative assessment of LGE images (scores 0, 1, and 2 for SNR <2, 2-5, and >5, respectively) ranged from 78-84% for the entire population, and was 70-93% at 1.5T, 64-88% at 3T, and 72-90% for the Euro-CMR cases. The agreement for perfusion images (scores 0, 1, and 2 for %SI increase >200%, 100-200%, and <100%, respectively) ranged from 81-91% for the entire population, and was 76-100% at 1.5T, 67-96% at 3T, and 62-90% for the Euro-CMR registry cases. The intra-class correlation coefficient for the global quality score was 0.83. CONCLUSIONS: The described criteria for the assessment of CMR image quality are robust, with good inter- and intra-observer agreement. Further research is needed to define the impact of image quality on the diagnostic and prognostic yield of CMR studies.
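The two quantitative criteria can be expressed as simple threshold functions; lower scores indicate better image quality, as stated in the methods. A sketch under that reading (function names are mine):

```python
# Sketch of the two quantitative quality scores described above
# (lower score = better quality, per the study's convention).
def lge_snr_score(snr):
    """LGE images: SNR of non-infarcted myocardium. Correct signal
    nulling of healthy myocardium gives a low SNR, hence score 0."""
    if snr < 2:
        return 0
    if snr <= 5:
        return 1
    return 2

def perfusion_score(si_increase_pct):
    """Perfusion images: % signal increase during contrast first-pass.
    A strong first-pass enhancement (>200%) scores best (0)."""
    if si_increase_pct > 200:
        return 0
    if si_increase_pct >= 100:
        return 1
    return 2

print(lge_snr_score(1.5), perfusion_score(250))  # 0 0
```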
Abstract:
A quantitative model of water movement in the immediate vicinity of an individual root is developed, and results of an experiment to validate the model are presented. The model assumes that the amount of water transpired by a plant in a given period is replaced by an equal volume entering its root system during the same time, and uses the Darcy-Buckingham equation to calculate the soil water matric potential at any distance from a plant root as a function of parameters related to crop, soil, and atmospheric conditions. The model output is compared against measurements of soil water depletion by rice roots monitored using γ-beam attenuation in a greenhouse of the Escola Superior de Agricultura "Luiz de Queiroz"/Universidade de São Paulo (ESALQ/USP) in Piracicaba, State of São Paulo, Brazil, in 1993. The experimental results agree with the model output. Model simulations show that a single plant root is able to withdraw water from more than 0.1 m away within a few days. We can therefore assume that root distribution is a less important factor for soil water extraction efficiency.
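As a rough illustration of the kind of profile such a model produces, one can solve the steady radial-flow case with a constant hydraulic conductivity K. The paper's model uses the full Darcy-Buckingham equation with conductivity varying with water content, so this is only a simplified sketch with invented parameter values:

```python
import math

# Simplified sketch: steady radial flow of water toward a single root,
# assuming constant hydraulic conductivity K (the actual model lets K
# vary with water content). All parameter values below are invented.
def matric_head_at(r, r_root=0.001, h_root=-15.0, q=1e-6, K=1e-7):
    """Matric head h(r) [m] at radial distance r [m] from a root of
    radius r_root [m], for an uptake rate q [m^2/s per unit root
    length]: h(r) = h_root + (q / 2*pi*K) * ln(r / r_root)."""
    return h_root + (q / (2 * math.pi * K)) * math.log(r / r_root)

# The head recovers toward bulk-soil values away from the root,
# i.e. the drawdown is steepest close to the root surface.
print(matric_head_at(0.1) > matric_head_at(0.01))  # True
```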
Abstract:
The turbot (Scophthalmus maximus) is a commercially valuable flatfish and one of the most promising aquaculture species in Europe. Two transcriptome 454-pyrosequencing runs were used to detect Single Nucleotide Polymorphisms (SNPs) in genes related to immune response and gonad differentiation. A total of 866 true SNPs were detected in 140 different contigs representing 262,093 bp as a whole. Only one true SNP was analyzed in each contig. One hundred and thirteen of the 140 SNPs analyzed were feasible (genotyped), while 111 were polymorphic in a wild population. The transition/transversion ratio (1.354) was similar to that observed in other fish studies. Unbiased gene diversity (He) estimates ranged from 0.060 to 0.510 (mean = 0.351), minor allele frequency (MAF) from 0.030 to 0.500 (mean = 0.259), and all loci were in Hardy-Weinberg equilibrium after Bonferroni correction. A large number of SNPs (49) were located in coding regions, 33 representing synonymous and 16 non-synonymous changes. Most SNP-containing genes were related to immune response and gonad differentiation processes, and could be candidates for functional changes leading to phenotypic changes. These markers will be useful for population screening to look for adaptive variation in wild and domestic turbot.
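The two marker statistics reported above, unbiased gene diversity (He) and MAF, are computed per biallelic locus from genotype counts. A minimal sketch (illustrative; not the authors' pipeline):

```python
# Per-locus statistics for a biallelic SNP from genotype counts
# (n_AA, n_Aa, n_aa). Illustrative only; genotype counts are invented.
def allele_freqs(n_AA, n_Aa, n_aa):
    """Frequencies (p, q) of the two alleles."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)
    return p, 1 - p

def maf(n_AA, n_Aa, n_aa):
    """Minor allele frequency: the smaller of the two frequencies."""
    return min(allele_freqs(n_AA, n_Aa, n_aa))

def unbiased_he(n_AA, n_Aa, n_aa):
    """Nei's unbiased gene diversity: (n/(n-1)) * (1 - sum p_i^2),
    where n is the number of gene copies sampled."""
    n = 2 * (n_AA + n_Aa + n_aa)
    p, q = allele_freqs(n_AA, n_Aa, n_aa)
    return (n / (n - 1)) * (1 - p * p - q * q)

print(round(maf(30, 20, 0), 2))  # 0.2
```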
Abstract:
Cardiac magnetic resonance imaging (CMR) has been used in cardiology since the 1980s. This non-invasive imaging technique acquires three-dimensional images of the heart in any plane, without radiation exposure and at high resolution. It has become a reference technique for the evaluation and investigation of various cardiac pathologies. Cardiac morphology, ventricular function and contraction, tissue perfusion, and tissue viability can all be characterized using different imaging sequences. However, the technology rests on complex physical principles, and putting it into practice runs up against the difficulty of assessing an organ in constant motion. Cardiac MRI is therefore subject to various artifacts that disturb the interpretation of examinations and can reduce the diagnostic accuracy of the technique. To our knowledge, most CMR images are analyzed and interpreted without a rigorous evaluation of the intrinsic quality of the examination, and no criteria for evaluating the quality of CMR examinations have so far been clearly established. The CMR team at the CHUV, led by Prof. J. Schwitter, compiled a list of 35 qualitative and 12 quantitative criteria assessing the quality of a CMR examination and incorporated them into an evaluation grid. The aim of this study is to describe and validate the reproducibility of the criteria in this grid, through the simultaneous interpretation of CMR examinations by different observers (cardiologists specialized in MRI, a medical student, a specialized nurse). Our study demonstrated that the criteria defined for the evaluation of CMR examinations are robust and yield good intra- and inter-observer reproducibility.
This study thus validates the use of these quality criteria in cardiac magnetic resonance imaging. Further studies are still needed to determine the impact of image quality on the diagnostic accuracy of the technique. The standardized criteria we validated will be used to assess image quality in a European-scale CMR study, the "Euro-CMR registry". Other intended uses of these quality criteria include providing a reference for assessing examination quality in all future clinical studies using CMR, allowing CMR centers to quantify their level of quality or even to establish a quality-standard certificate for such centers, evaluating the reproducibility of image assessment by different observers within a single center, and precisely evaluating the quality of sequences developed in the future in the CMR field.
Abstract:
Chest wall syndrome (CWS) is defined as a benign source of chest pain, localized on the anterior chest wall and caused by a musculoskeletal condition. CWS is the most frequent cause of chest pain in primary care. The aim of this study was to develop and validate a clinical prediction score for CWS. A literature review was first performed, both to determine whether such a score already existed and to identify the variables described as predictive of CWS. The statistical analysis used data from a multicenter clinical cohort of patients who had consulted in primary care in French-speaking Switzerland with chest pain (59 practices, 672 patients). A definitive diagnosis had been established at 12 months of follow-up. Relevant variables were selected by bivariate analyses, and the clinical prediction score was developed by multivariate logistic regression. The score was externally validated using data from a German cohort (n = 1,212). The bivariate analyses identified six variables characterizing CWS: chest pain (neither retrosternal nor oppressive), stabbing pain, well-localized pain, no history of coronary artery disease, absence of physician concern, and pain reproducible by palpation. The last variable counts for 2 points in the score; the others count for 1 point each, so the total score ranges from 0 to 7 points. In the derivation cohort, the area under the receiver operating characteristic (ROC) curve was 0.80 (95% confidence interval: 0.76-0.83). At a diagnostic cut-off of ≥6 points, the score had 89% specificity and 45% sensitivity.
Among all patients with CWS (n = 284), 71% (n = 201) had pain reproducible by palpation and 45% (n = 127) were correctly diagnosed by the score. For a subset (n = 43) of these correctly classified CWS patients, 65 additional investigations (30 electrocardiograms, 16 chest X-rays, 10 laboratory tests, 8 specialist consultations, and one thoracic CT scan) had been performed to reach the diagnosis. Among the false positives (n = 41) were three cases of stable angina (1.8% of all positives). External validation gave an area under the ROC curve of 0.76 (95% confidence interval: 0.73-0.79), with 22% sensitivity and 93% specificity. This clinical prediction score for CWS is a useful complement to its diagnosis, which is usually reached by exclusion. Indeed, for the 127 patients with CWS correctly classified by our score, 65 additional investigations could have been avoided. Moreover, chest pain reproducible by palpation, although its most important characteristic, is not pathognomonic of CWS.
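The score itself is simple enough to sketch directly; the argument names are mine, while the weights (2 points for palpation-reproducible pain, 1 point for each of the other five items) are those described above:

```python
# Sketch of the chest-wall-syndrome prediction score described above.
def cws_score(localized_pain, stabbing_pain, pain_reproducible_by_palpation,
              no_retrosternal_oppressive_pain, no_coronary_history,
              physician_not_worried):
    """Total score 0-7: palpation-reproducible pain counts 2 points,
    each of the other five items counts 1 point."""
    score = 2 if pain_reproducible_by_palpation else 0
    for item in (localized_pain, stabbing_pain,
                 no_retrosternal_oppressive_pain,
                 no_coronary_history, physician_not_worried):
        score += 1 if item else 0
    return score

# All six items present -> maximum score; at the reported cut-off the
# score reached 89% specificity and 45% sensitivity in derivation.
print(cws_score(True, True, True, True, True, True))  # 7
```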
Abstract:
We developed a method of sample preparation using an epoxy compound, which was validated in two steps. First, we studied the homogeneity within samples by scanning tubes filled with radioactive epoxy; within-sample homogeneity was better than 2%. Then, we studied the homogeneity between samples over a 4.5 h dispensing time; between-sample homogeneity was also better than 2%. This study demonstrates that we have a validated method, which assures the traceability of epoxy samples.
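Homogeneity figures like the "better than 2%" above are commonly expressed as a coefficient of variation of repeated measurements. A minimal sketch with hypothetical readings (the abstract does not give raw data):

```python
import statistics

# Coefficient of variation (%) of repeated activity readings, a common
# way to express homogeneity. The readings below are hypothetical.
def cv_percent(readings):
    """Sample standard deviation as a percentage of the mean."""
    return 100 * statistics.stdev(readings) / statistics.mean(readings)

readings = [100.2, 99.8, 100.5, 99.6, 100.1]  # hypothetical scan values
print(cv_percent(readings) < 2)  # True
```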
Abstract:
Age-related changes in lumbar vertebral microarchitecture, as assessed by trabecular bone score (TBS), were evaluated in a cohort of 5,942 French women. The magnitude of TBS decline between 45 and 85 years of age was piecewise linear in the spine and averaged 14.5%; the rate of decline increases by 50% after age 65. INTRODUCTION: This study aimed to evaluate age-related changes in lumbar vertebral microarchitecture, as assessed by TBS, in a cohort of French women aged 45-85 years. METHODS: An all-comers cohort of French Caucasian women was selected from two clinical centers. Data obtained from these centers were cross-calibrated for TBS and bone mineral density (BMD). BMD and TBS were evaluated at L1-L4 and for all lumbar vertebrae combined using GE-Lunar Prodigy densitometer images. Weight, height, and body mass index (BMI) were also determined. To validate our all-comers cohort, its BMD normative data were compared with French Prodigy data. RESULTS: A cohort of 5,942 French women aged 45 to 85 years was created. Dual-energy X-ray absorptiometry normative data obtained for BMD from this cohort were not significantly different from French Prodigy normative data (p = 0.15). TBS values at L1-L4 were poorly correlated with BMI (r = -0.17) and weight (r = -0.14) and not correlated with height. TBS values obtained for all lumbar vertebrae combined (L1, L2, L3, L4) decreased with age. The magnitude of TBS decline at L1-L4 between 45 and 85 years of age was piecewise linear and averaged 14.5%, with the rate increasing by 50% after age 65. Similar results were obtained for other regions of interest in the lumbar spine. As opposed to BMD, TBS was not affected by spinal osteoarthrosis. CONCLUSION: The age-specific reference curve for TBS generated here could therefore be used to help clinicians improve osteoporosis patient management and to monitor microarchitectural changes related to treatment or other diseases in routine clinical practice.
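The piecewise-linear decline can be reconstructed from the two figures given (14.5% total decline over the 40 years from age 45 to 85, with the post-65 rate 1.5 times the pre-65 rate). A sketch that solves for the two slopes under that reading; the interpolation itself is mine, not the authors' reference curve:

```python
# Reconstruction of the piecewise-linear TBS decline described above:
# 20 years at rate r (ages 45-65), then 20 years at 1.5r (ages 65-85),
# with 20r + 30r = 14.5% total. The interpolation is illustrative only.
def tbs_decline_pct(age, total_decline=14.5, breakpoint=65):
    """Cumulative % TBS decline since age 45 at the given age."""
    r = total_decline / 50.0  # pre-65 rate, % per year
    if age <= breakpoint:
        return r * (age - 45)
    return r * 20 + 1.5 * r * (age - breakpoint)

print(round(tbs_decline_pct(85), 2))  # 14.5
```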
Abstract:
The prognosis of community-acquired pneumonia ranges from rapid resolution of symptoms and full recovery of functional status to the development of severe medical complications and death. The pneumonia severity index is a rigorously studied prediction rule for prognosis that objectively stratifies patients into quintiles of risk for short-term mortality on the basis of 20 demographic and clinical variables routinely available at presentation. The pneumonia severity index was derived and validated with data on >50,000 patients with community-acquired pneumonia using well-accepted methodological standards, and it is the only pneumonia decision aid that has been empirically shown to safely increase the proportion of patients treated in the outpatient setting. Because of its prognostic accuracy, methodological rigor, and effectiveness and safety as a decision aid, the pneumonia severity index has become the reference standard for risk stratification of community-acquired pneumonia.