956 results for "Validation model"
Abstract:
The purpose of the study is: (1) to describe how nursing students experienced their clinical learning environment and the supervision given by staff nurses working in hospital settings; and (2) to develop and test an evaluation scale of the Clinical Learning Environment and Supervision (CLES). The study was carried out in several phases. The pilot study (n=163) explored the association between the characteristics of a ward and its evaluation as a learning environment by students. The second version of the research instrument (developed from the results of this pilot study) was tested by an expert panel (n=9 nurse teachers) and a test-retest group of student nurses (n=38). After this evaluative phase, the CLES became the basic research instrument for this study and was tested with the Finnish main sample (n=416). In this phase, a concurrent validity instrument (Dunn & Burnett 1995) was used to confirm the validation process of the CLES. The international comparative study compared the Finnish main sample with a British sample (n=142). This comparison was necessary for two reasons: a new instrument needs to be tested in another nursing culture, and open employment markets in the European Union (EU) create a need to evaluate and integrate EU health care educational systems. The results showed that the individualised supervision system is the most commonly used supervision model and that the supervisory relationship with a personal mentor is the single most meaningful element of supervision evaluated by nursing students. The ward atmosphere and the management style of the ward manager are the most important environmental factors of the clinical ward. The study integrates two theoretical elements - learning environment and supervision - in developing a preliminary theoretical model. The comparative international study showed that Finnish students were more satisfied and evaluated their clinical placements and supervision with higher scores than students in the United Kingdom (UK). The difference between groups was statistically highly significant (p = 0.000). In the UK, clinical placements were longer but students met their nurse teachers less frequently than students in Finland. Arrangements for supervision were similar. This research process has produced an evaluation scale (CLES) that can be used in research and quality assessment of the clinical learning environment and supervision in Finland and the UK. The CLES consists of 27 items sub-divided into five sub-dimensions. Cronbach's alpha coefficients varied from a high of 0.94 to a marginal 0.73. The CLES is a compact evaluation scale whose user-friendliness makes it suitable for continuing evaluation.
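The reliability figures quoted above (Cronbach's alpha between a marginal 0.73 and a high 0.94 across the five sub-dimensions) come from the standard internal-consistency formula, alpha = k/(k-1) * (1 - sum of item variances / variance of the summed scale). A minimal sketch of that calculation in Python, using simulated item responses rather than the CLES data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: 416 simulated respondents answering a 6-item sub-dimension
rng = np.random.default_rng(42)
latent = rng.normal(size=(416, 1))                      # shared trait
items = latent + rng.normal(scale=0.7, size=(416, 6))   # item scores = trait + noise
print(f"alpha = {cronbach_alpha(items):.2f}")
```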
Abstract:
PURPOSE: Quantification of myocardial blood flow (MBF) with generator-produced (82)Rb is an attractive alternative for centres without an on-site cyclotron. Our aim was to validate (82)Rb-measured MBF against MBF measured using (15)O-water, a tracer that is 100% extractable from the circulation even at high flow rates, in healthy control subjects and patients with mild coronary artery disease (CAD). METHODS: MBF was measured at rest and during adenosine-induced hyperaemia with (82)Rb and (15)O-water PET in 33 participants (22 control subjects, aged 30 ± 13 years; 11 CAD patients without transmural infarction, aged 60 ± 13 years). A one-tissue compartment (82)Rb model with ventricular spillover correction was used. The (82)Rb flow-dependent extraction rate was derived from (15)O-water measurements in a subset of 11 control subjects. Myocardial flow reserve (MFR) was defined as the ratio of hyperaemic to rest MBF. Pearson's correlation r, Bland-Altman 95% limits of agreement (LoA), and Lin's concordance correlation ρ(c) (measuring both precision and accuracy) were used. RESULTS: Over the entire MBF range (0.66-4.7 ml/min/g), concordance was excellent for MBF (r = 0.90, [(82)Rb-(15)O-water] mean difference ± SD = 0.04 ± 0.66 ml/min/g, LoA = -1.26 to 1.33 ml/min/g, ρ(c) = 0.88) and MFR (range 1.79-5.81, r = 0.83, mean difference = 0.14 ± 0.58, LoA = -0.99 to 1.28, ρ(c) = 0.82). Hyperaemic MBF was reduced in CAD patients compared with the subset of 11 control subjects (2.53 ± 0.74 vs. 3.62 ± 0.68 ml/min/g, p = 0.002, for (15)O-water; 2.53 ± 1.01 vs. 3.82 ± 1.21 ml/min/g, p = 0.013, for (82)Rb) and this was paralleled by a lower MFR (2.65 ± 0.62 vs. 3.79 ± 0.98, p = 0.004, for (15)O-water; 2.85 ± 0.91 vs. 3.88 ± 0.91, p = 0.012, for (82)Rb). Myocardial perfusion was homogeneous in 1,114 of 1,122 segments (99.3%) and there were no differences in MBF among the coronary artery territories (p > 0.31). CONCLUSION: Quantification of MBF with (82)Rb using a newly derived correction for the nonlinear extraction function was validated against MBF measured with (15)O-water in control subjects and patients with mild CAD, and was found to be accurate even at high flow rates. (82)Rb-derived MBF estimates appear robust enough for clinical research, advancing a step further towards their implementation in clinical routine.
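The agreement statistics reported here (the Bland-Altman 95% limits of agreement and Lin's concordance correlation ρ(c)) are straightforward to compute from paired MBF estimates. A minimal sketch on simulated paired values standing in for the (82)Rb and (15)O-water measurements (the bias and noise levels are assumptions, not the study data):

```python
import numpy as np

def bland_altman_loa(x, y):
    """Mean difference and 95% limits of agreement between paired measurements."""
    d = np.asarray(x) - np.asarray(y)
    md, sd = d.mean(), d.std(ddof=1)
    return md, (md - 1.96 * sd, md + 1.96 * sd)

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient (precision and accuracy)."""
    x, y = np.asarray(x), np.asarray(y)
    sxy = np.cov(x, y)[0, 1]
    return 2 * sxy / (x.var(ddof=1) + y.var(ddof=1) + (x.mean() - y.mean()) ** 2)

# Simulated paired MBF values (ml/min/g): a small bias plus scatter
rng = np.random.default_rng(1)
mbf_water = rng.uniform(0.7, 4.5, size=33)
mbf_rb = mbf_water + rng.normal(0.04, 0.33, size=33)

md, loa = bland_altman_loa(mbf_rb, mbf_water)
print(f"mean diff = {md:.2f} ml/min/g, LoA = {loa[0]:.2f} to {loa[1]:.2f}, "
      f"rho_c = {lins_ccc(mbf_rb, mbf_water):.2f}")
```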
Abstract:
This study aimed to evaluate the content validity of the nursing diagnosis of nausea in the immediate post-operative period, using Fehring's model. Descriptive study with 52 expert nurses who responded to an instrument containing identification data and data for validation of the nausea diagnosis. Most experts considered Domain 12 (Comfort), Class 1 (Physical Comfort) and the statement (Nausea) adequate for the diagnosis. Modifications were suggested to the current definition of this nursing diagnosis. Four defining characteristics were considered primary (reported nausea, increased salivation, aversion to food and vomiting sensation) and eight secondary (increased swallowing, sour taste in the mouth, pallor, tachycardia, diaphoresis, sensation of hot and cold, changes in blood pressure and pupil dilation). The total score for the diagnosis of nausea was 0.79. Reports of nausea, vomiting sensation, increased salivation and aversion to food are strong predictors of the nursing diagnosis of nausea.
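In Fehring's diagnostic content validation (DCV) model, each defining characteristic receives a score equal to the weighted average of expert ratings on a 1-5 scale (weights 0, 0.25, 0.50, 0.75, 1), with scores of 0.80 or above usually treated as primary (major) and 0.50-0.79 as secondary (minor). A minimal sketch of that scoring, with hypothetical expert ratings rather than the study's data:

```python
import numpy as np

# Weights attached to the 1-5 Likert ratings in Fehring's DCV model
WEIGHTS = {1: 0.0, 2: 0.25, 3: 0.5, 4: 0.75, 5: 1.0}

def dcv_score(ratings) -> float:
    """Weighted mean of expert ratings (1-5) for one defining characteristic."""
    return float(np.mean([WEIGHTS[int(r)] for r in ratings]))

def classify(score: float) -> str:
    # >= 0.80 -> primary (major); 0.50-0.79 -> secondary (minor); < 0.50 -> discarded
    return "primary" if score >= 0.80 else "secondary" if score >= 0.50 else "discarded"

# Hypothetical ratings from 52 experts for two defining characteristics
rng = np.random.default_rng(7)
ratings = {
    "reported nausea": rng.choice([3, 4, 5], size=52, p=[0.1, 0.3, 0.6]),
    "pallor": rng.choice([2, 3, 4, 5], size=52, p=[0.2, 0.4, 0.3, 0.1]),
}
for name, r in ratings.items():
    s = dcv_score(r)
    print(f"{name}: DCV = {s:.2f} -> {classify(s)}")
```

In Fehring's model the total score for a diagnosis is typically the average of the individual characteristic scores, which is how a value such as the 0.79 reported above would be obtained.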
Abstract:
Huntington's disease (HD) is an autosomal dominant neurodegenerative disorder caused by an expansion of CAG repeats in the huntingtin (Htt) gene. Despite intensive efforts devoted to investigating the mechanisms of its pathogenesis, effective treatments for this devastating disease remain unavailable. The lack of suitable models recapitulating the entire spectrum of the degenerative process has severely hindered the identification and validation of therapeutic strategies. The discovery that the degeneration in HD is caused by a mutation in a single gene has offered new opportunities to develop experimental models of HD, ranging from in vitro models to transgenic primates. However, recent advances in viral-vector technology provide promising alternatives based on the direct transfer of genes to selected sub-regions of the brain. Rodent studies have shown that overexpression of mutant human Htt in the striatum using adeno-associated virus or lentivirus vectors induces progressive neurodegeneration, which resembles that seen in HD. This article highlights progress made in modeling HD using viral vector gene transfer. We describe data obtained with this highly flexible approach for the targeted overexpression of a disease-causing gene. The ability to deliver mutant Htt to specific tissues has opened pathological processes to experimental analysis and allowed targeted therapeutic development in rodent and primate pre-clinical models.
Abstract:
Many definitions and debates exist about the core characteristics of the social and solidarity economy (SSE) and its actors. Among others, legal form, profit, geographical scope, and size as criteria for identifying SSE actors often reveal dissent among SSE scholars. Instead of using a dichotomous, either-in-or-out definition of SSE actors, this paper presents an assessment tool that takes multiple dimensions into account to offer a more comprehensive and nuanced view of the field. We first define the core dimensions of the assessment tool by synthesizing the multiple indicators found in the literature. We then empirically test these dimensions and their interrelatedness and seek to identify potential clusters of actors. Finally, we discuss the practical implications of our model.
Abstract:
INTRODUCTION: A clinical decision rule to improve the accuracy of a diagnosis of influenza could help clinicians avoid unnecessary use of diagnostic tests and treatments. Our objective was to develop and validate a simple clinical decision rule for diagnosis of influenza. METHODS: We combined data from 2 studies of influenza diagnosis in adult outpatients with suspected influenza: one set in California and one in Switzerland. Patients in both studies underwent a structured history and physical examination and had a reference standard test for influenza (polymerase chain reaction or culture). We randomly divided the dataset into derivation and validation groups and then evaluated simple heuristics and decision rules from previous studies and 3 rules based on our own multivariate analysis. Cutpoints for stratification of risk groups in each model were determined using the derivation group before evaluating them in the validation group. For each decision rule, the positive predictive value and likelihood ratio for influenza in low-, moderate-, and high-risk groups, and the percentage of patients allocated to each risk group, were reported. RESULTS: The simple heuristics (fever and cough; fever, cough, and acute onset) were helpful when positive but not when negative. The most useful and accurate clinical rule assigned 2 points for fever plus cough, 2 points for myalgias, and 1 point each for duration <48 hours and chills or sweats. The risk of influenza was 8% for 0 to 2 points, 30% for 3 points, and 59% for 4 to 6 points; the rule performed similarly in derivation and validation groups. Approximately two-thirds of patients fell into the low- or high-risk group and would not require further diagnostic testing. CONCLUSION: A simple, valid clinical rule can be used to guide point-of-care testing and empiric therapy for patients with suspected influenza.
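Because the winning rule is fully specified above (2 points for fever plus cough, 2 for myalgias, 1 each for onset under 48 hours and for chills or sweats, with cutpoints at 0-2, 3, and 4-6 points), it can be written directly as a small scoring function. The risk percentages are the ones reported in the abstract; the function and parameter names are ours:

```python
def influenza_score(fever_and_cough: bool, myalgias: bool,
                    onset_under_48h: bool, chills_or_sweats: bool) -> int:
    """Point total for the clinical decision rule described above."""
    return 2 * fever_and_cough + 2 * myalgias + onset_under_48h + chills_or_sweats

def risk_group(score: int) -> str:
    # Cutpoints and observed influenza risks reported in the study
    if score <= 2:
        return "low risk (~8% influenza)"
    if score == 3:
        return "moderate risk (~30% influenza)"
    return "high risk (~59% influenza)"

# Example: fever + cough, myalgias, onset yesterday, no chills or sweats -> 5 points
s = influenza_score(True, True, True, False)
print(s, "->", risk_group(s))
```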
Abstract:
The objective of this paper is to compare the performance of two predictive radiological models, logistic regression (LR) and neural network (NN), with five different resampling methods. One hundred and sixty-seven patients with proven calvarial lesions as the only known disease were enrolled. Clinical and CT data were used for the LR and NN models. Both models were developed with cross-validation, leave-one-out and three different bootstrap algorithms. The final results of each model were compared in terms of error rate and the area under the receiver operating characteristic curve (Az). The neural network obtained a statistically higher Az than LR with cross-validation. The remaining resampling validation methods did not reveal statistically significant differences between the LR and NN rules. The neural network classifier performs better than the one based on logistic regression. This advantage is well detected by three-fold cross-validation, but remains unnoticed when leave-one-out or bootstrap algorithms are used.
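A minimal sketch of this kind of comparison, written with scikit-learn (not the original software) and synthetic data standing in for the clinical and CT variables: both classifiers are scored by cross-validated area under the ROC curve (Az) using three-fold cross-validation.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the 167 patients with clinical and CT features
X, y = make_classification(n_samples=167, n_features=10, n_informative=5, random_state=0)

cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)   # three-fold CV
models = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "neural network": make_pipeline(StandardScaler(),
                                    MLPClassifier(hidden_layer_sizes=(8,),
                                                  max_iter=2000, random_state=0)),
}
for name, model in models.items():
    az = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: Az = {az.mean():.3f} +/- {az.std():.3f}")
```

The same pattern extends to the other resampling schemes, although leave-one-out requires pooling the held-out predictions before computing Az.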
Abstract:
Intensification of agricultural production without sound management and regulation can lead to severe environmental problems, as in Western Santa Catarina State, Brazil, where intensive swine production has caused large accumulations of manure and, consequently, water pollution. Natural resource scientists are asked by decision-makers for advice on management and regulatory decisions. Distributed environmental models are useful tools, since they can be used to explore the consequences of various management practices. However, in many areas of the world, quantitative data for model calibration and validation are lacking. The data-intensive distributed environmental model AgNPS was applied in a data-poor environment, the upper catchment (2,520 ha) of the Ariranhazinho River, near the city of Seara, in Santa Catarina State. Steps included data preparation, cell size selection, sensitivity analysis, model calibration and application to different management scenarios. The model was calibrated based on a best guess for model parameters and on a pragmatic sensitivity analysis. The parameters were adjusted to match model outputs (runoff volume, peak runoff rate and sediment concentration) closely with the sparse observed data. A modelling grid cell resolution of 150 m gave appropriate results at acceptable computational cost. The rainfall-runoff response of the AgNPS model was calibrated using three separate rainfall ranges (< 25, 25-60, > 60 mm). Predicted sediment concentrations were consistently six to ten times higher than observed, probably due to sediment trapping along vegetated channel banks. Predicted N and P concentrations in stream water ranged from just below to well above regulatory norms. Expert knowledge of the area, in addition to experience reported in the literature, was able to compensate in part for the limited calibration data. Several scenarios (actual, recommended and excessive manure applications, and point-source pollution from swine operations) could be compared with the model, using a relative ranking rather than quantitative predictions.
Abstract:
Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the overall population. Different methods that take the ceiling effect into account have been proposed to model this type of variable: tobit models, Censored Least Absolute Deviations (CLAD) models or two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect. Methods: Two different data sets were used to compare both models: a) real data coming from the European Study of Mental Disorders (ESEMeD), used to model the EQ5D index, one of the utility measures most commonly used for the evaluation of health status; and b) data obtained from simulation. Cross-validation was used to compare the predicted values of the tobit model and the OLS model. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different datasets were created for different values of the error variance and different percentages of individuals with ceiling effect. The estimates of the coefficients, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared. Results: With regard to the results of the ESEMeD study, the predicted values obtained with the OLS model and those obtained with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those from the tobit model. In the simulation study, we observed that when the error variance was small (s=1), the tobit model presented unbiased estimates of the coefficients and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was greater (s=10 or s=20), the percentage of explained variance for the tobit model and the predicted values were more similar to those obtained with an OLS model. Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
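A minimal sketch of the tobit (censored regression) model next to OLS, written with numpy/scipy on simulated data with a ceiling at 1 (standing in for the upper bound of the EQ5D index). This is not the ESEMeD analysis, only an illustration of why OLS coefficients shrink when the ceiling is ignored:

```python
import numpy as np
from scipy import optimize, stats

def tobit_negloglik(params, X, y, ceiling):
    """Negative log-likelihood of a tobit model with right-censoring at `ceiling`."""
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)
    xb = X @ beta
    cens = y >= ceiling
    ll_obs = stats.norm.logpdf((y[~cens] - xb[~cens]) / sigma) - log_sigma
    ll_cens = stats.norm.logsf((ceiling - xb[cens]) / sigma)   # P(latent value above ceiling)
    return -(ll_obs.sum() + ll_cens.sum())

# Simulated data with a ceiling effect (assumed setup, not the ESEMeD data)
rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.7, 0.3])
y_latent = X @ beta_true + rng.normal(scale=0.25, size=n)
y = np.minimum(y_latent, 1.0)                 # observed scores cannot exceed the ceiling

fit = optimize.minimize(tobit_negloglik, x0=np.zeros(X.shape[1] + 1),
                        args=(X, y, 1.0), method="BFGS")
beta_tobit, sigma_hat = fit.x[:-1], np.exp(fit.x[-1])

# OLS on the censored outcome: coefficients are biased toward zero
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print("tobit:", np.round(beta_tobit, 3), "sigma:", round(float(sigma_hat), 3))
print("OLS:  ", np.round(beta_ols, 3))
```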
Abstract:
BACKGROUND: Genotypes obtained with commercial SNP arrays have been extensively used in many large case-control or population-based cohorts for SNP-based genome-wide association studies for a multitude of traits. Yet, these genotypes capture only a small fraction of the variance of the studied traits. Genomic structural variants (GSV) such as Copy Number Variation (CNV) may account for part of the missing heritability, but their comprehensive detection requires either next-generation arrays or sequencing. Sophisticated algorithms that infer CNVs by combining the intensities from SNP probes for the two alleles can already be used to extract a partial view of such GSV from existing data sets. RESULTS: Here we present several advances to facilitate the latter approach. First, we introduce a novel CNV detection method based on a Gaussian Mixture Model. Second, we propose a new algorithm, PCA merge, for combining copy-number profiles from many individuals into consensus regions. We applied both our new methods and existing ones to data from 5612 individuals from the CoLaus study who were genotyped on Affymetrix 500K arrays. We developed a number of procedures to evaluate the performance of the different methods. This includes comparison with previously published CNVs as well as the use of a replication sample of 239 individuals genotyped with Illumina 550K arrays. We also established a new evaluation procedure that exploits the fact that related individuals are expected to share their CNVs more frequently than randomly selected individuals. The ability to detect both rare and common CNVs provides a valuable resource that will facilitate association studies exploring potential phenotypic associations with CNVs. CONCLUSION: Our new methodologies for CNV detection and their evaluation will help in extracting additional information from the large amount of SNP-genotyping data available for various cohorts and in using it to explore structural variants and their impact on complex traits.
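The detection algorithm itself is not reproduced here, but the general idea of calling copy-number states with a Gaussian Mixture Model can be sketched on simulated probe intensity log-ratios (using scikit-learn's GaussianMixture; the component means and the three-state setup are illustrative assumptions):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Simulated log2 intensity ratios for probes in one region:
# one-copy loss (~ -0.5), two copies (~ 0.0) and one-copy gain (~ +0.4)
rng = np.random.default_rng(3)
log_ratios = np.concatenate([
    rng.normal(-0.5, 0.12, 60),
    rng.normal(0.0, 0.12, 800),
    rng.normal(0.4, 0.12, 40),
]).reshape(-1, 1)

# Fit a three-component mixture and assign each probe to its most likely state
gmm = GaussianMixture(n_components=3, random_state=0).fit(log_ratios)
order = np.argsort(gmm.means_.ravel())        # components sorted: loss < normal < gain
ranks = np.argsort(order)                     # component index -> rank
state = np.array(["loss", "normal", "gain"])[ranks[gmm.predict(log_ratios)]]
print(dict(zip(*np.unique(state, return_counts=True))))
```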
Abstract:
In this work, a previously developed, statistics-based damage-detection approach was validated for its ability to autonomously detect damage in bridges. The approach uses statistical differences between the actual and predicted behavior of the bridge under a subset of ambient trucks. The predicted behavior is derived from a statistics-based model trained with field data from the undamaged bridge (not a finite element model). The differences between actual and predicted responses, called residuals, are then used to construct control charts, which compare undamaged and damaged structure data. Validation of the damage-detection approach was achieved by using sacrificial specimens that were mounted on the bridge, exposed to ambient traffic loads, and designed to simulate actual damage-sensitive locations. Different damage types and levels were introduced to the sacrificial specimens to study the sensitivity and applicability of the approach. The damage-detection algorithm was able to identify damage, but it also had a high false-positive rate. An evaluation of the sub-components of the damage-detection methodology was completed for the purpose of improving the approach. Several of the underlying assumptions within the algorithm were violated, which was the source of the false positives. Furthermore, the lack of an automatic evaluation process was identified as a potential impediment to widespread use. Recommendations for improving the methodology were developed and preliminarily evaluated. These recommendations are believed to improve the efficacy of the damage-detection approach.
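A minimal sketch of the residual-plus-control-chart idea: predictions from a model trained on undamaged-bridge data are subtracted from the measured response, and residuals falling outside the control limits flag possible damage. The Shewhart-style 3-sigma limits and the simulated residuals below are illustrative assumptions, not the project's actual models:

```python
import numpy as np

rng = np.random.default_rng(5)

# Training period: residuals from the undamaged bridge define the control limits
train_residuals = rng.normal(0.0, 1.0, size=500)
center = train_residuals.mean()
sigma = train_residuals.std(ddof=1)
ucl, lcl = center + 3 * sigma, center - 3 * sigma      # upper/lower control limits

# Monitoring period: a shift in the residual mean stands in for introduced damage
monitor = np.concatenate([rng.normal(0.0, 1.0, 200),   # still undamaged
                          rng.normal(2.5, 1.0, 50)])   # after simulated damage
out_of_control = (monitor > ucl) | (monitor < lcl)
print(f"{out_of_control.sum()} of {monitor.size} monitored residuals fall outside the limits")
```

Violations of the underlying assumptions (for example, residuals that are correlated or non-normal) inflate the false-positive rate, which is consistent with the behaviour described above.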
Abstract:
False identity documents constitute a potentially powerful source of forensic intelligence because they are essential elements of transnational crime and provide cover for organized crime. In previous work, a systematic profiling method using false documents' visual features was built within a forensic intelligence model. In the current study, the comparison process and metrics lying at the heart of this profiling method are described and evaluated. This evaluation takes advantage of 347 false identity documents of four different types seized in two countries whose sources were known to be common or different (following police investigations and the dismantling of counterfeit factories). Intra-source and inter-source variations were evaluated through the computation of more than 7500 similarity scores. The profiling method could thus be validated and its performance assessed using two complementary approaches to measuring type I and type II error rates: a binary classification and the computation of likelihood ratios. Very low error rates were measured across the four document types, demonstrating the validity and robustness of the method for linking documents to a common source or differentiating them. These results pave the way for an operational implementation of a systematic profiling process integrated in a developed forensic intelligence model.
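Given the similarity scores, the two evaluation approaches reduce to simple computations: a same-source/different-source decision at a threshold yields type I and type II error rates, and the ratio of the score densities under the two hypotheses yields a score-based likelihood ratio. A sketch on simulated scores (the beta distributions, the threshold and the histogram-based density estimate are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(11)

# Simulated similarity scores in [0, 1]: pairs of documents from the same source
# tend to score higher than pairs from different sources
intra_source = rng.beta(8, 2, size=2000)      # same-source comparisons
inter_source = rng.beta(2, 8, size=5500)      # different-source comparisons

threshold = 0.5                               # declare "common source" above this score
type_i = np.mean(inter_source >= threshold)   # different sources wrongly linked
type_ii = np.mean(intra_source < threshold)   # same source missed

# Score-based likelihood ratio: how much more probable a given score is under the
# same-source hypothesis than under the different-source hypothesis
bins = np.linspace(0, 1, 21)
p_intra, _ = np.histogram(intra_source, bins=bins, density=True)
p_inter, _ = np.histogram(inter_source, bins=bins, density=True)
score = 0.62
idx = np.searchsorted(bins, score) - 1
lr = p_intra[idx] / max(p_inter[idx], 1e-9)
print(f"type I = {type_i:.3f}, type II = {type_ii:.3f}, LR({score}) ~ {lr:.1f}")
```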
Abstract:
Photopolymerization is commonly used in a broad range of bioapplications, such as drug delivery, tissue engineering, and surgical implants, where liquid materials are injected and then hardened by means of illumination to create a solid polymer network. However, photopolymerization using a probe, e.g., a needle guiding both the liquid and the curing illumination, has not been thoroughly investigated. We present a Monte Carlo model that takes into account the dynamic absorption and scattering parameters as well as the solid-liquid boundaries of the photopolymer to yield the shape and volume of minimally invasively injected, photopolymerized hydrogels. In the first part of the article, our model is validated using a set of well-known poly(ethylene glycol) dimethacrylate hydrogels, showing excellent agreement between simulated and experimental volume growth rates. In the second part, in situ experimental results and simulations for photopolymerization in tissue cavities are presented. It was found that a cavity with a volume of 152 mm3 could be photopolymerized from the output of a 0.28-mm2 fiber by adding scattering lipid particles, whereas only a volume of 38 mm3 (25%) was achieved without particles. The proposed model provides a simple and robust method to solve complex photopolymerization problems, where the dimension of the light source is much smaller than the volume of the photopolymerizable hydrogel.
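The model described above additionally tracks dynamic absorption/scattering parameters and the solid-liquid boundary; the sketch below only shows the basic Monte Carlo mechanics of photon transport in a static, homogeneous medium (exponential free paths, then a choice between absorption and isotropic scattering). The optical coefficients and the point-source geometry are illustrative assumptions:

```python
import numpy as np

def random_direction(rng):
    """Uniformly distributed unit vector."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def simulate_photons(n_photons, mu_a, mu_s, rng):
    """Isotropic point source in an infinite homogeneous medium.
    Returns the distance from the source at which each photon is absorbed (mm)."""
    mu_t = mu_a + mu_s
    absorbed_r = np.empty(n_photons)
    for i in range(n_photons):
        pos = np.zeros(3)
        direction = random_direction(rng)
        while True:
            step = -np.log(1.0 - rng.random()) / mu_t   # free path ~ Exp(mu_t)
            pos = pos + step * direction
            if rng.random() < mu_a / mu_t:              # absorption event
                absorbed_r[i] = np.linalg.norm(pos)
                break
            direction = random_direction(rng)           # isotropic scattering
    return absorbed_r

rng = np.random.default_rng(0)
# Illustrative coefficients (1/mm); the mu_a/mu_s balance controls where the dose lands
for mu_s in (0.1, 1.0):
    r = simulate_photons(2000, mu_a=0.05, mu_s=mu_s, rng=rng)
    print(f"mu_s = {mu_s}: median distance from source at absorption = {np.median(r):.1f} mm")
```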
Abstract:
Since integral abutment bridges decrease the initial and maintenance costs of bridges, they provide an attractive alternative for bridge designers. The objective of this project is to develop rational and experimentally verified design recommendations for these bridges. Field testing consisted of instrumenting two bridges in Iowa to monitor air and bridge temperatures, bridge displacements, and pile strains. Core samples were also collected to determine coefficients of thermal expansion for the two bridges. Design values for the coefficient of thermal expansion of concrete are recommended, as well as revised temperature ranges for the deck and girders of steel and concrete bridges. A girder extension model is developed to predict the longitudinal bridge displacements caused by changing bridge temperatures. Abutment rotations and passive soil pressures behind the abutment were neglected. The model is subdivided into segments that have uniform temperatures, coefficients of expansion, and moduli of elasticity. Weak axis pile strains were predicted using a fixed-head model. The pile is idealized as an equivalent cantilever with a length determined by the surrounding soil conditions and pile properties. Both the girder extension model and the fixed-head model are conservative for design purposes. A longitudinal frame model is developed to account for abutment rotations. The frame model better predicts both the longitudinal displacement and weak axis pile strains than do the simpler models. A lateral frame model is presented to predict the lateral motion of skewed bridges and the associated strong axis pile strains. Full passive soil pressure is assumed on the abutment face. Two alternatives for the pile design are presented. Alternative One is the more conservative and includes thermally induced stresses. Alternative Two neglects thermally induced stresses but allows for the partial formation of plastic hinges (inelastic redistribution of forces). Ductility criteria are presented for this alternative. Both alternatives are illustrated in a design example.
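A minimal sketch of the girder extension idea: free longitudinal expansion summed over segments with uniform temperature change and coefficient of thermal expansion (abutment rotation and passive soil pressure neglected, as stated above). The segment lengths, coefficients and temperature changes below are hypothetical, and the full model also carries a modulus of elasticity per segment:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    length_m: float        # segment length (m)
    alpha_per_degC: float  # coefficient of thermal expansion (1/degC)
    delta_T_degC: float    # uniform temperature change of the segment (degC)

def girder_extension(segments) -> float:
    """Free longitudinal expansion in metres: sum of alpha * L * dT over segments."""
    return sum(s.alpha_per_degC * s.length_m * s.delta_T_degC for s in segments)

# Hypothetical three-segment bridge (values are illustrative, not the Iowa bridges)
segments = [
    Segment(30.0, 10e-6, 25.0),   # concrete span
    Segment(30.0, 10e-6, 25.0),   # concrete span
    Segment(20.0, 12e-6, 30.0),   # steel portion with larger alpha and temperature range
]
print(f"predicted longitudinal displacement = {girder_extension(segments) * 1000:.1f} mm")
```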
Abstract:
This study aimed to assess the psychometric robustness of the French version of the Supportive Care Needs Survey and its breast cancer (BC) module (SCNS-SF34-Fr and SCNS-BR8-Fr). Breast cancer patients were recruited in two hospitals (in Paris, France, and Lausanne, Switzerland), either in ambulatory chemotherapy, radiotherapy, or surgery services. They were invited to complete the SCNS-SF34-Fr and SCNS-BR8-Fr as well as quality of life and patient satisfaction questionnaires. Three hundred and eighty-four BC patients (73% response rate) returned completed questionnaires. A five-factor model was confirmed for the SCNS-SF34-Fr with adequate goodness-of-fit indexes, although some items showed content redundancy, and a one-factor structure was identified for the SCNS-BR8-Fr. Internal consistency and test-retest estimates were satisfactory for most scales. The SCNS-SF34-Fr and SCNS-BR8-Fr scales demonstrated conceptual differences from the quality of life and satisfaction with care scales, highlighting the specific relevance of this assessment. Different levels of needs could be differentiated between groups of BC patients in terms of age and level of education (P < 0.001). The SCNS-SF34-Fr and SCNS-BR8-Fr present adequate psychometric properties despite some redundant items. These questionnaires support the crucial endeavour of designing appropriate care services according to BC patients' characteristics.