964 results for external validation


Relevance: 30.00%

Abstract:

Background: Evidence-based practice (EBP) is embraced internationally as an ideal approach to improving patient outcomes and providing cost-effective care. However, despite the support for and apparent benefits of evidence-based practice, it has proved complex and difficult to incorporate into the clinical setting. Research exploring the implementation of evidence-based practice has highlighted many internal and external barriers, including clinicians’ lack of knowledge and confidence to integrate EBP into their day-to-day work. Nurses in particular often feel ill-equipped, with little confidence to find, appraise and implement evidence. Aims: This study aimed to undertake preliminary testing of the psychometric properties of tools that measure nurses’ self-efficacy and outcome expectancy with regard to evidence-based practice. Methods: A survey design was used in which nurses who had either completed an EBP unit or been randomly selected from a major tertiary referral hospital in Brisbane, Australia, were sent two newly developed tools: 1) the Self-efficacy in Evidence-Based Practice (SE-EBP) scale and 2) the Outcome Expectancy for Evidence-Based Practice (OE-EBP) scale. Results: Principal Axis Factoring found three factors with eigenvalues above one for the SE-EBP, explaining 73% of the variance, and one factor for the OE-EBP scale, explaining 82% of the variance. Cronbach’s alpha values for the SE-EBP, the three SE-EBP factors and the OE-EBP were all > .91, suggesting some item redundancy. The SE-EBP was able to distinguish between those with no prior exposure to EBP and those who had completed an introductory EBP unit. Conclusions: While further investigation of the validity of these tools is needed, preliminary testing indicates that the SE-EBP and OE-EBP scales are valid and reliable instruments for measuring health professionals’ confidence in the process and the outcomes of basing their practice on evidence.
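
As a rough illustration of the internal-consistency statistic reported above, the sketch below computes Cronbach's alpha for a small matrix of item scores. The scale size and item responses are hypothetical, not data from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 5 nurses x 4 self-efficacy items on a 0-10 confidence scale
scores = np.array([
    [8, 7, 9, 8],
    [5, 4, 6, 5],
    [9, 9, 8, 9],
    [3, 4, 3, 2],
    [7, 6, 7, 8],
])
print(f"alpha = {cronbach_alpha(scores):.2f}")  # values > .9 can hint at item redundancy
```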

Relevance: 30.00%

Abstract:

Too often the relationship between clients and external consultants is perceived as one of protagonist versus antagonist. Stories of dramatic, failed consultancies abound, as do related anecdotal quips. A contributing factor in many "apparently" failed consultancies is a poor appreciation, by both the client and the consultant, of the client's true goals for the project and of how to assess progress toward these goals. This paper presents and analyses a measurement model for assessing client success when engaging an external consultant. Three main areas of assessment are identified: (1) the consultant's recommendations, (2) client learning, and (3) consultant performance. Engagement success is empirically measured along these dimensions through a series of case studies and a subsequent survey of clients and consultants involved in 85 computer-based information system selection projects. Validation of the model constructs suggests the existence of six distinct and individually important dimensions of engagement success. Both clients and consultants are encouraged to attend to these dimensions in pre-engagement proposal and selection processes and in post-engagement evaluation of outcomes.

Relevance: 30.00%

Abstract:

In this paper, the buildingEXODUS (V1.1) evacuation model is described and discussed, and attempts at qualitative and quantitative model validation are presented. The data set used for the validation is the Tsukuba pavilion evacuation data. This data set is of particular interest as the evacuation was influenced by external conditions, namely inclement weather. As part of the validation exercise, the sensitivity of the buildingEXODUS predictions to a range of variables and conditions is examined, including exit flow capacity, occupant response times and the impact of external conditions on the developing evacuation. The buildingEXODUS evacuation model was found to produce good qualitative and quantitative agreement with the experimental data.

Relevance: 30.00%

Abstract:

Knowledge production in entrepreneurship requires inclusivity as well as diversity and pluralism in research perspectives and approaches. In this article, the authors address concerns about interpretivist research, regarding the validity, reliability, objectivity, generalizability, and communicability of its results, that militate against its more widespread acceptance. Following the nonfoundationalist argument that all observation is theory-laden and context specific, and that there are no external criteria against which to assess research design, execution and the data produced, the authors propose that quality must be internalized within the underlying research philosophy rather than treated as something to be tested upon completion. This requires a shift from the notion of validity as an outcome to validation as a process. To elucidate this, they provide a guiding framework and present a case illustration to assist interpretivist entrepreneurship researchers in establishing and demonstrating the quality of their work.

Relevance: 30.00%

Abstract:

Photographs have been used to enhance consumer reporting of preferred meat doneness; however, their use has not been validated for this purpose. This study used standard cooking methods to produce steaks of five degrees of doneness (rare, medium, medium well, well done and very well done) to study consumers’ perception of doneness, from both the external and internal surfaces of the cooked steak and from corresponding photographs of each sample. Consumers evaluated each surface of the cooked steaks in relation to doneness for acceptability, ‘just about right’ rating and perception of doneness. Data were analysed using a split plot ANOVA and a least significant difference test. Perception scores (for both external and internal surfaces) did not differ significantly between presentation methods (steak samples and corresponding photographs) (p > 0.05). The result indicates that photographs can be used as a valid approach for assessing preference for meat doneness.

Relevance: 30.00%

Abstract:

OBJECTIVE: To assess the impedance cardiogram recorded by an automated external defibrillator during cardiac arrest, with a view to facilitating emergency care by lay persons. Lay persons are poor at emergency pulse checks (sensitivity 84%, specificity 36%), and guidelines recommend that they should not be performed. The impedance cardiogram (dZ/dt) is used to indicate stroke volume. Can an impedance cardiogram algorithm in a defibrillator rapidly determine circulatory arrest and facilitate prompt initiation of external cardiac massage?

DESIGN: Clinical study.

SETTING: University hospital.

PATIENTS: Phase 1: patients attending for myocardial perfusion imaging. Phase 2: patients recruited during cardiac arrest; this group also included nonarrest controls.

INTERVENTIONS: The impedance cardiogram was recorded through defibrillator/electrocardiographic pads oriented in the standard cardiac arrest position.

MEASUREMENTS AND MAIN RESULTS: Phase 1: Stroke volumes from gated myocardial perfusion imaging scans were correlated with parameters from the impedance cardiogram system (dZ/dt(max) and the peak amplitude of the Fast Fourier Transform of dZ/dt between 1.5 Hz and 4.5 Hz). Multivariate analysis was performed to fit stroke volumes from gated myocardial perfusion imaging scans with linear and quadratic terms for dZ/dt(max) and the Fast Fourier Transform, to identify significant parameters for incorporation into a cardiac arrest diagnostic algorithm. The square of the peak amplitude of the Fast Fourier Transform of dZ/dt was the best predictor of reduction in stroke volumes from gated myocardial perfusion imaging scans (range = 33-85 mL; p = .016). Having established that the two-pad impedance cardiogram system could detect differences in stroke volumes from gated myocardial perfusion imaging scans, we assessed its performance in diagnosing cardiac arrest. Phase 2: The impedance cardiogram was recorded in 132 "cardiac arrest" patients (53 training, 79 validation) and 97 controls (47 training, 50 validation); the diagnostic algorithm indicated cardiac arrest with sensitivities and specificities (+/- exact 95% confidence intervals) of 89.1% (85.4-92.1) and 99.6% (99.4-99.7) in training, and 81.1% (77.6-84.3) and 97% (96.7-97.4) in validation.

CONCLUSIONS: The impedance cardiogram algorithm is a significant marker of circulatory collapse. Automated defibrillators with an integrated impedance cardiogram could improve emergency care by lay persons, enabling rapid and appropriate initiation of external cardiac massage.
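
A minimal sketch of the kind of spectral feature described above: the peak amplitude of the Fourier transform of dZ/dt in the 1.5-4.5 Hz band, computed here with NumPy on a synthetic signal. The sampling rate and the signal itself are illustrative assumptions, not the study's data or algorithm.

```python
import numpy as np

def fft_peak_amplitude(dzdt: np.ndarray, fs: float,
                       f_lo: float = 1.5, f_hi: float = 4.5) -> float:
    """Peak FFT amplitude of the dZ/dt signal within the [f_lo, f_hi] Hz band."""
    spectrum = np.abs(np.fft.rfft(dzdt))
    freqs = np.fft.rfftfreq(len(dzdt), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(spectrum[band].max())

# Synthetic example: a 2 Hz "cardiac" component buried in noise, sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1.0 / fs)
dzdt = 0.5 * np.sin(2 * np.pi * 2.0 * t) + 0.1 * np.random.randn(t.size)

peak = fft_peak_amplitude(dzdt, fs)
print(f"peak band amplitude = {peak:.2f}")  # the study used the square of this value as a predictor
```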

Relevance: 30.00%

Abstract:

A single-step lateral flow immunoassay (LFIA) was developed and validated for the rapid screening of paralytic shellfish toxins (PSTs) in a variety of shellfish species, at concentrations relevant to the regulatory limit of 800 μg STX-diHCl equivalents/kg shellfish meat. A simple aqueous extraction protocol was performed within several minutes from sample homogenate. The qualitative result was generated after a 5 min run time using a portable reader, which removed subjectivity from data interpretation. The test was designed to generate noncompliant results with samples containing approximately 800 μg of STX-diHCl/kg. The cross-reactivities in relation to STX, expressed as mean ± SD, were as follows: NEO: 128.9% ± 29%; GTX1&4: 5.7% ± 1.5%; GTX2&3: 23.4% ± 10.4%; dcSTX: 55.6% ± 10.9%; dcNEO: 28.0% ± 8.9%; dcGTX2&3: 8.3% ± 2.7%; C1&C2: 3.1% ± 1.2%; GTX5: 23.3% ± 14.4% (n = 5 LFIA lots). There were no indications of matrix effects from the different samples evaluated (mussels, scallops, oysters, clams, cockles), nor of interference from other shellfish toxins (domoic acid, okadaic acid group). Evaluation of naturally contaminated samples showed that no false negative results were generated across a variety of samples and toxin profiles (n = 23), in comparison with reference methods (MBA method 959.08, LC-FD method 2005.06). External laboratory evaluations of naturally contaminated samples (n = 39) indicated good correlation with the reference methods (MBA, LC-FD). This is the first LFIA shown, through rigorous validation, to detect most major PSTs in a reliable manner, and it will be of major benefit to both industry and regulators, who need to perform rapid and reliable testing to ensure shellfish are safe to eat.

Relevance: 30.00%

Abstract:

Globalization and liberalization, with the entry of many prominent foreign manufacturers, changed the automobile scenario in India from the early 1990s. Manufacturers such as Ford, General Motors, Honda, Toyota, Suzuki, Hyundai, Renault, Mitsubishi, Benz, BMW, Volkswagen and Nissan set up manufacturing units in India in joint ventures with Indian counterpart companies, making use of the Foreign Direct Investment policy of the Government of India. These manufacturers started capturing the hearts of Indian car customers with their choice of technological and innovative product features, quality and reliability. The multiplicity of choices available to Indian passenger car buyers drastically changed the car purchase scenario in India, and particularly in the State of Kerala, transforming the automobile scene from a sellers' market to a buyers' market. Car customers started developing their own personal preferences and purchasing patterns, which were hitherto unknown in the Indian automobile segment. The main purpose of this paper is to identify possible parameters that influence the consumer purchase behaviour patterns of passenger car owners in the State of Kerala and to develop a framework, so that further research can be carried out based on the framework and the identified parameters.

Relevance: 30.00%

Abstract:

The goal of the Chemistry-Climate Model Validation (CCMVal) activity is to improve understanding of chemistry-climate models (CCMs) through process-oriented evaluation and to provide reliable projections of stratospheric ozone and its impact on climate. An appreciation of the details of model formulations is essential for understanding how models respond to the changing external forcings of greenhouse gases and ozone-depleting substances, and hence for understanding the ozone and climate forecasts produced by the models participating in this activity. Here we introduce and review the models used for the second round (CCMVal-2) of this intercomparison, regarding the implementation of chemical, transport, radiative, and dynamical processes in these models. In particular, we review the advantages and problems associated with approaches used to model processes of relevance to stratospheric dynamics and chemistry. Furthermore, we state the definitions of the reference simulations performed and describe the forcing data used in these simulations. We identify some developments in chemistry-climate modeling that make models more physically based or more comprehensive, including the introduction of an interactive ocean, online photolysis, troposphere-stratosphere chemistry, and non-orographic gravity-wave deposition linked to tropospheric convection. These relatively new developments indicate that stratospheric CCM modeling is becoming more consistent with our physically based understanding of the atmosphere.

Relevance: 30.00%

Abstract:

We report on the first real-time ionospheric prediction network and its capabilities to ingest a global database and forecast F-layer characteristics and "in situ" electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principles models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the "in situ" observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
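
The selection step described above (test each candidate model against incoming observations, pick the best fit, use it for the next forecast window) can be sketched roughly as follows. The toy models, observation tuples and error metric are illustrative assumptions, not the network's actual library or data.

```python
from typing import Callable, Dict, List, Tuple

# Each candidate "model" maps (latitude, local_time) to a predicted foF2 value.
Model = Callable[[float, float], float]

def select_best_fit(models: Dict[str, Model],
                    observations: List[Tuple[float, float, float]]) -> str:
    """Return the name of the model with the lowest mean relative error
    against incoming (lat, local_time, observed_foF2) observations."""
    def mean_rel_error(model: Model) -> float:
        errors = [abs(model(lat, lt) - obs) / obs for lat, lt, obs in observations]
        return sum(errors) / len(errors)
    return min(models, key=lambda name: mean_rel_error(models[name]))

# Toy candidates standing in for the on-line library of ionospheric models
models = {
    "model_a": lambda lat, lt: 8.0 + 0.02 * lat,
    "model_b": lambda lat, lt: 7.0 + 0.5 * (lt / 12.0),
}
obs = [(30.0, 10.0, 8.7), (45.0, 14.0, 9.0), (10.0, 2.0, 8.1)]
print(select_best_fit(models, obs))  # best-fit model then drives the next 12 h of forecasts
```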

Relevance: 30.00%

Abstract:

There is a family of well-known external clustering validity indexes for measuring the degree of compatibility or similarity between two hard partitions of a given data set, including partitions with different numbers of categories. A unified, fully equivalent set-theoretic formulation for an important class of such indexes was derived and extended to the fuzzy domain in a previous work by the author [Campello, R.J.G.B., 2007. A fuzzy extension of the Rand index and other related indexes for clustering and classification assessment. Pattern Recognition Lett., 28, 833-841]. However, that fuzzy set-theoretic formulation is not valid as a general approach for comparing two fuzzy partitions of data; rather, it is an approach for comparing a fuzzy partition against a hard referential partition of the data into mutually disjoint categories. In this paper, generalized external indexes for comparing two data partitions with overlapping categories are introduced. These indexes can be used as general measures for comparing two partitions of the same data set into overlapping categories. An important issue that is seldom touched upon in the literature is also addressed, namely, how to compare two partitions of different subsamples of data. A number of pedagogical examples and three simulation experiments are presented and analyzed in detail. A review of recent related work compiled from the literature is also provided. (c) 2010 Elsevier B.V. All rights reserved.
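
For context, the sketch below computes the classical Rand index, the best-known member of the family of external validity indexes mentioned above, for two hard partitions of the same objects. It does not reproduce the paper's fuzzy or generalized indexes; the partitions are made-up examples.

```python
from itertools import combinations

def rand_index(labels_a, labels_b):
    """Classical Rand index between two hard partitions of the same objects:
    the fraction of object pairs on which the partitions agree
    (both place the pair in the same cluster, or both place it in different clusters)."""
    pairs = list(combinations(range(len(labels_a)), 2))
    agreements = sum(
        (labels_a[i] == labels_a[j]) == (labels_b[i] == labels_b[j])
        for i, j in pairs
    )
    return agreements / len(pairs)

# Two hard partitions of six objects, with different numbers of categories
partition_1 = [0, 0, 1, 1, 2, 2]
partition_2 = [0, 0, 0, 1, 1, 1]
print(f"Rand index = {rand_index(partition_1, partition_2):.2f}")
```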

Relevance: 30.00%

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance: 30.00%

Abstract:

The inverted flying exercise, performed with external loads of 25, 50, 75 and 100% of each individual's maximum load, was analyzed electromyographically for the pectoralis major and deltoideus anterior muscles in eleven male volunteers, using MEDI-TRACE-200 surface electrodes connected to a biological signal acquisition module coupled to a PC/AT computer. The electromyographic signals were processed and the effective (RMS) values obtained were normalized to the maximum voluntary isometric contraction. When the concentric phase of each muscle with a given load was statistically compared with the eccentric phase, all muscles showed a significant electromyographic difference for all loads, with activity always higher in the concentric phase. Analysis of the different loads for each muscle showed that, in the concentric phase, all muscles presented significant electromyographic activity, which was highest with the maximum load. When the effect of each load on the different muscles in the concentric and eccentric phases was analyzed, the muscles presented distinct activity profiles.
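
A minimal sketch of the normalization step described above: the effective (root-mean-square) value of an EMG segment expressed as a percentage of the RMS recorded during a maximum voluntary isometric contraction. The synthetic signals are placeholders, not the study's recordings or processing pipeline.

```python
import numpy as np

def rms(signal: np.ndarray) -> float:
    """Root-mean-square ('effective') value of an EMG signal segment."""
    return float(np.sqrt(np.mean(np.square(signal))))

def normalize_to_mvic(segment: np.ndarray, mvic_segment: np.ndarray) -> float:
    """Express the RMS of an exercise segment as a percentage of the RMS
    recorded during a maximum voluntary isometric contraction (MVIC)."""
    return 100.0 * rms(segment) / rms(mvic_segment)

# Synthetic signals standing in for recorded EMG, not the study's data
rng = np.random.default_rng(0)
exercise_emg = 0.4 * rng.standard_normal(2000)
mvic_emg = 1.0 * rng.standard_normal(2000)
print(f"{normalize_to_mvic(exercise_emg, mvic_emg):.1f} % MVIC")
```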

Relevance: 30.00%

Abstract:

This paper presents the development of a knowledge-based system (KBS) prototype able to design natural gas cogeneration plants, demonstrating new features for this field. The design of such power plants represents a synthesis problem, subject to thermodynamic constraints that include the location and sizing of components. The project was developed in partnership with the major Brazilian gas and oil company, and involved interaction with an external consultant as well as an interdisciplinary team. The paper focuses on validation and lessons learned, concentrating on important aspects such as the generation of alternative configuration schemes, breadth of each scheme description created by the system, and its module to support economic feasibility analysis. (C) 2014 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Analyzing and modeling relationships between the structure of chemical compounds, their physico-chemical properties, and biological or toxic effects in chemical datasets is a challenging task for scientific researchers in the field of cheminformatics. Therefore, (Q)SAR model validation is essential to ensure future model predictivity on unseen compounds. Proper validation is also one of the requirements of regulatory authorities for approving a model's use in real-world scenarios as an alternative testing method. However, at the same time, the question of how to validate a (Q)SAR model is still under discussion. In this work, we empirically compare k-fold cross-validation with external test set validation. The introduced workflow makes it possible to apply the built and validated models to large amounts of unseen data, and to compare the performance of the different validation approaches. Our experimental results indicate that cross-validation produces (Q)SAR models with higher predictivity than external test set validation and reduces the variance of the results. Statistical validation is important for evaluating the performance of (Q)SAR models, but it does not support the user in better understanding the properties of the model or the underlying correlations. We present the 3D molecular viewer CheS-Mapper (Chemical Space Mapper), which arranges compounds in 3D space such that their spatial proximity reflects their similarity. The user can indirectly determine similarity by selecting which features to employ in the process. The tool can use and calculate different kinds of features, such as structural fragments as well as quantitative chemical descriptors. Comprehensive functionalities, including clustering, alignment of compounds according to their 3D structure, and feature highlighting, aid the chemist in better understanding patterns and regularities and in relating the observations to established scientific knowledge. Even though visualization tools for analyzing (Q)SAR information in small molecule datasets exist, integrated visualization methods that allow for the investigation of model validation results are still lacking. We propose visual validation as an approach for the graphical inspection of (Q)SAR model validation results. New functionalities in CheS-Mapper 2.0 facilitate the analysis of (Q)SAR information and allow the visual validation of (Q)SAR models. The tool enables the comparison of model predictions to the actual activity in feature space. Our approach reveals whether the endpoint is modeled too specifically or too generically and highlights common properties of misclassified compounds. Moreover, the researcher can use CheS-Mapper to inspect how the (Q)SAR model predicts activity cliffs. The CheS-Mapper software is freely available at http://ches-mapper.org.
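
To make the validation comparison concrete, here is a minimal sketch contrasting k-fold cross-validation with a single external (held-out) test set, using scikit-learn on a synthetic dataset. The random descriptors, labels and classifier are illustrative assumptions; they are not the thesis' datasets, endpoints or modeling workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

# Synthetic descriptor matrix and activity labels standing in for a (Q)SAR dataset
rng = np.random.default_rng(42)
X = rng.standard_normal((300, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(300) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0)

# k-fold cross-validation: every compound is used for testing exactly once
cv_scores = cross_val_score(model, X, y, cv=5)
print(f"5-fold CV accuracy: {cv_scores.mean():.2f} +/- {cv_scores.std():.2f}")

# External test set validation: a single held-out split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model.fit(X_train, y_train)
print(f"external test set accuracy: {model.score(X_test, y_test):.2f}")
```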