Abstract:
Hospital disaster resilience can be defined as “the ability of hospitals to resist, absorb, and respond to the shock of disasters while maintaining and surging essential health services, and then to recover to its original state or adapt to a new one.” This article aims to provide a framework that can be used to comprehensively measure hospital disaster resilience. An evaluation framework for assessing hospital resilience was initially proposed through a systematic literature review and Modified-Delphi consultation. Eight key domains were identified: hospital safety; command, communication, and cooperation system; disaster plan; resource stockpile; staff capability; disaster training and drills; emergency services and surge capability; and recovery and adaptation. The data for this study were collected from 41 tertiary hospitals in Shandong Province, China, using a specially designed questionnaire. Factor analysis was conducted to determine the underpinning structure of the framework and identified a four-factor structure of hospital resilience, namely, emergency medical response capability (F1), disaster management mechanisms (F2), hospital infrastructural safety (F3), and disaster resources (F4). These factors displayed good internal consistency. The overall level of hospital disaster resilience (F) was calculated using the scoring model F = 0.615F1 + 0.202F2 + 0.103F3 + 0.080F4. This validated framework provides a new way to operationalise the concept of hospital resilience and a foundation for the further development of the measurement instrument in future studies.
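The scoring model above is a plain weighted sum, which a short sketch makes concrete. This is a minimal illustration assuming each factor score F1–F4 has already been computed on a common scale (the abstract does not specify the scale); the factor values below are hypothetical.

```python
# Composite hospital disaster resilience score, using the factor
# weights reported in the abstract. Input values are hypothetical.
WEIGHTS = {"F1": 0.615, "F2": 0.202, "F3": 0.103, "F4": 0.080}

def resilience_score(factors: dict) -> float:
    """F = 0.615*F1 + 0.202*F2 + 0.103*F3 + 0.080*F4"""
    return sum(WEIGHTS[name] * factors[name] for name in WEIGHTS)

# Example with illustrative factor scores on a 0-100 scale.
print(resilience_score({"F1": 72.0, "F2": 65.0, "F3": 80.0, "F4": 55.0}))
```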
Abstract:
This article presents field applications and validations of the controlled Monte Carlo data generation scheme. The scheme was previously derived to help the Mahalanobis squared distance–based damage identification method cope with data-shortage problems, which often cause inadequate data multinormality and unreliable identification outcomes. Real vibration datasets from two actual civil engineering structures with such data (and identification) problems are selected as test objects and are shown to require enhancement of their data condition. By utilizing the robust probability measures of the data condition indices in controlled Monte Carlo data generation, together with a statistical sensitivity analysis of the Mahalanobis squared distance computational system, well-conditioned synthetic data generated by an optimal controlled Monte Carlo data generation configuration can be evaluated without bias against data generated by other set-ups and against the original data. The analysis results reconfirm that controlled Monte Carlo data generation is able to overcome the shortage of observations, improve data multinormality, and enhance the reliability of the Mahalanobis squared distance–based damage identification method, particularly with respect to false-positive errors. The results also highlight the dynamic structure of controlled Monte Carlo data generation, which makes the scheme readily adaptable to any type of input data with any (original) distributional condition.
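The abstract does not reproduce the authors' implementation, but the Mahalanobis squared distance computation at the core of such damage identification is standard. Below is a minimal sketch, assuming baseline (healthy-state) feature vectors and a 95th-percentile damage threshold; all names and data are illustrative, not the paper's.

```python
import numpy as np

def msd(features: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    """Mahalanobis squared distance of each feature vector to the
    baseline (healthy-state) distribution."""
    mu = baseline.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(baseline, rowvar=False))
    diff = features - mu
    # Row-wise diff @ cov_inv @ diff.T diagonal, without the full matrix.
    return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

rng = np.random.default_rng(0)
baseline = rng.normal(size=(200, 4))                    # healthy-state data
threshold = np.quantile(msd(baseline, baseline), 0.95)  # ~5% false positives
test = rng.normal(loc=0.8, size=(20, 4))                # possibly damaged state
print(msd(test, baseline) > threshold)                  # True flags damage
```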
Abstract:
The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. Because domain experts often do not feel confident judging the correctness and completeness of process models that system analysts create, validation often has to fall back on a discourse in natural language. To support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, no sophisticated technique is currently available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
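To convey the idea of verbalization (not the paper's actual technique, which handles gateways, sentence aggregation, and referring expressions), a minimal template-based sketch for a linear sequence of activities might look like this; all names are hypothetical.

```python
# Minimal template-based verbalization of a linear process model.
# This only illustrates the mapping from model elements to sentences;
# real process models also contain gateways, events, and lanes.
process = {
    "actor": "clerk",
    "activities": ["receives the order", "checks the stock", "ships the goods"],
}

def verbalize(model: dict) -> str:
    sentences = []
    for i, activity in enumerate(model["activities"]):
        if i == 0:
            connector = "First"
        elif i == len(model["activities"]) - 1:
            connector = "Finally"
        else:
            connector = "Then"
        sentences.append(f"{connector}, the {model['actor']} {activity}.")
    return " ".join(sentences)

print(verbalize(process))
# First, the clerk receives the order. Then, the clerk checks the
# stock. Finally, the clerk ships the goods.
```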
Abstract:
The present study aims to reproduce, as closely as possible, the geometry of the high pressure ratio single stage radial-inflow turbine used in the Sundstrand Power Systems T-100 Multipurpose Small Power Unit. The commercial software ANSYS-Vista RTD, along with its built-in module BladeGen, is used to conduct a meanline design and create the 3D geometry of one flow passage. After the proposed design is carefully examined against the geometrical and experimental data, ANSYS-TurboGrid is applied to generate the computational mesh. CFD simulations are performed with ANSYS-CFX, in which the three-dimensional Reynolds-Averaged Navier-Stokes equations are solved subject to appropriate boundary conditions. Results are compared with numerical and experimental data published in the literature in order to reproduce the exact geometry of the existing turbine and validate the numerical results against the experimental ones.
Abstract:
Singapore is located at the equator, with an abundant supply of solar radiation and relatively high ambient temperature and relative humidity throughout the year. The meteorological conditions of Singapore are thus favourable for the efficient operation of solar energy based systems. Solar-assisted heat pump systems have been built on the rooftop of the National University of Singapore's Faculty of Engineering. The objectives of this study include the design and performance evaluation of a solar-assisted heat pump system for water desalination, water heating, and the drying of clothes. Using the MATLAB programming language, a two-dimensional simulation model has been developed to conduct parametric studies on the system. The system shows good prospects for implementation in both industrial and residential applications and would open new opportunities for replacing conventional energy sources with green renewable energy.
Abstract:
Computational models in physiology often integrate functional and structural information across a large range of spatio-temporal scales, from the ionic to the whole-organ level. Their sophistication raises both expectations and scepticism concerning how computational methods can improve our understanding of living organisms and how they can reduce, replace, and refine animal experiments. A fundamental requirement for fulfilling these expectations and achieving the full potential of computational physiology is a clear understanding of what models represent and how they can be validated. The present study aims to inform strategies for validation by elucidating the complex interrelations between experiments, models, and simulations in cardiac electrophysiology. We describe the processes, data, and knowledge involved in the construction of whole ventricular multiscale models of cardiac electrophysiology. Our analysis reveals that models, simulations, and experiments are intertwined in an assemblage that is a system itself, namely the model-simulation-experiment (MSE) system. Validation must therefore take into account the complex interplay between models, simulations, and experiments. Key points for developing validation strategies are: 1) understanding sources of bio-variability is crucial to the comparison between simulation and experimental results; 2) robustness of techniques and tools is a prerequisite to conducting physiological investigations using the MSE system; 3) the definition and adoption of standards facilitates the interoperability of experiments, models, and simulations; 4) physiological validation must be understood as an iterative process that defines the specific aspects of electrophysiology the MSE system targets, and is driven by advancements in experimental and computational methods and the combination of both.
Abstract:
Background: The Lower Limb Functional Index (LLFI) is a relatively recently published regional outcome measure. The development article showed that the LLFI had robust and valid clinimetric properties, with sound psychometric and practical characteristics, when compared to the Lower Extremity Functional Scale (LEFS) criterion standard. Objective: The purpose of this study was the cross-cultural adaptation and validation of the Spanish version of the LLFI (LLFI-Sp) in a Spanish population. Methods: A two-stage observational study was conducted. The LLFI was initially cross-culturally adapted to Spanish through double forward and single backward translation, then validated for the psychometric characteristics of validity, internal consistency, reliability, error score, and factor structure. Participants (n = 136) with various lower limb conditions of more than 12 weeks' duration completed the LLFI-Sp, the Western Ontario and McMaster Universities Osteoarthritis Index (WOMAC), and the EuroQol Health Questionnaire 5 Dimensions (EQ-5D-3L). The full sample was used to determine internal consistency, concurrent criterion validity, construct validity, and factor structure; a subgroup (n = 45) was retested at seven days to determine reliability, concurrently completing a global rating of change scale. Results: The LLFI-Sp demonstrated high but not excessive internal consistency (α = 0.91) and high reliability (ICC = 0.96). The factor structure was one-dimensional, which supported the construct validity. Criterion validity with the WOMAC was strong (r = 0.77); with the EQ-5D-3L it was fair and inversely correlated (r = -0.62). The study limitations included the lack of longitudinal data and of a determination of responsiveness. Conclusions: The LLFI-Sp supports the findings of the original English version as a valid lower limb regional outcome measure, demonstrating similar psychometric properties for internal consistency, validity, reliability, error score, and factor structure.
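For reference, the internal-consistency statistic reported above (Cronbach's α) can be computed directly from an item-response matrix. This is a minimal sketch using synthetic data, not the study's; the item counts and noise level are arbitrary.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(1)
ability = rng.normal(size=(136, 1))                   # shared trait drives items
items = ability + rng.normal(scale=0.5, size=(136, 25))
print(f"alpha = {cronbach_alpha(items):.2f}")
```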
Abstract:
Stormwater pollution is linked to stream ecosystem degradation. Various types of modelling techniques are adopted to predict stormwater pollution. The accuracy of the predictions provided by these models depends on the data quality, the appropriate estimation of model parameters, and the validation undertaken. It is well understood that, unlike water quantity data, the water quality datasets available in urban areas span only relatively short time scales, which limits the applicability of the developed models in engineering and ecological assessments of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of the uncertainty associated with pollutant wash-off when models are developed from a limited dataset. It was found that MCCV is likely to yield more realistic estimates of the model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective for model validation when dealing with a small sample size, which otherwise hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies.
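The abstract does not state the wash-off model's form, so the sketch below uses a plain linear regression as a stand-in simply to show how the two cross-validation schemes are set up on a small sample (here via scikit-learn, with ShuffleSplit playing the role of MCCV); everything besides the two CV schemes is illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

rng = np.random.default_rng(42)
X = rng.uniform(size=(25, 2))                  # deliberately small sample
y = 1.5 * X[:, 0] - 0.7 * X[:, 1] + rng.normal(scale=0.1, size=25)

model = LinearRegression()

# Leave-one-out: n folds, each observation held out exactly once.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_squared_error")

# Monte Carlo cross validation: many random train/test splits.
mccv = ShuffleSplit(n_splits=200, test_size=0.2, random_state=0)
mccv_scores = cross_val_score(model, X, y, cv=mccv,
                              scoring="neg_mean_squared_error")

print(f"LOO  MSE: {-loo_scores.mean():.4f}")
print(f"MCCV MSE: {-mccv_scores.mean():.4f} (spread {mccv_scores.std():.4f})")
```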
Abstract:
Study Design: Delphi panel and cohort study. Objective: To develop and refine a condition-specific, patient-reported outcome measure, the Ankle Fracture Outcome of Rehabilitation Measure (A-FORM), and to examine its psychometric properties, including factor structure, reliability, and validity, by assessing item fit with the Rasch model. Background: To our knowledge, there is no patient-reported outcome measure specific to ankle fracture with a robust content foundation. Methods: A 2-stage research design was implemented. First, a Delphi panel that included patients and health professionals developed the items and refined the item wording. Second, a cohort study (n = 45) with 2 assessment points was conducted to permit preliminary maximum-likelihood exploratory factor analysis and Rasch analysis. Results: The Delphi panel reached consensus on 53 potential items that were carried forward to the cohort phase. From the 2 time points, 81 questionnaires were completed and analyzed; 38 potential items were eliminated on account of greater than 10% missing data, factor loadings, and uniqueness. The 15 unidimensional items retained in the scale demonstrated appropriate person and item reliability after (and before) removal of 1 item (anxious about footwear) that had a higher-than-ideal outfit statistic (1.75). The “anxious about footwear” item was retained in the instrument, but only the 14 items with acceptable infit and outfit statistics (range, 0.5–1.5) were included in the summary score. Conclusion: This investigation developed and refined the A-FORM (Version 1.0). The A-FORM items demonstrated favorable psychometric properties and are suitable for conversion to a single summary score. Further studies utilizing the A-FORM instrument are warranted. J Orthop Sports Phys Ther 2014;44(7):488–499. Epub 22 May 2014. doi:10.2519/jospt.2014.4980
Abstract:
In recent years, increasing focus has been placed on making good business decisions using the products of data analysis. With the advent of the Big Data phenomenon, this is even more apparent than ever before. The question, however, is how organizations can trust decisions made on the basis of results obtained from the analysis of untrusted data. Assurance is needed that the data and datasets informing these decisions have not been tainted by an outside agency. This study proposes enabling the authentication of datasets by extending the RESTful architectural scheme to include authentication parameters, while operating within a larger holistic security framework architecture or model compliant with legislation.
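The abstract leaves the authentication parameters unspecified; one common realization, offered here purely as an assumption rather than the study's scheme, is an HMAC signature computed over the dataset payload and carried as a parameter of the REST request. All names, keys, and endpoints below are hypothetical.

```python
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key, provisioned out of band

def sign_dataset(payload: bytes) -> str:
    """Produce an authentication parameter for a dataset payload."""
    return hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()

def verify_dataset(payload: bytes, signature: str) -> bool:
    """Server-side check that the dataset has not been tampered with."""
    return hmac.compare_digest(sign_dataset(payload), signature)

dataset = b'{"rows": [1, 2, 3]}'
sig = sign_dataset(dataset)
# e.g. GET /datasets/42?sig=<sig> — the service recomputes and compares.
print(verify_dataset(dataset, sig))         # True: payload authenticated
print(verify_dataset(dataset + b"!", sig))  # False: tampered payload
```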