362 results for Validation model

in Queensland University of Technology - ePrints Archive


Relevance:

70.00%

Publisher:

Abstract:

Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which delivers centimetre-precision results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential for controlling the quality of ambiguity resolution. Currently, the most popular ambiguity validation method is the ratio test, whose criterion is often determined empirically. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Incorrect integers that are not detected lead to hazardous results and should be strictly controlled; in ambiguity resolution, this miss-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied to the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, the criteria table for the ratio test is computed from extensive data simulations, and real-time users can determine the ratio test criterion by looking up the table. The method has previously been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis test theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, the factors that influence the fixed failure rate ratio test threshold are discussed on the basis of extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, provided a proper stochastic model is used.
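A minimal sketch of how a fixed failure rate ratio test could be applied in practice, assuming a precomputed criteria table; the threshold values, ambiguity vectors and covariance below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Hypothetical criteria table: (failure_rate, n_ambiguities) -> ratio threshold.
# Real tables are generated from extensive simulation of the underlying model;
# these numbers are placeholders for illustration only.
CRITERIA_TABLE = {
    (0.01, 4): 2.4,
    (0.01, 8): 1.8,
    (0.001, 4): 3.1,
    (0.001, 8): 2.3,
}

def ratio_test(float_amb, cand_best, cand_second, Q_inv, failure_rate, n_amb):
    """Accept the best integer candidate only if the ratio of quadratic
    residuals exceeds the threshold looked up for the requested failure rate."""
    r1 = (float_amb - cand_best) @ Q_inv @ (float_amb - cand_best)
    r2 = (float_amb - cand_second) @ Q_inv @ (float_amb - cand_second)
    threshold = CRITERIA_TABLE[(failure_rate, n_amb)]
    ratio = r2 / r1
    return ratio >= threshold, ratio

# Toy example with 4 ambiguities and a diagonal covariance matrix.
a_float = np.array([3.2, -1.9, 5.1, 0.8])
a_best = np.round(a_float)
a_second = a_best + np.array([1, 0, 0, 0])
Q_inv = np.linalg.inv(np.diag([0.04, 0.05, 0.04, 0.06]))
accepted, ratio = ratio_test(a_float, a_best, a_second, Q_inv, 0.01, 4)
print(f"ratio = {ratio:.2f}, accepted = {accepted}")
```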

Relevance:

60.00%

Publisher:

Abstract:

This research-in-progress paper reports preliminary findings of a study designed to identify the characteristics of an expert in the discipline of Information Systems (IS). The paper presents a formative research model that depicts the characteristics of an expert using three additive constructs, drawing on concepts from psychology, knowledge management and social-behaviour research. The paper then explores the formation and application of ‘expertise’ through four investigative questions in the context of System Evaluations. Data have been gathered from 220 respondents representing three medium-sized companies in India that use the SAP Enterprise Resource Planning system. The paper summarizes the planned data analyses for construct validation, model testing and model application. A validated construct of IS expertise will have a wide range of implications for research and practice.

Relevance:

60.00%

Publisher:

Abstract:

OBJECTIVE: This study explored gene expression differences for predicting response to chemoradiotherapy in esophageal cancer. BACKGROUND: A major pathological response to neoadjuvant chemoradiation is observed in about 40% of esophageal cancer patients and is associated with favorable outcomes. However, patients with tumors of similar histology, differentiation, and stage can have vastly different responses to the same neoadjuvant therapy. This dichotomy may be due to differences in the molecular genetic environment of the tumor cells. METHODS: Diagnostic biopsies were obtained from a training cohort of esophageal cancer patients (n = 13), and extracted RNA was hybridized to genome expression microarrays. The resulting gene expression data were verified by qRT-PCR. In a larger, independent validation cohort (n = 27), we examined differential gene expression by qRT-PCR. The ability of differentially regulated genes to predict response to therapy was assessed in a multivariate leave-one-out cross-validation model. RESULTS: Although 411 genes were differentially expressed between normal and tumor tissue, only 103 genes were altered between responder and non-responder tumors, and 67 genes were differentially expressed >2-fold. These included genes previously reported in esophageal cancer and a number of novel genes. In the validation cohort, 8 of 12 selected genes differed significantly between the response groups. In the predictive model, 5 of 8 genes could predict response to therapy with 95% accuracy in a subset (74%) of patients. CONCLUSIONS: This study has identified a gene microarray pattern and a set of genes associated with response to neoadjuvant chemoradiation in esophageal cancer. The potential of these genes as biomarkers of response to treatment warrants further investigation. Copyright © 2009 by Lippincott Williams & Wilkins.
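A brief sketch of a leave-one-out cross-validation workflow of the kind described above, using a scikit-learn logistic regression on hypothetical expression values; the gene features, labels and classifier choice are assumptions for illustration, not the authors' multivariate model or data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical qRT-PCR expression values (rows: patients, columns: genes);
# labels: 1 = responder, 0 = non-responder. Real data come from the cohorts
# described in the abstract and are not reproduced here.
rng = np.random.default_rng(1)
X = rng.normal(size=(27, 5))      # 5 candidate genes, 27 validation patients
y = rng.integers(0, 2, size=27)   # response labels

loo = LeaveOneOut()
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y, cv=loo)   # one prediction per held-out patient
print(f"LOOCV accuracy: {scores.mean():.2f}")
```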

Relevance:

40.00%

Publisher:

Abstract:

The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for its hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized emission rates for various motor vehicle groups as a function of the conditions under which the vehicles are operating. The validation of aggregate measures, such as speed and acceleration profiles, is performed on an independent data set using three statistical criteria. The MEASURE algorithms have been shown to provide significant improvements in both average emission estimates and explanatory power over some earlier models for pollutants across almost every operating cycle tested.
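The abstract does not name the three statistical criteria, so the following sketch computes three commonly used validation statistics (mean bias, RMSE and R²) on a hypothetical independent data set; the emission values are placeholders.

```python
import numpy as np

def validation_criteria(observed, predicted):
    """Three common validation statistics for comparing modelled and observed
    emission rates on an independent data set. The abstract does not specify
    MEASURE's criteria; these are generic examples."""
    residuals = predicted - observed
    bias = residuals.mean()
    rmse = np.sqrt((residuals ** 2).mean())
    ss_res = (residuals ** 2).sum()
    ss_tot = ((observed - observed.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    return bias, rmse, r2

# Hypothetical hot-stabilized emission rates (g/mile) for one operating cycle.
obs = np.array([1.2, 0.9, 1.5, 2.1, 1.8])
pred = np.array([1.1, 1.0, 1.4, 2.3, 1.7])
print("bias=%.3f rmse=%.3f r2=%.3f" % validation_criteria(obs, pred))
```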

Relevance:

40.00%

Publisher:

Abstract:

Design teams are confronted with the quandary of choosing apposite building control systems to suit the needs of particular intelligent building projects, owing to the availability of innumerable ‘intelligent’ building products and a dearth of inclusive evaluation tools. This paper develops a model for facilitating the selection evaluation of intelligent HVAC control systems for commercial intelligent buildings. To achieve this objective, systematic research activities were conducted: first, to develop, test and refine a general conceptual model using consecutive surveys; then, to convert the developed conceptual framework into a practical model; and, finally, to evaluate the effectiveness of the model by means of expert validation. The surveys indicate that ‘total energy use’ is perceived as the top selection criterion, followed by ‘system reliability and stability’, ‘operating and maintenance costs’, and ‘control of indoor humidity and temperature’. This research not only presents a systematic and structured approach for evaluating candidate intelligent HVAC control systems against the critical selection criteria (CSC), but also suggests a benchmark for comparing one candidate control system against another.
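A simple weighted-sum sketch of scoring candidate HVAC control systems against the critical selection criteria named above; the weights and candidate scores are illustrative assumptions, not the survey-derived results or the paper's evaluation model.

```python
# Illustrative weighted-sum scoring of candidate HVAC control systems against
# the top-ranked selection criteria reported in the abstract. The weights and
# candidate scores below are placeholders, not the survey results.
criteria_weights = {
    "total energy use": 0.35,
    "system reliability and stability": 0.25,
    "operating and maintenance costs": 0.22,
    "control of indoor humidity and temperature": 0.18,
}

candidates = {
    "System A": {"total energy use": 8, "system reliability and stability": 7,
                 "operating and maintenance costs": 6,
                 "control of indoor humidity and temperature": 9},
    "System B": {"total energy use": 7, "system reliability and stability": 9,
                 "operating and maintenance costs": 8,
                 "control of indoor humidity and temperature": 7},
}

def weighted_score(scores, weights):
    """Aggregate a candidate's criterion scores into a single weighted score."""
    return sum(weights[c] * scores[c] for c in weights)

for name, scores in candidates.items():
    print(name, round(weighted_score(scores, criteria_weights), 2))
```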

Relevance:

40.00%

Publisher:

Abstract:

From Pontryagin’s Maximum Principle to the Duke Kahanamoku Aquatic Complex: we develop the theory and generate implementable time-efficient trajectories for a test-bed autonomous underwater vehicle (AUV). This paper is the beginning of the journey from theory to implementation. We begin by considering pure motion trajectories and then move to a rectangular trajectory, which is a concatenation of pure surge and pure sway. These trajectories are tested using our numerical model and demonstrated by our AUV in the pool. In this paper we demonstrate that the above motions are realizable through our method, and we gain confidence in our numerical model. We conclude that, using our current techniques, implementation of time-efficient trajectories is likely to succeed.
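A purely kinematic sketch of concatenating a pure surge segment with a pure sway segment into one corner of a rectangular trajectory; the speeds and durations are arbitrary placeholders rather than the time-optimal controls obtained from the Maximum Principle.

```python
import numpy as np

def pure_motion(start, velocity, duration, dt=0.1):
    """Constant-velocity pure motion: returns positions sampled every dt
    seconds. velocity is (surge, sway) in metres per second."""
    steps = int(duration / dt)
    t = np.arange(1, steps + 1) * dt
    return start + np.outer(t, velocity)

# One corner of a rectangular trajectory: pure surge, then pure sway.
# Speeds and durations are placeholders, not time-optimal values.
start = np.zeros(2)
surge_leg = pure_motion(start, np.array([0.5, 0.0]), duration=20.0)
sway_leg = pure_motion(surge_leg[-1], np.array([0.0, 0.3]), duration=15.0)
trajectory = np.vstack([start, surge_leg, sway_leg])
print(trajectory[-1])   # end point after the two concatenated pure motions
```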

Relevance:

40.00%

Publisher:

Abstract:

Validation is an important issue in the development and application of Bayesian Belief Network (BBN) models, especially when the outcome of the model cannot be directly observed. Despite this, few frameworks for validating BBNs have been proposed, and fewer have been applied to substantive real-world problems. In this paper we adopt the approach of Pitchforth and Mengersen (2013), which comprises nine validation tests covering the structure, discretisation, parameterisation and behaviour of the BBNs in the case study. We describe the process and results of implementing this validation framework on a model of a real airport terminal system, with particular reference to its effectiveness in producing a valid model that can be used and understood by operational decision makers. In applying the proposed validation framework we demonstrate the overall validity of the Inbound Passenger Facilitation Model as well as the effectiveness of the validity framework itself.
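A toy illustration of one behaviour-style validation test in the spirit of Pitchforth and Mengersen (2013), applied to a hand-built two-node BBN; the nodes and probabilities are hypothetical and unrelated to the Inbound Passenger Facilitation Model.

```python
import numpy as np

# A tiny two-node discrete BBN built by hand; nodes and probabilities are
# hypothetical. staff: 0 = adequate, 1 = low; queue: 0 = short, 1 = long.
p_staff = np.array([0.7, 0.3])                  # prior P(staff)
p_queue_given_staff = np.array([[0.8, 0.2],     # P(queue | staff = adequate)
                                [0.3, 0.7]])    # P(queue | staff = low)

def prob_queue_long(p_staff):
    """Marginal P(queue = long) under a given staffing prior."""
    return (p_staff[:, None] * p_queue_given_staff)[:, 1].sum()

# Behaviour test: making low staffing more likely should increase the
# probability of long queues, matching domain expectations.
baseline = prob_queue_long(p_staff)
stressed = prob_queue_long(np.array([0.3, 0.7]))
assert stressed > baseline, "model behaviour contradicts domain expectation"
print(f"P(long queue): baseline={baseline:.2f}, stressed={stressed:.2f}")
```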

Relevance:

40.00%

Publisher:

Abstract:

The design and development of process-aware information systems is often supported by specifying requirements as business process models. Although this approach is generally accepted as an effective strategy, it remains a fundamental challenge to adequately validate these models given the diverging skill sets of domain experts and system analysts. As domain experts often do not feel confident in judging the correctness and completeness of process models that system analysts create, the validation often has to revert to a discourse in natural language. In order to support such a discourse appropriately, so-called verbalization techniques have been defined for different types of conceptual models. However, there is currently no sophisticated technique available that is capable of generating natural-looking text from process models. In this paper, we address this research gap and propose a technique for generating natural language texts from business process models. A comparison with manually created process descriptions demonstrates that the generated texts are superior in terms of completeness, structure, and linguistic complexity. An evaluation with users further demonstrates that the texts are very understandable and effectively allow the reader to infer the process model semantics. Hence, the generated texts represent a useful input for process model validation.
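A minimal template-based sketch of verbalizing a sequential process fragment; the activities and roles are hypothetical, and the technique proposed in the paper uses far richer linguistic processing than these fixed templates.

```python
# Minimal template-based verbalization of a sequential process fragment.
# Activities and roles are hypothetical placeholders.
process = [
    ("Clerk", "receive the purchase order"),
    ("Clerk", "check the order details"),
    ("Manager", "approve the purchase order"),
]

def verbalize(process):
    """Turn an ordered list of (role, activity) pairs into simple sentences."""
    sentences = []
    for i, (role, activity) in enumerate(process):
        opener = ("First," if i == 0
                  else "Afterwards," if i < len(process) - 1
                  else "Finally,")
        sentences.append(f"{opener} the {role.lower()} has to {activity}.")
    return " ".join(sentences)

print(verbalize(process))
```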

Relevance:

40.00%

Publisher:

Abstract:

Stormwater pollution is linked to stream ecosystem degradation. In predicting stormwater pollution, various types of modelling techniques are adopted. The accuracy of the predictions provided by these models depends on the data quality, appropriate estimation of model parameters, and the validation undertaken. It is well understood that available water quality datasets in urban areas span only relatively short time scales, unlike water quantity data, which limits the applicability of the developed models in engineering and ecological assessment of urban waterways. This paper presents the application of leave-one-out (LOO) and Monte Carlo cross-validation (MCCV) procedures in a Monte Carlo framework for the validation and estimation of uncertainty associated with pollutant wash-off when models are developed using a limited dataset. It was found that the application of MCCV is likely to result in a more realistic measure of model coefficients than LOO. Most importantly, MCCV and LOO were found to be effective for model validation when dealing with a small sample size, which otherwise hinders detailed model validation and can undermine the effectiveness of stormwater quality management strategies.
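A short sketch contrasting leave-one-out and Monte Carlo cross-validation (ShuffleSplit in scikit-learn) for a wash-off regression on a small hypothetical data set; the predictors, sample size and model form are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, ShuffleSplit, cross_val_score

# Hypothetical small wash-off dataset: rainfall intensity and antecedent dry
# days as predictors of pollutant load. All values are placeholders.
rng = np.random.default_rng(42)
X = rng.uniform([5, 1], [80, 14], size=(15, 2))            # 15 monitored events
y = 0.4 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 3, 15)   # pollutant wash-off

model = LinearRegression()

# Leave-one-out: every event is held out exactly once.
loo_scores = cross_val_score(model, X, y, cv=LeaveOneOut(),
                             scoring="neg_mean_absolute_error")

# Monte Carlo cross-validation: repeated random train/test splits.
mccv = ShuffleSplit(n_splits=200, test_size=0.3, random_state=0)
mccv_scores = cross_val_score(model, X, y, cv=mccv,
                              scoring="neg_mean_absolute_error")

print(f"LOO  MAE: {-loo_scores.mean():.2f}")
print(f"MCCV MAE: {-mccv_scores.mean():.2f}")
```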

Relevance:

40.00%

Publisher:

Abstract:

The objective of this work is to formulate a nonlinear, coupled model of a container ship during parametric roll resonance, and to validate the model using experimental data.
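As an illustration of the modelling context, the sketch below integrates a common single-degree-of-freedom approximation of parametric roll, a damped Mathieu-type equation with cubic restoring; it is not the coupled model developed in the paper, and all coefficients are placeholders.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Single-degree-of-freedom approximation of parametric roll: damped Mathieu-type
# equation with cubic restoring. NOT the coupled model developed in the paper;
# the coefficients below are illustrative placeholders.
omega_phi = 0.3   # natural roll frequency (rad/s)
zeta = 0.05       # linear damping ratio
h = 0.8           # relative GM variation in waves
omega_e = 0.6     # wave encounter frequency (rad/s), near 2 * omega_phi
alpha = 0.5       # cubic restoring coefficient

def roll_ode(t, state):
    phi, phi_dot = state
    restoring = omega_phi**2 * (1 + h * np.cos(omega_e * t)) * phi + alpha * phi**3
    return [phi_dot, -2 * zeta * omega_phi * phi_dot - restoring]

sol = solve_ivp(roll_ode, (0, 600), [0.05, 0.0], max_step=0.5)
print(f"max roll angle: {np.degrees(np.max(np.abs(sol.y[0]))):.1f} deg")
```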

Relevance:

40.00%

Publisher:

Abstract:

Objective: The aim of this study was to develop a model capable of predicting variability in the mental workload experienced by frontline operators under routine and nonroutine conditions. Background: Excess workload is a risk that needs to be managed in safety-critical industries. Predictive models are needed to manage this risk effectively yet are difficult to develop. Much of the difficulty stems from the fact that workload prediction is a multilevel problem. Method: A multilevel workload model was developed in Study 1 with data collected from an en route air traffic management center. Dynamic density metrics were used to predict variability in workload within and between work units while controlling for variability among raters. The model was cross-validated in Studies 2 and 3 with the use of a high-fidelity simulator. Results: Reported workload generally remained within the bounds of the 90% prediction interval in Studies 2 and 3. Workload crossed the upper bound of the prediction interval only under nonroutine conditions. Qualitative analyses suggest that nonroutine events caused workload to cross the upper bound of the prediction interval because the controllers could not manage their workload strategically. Conclusion: The model performed well under both routine and nonroutine conditions and over different patterns of workload variation. Application: Workload prediction models can be used to support both strategic and tactical workload management. Strategic uses include the analysis of historical and projected workflows and the assessment of staffing needs. Tactical uses include the dynamic reallocation of resources to meet changes in demand.
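A sketch of fitting a simple multilevel (random-intercept) workload model with statsmodels and forming an approximate 90% prediction interval; the data, variable names and interval construction are simplified assumptions, not the dynamic density model or validation procedure reported in the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Hypothetical data: workload ratings from several controllers (raters) against
# a single dynamic density metric. Names and values are placeholders.
rng = np.random.default_rng(7)
n_raters, n_obs = 8, 40
rows = []
for r in range(n_raters):
    rater_offset = rng.normal(0, 0.5)
    density = rng.uniform(5, 30, n_obs)
    workload = 1.0 + 0.15 * density + rater_offset + rng.normal(0, 0.6, n_obs)
    rows.append(pd.DataFrame({"rater": r, "density": density, "workload": workload}))
data = pd.concat(rows, ignore_index=True)

# Random-intercept model: workload varies within and between raters.
model = smf.mixedlm("workload ~ density", data, groups=data["rater"]).fit()

# Approximate 90% prediction interval for a new observation at a given density,
# combining residual and between-rater variance (a simplification).
density_new = 25.0
pred = model.params["Intercept"] + model.params["density"] * density_new
sd = np.sqrt(model.scale + float(model.cov_re.iloc[0, 0]))
z = stats.norm.ppf(0.95)
print(f"predicted workload: {pred:.2f}  90% PI: [{pred - z*sd:.2f}, {pred + z*sd:.2f}]")
```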