905 results for Errors in variables models


Relevance: 100.00%

Abstract:

Advances in safety research, aimed at improving the collective understanding of motor vehicle crash causation, rest upon the pursuit of numerous lines of inquiry. The research community has focused on analytical methods development (negative binomial specifications, simultaneous equations, etc.), on better experimental designs (before-after studies, comparison sites, etc.), on improving exposure measures, and on model specification improvements (additive terms, non-linear relations, etc.). One might think of different lines of inquiry in terms of ‘low-hanging fruit’: areas of inquiry that might provide significant improvements in understanding crash causation. It is the contention of this research that omitted variable bias caused by the exclusion of important variables is an important line of inquiry in safety research. In particular, spatially related variables are often difficult to collect and are omitted from crash models, yet they offer significant ability to better explain the contributing factors to crashes. This study, believed to represent a unique contribution to the safety literature, develops and examines the role of a sizeable set of spatial variables in intersection crash occurrence. In addition to commonly considered traffic and geometric variables, the examined spatial factors include local influences of weather, sun glare, proximity to drinking establishments, and proximity to schools. The results indicate that inclusion of these factors significantly improves model explanatory power, and the estimated effects generally agree with expectation. The research illuminates the importance of spatial variables in safety research and the negative consequences of their omission.
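As a rough illustration of the comparison described above, the sketch below fits a negative binomial crash-frequency model with and without spatial covariates; the data file, column names, and dispersion settings are hypothetical rather than taken from the study.

    # Hypothetical sketch: negative binomial intersection-crash model,
    # estimated with and without spatial covariates.
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("intersection_crashes.csv")  # hypothetical dataset

    base_vars = ["log_aadt_major", "log_aadt_minor", "num_lanes"]
    spatial_vars = ["rain_days", "glare_exposure",
                    "bars_within_1km", "schools_within_1km"]

    y = df["crash_count"]
    m_base = sm.GLM(y, sm.add_constant(df[base_vars]),
                    family=sm.families.NegativeBinomial()).fit()
    m_full = sm.GLM(y, sm.add_constant(df[base_vars + spatial_vars]),
                    family=sm.families.NegativeBinomial()).fit()

    # Omitting the spatial terms should show up as a worse fit (higher
    # AIC) and potentially biased coefficients on the retained variables.
    print(m_base.aic, m_full.aic)
    print(m_full.params[spatial_vars])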

Relevance: 100.00%

Abstract:

The research objectives of this thesis were to contribute to Bayesian statistical methodology, in particular to statistical methodology for risk assessment and for spatial and spatio-temporal modelling, by modelling error structures using complex hierarchical models. Specifically, I hoped to consider two applied areas and to use these applications as a springboard for developing new statistical methods, as well as for undertaking analyses which might answer particular applied questions. Thus, this thesis considers a series of models, firstly in the context of risk assessments for recycled water, and secondly in the context of water usage by crops. The research objective was to model error structures using hierarchical models in two problems: risk assessment analyses for wastewater, and a four-dimensional dataset assessing differences between cropping systems over time and over three spatial dimensions. The aim was to use the simplicity and insight afforded by Bayesian networks to develop appropriate models for risk scenarios, and again to use Bayesian hierarchical models to explore the necessarily complex modelling of four-dimensional agricultural data.

The specific objectives of the research were: to develop a method for the calculation of credible intervals for the point estimates of Bayesian networks; to develop a model structure incorporating all the experimental uncertainty associated with various constants, thereby allowing the calculation of more credible credible intervals for a risk assessment; to model a single day’s data from the agricultural dataset in a way that satisfactorily captured the complexities of the data; to build a model for several days’ data, in order to consider how the full data might be modelled; and finally to build a model for the full four-dimensional dataset and to consider the time-varying nature of the contrast of interest, having satisfactorily accounted for possible spatial and temporal autocorrelations. This work forms five papers: two have been published, two have been submitted, and the final paper is still in draft.

The first two objectives were met by recasting the risk assessments as directed acyclic graphs (DAGs). In the first case, we elicited uncertainty for the conditional probabilities needed by the Bayesian net, incorporated these into a corresponding DAG, and used Markov chain Monte Carlo (MCMC) to find credible intervals for all the scenarios and outcomes of interest. In the second case, we incorporated the experimental data underlying the risk assessment constants into the DAG, and also treated some of those data as an ‘errors-in-variables’ problem [Fuller, 1987]. This illustrated a simple method for the incorporation of experimental error into risk assessments.

In considering one day of the three-dimensional agricultural data, it became clear that geostatistical models or conditional autoregressive (CAR) models over the three dimensions were not the best way to approach the data. Instead, CAR models were used with neighbours only in the same depth layer. This gave flexibility to the model, allowing both the spatially structured and unstructured variances to differ at each depth. We call this model the CAR layered model. Given the experimental design, the fixed part of the model could have been modelled as a set of means by treatment and by depth, but doing so would allow little insight into how the treatment effects vary with depth. Hence, a number of essentially non-parametric approaches were taken to examine the effects of depth on treatment, with the model of choice incorporating an errors-in-variables approach for depth in addition to a non-parametric smooth. The statistical contribution here was the introduction of the CAR layered model; the applied contribution was the analysis of moisture over depth and the estimation of the contrast of interest together with its credible intervals. These models were fitted using WinBUGS [Lunn et al., 2000].

The work in the fifth paper deals with the fact that, for large datasets, the use of WinBUGS becomes problematic because of its highly correlated term-by-term updating. In this work, we introduce a Gibbs sampler with block updating for the CAR layered model. The Gibbs sampler was implemented by Chris Strickland using pyMCMC [Strickland, 2010]. This framework is then used to consider five days’ data, and we show that soil moisture for all the various treatments reaches levels particular to each treatment at a depth of 200 cm and thereafter stays constant, albeit with variances that increase with depth. In an analysis across three spatial dimensions and across time, there are many interactions of time and the spatial dimensions to be considered. Hence, we chose to use a daily model and to repeat the analysis at all time points, effectively creating an interaction model of time by the daily model. Such an approach allows great flexibility, but it does not itself give insight into the way in which the parameter of interest varies over time. Hence, a two-stage approach was also used, with estimates from the first stage analysed as a set of time series. We see this spatio-temporal interaction model as a useful approach to data measured across three spatial dimensions and time, since it does not assume additivity of the random spatial or temporal effects.
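For reference, the simplest linear form of the errors-in-variables structure cited above [Fuller, 1987] can be written as follows; this is a generic sketch rather than the thesis's exact specification.

    \begin{align}
      x_i &= \xi_i + u_i, & u_i &\sim N(0, \sigma_u^2), \\
      y_i &= \beta_0 + \beta_1 \xi_i + \varepsilon_i, & \varepsilon_i &\sim N(0, \sigma_\varepsilon^2),
    \end{align}

Here x_i is the observed, error-prone covariate and \xi_i its latent true value. In a Bayesian treatment each \xi_i receives a prior and is sampled alongside the regression parameters by MCMC, which is what makes the formulation natural to embed in a DAG fitted in WinBUGS.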

Relevance: 100.00%

Abstract:

Body-size measurement errors are usually ignored in stock assessments, but may be important when body-size data (e.g., from visual surveys) are imprecise. We used experiments and models to quantify measurement errors and their effects on assessment models for sea scallops (Placopecten magellanicus). Errors in size data obscured modes from strong year classes and inflated the frequency and range of the largest and smallest sizes, potentially biasing growth, mortality, and biomass estimates. Modeling techniques for errors in age data proved useful for errors in size data. In terms of goodness of model fit to the assessment data, it was more important to accommodate variance than bias. Models that accommodated size errors fitted size data substantially better. We recommend experimental quantification of errors along with a modeling approach that accommodates measurement errors, because a direct algebraic approach was not robust and because error parameters were difficult to estimate in our assessment model. The importance of measurement errors depends on many factors and should be evaluated on a case-by-case basis.
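A generic way to see why size errors blur modes and inflate the tails (an illustrative formulation, not the paper's assessment model): if f is the true size-frequency distribution and the measurement error is additive with density k, the observed distribution g is the convolution

    \[
      g(s) = \int f(t)\, k(s - t)\, dt ,
    \]

which smooths out year-class modes and pushes probability mass beyond the true size range, the two distortions the abstract reports.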

Relevance: 100.00%

Abstract:

1. Closed Ecological Systems (CES) are small man-made ecosystems which do not have any material exchange with the surrounding environment. Recent ecological and technological advances enable successful establishment and maintenance of CES, making them a suitable tool for detecting and measuring subtle feedbacks and mechanisms.

2. As part of an analogue (physical) C cycle modelling experiment, we developed a non-intrusive methodology to control the internal environment and to monitor atmospheric CO2 concentration inside 16 replicated CES. Whilst maintaining an air-tight seal of all CES, this approach allowed access to the CO2 measuring equipment for periodic re-calibration and repairs.

3. To ensure reliable cross-comparison of CO2 observations between individual CES units and to minimise the cost of the system, only one CO2 sampling unit was used. An ADC BioScientific OP-2 (open-path) analyser mounted on a swinging arm passed over a set of 16 measuring cells, each connected to an individual CES with air continuously circulating between them.

4. Using this setup, we were able to continuously measure several environmental variables and the CO2 concentration within each closed system, allowing us to study the minute effects of changing temperature on C fluxes within each CES. The CES and the measuring cells showed minimal air leakage during experimental runs lasting, on average, 3 months. The CO2 analyser assembly performed reliably for over 2 years; however, an early iteration of the present design proved to be sensitive to positioning errors.

5. We indicate how the methodology can be further improved and suggest possible avenues where future CES-based research could be applied.

Relevance: 100.00%

Abstract:

This paper presents a general modeling approach to investigate and predict measurement errors in active energy meters of both induction and electronic types. The measurement error modeling is based on a Generalized Additive Model (GAM), the ridge regression method, and experimental meter results provided by a measurement system. The measurement system provides a database of 26 pairs of test waveforms captured in a real electrical distribution system, with different load characteristics (industrial, commercial, agricultural, and residential), covering different harmonic distortions and balanced and unbalanced voltage conditions. To illustrate the proposed approach, the measurement error models are discussed and several results derived from experimental tests are presented in the form of three-dimensional graphs and generalized as error equations.
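The sketch below illustrates the flavour of the approach with scikit-learn, using spline basis functions for the additive (GAM-like) terms and a ridge penalty on their coefficients; the features, data, and penalty value are hypothetical stand-ins, not the paper's setup.

    # Hypothetical sketch: additive spline terms with ridge shrinkage
    # for a meter-error surface over two load-characteristic features.
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import SplineTransformer
    from sklearn.linear_model import Ridge

    rng = np.random.default_rng(0)
    X = rng.uniform(size=(26, 2))  # e.g. [harmonic distortion, voltage unbalance]
    err = 0.5 * X[:, 0] ** 2 - 0.2 * X[:, 1] + rng.normal(0, 0.01, 26)

    model = make_pipeline(SplineTransformer(degree=3, n_knots=5),
                          Ridge(alpha=1.0))
    model.fit(X, err)

    # Evaluate the fitted error surface on a grid, as in the paper's
    # three-dimensional error graphs.
    grid = np.column_stack([np.linspace(0, 1, 20), np.full(20, 0.5)])
    print(model.predict(grid))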

Relevance: 100.00%

Abstract:

Purpose: This study used 12 photoelastic models of different heights and thicknesses to evaluate whether an axial load of 100 N on implants changes the morphology of the photoelastic reflection. Methods: For the photoelastic analysis, the models were placed in a reflection polariscope for observation of the isochromatic fringe patterns. The formation of these fringes resulted from an axial load of 100 N applied to the midpoint of the healing abutment attached to a 10.0 mm x 3.75 mm implant (Conexão Sistemas de Próteses, Brazil). The tension in each photoelastic model was monitored, photographed and observed using Photoshop 7.0. For quantitative analysis, the area under the implant apex, including the green band of the second-order fringe of each model, was measured using the software Image Tool. After comparison of the areas, the behaviour of each specimen under axial loading was characterised. Results: The areas changed with the height and thickness of the photoelastic models. Group III (30 mm in height) presented the smallest area. Conclusion: The size of the analysed areas varied with the height and thickness of the models, and the morphology of the replica may directly influence the results of research with photoelastic models.

Relevance: 100.00%

Abstract:

Thesis (Ph.D.)--University of Washington, 2016-06

Relevance: 100.00%

Abstract:

Crash prediction models are used for a variety of purposes including forecasting the expected future performance of various transportation system segments with similar traits. The influence of intersection features on safety has been examined extensively because intersections experience a relatively large proportion of motor vehicle conflicts and crashes compared to other segments in the transportation system. The effects of left-turn lanes at intersections, in particular, have shown mixed results in the literature. Some researchers have found that left-turn lanes are beneficial to safety while others have reported detrimental effects on safety. This inconsistency is not surprising given that the installation of left-turn lanes is often endogenous, that is, influenced by crash counts and/or traffic volumes. Endogeneity creates problems in econometric and statistical models and is likely to account for the inconsistencies reported in the literature. This paper reports on a limited-information maximum likelihood (LIML) estimation approach to compensate for endogeneity between left-turn lane presence and angle crashes. The effects of endogeneity are mitigated using the approach, revealing the unbiased effect of left-turn lanes on crash frequency for a dataset of Georgia intersections. The research shows that without accounting for endogeneity, left-turn lanes ‘appear’ to contribute to crashes; however, when endogeneity is accounted for in the model, left-turn lanes reduce angle crash frequencies as expected by engineering judgment. Other endogenous variables may lurk in crash models as well, suggesting that the method may be used to correct simultaneity problems with other variables and in other transportation modeling contexts.
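In stylised form (illustrative notation, not the paper's exact specification), the endogeneity problem looks like this:

    \begin{align}
      y_i &= \beta_0 + \beta_1 L_i + \boldsymbol{\beta}_2'\mathbf{x}_i + \varepsilon_i
          && \text{(angle-crash frequency)}, \\
      L_i^* &= \gamma_0 + \boldsymbol{\gamma}'\mathbf{z}_i + u_i, \quad
      L_i = \mathbf{1}[L_i^* > 0]
          && \text{(left-turn-lane installation)}.
    \end{align}

Because installation responds to the same safety conditions that drive crashes, Corr(\varepsilon_i, u_i) \neq 0, and a single-equation estimate of \beta_1 is biased; estimating the system jointly by LIML is what allows the expected crash-reducing effect of left-turn lanes to emerge.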

Relevance: 100.00%

Abstract:

Objective: Older driver research has mostly focused on identifying the small proportion of older drivers who are unsafe. Little is known about how normal cognitive changes in aging affect driving in the wider population of adults who drive regularly. We evaluated the association of cognitive function and age with driving errors. Method: A sample of 266 drivers aged 70 to 88 years was assessed on abilities that decline in normal aging (visual attention, processing speed, inhibition, reaction time, task switching) and on the UFOV®, a validated screening instrument for older drivers. Participants completed an on-road driving test. Generalized linear models were used to estimate the associations of cognitive factors with specific driving errors and with the number of errors in self-directed and instructor-navigated conditions. Results: All error types increased with chronological age. Reaction time was not associated with driving errors in multivariate analyses. A cognitive factor measuring Speeded Selective Attention and Switching was uniquely associated with the most error types. The UFOV predicted blindspot errors and errors on dual carriageways. After adjusting for age, education and gender, the cognitive factors explained 7% of the variance in the total number of errors in the instructor-navigated condition and 4% of the variance in the self-navigated condition. Conclusion: We conclude that among older drivers, errors increase with age and are associated with speeded selective attention (particularly when it requires attending to stimuli in the periphery of the visual field), task switching, response inhibition, and visual discrimination. These abilities should be the target of cognitive training.
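A minimal sketch of the analysis strategy described above, with hypothetical variable names and data, and a Poisson family as one plausible choice of generalized linear model for the error counts:

    # Hypothetical sketch: GLM relating on-road error counts to age,
    # demographics, and a cognitive factor score.
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("driving_errors.csv")  # hypothetical dataset

    fit = smf.glm(
        "n_errors ~ age + education + gender + speeded_attention_switching",
        data=df,
        family=sm.families.Poisson(),
    ).fit()
    print(fit.summary())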

Relevance: 100.00%

Abstract:

Stormwater quality modelling results are subject to uncertainty. The variability of input parameters is an important source of overall model error. An in-depth understanding of the variability associated with input parameters can provide knowledge of the uncertainty associated with these parameters and consequently assist in the uncertainty analysis of stormwater quality models and in decision making based on modelling outcomes. This paper discusses the outcomes of a research study undertaken to analyse the variability related to pollutant build-up parameters in stormwater quality modelling. The study was based on the analysis of pollutant build-up samples collected from 12 road surfaces in residential, commercial and industrial land uses. It was found that build-up characteristics vary appreciably even within the same land use. Therefore, using land use as a lumped parameter would introduce significant uncertainty into stormwater quality modelling. Additionally, it was found that the variability in pollutant build-up can be significant depending on the pollutant type. This underlines the importance of taking into account specific land use characteristics and targeted pollutant species when undertaking uncertainty analysis of stormwater quality models or when interpreting the modelling outcomes.

Relevance: 100.00%

Abstract:

Process modeling grammars are used to create scripts of a business domain that a process-aware information system is intended to support. A key grammatical construct of such grammars is known as a Gateway. A Gateway construct is used to describe scenarios in which the workflow of a process diverges or converges according to relevant conditions. Gateway constructs have been subjected to much academic discussion about their meaning, role and usefulness, and have been linked to both process-modeling errors and process-model understandability. This paper examines the perceptual discriminability effects of Gateway constructs on an individual's ability to interpret process models. We compare two ways of expressing two convergence and divergence patterns, Parallel Split and Simple Merge, implemented in a process modeling grammar. On the basis of an experiment with 98 students, we provide empirical evidence that Gateway constructs aid the interpretation of process models due to a perceptual discriminability effect, especially when models are complex. We discuss the emerging implications for research and practice in terms of revisions to grammar specifications, guideline development and design choices in process modeling.

Relevance: 100.00%

Abstract:

Background: Heckman-type selection models have been used to control HIV prevalence estimates for selection bias when participation in HIV testing and HIV status are associated after controlling for observed variables. These models typically rely on the strong assumption that the error terms in the participation and the outcome equations that comprise the model are distributed as bivariate normal.
Methods: We introduce a novel approach for relaxing the bivariate normality assumption in selection models using copula functions. We apply this method to estimate HIV prevalence, with new confidence intervals (CIs), in the 2007 Zambia Demographic and Health Survey (DHS), using interviewer identity as the selection variable that predicts participation (consent to test) but not the outcome (HIV status).
Results: We show in a simulation study that selection models can generate biased results when the bivariate normality assumption is violated. In the 2007 Zambia DHS, HIV prevalence estimates are similar irrespective of the structure of the association assumed between participation and outcome. For men, we estimate a population HIV prevalence of 21% (95% CI = 16%–25%) compared with 12% (11%–13%) among those who consented to be tested; for women, the corresponding figures are 19% (13%–24%) and 16% (15%–17%).
Conclusions: Copula approaches to Heckman-type selection models are a useful addition to the methodological toolkit of HIV epidemiology and of epidemiology in general. We develop the use of this approach to systematically evaluate the robustness of HIV prevalence estimates based on selection models, both empirically and in a simulation study.
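In stylised notation (a generic sketch, not the paper's exact model), the Heckman-type structure is

    \begin{align}
      \text{selection: } s_i &= \mathbf{1}\left[\alpha_0 + \alpha_1 z_i + \boldsymbol{\alpha}_2'\mathbf{x}_i + u_i > 0\right], \\
      \text{outcome: } y_i &= \mathbf{1}\left[\boldsymbol{\beta}'\mathbf{x}_i + \varepsilon_i > 0\right],
      \qquad y_i \text{ observed only when } s_i = 1,
    \end{align}

where z_i (interviewer identity) predicts consent to test but not HIV status. The classical model takes (u_i, \varepsilon_i) to be bivariate normal with correlation \rho; the copula approach instead writes the joint distribution as F(u, \varepsilon) = C(F_u(u), F_\varepsilon(\varepsilon); \theta) for a chosen copula C, so the dependence between participation and outcome need not be that of a bivariate normal.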

Relevance: 100.00%

Abstract:

Introduction: It has been suggested that doctors in their first year of post-graduate training make a disproportionate number of prescribing errors.

Objective: This study aimed to compare the prevalence of prescribing errors made by first-year post-graduate doctors with that of errors made by senior doctors and non-medical prescribers, and to investigate the predictors of potentially serious prescribing errors.

Methods: Pharmacists in 20 hospitals collected data over 7 prospectively selected days on the number of medication orders checked, the grade of prescriber and details of any prescribing errors. Logistic regression models (adjusted for clustering by hospital) identified factors predicting the likelihood of prescribing erroneously and the severity of prescribing errors.

Results: Pharmacists reviewed 26,019 patients and 124,260 medication orders; 11,235 prescribing errors were detected in 10,986 orders. The mean error rate was 8.8% (95% confidence interval [CI] 8.6-9.1) errors per 100 medication orders. Rates of errors for all doctors in training were significantly higher than rates for medical consultants. Doctors in their first year of training (odds ratio [OR] 2.13; 95% CI 1.80-2.52) or second year of training (OR 2.23; 95% CI 1.89-2.65) were more than twice as likely to prescribe erroneously. Prescribing errors were 70% (OR 1.70; 95% CI 1.61-1.80) more likely to occur at the time of hospital admission than when medication orders were issued during the hospital stay. No significant differences in severity of error were observed between grades of prescriber. Potentially serious errors were more likely to be associated with prescriptions for parenteral administration, especially for cardiovascular or endocrine disorders.

Conclusions: The problem of prescribing errors in hospitals is substantial and not solely a problem of the most junior medical prescribers, particularly for those errors most likely to cause significant patient harm. Interventions are needed to target these high-risk errors by all grades of staff and hence improve patient safety.
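A minimal sketch of the kind of model described in the Methods, with hypothetical variable names and data; cluster-robust standard errors are shown as one standard way to adjust for hospital-level clustering:

    # Hypothetical sketch: logistic regression for the odds of a
    # prescribing error, with standard errors clustered by hospital.
    import pandas as pd
    import statsmodels.formula.api as smf

    orders = pd.read_csv("medication_orders.csv")  # hypothetical dataset

    fit = smf.logit(
        "error ~ C(prescriber_grade) + at_admission",
        data=orders,
    ).fit(cov_type="cluster", cov_kwds={"groups": orders["hospital_id"]})
    print(fit.summary())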

Relevance: 100.00%

Abstract:

Statistical tests in vector autoregressive (VAR) models are typically based on large-sample approximations, involving the use of asymptotic distributions or bootstrap techniques. After documenting that such methods can be very misleading even with fairly large samples, especially when the number of lags or the number of equations is not small, we propose a general simulation-based technique that allows one to completely control the level of tests in parametric VAR models. In particular, we show that maximized Monte Carlo tests [Dufour (2002)] can provide provably exact tests for such models, whether they are stationary or integrated. Applications to order selection and causality testing are considered as special cases. The technique developed is applied to quarterly and monthly VAR models of the U.S. economy, comprising income, money, interest rates and prices, over the period 1965-1996.
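In generic notation (a sketch of the setting, not the paper's exact statement), the model is a VAR(p),

    \[
      y_t = c + \sum_{i=1}^{p} A_i \, y_{t-i} + \varepsilon_t ,
    \]

and a Monte Carlo test compares the observed statistic S_0 with N statistics S_1, \ldots, S_N simulated under the null hypothesis, yielding the p-value

    \[
      \hat{p} = \frac{1 + \#\{\, j : S_j \ge S_0 \,\}}{N + 1} ,
    \]

which attains exactly the nominal level when the null distribution is free of nuisance parameters. The maximized Monte Carlo test of Dufour (2002) maximizes this p-value over the nuisance-parameter space, which is what makes the resulting tests provably valid for stationary or integrated VARs.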