946 results for motor vehicle crashes


Relevance: 100.00%

Abstract:

OBJECTIVES: To quantify the driving difficulties of older adults using a detailed assessment of driving performance and to link this with self-reported retrospective and prospective crashes. DESIGN: Prospective cohort study. SETTING: On-road driving assessment. PARTICIPANTS: Two hundred sixty-seven community-living adults aged 70 to 88 randomly recruited through the electoral roll. MEASUREMENTS: Performance on a standardized measure of driving performance. RESULTS: Lane positioning, approach, and blind spot monitoring were the most common error types, and errors occurred most frequently in situations involving merging and maneuvering. Drivers reporting more retrospective or prospective crashes made significantly more driving errors. Driving instructor interventions during self-navigation (where the instructor had to brake or take control of the steering to avoid an accident) were significantly associated with higher retrospective and prospective crashes; every instructor intervention almost doubled prospective crash risk. CONCLUSION: These findings suggest that on-road driving assessment provides useful information on older driver difficulties, with the self-directed component providing the most valuable information.

Relevance: 100.00%

Abstract:

There has been considerable research conducted over the last 20 years focused on predicting motor vehicle crashes on transportation facilities. The range of statistical models commonly applied includes binomial, Poisson, Poisson-gamma (or negative binomial), zero-inflated Poisson and negative binomial models (ZIP and ZINB), and multinomial probability models. Given the range of possible modeling approaches and the host of assumptions associated with each, making an intelligent choice for modeling motor vehicle crash data is difficult. There is little discussion in the literature comparing different statistical modeling approaches, identifying which statistical models are most appropriate for modeling crash data, and providing a strong justification from basic crash principles. In the recent literature, it has been suggested that the motor vehicle crash process can successfully be modeled by assuming a dual-state data-generating process, which implies that entities (e.g., intersections, road segments, pedestrian crossings, etc.) exist in one of two states—perfectly safe and unsafe. As a result, the ZIP and ZINB are two models that have been applied to account for the preponderance of “excess” zeros frequently observed in crash count data. The objective of this study is to provide defensible guidance on how to appropriately model crash data. We first examine the motor vehicle crash process using theoretical principles and a basic understanding of the crash process. It is shown that the fundamental crash process follows a Bernoulli trial with unequal probability of independent events, also known as Poisson trials. We examine the evolution of statistical models as they apply to the motor vehicle crash process, and indicate how well they statistically approximate the crash process. We also present the theory behind dual-state process count models, and note why they have become popular for modeling crash data.
A simulation experiment is then conducted to demonstrate how crash data give rise to “excess” zeros frequently observed in crash data. It is shown that the Poisson and other mixed probabilistic structures are approximations assumed for modeling the motor vehicle crash process. Furthermore, it is demonstrated that under certain (fairly common) circumstances excess zeros are observed—and that these circumstances arise from low exposure and/or inappropriate selection of time/space scales and not an underlying dual-state process. In conclusion, carefully selecting the time/space scales for analysis, including an improved set of explanatory variables and/or unobserved heterogeneity effects in count regression models, or applying small-area statistical methods (observations with low exposure) represent the most defensible modeling approaches for datasets with a preponderance of zeros.
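The simulation logic the abstract describes can be sketched in a few lines. This is a hypothetical illustration, not the authors' experiment: the site count, exposure, and gamma heterogeneity parameters below are all assumed values chosen so the effect is visible.

```python
import numpy as np

rng = np.random.default_rng(42)

# Crashes as Poisson trials: independent Bernoulli trials with small,
# site-specific probabilities. Low exposure plus heterogeneity alone
# produce "excess" zeros -- no dual-state (safe/unsafe) process needed.
n_sites = 5000                 # hypothetical road segments
n_trials = 200                 # vehicle passages per segment (the exposure)

p = rng.gamma(shape=0.5, scale=0.01, size=n_sites)   # heterogeneous per-trial crash risk
p = np.clip(p, 0.0, 1.0)
counts = rng.binomial(n_trials, p)                   # observed crash counts per site

obs_zero = (counts == 0).mean()        # observed share of zero-crash sites
pois_zero = np.exp(-counts.mean())     # zero share a same-mean Poisson would predict

print(f"observed zeros: {obs_zero:.3f} vs Poisson prediction: {pois_zero:.3f}")
```

The observed zero fraction comfortably exceeds the Poisson prediction even though every site follows the same single-state trial process, which is exactly the point the paper makes about "excess" zeros.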

Relevance: 100.00%

Abstract:

• What is risk compensation, and why is it relevant to motor vehicle crashes?
• Recent simulator work that revealed risk compensation
• Current and future work on risk compensation

Relevance: 100.00%

Abstract:

Crashes at any particular transport network location consist of a chain of events arising from a multitude of potential causes and/or contributing factors whose nature is likely to reflect geometric characteristics of the road, spatial effects of the surrounding environment, and human behavioural factors. It is postulated that these potential contributing factors do not arise from the same underlying risk process, and thus should be explicitly modelled and understood. The state of the practice in road safety network management applies a safety performance function that represents a single risk process to explain crash variability across network sites. This study aims to elucidate the importance of differentiating among various underlying risk processes contributing to the observed crash count at any particular network location. To demonstrate the principle of this theoretical and corresponding methodological approach, the study explores engineering (e.g. segment length, speed limit) and unobserved spatial factors (e.g. climatic factors, presence of schools) as two explicit sources of crash contributing factors. A Bayesian Latent Class (BLC) analysis is used to explore these two sources and to incorporate prior information about their contribution to crash occurrence. The methodology is applied to the state-controlled roads in Queensland, Australia and the results are compared with the traditional Negative Binomial (NB) model. A comparison of goodness of fit measures indicates that the model with a double risk process outperforms the single risk process NB model, thus indicating the need for further research to capture all three crash-generating processes in the SPFs.

Relevance: 100.00%

Abstract:

Perceptions surrounding the underlying causes of accidents and injuries may be a key mechanism influencing postaccident health and functional outcomes among people injured in road crashes. In particular, attributions of responsibility may influence rates of postcrash depressive symptomatology and return-to-work.

Relevance: 100.00%

Abstract:

Federal Highway Administration, Washington, D.C.

Relevance: 100.00%

Abstract:

Transportation Department, Washington, D.C.

Relevance: 100.00%

Abstract:

Iowa motor vehicle crashes spreadsheet, years 1925-2014

Relevance: 100.00%

Abstract:

Readily accepted knowledge regarding crash causation is consistently omitted from efforts to model and subsequently understand motor vehicle crash occurrence and its contributing factors. For instance, distracted and impaired driving accounts for a significant proportion of crash occurrence, yet is rarely modeled explicitly. In addition, spatially allocated influences such as local law enforcement efforts, proximity to bars and schools, and roadside chronic distractions (advertising, pedestrians, etc.) play a role in contributing to crash occurrence and yet are routinely absent from crash models. By and large, these well-established omitted effects are simply assumed to contribute to model error, with predominant focus on modeling the engineering and operational effects of transportation facilities (e.g. AADT, number of lanes, speed limits, width of lanes, etc.). The typical analytical approach—with a variety of statistical enhancements—has been to model crashes that occur at system locations as negative binomial (NB) distributed events that arise from a singular, underlying crash generating process. These models and their statistical kin dominate the literature; however, it is argued in this paper that these models fail to capture the underlying complexity of motor vehicle crash causes, and thus thwart deeper insights regarding crash causation and prevention. This paper first describes hypothetical scenarios that collectively illustrate why current models mislead highway safety researchers and engineers. It is argued that current model shortcomings are significant, and will lead to poor decision-making. Exploiting our current state of knowledge of crash causation, crash counts are postulated to arise from three processes: observed network features, unobserved spatial effects, and ‘apparent’ random influences that reflect largely behavioral influences of drivers.
It is argued, furthermore, that these three processes can in theory be modeled separately to gain deeper insight into crash causes, and that the model represents a more realistic depiction of reality than the state-of-practice NB regression. An admittedly imperfect empirical model that mixes three independent crash occurrence processes is shown to outperform the classical NB model. The questioning of current modeling assumptions and the implications of the latent mixture model for current practice are the most important contributions of this paper, with an initial but rather vulnerable attempt to model the latent mixtures as a secondary contribution.
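The latent-mixture idea can be illustrated with a deliberately stripped-down sketch: two (rather than the paper's three) crash-generating processes with assumed rates and mixing weight, recovered by an expectation-maximization fit of a two-component Poisson mixture. This is not the authors' model, and it assumes SciPy is available.

```python
import numpy as np
from scipy.stats import poisson  # assumed dependency

rng = np.random.default_rng(0)

# Simulate counts from two latent crash-generating processes
# (assumed rates 0.5 and 4.0, assumed 70/30 mixing weight).
n = 4000
low_risk = rng.random(n) < 0.7
counts = np.where(low_risk, rng.poisson(0.5, n), rng.poisson(4.0, n))

# EM for a two-component Poisson mixture: recover the latent classes.
w = np.array([0.5, 0.5])      # mixing weights
lam = np.array([0.2, 2.0])    # initial component rates
for _ in range(200):
    resp = w * poisson.pmf(counts[:, None], lam)     # E-step: responsibilities
    resp /= resp.sum(axis=1, keepdims=True)
    w = resp.mean(axis=0)                            # M-step: update weights
    lam = (resp * counts[:, None]).sum(axis=0) / resp.sum(axis=0)  # update rates

print("weights:", w.round(2), "rates:", lam.round(2))
```

The fitted weights and rates land close to the values used to generate the data, which is the sense in which a mixture model can attribute counts to distinct underlying processes rather than a single NB process.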

Relevance: 100.00%

Abstract:

Statistical modeling of traffic crashes has been of interest to researchers for decades. Over the most recent decade many crash models have accounted for extra-variation in crash counts—variation over and above that accounted for by the Poisson density. The extra-variation – or dispersion – is theorized to capture unaccounted for variation in crashes across sites. The majority of studies have assumed fixed dispersion parameters in over-dispersed crash models—tantamount to assuming that unaccounted for variation is proportional to the expected crash count. Miaou and Lord [Miaou, S.P., Lord, D., 2003. Modeling traffic crash-flow relationships for intersections: dispersion parameter, functional form, and Bayes versus empirical Bayes methods. Transport. Res. Rec. 1840, 31–40] challenged the fixed dispersion parameter assumption, and examined various dispersion parameter relationships when modeling urban signalized intersection accidents in Toronto. They suggested that further work is needed to determine the appropriateness of the findings for rural as well as other intersection types, to corroborate their findings, and to explore alternative dispersion functions. This study builds upon the work of Miaou and Lord, with exploration of additional dispersion functions, the use of an independent data set, and presents an opportunity to corroborate their findings. Data from Georgia are used in this study. A Bayesian modeling approach with non-informative priors is adopted, using sampling-based estimation via Markov Chain Monte Carlo (MCMC) and the Gibbs sampler. A total of eight model specifications were developed; four of them employed traffic flows as explanatory factors in mean structure while the remainder of them included geometric factors in addition to major and minor road traffic flows. The models were compared and contrasted using the significance of coefficients, standard deviance, chi-square goodness-of-fit, and deviance information criteria (DIC) statistics. 
The findings indicate that the modeling of the dispersion parameter, which essentially explains the extra-variance structure, depends greatly on how the mean structure is modeled. In the presence of a well-defined mean function, the extra-variance structure generally becomes insignificant, i.e. the variance structure is a simple function of the mean. It appears that extra-variation is a function of covariates when the mean structure (expected crash count) is poorly specified and suffers from omitted variables. In contrast, when sufficient explanatory variables are used to model the mean (expected crash count), extra-Poisson variation is not significantly related to these variables. If these results are generalizable, they suggest that model specification may be improved by testing extra-variation functions for significance. They also suggest that known influences of expected crash counts are likely to be different than factors that might help to explain unaccounted for variation in crashes across sites.
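The omitted-variable mechanism described above is easy to reproduce. In this sketch (all values assumed, purely illustrative), counts are genuinely Poisson, yet dropping a covariate from the mean makes them look strongly over-dispersed:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 20_000
x = rng.normal(size=n)        # a covariate, e.g. (log) traffic flow -- illustrative
mu = np.exp(0.5 + 1.2 * x)    # true expected crash count depends on x
y = rng.poisson(mu)           # counts are genuinely Poisson given mu

def pearson_dispersion(y, mu_hat):
    # Pearson chi-square / n: about 1 when the Poisson mean is well specified
    return np.mean((y - mu_hat) ** 2 / mu_hat)

d_omitted = pearson_dispersion(y, np.full(n, y.mean()))  # x left out of the mean
d_full = pearson_dispersion(y, mu)                       # mean correctly specified

print(f"dispersion with x omitted: {d_omitted:.1f}; with x included: {d_full:.2f}")
```

With the covariate omitted the dispersion statistic is far above 1; with the mean correctly specified it sits near 1, mirroring the paper's conclusion that extra-variation reflects a poorly specified mean rather than a separate variance process.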

Relevance: 100.00%

Abstract:

Bicyclists are among the most vulnerable of road users, with high fatal crash rates. Although visibility aids have been widely advocated to help prevent bicycle-vehicle conflicts, to date no study has investigated, among crash-involved cyclists, the kind of visibility aids they were using at the time of the crash. This study undertook a detailed investigation of visibility factors involved in bicyclist-motor-vehicle crashes. We surveyed 184 bicyclists (predominantly from Australia via internet cycling forums) who had been involved in motor vehicle collisions regarding the perceived cause of the collision, ambient weather and general visibility, as well as the clothing and bicycle lights used by the bicyclist. Over a third of the crashes occurred in low light levels (dawn, dusk or night-time), which is disproportionate given that only a small proportion of bicyclists typically ride at these times. Importantly, 19% of these bicyclists reported not using bicycle lights at the time of the crash, and only 34% were wearing reflective clothing. Only two participants (of 184) nominated bicyclist visibility as the cause of the crash: 61% attributed the crash to driver inattention. These findings demonstrate that crash-involved bicyclists tend to under-rate and under-utilise visibility aids as a means of improving their safety.

Relevance: 100.00%

Abstract:

The current state of the practice in Blackspot Identification (BSI) utilizes safety performance functions based on total crash counts to identify transport system sites with potentially high crash risk. This paper postulates that total crash count variation over a transport network is a result of multiple distinct crash generating processes including geometric characteristics of the road, spatial features of the surrounding environment, and driver behaviour factors. However, these multiple sources are ignored in current modelling methodologies in both trying to explain or predict crash frequencies across sites. Instead, current practice employs models that imply that a single underlying crash generating process exists. The model mis-specification may lead to correlating crashes with the incorrect sources of contributing factors (e.g. concluding a crash is predominately caused by a geometric feature when it is a behavioural issue), which may ultimately lead to inefficient use of public funds and misidentification of true blackspots. This study aims to propose a latent class model consistent with a multiple crash process theory, and to investigate the influence this model has on correctly identifying crash blackspots. We first present the theoretical and corresponding methodological approach in which a Bayesian Latent Class (BLC) model is estimated assuming that crashes arise from two distinct risk generating processes including engineering and unobserved spatial factors. The Bayesian model is used to incorporate prior information about the contribution of each underlying process to the total crash count. The methodology is applied to the state-controlled roads in Queensland, Australia and the results are compared to an Empirical Bayesian Negative Binomial (EB-NB) model. A comparison of goodness of fit measures illustrates significantly improved performance of the proposed model compared to the NB model. 
The detection of blackspots was also improved when compared to the EB-NB model. In addition, modelling crashes as the result of two fundamentally separate underlying processes reveals more detailed information about unobserved crash causes.

Relevance: 100.00%

Abstract:

Both systems were designed and developed by NHTSA's National Center for Statistics and Analysis (NCSA) to provide an overall measure of highway safety, to help identify traffic safety problems, to suggest solutions, and to help provide an objective basis on which to evaluate the effectiveness of motor vehicle safety standards and highway safety initiatives. Data from these systems are used to answer requests for information from the international and national highway traffic safety communities, including state and local governments, the Congress, Federal agencies, research organizations, industry, the media, and private citizens.

Relevance: 100.00%

Abstract:

Assessment and prediction of the impact of vehicular traffic emissions on air quality and exposure levels requires knowledge of vehicle emission factors. The aim of this study was quantification of emission factors from an on-road measurement program conducted over twelve months at two sites in Brisbane: 1) a freeway-type site (free-flowing traffic at about 100 km/h, fleet dominated by small passenger cars - Tora St); and 2) a busy urban road with stop/start traffic mode, with the fleet comprising a significant fraction of heavy duty vehicles - Ipswich Rd. A physical model linking concentrations measured at the road for specific meteorological conditions with motor vehicle emission factors was applied for data analyses. The focus of the study was on submicrometer particles; however, the measurements also included supermicrometer particles, PM2.5, carbon monoxide, sulfur dioxide, and oxides of nitrogen. The results of the study are summarised in this paper. In particular, the emission factors for submicrometer particles were 6.08 x 10^13 and 5.15 x 10^13 particles per vehicle per kilometre for Tora St and Ipswich Rd respectively, and for supermicrometer particles at Tora St, 1.48 x 10^9 particles per vehicle per kilometre. Emission factors of diesel vehicles at both sites were about an order of magnitude higher than emissions from gasoline powered vehicles. For submicrometer particles the emission factors for gasoline vehicles were 6.08 x 10^13 and 4.34 x 10^13 particles per vehicle per kilometre for Tora St and Ipswich Rd, respectively, and for diesel vehicles were 5.35 x 10^14 and 2.03 x 10^14 particles per vehicle per kilometre for Tora St and Ipswich Rd, respectively. For supermicrometer particles at Tora St the emission factors were 2.59 x 10^9 and 1.53 x 10^12 particles per vehicle per kilometre, for gasoline and diesel vehicles, respectively.
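As a quick sanity check of scale, the reported per-vehicle emission factors can be turned into a total emission estimate. The factors below come from the abstract; the daily traffic volume is an assumed illustrative figure, not a value from the study:

```python
# Emission factors reported in the abstract (Tora St, submicrometer particles)
ef_gasoline_tora = 6.08e13   # particles per vehicle per km, gasoline
ef_diesel_tora = 5.35e14     # particles per vehicle per km, diesel

vehicles_per_day = 50_000    # assumed daily traffic volume (illustrative only)
road_km = 1.0                # length of road considered

# Total submicrometer particles emitted per day along 1 km, gasoline fleet
particles_per_day = ef_gasoline_tora * vehicles_per_day * road_km

# Diesel/gasoline ratio: consistent with the "order of magnitude" claim
ratio = ef_diesel_tora / ef_gasoline_tora

print(f"{particles_per_day:.2e} particles per km per day; diesel/gasoline ratio ~{ratio:.1f}")
```

The ratio of roughly nine matches the abstract's statement that diesel emission factors are about an order of magnitude above those of gasoline vehicles.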