944 results for Railroad safety, Bayesian methods, Accident modification factor, Countermeasure selection


Relevance:

100.00%

Publisher:

Abstract:

National Highway Traffic Safety Administration, Washington, D.C.

Relevance:

100.00%

Publisher:

Abstract:

Federal Highway Administration, Office of Safety and Traffic Operations Research and Development, McLean, Va.

Relevance:

100.00%

Publisher:

Abstract:

"Summary and analysis of accidents on railroads in the United States."

Relevance:

100.00%

Publisher:

Abstract:

Crash reduction factors (CRFs) are used to estimate the number of traffic crashes expected to be prevented by investment in safety improvement projects. The method used to develop CRFs in Florida has been based on the commonly used before-and-after approach. This approach suffers from a widely recognized problem known as regression-to-the-mean (RTM). The Empirical Bayes (EB) method has been introduced as a means of addressing the RTM problem. This method requires information from both the treatment and reference sites in order to predict the expected number of crashes had the safety improvement projects at the treatment sites not been implemented. The information from the reference sites is estimated from a safety performance function (SPF), a mathematical relationship that links crashes to traffic exposure. The objective of this dissertation was to develop SPFs for different functional classes of the Florida State Highway System. Crash data from years 2001 through 2003, along with traffic and geometric data, were used in the SPF model development. SPFs for both rural and urban roadway categories were developed. The modeling data were based on one-mile segments with homogeneous traffic and geometric conditions within each segment. Segments involving intersections were excluded. Scatter plots of the data show that the relationship between crashes and traffic exposure is nonlinear: crashes increase with traffic exposure at an increasing rate. Four regression models, namely Poisson (PRM), Negative Binomial (NBRM), zero-inflated Poisson (ZIP), and zero-inflated Negative Binomial (ZINB), were fitted to the one-mile segment records for individual roadway categories. The best model was selected for each category based on a combination of the likelihood ratio test, the Vuong statistical test, and Akaike's Information Criterion (AIC).
The NBRM model was found to be appropriate for only one category, while the ZINB model was found to be more appropriate for six other categories. The overall results show that the Negative Binomial distribution model generally provides a better fit for the data than the Poisson distribution model. In addition, the ZINB model was found to give the best fit when the count data exhibit excess zeros and over-dispersion, which was the case for most of the roadway categories. While model validation shows that most data points fall within the 95% prediction intervals of the models developed, the Pearson goodness-of-fit measure does not show statistical significance. This is expected, as traffic volume is only one of many factors contributing to the overall crash experience, and the SPFs are to be applied in conjunction with Accident Modification Factors (AMFs) to further account for the safety impacts of major geometric features before arriving at the final crash prediction. However, with improved traffic and crash data quality, the crash prediction power of SPF models may be further improved.

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is thus of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
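The reduced-rank tensor factorization induced by a latent structure model can be written down directly. Below is a small sketch (all dimensions and parameters are arbitrary choices, not from the chapter) assembling the joint pmf of three categorical variables from a k-class PARAFAC-style decomposition:

```python
import numpy as np

rng = np.random.default_rng(7)

# P(y1, y2, y3) = sum_h pi[h] * psi[0][h, y1] * psi[1][h, y2] * psi[2][h, y3]
k, dims = 4, (2, 3, 5)                      # latent classes; category counts
pi = rng.dirichlet(np.ones(k))              # class weights
psi = [rng.dirichlet(np.ones(d), size=k) for d in dims]  # per-class marginals

# Assemble the full probability tensor; its nonnegative rank is at most k
tensor = np.einsum("h,ha,hb,hc->abc", pi, *psi)
```

Because each factor is a probability vector, the assembled tensor is itself a valid pmf, which is the sense in which the latent class model is a reduced-rank representation.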

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
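As a rough illustration of a Gaussian approximation to a log-linear posterior, here is a Laplace approximation with an ordinary Gaussian prior, standing in for the chapter's KL-optimal approximation under Diaconis--Ylvisaker priors (the table counts and prior variance are invented):

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 2x2 table, Poisson log-linear model with N(0, 10) priors on
# the intercept, row, and column effects
y = np.array([25.0, 10.0, 8.0, 30.0])
X = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 1],
              [1, 0, 0]], dtype=float)

def neg_log_post(theta):
    eta = X @ theta
    # Poisson log-likelihood (up to constants) plus Gaussian prior penalty
    return np.sum(np.exp(eta) - y * eta) + theta @ theta / 20.0

res = minimize(neg_log_post, np.zeros(3), method="BFGS")

# Gaussian approximation: mean at the posterior mode, covariance from the
# inverse Hessian of the negative log posterior
H = X.T @ (np.exp(X @ res.x)[:, None] * X) + np.eye(3) / 10.0
cov = np.linalg.inv(H)
```

The chapter's contribution is to characterize how close such a Gaussian is to the exact posterior (in Kullback-Leibler divergence) as a function of sample size; the sketch above only shows the mechanics of forming the approximation.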

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov Chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov Chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
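One of the approximations mentioned, a transition kernel built on random subsets of data, can be sketched as follows. This is a deliberately naive Metropolis sampler for a normal mean (the data, subset size, and step size are made up); the subsampled log-likelihood is a noisy estimate, so the chain targets the posterior only approximately, which is exactly the kind of kernel error the chapter's framework is meant to quantify:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(3.0, 1.0, size=100_000)    # synthetic data, unknown mean

def subsampled_loglik(theta, m=1_000):
    """Noisy log-likelihood from a random subset, rescaled to full size.

    Substituting this estimate for the exact likelihood yields an
    approximating Markov kernel rather than the exact one."""
    sub = rng.choice(data, size=m, replace=False)
    return (len(data) / m) * np.sum(-0.5 * (sub - theta) ** 2)

theta, ll, chain = 0.0, subsampled_loglik(0.0), []
for _ in range(3000):
    prop = theta + 0.05 * rng.standard_normal()
    ll_prop = subsampled_loglik(prop)
    if np.log(rng.random()) < ll_prop - ll:   # Metropolis accept/reject
        theta, ll = prop, ll_prop
    chain.append(theta)
```

The trade-off the framework formalizes is visible here: each iteration touches 1% of the data, but the noise in the acceptance ratio biases the stationary distribution and inflates estimation error relative to the exact sampler.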

Data augmentation Gibbs samplers are arguably the most popular class of algorithms for approximately sampling from the posterior distribution of the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
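A minimal truncated-normal (Albert-Chib-style) data augmentation sampler for an intercept-only probit model with rare events can be sketched as below, on invented synthetic data; on data like these its draws are highly autocorrelated, consistent with the slow mixing described above:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(1)

# Synthetic rare-event data: intercept-only probit with ~1% success rate
n = 2000
beta_true = -2.3
y = (rng.standard_normal(n) + beta_true > 0).astype(int)
X = np.ones((n, 1))

def probit_gibbs(y, X, iters=1000):
    n, p = X.shape
    beta = np.zeros(p)
    V = np.linalg.inv(X.T @ X)          # posterior covariance, flat prior
    draws = np.empty((iters, p))
    for t in range(iters):
        eta = X @ beta
        # Step 1: latent z_i ~ N(eta_i, 1), truncated to agree with y_i
        lo = np.where(y == 1, -eta, -np.inf)
        hi = np.where(y == 1, np.inf, -eta)
        z = eta + truncnorm.rvs(lo, hi, random_state=rng)
        # Step 2: beta | z from its Gaussian full conditional
        beta = rng.multivariate_normal(V @ (X.T @ z), V)
        draws[t] = beta
    return draws

draws = probit_gibbs(y, X)
```

With only ~1% successes, consecutive draws move in tiny increments, so the effective sample size is a small fraction of the nominal iteration count, which is the phenomenon the chapter quantifies via the spectral gap.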

Relevance:

100.00%

Publisher:

Abstract:

Bayesian methods offer a flexible and convenient probabilistic learning framework to extract interpretable knowledge from complex and structured data. Such methods can characterize dependencies among multiple levels of hidden variables and share statistical strength across heterogeneous sources. In the first part of this dissertation, we develop two dependent variational inference methods for full posterior approximation in non-conjugate Bayesian models through hierarchical mixture- and copula-based variational proposals, respectively. The proposed methods move beyond the widely used factorized approximation to the posterior and provide generic applicability to a broad class of probabilistic models with minimal model-specific derivations. In the second part of this dissertation, we design probabilistic graphical models to accommodate multimodal data, describe dynamical behaviors and account for task heterogeneity. In particular, the sparse latent factor model is able to reveal common low-dimensional structures from high-dimensional data. We demonstrate the effectiveness of the proposed statistical learning methods on both synthetic and real-world data.

Relevance:

100.00%

Publisher:

Abstract:

Background and Purpose - Loss of motor function is common after stroke and leads to significant chronic disability. Stem cells are capable of self-renewal and of differentiating into multiple cell types, including neurones, glia, and vascular cells. We assessed the safety of granulocyte colony-stimulating factor (G-CSF) after stroke and its effect on circulating CD34+ stem cells. Methods - We performed a 2-center, dose-escalation, double-blind, randomized, placebo-controlled pilot trial (ISRCTN 16784092) of G-CSF (6 blocks of 1 to 10 µg/kg SC, 1 or 5 daily doses) in 36 patients with recent ischemic stroke. Circulating CD34+ stem cells were measured by flow cytometry; blood counts and measures of safety and functional outcome were also monitored. All measures were made blinded to treatment. Results - Thirty-six patients, whose mean ± SD age was 76 ± 8 years and of whom 50% were male, were recruited. G-CSF (5 days of 10 µg/kg) increased the CD34+ count in a dose-dependent manner, from 2.5 to 37.7 at day 5 (area under curve, P<0.005). A dose-dependent rise in white cell count (P<0.001) was also seen. There was no difference between treatment groups in the number of patients with serious adverse events: G-CSF, 7/24 (29%) versus placebo, 3/12 (25%), or in their dependence (modified Rankin Scale, median 4; interquartile range, 3 to 5) at 90 days. Conclusions - G-CSF is effective at mobilizing bone marrow CD34+ stem cells in patients with recent ischemic stroke. Administration is feasible and appears to be safe and well tolerated. The fate of mobilized cells and their effect on functional outcome remain to be determined. (Stroke. 2006;37:2979-2983.)

Relevance:

100.00%

Publisher:

Abstract:

Background The problem of silent multiple comparisons is one of the most difficult statistical problems faced by scientists. It is a particular problem for investigating a one-off cancer cluster reported to a health department because any one of hundreds, or possibly thousands, of neighbourhoods, schools, or workplaces could have reported a cluster, which could have been for any one of several types of cancer or any one of several time periods. Methods This paper contrasts the frequentist approach with a Bayesian approach for dealing with silent multiple comparisons in the context of a one-off cluster reported to a health department. Two published cluster investigations were re-analysed using the Dunn-Sidak method to adjust frequentist p-values and confidence intervals for silent multiple comparisons. Bayesian methods were based on the Gamma distribution. Results Bayesian analysis with non-informative priors produced results similar to the frequentist analysis, and suggested that both clusters represented a statistical excess. In the frequentist framework, the statistical significance of both clusters was extremely sensitive to the number of silent multiple comparisons, which can only ever be a subjective "guesstimate". The Bayesian approach is also subjective: whether there is an apparent statistical excess depends on the specified prior. Conclusion In cluster investigations, the frequentist approach is just as subjective as the Bayesian approach, but the Bayesian approach is less ambitious in that it treats the analysis as a synthesis of data and personal judgements (possibly poor ones), rather than objective reality. Bayesian analysis is (arguably) a useful tool to support complicated decision-making, because it makes the uncertainty associated with silent multiple comparisons explicit.
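The frequentist and Bayesian analyses contrasted above can be sketched side by side. The numbers below are invented for illustration (the cluster counts and the "guesstimated" number of silent comparisons are not from the paper):

```python
from scipy import stats

# Hypothetical cluster: 6 cancer cases observed where 1.5 were expected
observed, expected = 6, 1.5

# Frequentist: Poisson tail probability, then Dunn-Sidak adjustment for
# m silent comparisons (m = 200 is an illustrative guesstimate)
p = stats.poisson.sf(observed - 1, expected)      # P(X >= 6 | mu = 1.5)
m = 200
p_adj = 1.0 - (1.0 - p) ** m

# Bayesian: Gamma(a, b) prior on the standardized rate ratio; letting
# a, b -> 0 (non-informative) gives posterior Gamma(a + observed, b + expected)
a, b = 0.001, 0.001
post = stats.gamma(a + observed, scale=1.0 / (b + expected))
prob_excess = post.sf(1.0)                        # P(SMR > 1 | data)
```

The sketch makes the paper's point concrete: the adjusted p-value swings with the subjective choice of m, while the Bayesian answer swings instead with the choice of prior, and at least states its uncertainty about the excess directly.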

Relevance:

100.00%

Publisher:

Abstract:

Aims: The Rural and Remote Road Safety Study (RRRSS) addresses a recognised need for greater research on road trauma in rural and remote Australia, the costs of which are disproportionately high compared with urban areas. The 5-year multi-phase study with whole-of-government support concluded in June 2008. Drawing on RRRSS data, we analysed fatal motorcycle crashes which occurred over 39 months to provide a description of crash characteristics, contributing factors and people involved. The descriptive analysis and discussion may inform development of tailored motorcycle safety interventions. Methods: RRRSS criteria sought vehicle crashes resulting in the death, or hospitalisation for at least 24 hours, of at least one person aged 16 years or over, in the study area defined roughly as the Queensland area north from Bowen in the east and Boulia in the west (excluding Townsville and Cairns urban areas). Fatal motorcycle crashes were selected from the RRRSS dataset. Analysis considered medical data covering injury types and severity, evidence of alcohol, drugs and prior medical conditions, as well as crash descriptions supplied by police to Queensland Transport on contributing circumstances, vehicle types, environmental conditions and people involved. Crash data were plotted in a geographic information system (MapInfo) for spatial analysis. Results: There were 23 deaths from 22 motorcycle crashes on public roads meeting RRRSS criteria. Of these, half were single-vehicle crashes and half involved 2 or more vehicles. In contrast to general patterns of driver/rider age distribution in crashes, riders below 25 years of age were represented proportionally within the population. Riders in their thirties comprised 41% of fatalities, with a further 36% accounted for by riders in their fifties. 18 crashes occurred in the Far North Statistical Division (SD), with 2 crashes in each of the Northern and North West SDs.
Behavioural factors comprised the vast majority of contributing circumstances cited by police, with adverse environmental conditions noted in only 4 cases. Conclusions: Fatal motorcycle crashes were more likely to involve another vehicle and less likely to involve a young rider than non-fatal crashes recorded by the RRRSS. Rider behaviour contributed to the majority of crashes and should be a major focus of research, education and policy development, while other road users' behaviour and awareness also remain important. With 68% of crashes occurring on major and secondary roads within a 130 km radius of Cairns, efforts should focus on this geographic area.

Relevance:

100.00%

Publisher:

Abstract:

Objective We aimed to predict sub-national spatial variation in numbers of people infected with Schistosoma haematobium, and associated uncertainties, in Burkina Faso, Mali and Niger, prior to implementation of national control programmes. Methods We used national field survey datasets covering a contiguous area of 2,750 × 850 km, from 26,790 school-aged children (5–14 years) in 418 schools. Bayesian geostatistical models were used to predict prevalence of high- and low-intensity infections and associated 95% credible intervals (CrI). Numbers infected were determined by multiplying predicted prevalence by numbers of school-aged children in 1 km² pixels covering the study area. Findings Numbers of school-aged children with low-intensity infections were: 433,268 in Burkina Faso, 872,328 in Mali and 580,286 in Niger. Numbers with high-intensity infections were: 416,009 in Burkina Faso, 511,845 in Mali and 254,150 in Niger. 95% CrIs (indicative of uncertainty) were wide; e.g. the mean number of boys aged 10–14 years infected in Mali was 140,200 (95% CrI 6,200–512,100). Conclusion National aggregate estimates of numbers infected mask important local variation; e.g. most S. haematobium infections in Niger occur in the Niger River valley. Prevalence of high-intensity infections was strongly clustered in foci in western and central Mali, north-eastern and north-western Burkina Faso and the Niger River valley in Niger. Populations in these foci are likely to carry the bulk of the urinary schistosomiasis burden and should receive priority for schistosomiasis control. Uncertainties in predicted prevalence and numbers infected should be acknowledged and taken into consideration by control programme planners.
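The final step described in the methods, converting predicted prevalence into numbers infected with a credible interval, amounts to pushing posterior draws through the pixel population count. A toy sketch for one pixel (the Beta posterior and population figure are invented, not from the study):

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical posterior: 4000 MCMC draws of infection prevalence in one
# 1 km^2 pixel, and that pixel's school-aged population
prev_draws = rng.beta(12, 88, size=4000)     # posterior prevalence samples
population = 650                             # school-aged children in pixel

# Numbers infected inherit the full posterior uncertainty in prevalence
infected_draws = prev_draws * population
mean_infected = infected_draws.mean()
lo, hi = np.percentile(infected_draws, [2.5, 97.5])
```

Summing such draws across pixels (rather than summing point estimates) is what yields national totals with honest, and often wide, credible intervals.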

Relevance:

100.00%

Publisher:

Abstract:

This literature review examines the relationship between traffic lane width and the safety of road users. It focuses on the impacts of lane width on motor vehicle behaviour and cyclists' safety. The review commenced with a search of available databases. Peer-reviewed articles and road authority reports were reviewed, as well as current engineering guidelines. Research shows that traffic lane width influences drivers' perceived difficulty of the task, risk perception and possibly speed choices. Total roadway width, and the presence of on-road cycling facilities, influence cyclists' positioning on the road. Lateral displacement between bicycles and vehicles is smallest when a marked bicycle facility is present. Reduced motor vehicle speeds can significantly improve the safety of vulnerable road users, particularly pedestrians and cyclists. Research indicates that reducing lane widths on urban roads, through various mechanisms, could result in a safer environment for all road users.

Relevance:

100.00%

Publisher:

Abstract:

‘Hooning’ constitutes a set of illegal and high-risk vehicle-related activities typically performed by males aged 17-25, a group that is over-represented in road trauma statistics. This study used an online survey of 422 participants to test the efficacy of the Five Factor Model of Personality in predicting ‘loss of traction’ (LOT) hooning behaviour. Drivers who engaged in LOT behaviour scored significantly lower on the factor of Agreeableness than those who did not. Regression analyses indicated that the Five Factor Model of Personality was a significant predictor of LOT behaviour over and above sex and age, although Agreeableness was the only significant personality factor in the model. The findings may be used to better understand those drivers likely to engage in LOT behaviours. Road safety advertising and educational campaigns can target less socially agreeable drivers, and aim to encourage more agreeable attitudes to driving, particularly for younger male drivers.

Relevance:

100.00%

Publisher:

Abstract:

Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, "spillover" effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities are examined in detail and conclusions are drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.

Relevance:

100.00%

Publisher:

Abstract:

Large trucks are involved in a disproportionately small fraction of total crashes but a disproportionately large fraction of fatal crashes. Large truck crashes often result in significant congestion due to the vehicles' large physical dimensions and to difficulties in clearing crash scenes. Consequently, preventing large truck crashes is critical to improving highway safety and operations. This study identifies high-risk sites (hot spots) for large truck crashes in Arizona and examines potential risk factors related to the design and operation of the high-risk sites. High-risk sites were identified using both state-of-the-practice methods (accident reduction potential using negative binomial regression with long crash histories) and a newly proposed method using Property Damage Only Equivalents (PDOE). The hot spots identified via the count model generally exhibited few fatalities and major injuries but many minor injuries and PDOs, while the opposite trend was observed using the PDOE methodology. The hot spots based on the count model exhibited large AADTs, whereas those based on the PDOE showed relatively small AADTs but large fractions of trucks and high posted speed limits. Documented site investigations of hot spots revealed numerous potential risk factors, including weaving activities near freeway junctions and ramps, absence of acceleration lanes near on-ramps, shoulders too narrow to accommodate large trucks, narrow lane widths, inadequate signage, and poor lighting conditions within a tunnel.
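The PDOE ranking idea reduces to a severity-weighted crash count. A sketch with hypothetical weights (the study's actual cost weights are not given in the abstract):

```python
# Property Damage Only Equivalents: weight each crash by its cost relative
# to a property-damage-only (PDO) crash. Weights here are illustrative only.
SEVERITY_WEIGHTS = {"fatal": 542.0, "major_injury": 30.0,
                    "minor_injury": 11.0, "pdo": 1.0}

def pdoe_score(crash_counts):
    """Severity-weighted crash total used to rank candidate hot spots."""
    return sum(SEVERITY_WEIGHTS[sev] * n for sev, n in crash_counts.items())

# A low-severity, high-frequency site vs. a high-severity, low-frequency site
site_a = {"fatal": 0, "major_injury": 1, "minor_injury": 8, "pdo": 40}
site_b = {"fatal": 1, "major_injury": 2, "minor_injury": 3, "pdo": 10}
```

Under this weighting, site_b outranks site_a despite having far fewer total crashes, which mirrors the study's observation that count-based and PDOE-based hot spot lists pull in opposite directions.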

Relevance:

100.00%

Publisher:

Abstract:

Background Pakistan has the highest population rate of road fatalities in South Asia (25.3 fatalities per 100,000 people: Global Status Report on Road Safety, WHO 2009). Along with road environment and vehicle factors, human factors make a substantial contribution to traffic safety in Pakistan. Beliefs about road crash causation and prevention have been demonstrated to contribute to risky road use behaviour and resistance to preventive measures in a handful of other developing countries, but have not been explored in Pakistan. In particular, fatalism (whether based on religion, other cultural beliefs or experience) has been highlighted as a barrier to achieving changes in attitudes and behaviour. Aims The research reported here aimed (i) to explore perceptions of road crash causation among policy makers, police officers, professional drivers and car drivers in Pakistan; (ii) to identify how cultural and religious beliefs influence road use behaviour in Pakistan; and (iii) to understand how fatalistic beliefs may work as obstacles to road safety interventions. Methods In-depth interviews were conducted by the primary author (mostly in Urdu) in Lahore, Rawalpindi and Islamabad with 12 professional drivers (taxi, bus and truck), 4 car drivers, 6 police officers, 4 policy makers and 2 religious orators. All but two were Muslim, two were female, and they were drawn from a wide range of ages (24 to 60) and educational backgrounds. The interviews were taped and transcribed, then translated into English and analysed for themes related to the aims. Results Fatalism emerged as a pervasive belief utilised to justify risky road use behaviour and to resist messages about preventive measures. There was a strong religious underpinning to the statement of fatalistic beliefs (this reflects popular conceptions of Islam rather than scholarly interpretations), but also an overlap with superstitious beliefs which have longer-standing roots in Pakistani culture.
These beliefs were not limited to people of poor educational background or position. A particular issue which was explored in more detail was the way in which these beliefs and their interpretation within Pakistani society contributed to poor police reporting of crashes. Discussion and conclusions The pervasive nature of fatalistic beliefs in Pakistan affects road user behaviour by supporting continued risk taking behaviour on the road, and by interfering with public health messages about behaviours which would reduce the risk of traffic crashes. The widespread influence of these beliefs on the ways that people respond to traffic crashes and the death of family members contribute to low crash reporting rates and to a system which is difficult to change. The promotion of an evidence-based approach to road user behaviour is recommended, along with improved professional education for police and policy makers.