377 results for Applied Statistics

Relevance: 60.00%

Publisher:

Abstract:

Distraction resulting from mobile phone use whilst driving has been shown to increase the reaction times of drivers, thereby increasing the likelihood of a crash. This study compares the effects of mobile phone conversations on the reaction times of drivers responding to traffic events that occur at different points in the driver's field of view. The CARRS-Q Advanced Driving Simulator was used to test a group of young drivers on various simulated driving tasks, including a traffic event that occurred within the driver's central vision (a lead vehicle braking suddenly) and an event that occurred within the driver's peripheral vision (a pedestrian entering a zebra crossing from a footpath). Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free phone conversation, and handheld phone conversation. The drivers were aged between 21 and 26 years and split evenly by gender. Differences in reaction times to the event in the driver's central vision were not statistically significant across phone conditions, probably because the distracted drivers selected lower speeds. In contrast, reaction times to an event originating in a distracted driver's peripheral vision were more than 50% longer than in the baseline condition. Further statistical analysis revealed that the deterioration of reaction times to an event in peripheral vision was greatest for distracted drivers holding a provisional licence. Many critical events originate in a driver's periphery, including vehicles, bicyclists, and pedestrians emerging from side streets. A reduced ability to detect these events while distracted presents a significant safety concern that must be addressed.

Abstract:

The use of mobile phones while driving is more prevalent among young drivers, a less experienced cohort with elevated crash risk. The objective of this study was to examine and better understand the reaction times of young drivers to a traffic event originating in their peripheral vision whilst engaged in a mobile phone conversation. The CARRS-Q Advanced Driving Simulator was used to test a sample of young drivers on various simulated driving tasks, including an event that originated within the driver's peripheral vision, whereby a pedestrian enters a zebra crossing from a footpath. Thirty-two licensed drivers drove the simulator in three phone conditions: baseline (no phone conversation), hands-free, and handheld. In addition to driving the simulator, each participant completed questionnaires on driver demographics, driving history, mobile phone usage while driving, and general mobile phone usage history. The participants were 21 to 26 years old and split evenly by gender. Drivers' reaction times to the pedestrian in the zebra crossing were modelled using a parametric accelerated failure time (AFT) duration model with a Weibull distribution. Two model specifications were also tested to account for the structured heterogeneity arising from the repeated-measures experimental design. The Weibull AFT model with gamma heterogeneity was found to be the best-fitting model and identified four significant variables influencing reaction times: phone condition, driver's age, licence type (provisional licence holder or not), and self-reported frequency of handheld phone usage while driving. Drivers' reaction times were more than 40% longer in the distracted conditions than at baseline (not distracted). Moreover, the impairment of reaction times due to mobile phone conversations was almost double for provisional licence holders compared to open licence holders. A reduced ability to detect traffic events in the periphery whilst distracted presents a significant and measurable safety concern that will persist unless mitigated.
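The Weibull AFT specification described above can be sketched in a few lines. The sketch below is illustrative only: it simulates reaction times whose scale is multiplied by exp(beta) for an invented "distracted" indicator, then recovers the coefficient by maximum likelihood; the study's gamma heterogeneity term for repeated measures is omitted, and all parameter values are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 4000
distracted = rng.integers(0, 2, n)      # hypothetical indicator: 1 = on phone
beta0, beta1, k = 0.0, 0.35, 2.0        # invented true values; exp(0.35) ~ 1.4x
scale = np.exp(beta0 + beta1 * distracted)
t = scale * rng.weibull(k, n)           # AFT: covariates rescale survival time

def negloglik(p):
    """Negative Weibull log-likelihood with log-linear scale."""
    b0, b1, logk = p
    kk = np.exp(logk)                   # keep shape positive
    lam = np.exp(b0 + b1 * distracted)
    z = t / lam
    return -np.sum(np.log(kk) - np.log(lam) + (kk - 1) * np.log(z) - z**kk)

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
b1_hat, k_hat = fit.x[1], np.exp(fit.x[2])
slowdown = np.exp(b1_hat)               # multiplicative increase in reaction time
```

In an AFT model, exp(beta) is read directly as the multiplicative change in reaction time, which is how a "more than 40% longer" effect would be reported.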

Abstract:

Singapore is a highly urbanized city-state where walking is an important mode of travel. Pedestrians account for about 25% of road fatalities every year, making them one of the most vulnerable road user groups in Singapore. Engineering measures such as overhead pedestrian crossings and raised zebra crossings tend to address pedestrian safety in general, but there may be occasions where pedestrians are particularly vulnerable, so that targeted interventions are more appropriate. The objective of this study is to identify factors and situations that affect the injury severity of pedestrians involved in traffic crashes. Six years of crash data from 2003 to 2008, containing around four thousand pedestrian crashes on roadway segments, were analyzed. Injury severity of pedestrians (recorded as slight injury, major injury, or fatal) was modeled as a function of roadway characteristics, traffic features, environmental factors, and pedestrian demographics using an ordered probit model. Results suggest that the injury severity of pedestrians involved in crashes at night is higher, indicating that night-time pedestrian visibility is a key issue in pedestrian safety. The likelihood of fatal or serious injuries is higher for crashes on roads with high speed limits, in the center and median lanes of multi-lane roads, in school zones, on roads with two-way divided traffic, and when pedestrians cross the road. Elderly pedestrians appear to be involved in fatal and serious injury crashes more often when they attempt to cross the road without using nearby crossing facilities. Specific countermeasures are recommended based on the findings of this study.
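The ordered probit model used here treats severity as a thresholded latent variable. A minimal sketch, with an invented night-time covariate and invented coefficients and thresholds, recovering the parameters by maximum likelihood:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(1)
n = 5000
night = rng.integers(0, 2, n)          # hypothetical covariate: night-time crash
beta, c1, c2 = 0.5, 0.0, 1.5           # assumed true coefficient and cut points
ystar = beta * night + rng.standard_normal(n)       # latent severity
y = (ystar > c1).astype(int) + (ystar > c2)         # 0=slight, 1=serious, 2=fatal

def negloglik(p):
    b, a1, d = p
    t1, t2 = a1, a1 + np.exp(d)        # enforce ordered thresholds t1 < t2
    xb = b * night
    pr = np.select([y == 0, y == 1, y == 2],
                   [norm.cdf(t1 - xb),
                    norm.cdf(t2 - xb) - norm.cdf(t1 - xb),
                    1 - norm.cdf(t2 - xb)])
    return -np.sum(np.log(np.clip(pr, 1e-12, None)))

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
beta_hat = fit.x[0]                    # positive -> night shifts severity upward
```

A positive coefficient shifts probability mass from the slight-injury category toward the fatal category, which is how "higher severity at night" is expressed in this model family.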

Abstract:

Sustainability is a key driver of decisions in the management and future development of industries. The World Commission on Environment and Development (WCED, 1987) outlined imperatives that need to be met for environmental, economic and social sustainability. The development of strategies for measuring and improving sustainability in and across these domains, however, has been hindered by intense debate, with advocates of one approach fearing that efforts by advocates of another could have unintended adverse impacts. Studies attempting to compare the sustainability performance of countries and industries have also found performance ratings to vary considerably depending on the sustainability indices used. Quantifying and comparing the sustainability of industries across the triple bottom line of economy, environment and social impact continues to be problematic. Using the Australian dairy industry as a case study, a Sustainability Scorecard, developed as a Bayesian network model, is proposed as an adaptable tool to enable informed assessment, dialogue and negotiation of strategies at a global level, as well as being suitable for developing local solutions.
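A Bayesian network scorecard works by propagating evidence through conditional probability tables. The toy network below is not the paper's model: two invented binary driver nodes feed one "sustainability" node, with all probabilities made up, purely to show the basic inference step by enumeration.

```python
import numpy as np

# Toy CPTs (invented): each node is binary, index 1 = "good"
p_env = np.array([0.4, 0.6])           # P(Environment)
p_eco = np.array([0.5, 0.5])           # P(Economy)
p_sust = np.array([[0.1, 0.4],         # P(Sustainable=1 | Env, Eco),
                   [0.5, 0.9]])        # rows index Env, columns index Eco

# Infer P(Env | Sustainable=1) by enumerating over the economy node
joint = p_env[:, None] * p_eco[None, :] * p_sust   # P(Env, Eco, Sust=1)
post_env = joint.sum(axis=1) / joint.sum()
```

Observing a good sustainability outcome raises the posterior probability that the environmental driver is in its good state, which is the kind of diagnostic reasoning a scorecard supports.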

Abstract:

Discretization of a geographical region is quite common in spatial analysis, yet there have been few studies of the impact of different geographical scales on the outcomes of spatial models for different spatial patterns. This study investigates the impact of spatial scale and spatial smoothing on the outcomes of modelling spatial point-based data. Given a spatial point-based dataset (such as occurrences of a disease), we study the geographical variation of residual disease risk using regular grid cells. The individual disease risk is modelled using a logistic model with the inclusion of spatially unstructured and/or spatially structured random effects. Three spatial smoothness priors for the spatially structured component are employed, namely an intrinsic Gaussian Markov random field, a second-order random walk on a lattice, and a Gaussian field with a Matérn correlation function. We investigate how changes in grid cell size affect model outcomes under different spatial structures and different smoothness priors for the spatial component. A realistic example (the Humberside data) is analysed and a simulation study is described. Bayesian computation is carried out using integrated nested Laplace approximation. The results suggest that the performance and predictive capacity of the spatial models improve as the grid cell size decreases for certain spatial structures. It also appears that different spatial smoothness priors should be applied for different patterns of point data.
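The intrinsic Gaussian Markov random field prior named above is defined through a sparse precision matrix on the grid of cells. A minimal sketch, for an arbitrarily small regular grid with first-order (rook) neighbours; the grid size is an assumption for illustration:

```python
import numpy as np

def icar_precision(nrow, ncol):
    """Unscaled intrinsic GMRF (ICAR) precision Q = D - W on a regular grid,
    where W is the rook-neighbour adjacency matrix and D its degree matrix."""
    n = nrow * ncol
    W = np.zeros((n, n))
    for r in range(nrow):
        for c in range(ncol):
            i = r * ncol + c
            for dr, dc in [(1, 0), (0, 1)]:   # neighbour below and to the right
                rr, cc = r + dr, c + dc
                if rr < nrow and cc < ncol:
                    j = rr * ncol + cc
                    W[i, j] = W[j, i] = 1
    return np.diag(W.sum(axis=1)) - W

Q = icar_precision(4, 4)
```

The prior is improper: each row of Q sums to zero and the matrix has rank n - 1, which is why a sum-to-zero constraint is usually imposed in fitting. Shrinking the grid cell size grows n and the neighbourhood structure, which is exactly the scale effect the study examines.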

Abstract:

Exposure-control and case-control methodologies are common techniques for estimating crash risk; however, they require either observational data on control cases or exogenous exposure data, such as vehicle-kilometres travelled. This study proposes an alternative methodology for estimating the crash risk of road user groups, whilst controlling for exposure under a variety of roadway, traffic and environmental factors, using readily available police-reported crash data. In particular, the proposed method combines a log-linear model with the quasi-induced exposure technique to identify significant interactions among a range of roadway, environmental and traffic conditions and to estimate the associated crash risks. The methodology is illustrated using police-reported crash data from January 2004 to June 2009 for roadways in Queensland, Australia. Exposure-controlled crash risks of motorcyclists involved in multi-vehicle crashes at intersections were estimated under various combinations of variables such as posted speed limit, intersection control type, intersection configuration, and lighting condition. Results show that the crash risk of motorcycles at three-legged intersections is high when the posted speed limits along the approaches are greater than 60 km/h, and also when such intersections are unsignalized. Dark lighting conditions appear to increase the crash risk of motorcycles at signalized intersections, but the problem of night-time conspicuity of motorcyclists at intersections is lessened on approaches with lower speed limits. This study demonstrates that the combined methodology is a promising tool for gaining new insights into the crash risks of road user groups, and it is transferable to other road users.
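The quasi-induced exposure idea can be shown with a toy computation: not-at-fault drivers in two-vehicle crashes proxy a group's exposure, so a group's relative risk is its share of at-fault involvements divided by its share of not-at-fault involvements. All counts below are invented for illustration.

```python
# Hypothetical counts from two-vehicle crash records
at_fault     = {"motorcycle": 120, "car": 700, "truck": 180}
not_at_fault = {"motorcycle": 60,  "car": 820, "truck": 120}

n_af = sum(at_fault.values())
n_naf = sum(not_at_fault.values())

# Quasi-induced exposure: the not-at-fault distribution stands in for exposure,
# so relative risk = at-fault share / not-at-fault share
rel_risk = {g: (at_fault[g] / n_af) / (not_at_fault[g] / n_naf)
            for g in at_fault}
```

In the paper, a log-linear model is fitted to such a cross-classified table so that the relative risks can be broken down by interactions such as speed limit by control type; the ratio above is the basic building block.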

Abstract:

This study considered the problem of predicting survival based on three alternative models: a single Weibull, a mixture of Weibulls, and a cure model. Instead of the common procedure of choosing a single "best" model, where "best" is defined in terms of goodness of fit to the data, a Bayesian model averaging (BMA) approach was adopted to account for model uncertainty. This was illustrated using a case study in which the aim was the description of lymphoma cancer survival with covariates given by phenotypes and gene expression. The results indicate that if the sample size is sufficiently large, one of the three models emerges as having the highest probability given the data, as indicated by the goodness-of-fit measure, the Bayesian information criterion (BIC). However, when the sample size was reduced, no single model emerged as "best", suggesting that a BMA approach would be appropriate. Although a BMA approach can compromise goodness of fit to the data (when compared to the true model), it can provide robust predictions and facilitate more detailed investigation of the relationships between gene expression and patient survival.

Keywords: Bayesian modelling; Bayesian model averaging; cure model; Markov chain Monte Carlo; mixture model; survival analysis; Weibull distribution
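Under the usual BIC approximation to the marginal likelihood, BMA weights are proportional to exp(-ΔBIC/2), normalised across the candidate models. A sketch with invented BIC values for the three model classes named above:

```python
import numpy as np

# Hypothetical BIC values for the three candidate survival models
bic = {"single_weibull": 1015.2, "weibull_mixture": 1008.7, "cure_model": 1010.1}

vals = np.array(list(bic.values()))
delta = vals - vals.min()              # BIC differences from the best model
w = np.exp(-0.5 * delta)               # BIC approximation to posterior odds
weights = dict(zip(bic, w / w.sum()))

# An averaged prediction is then sum_k weights[k] * prediction_k
```

With a large sample the ΔBIC gaps grow and one weight approaches 1 (a single model "emerges"); with a small sample the weights spread out, which is the regime where averaging pays off.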

Abstract:

Monitoring stream networks through time provides important ecological information. The sampling design problem is to choose the locations where measurements are taken so as to maximise the information gathered about physicochemical and biological variables on the stream network. This paper uses a pseudo-Bayesian approach, averaging a utility function over a prior distribution, to find a design which maximises the average utility. We use models for correlations of observations on the stream network that are based on stream network distances and described by moving average error models. The utility functions used reflect the needs of the experimenter, such as prediction of location values or estimation of parameters. We propose an algorithmic approach to design in which the mean utility of a design is estimated using Monte Carlo techniques and an exchange algorithm is used to search for optimal sampling designs. In particular, we focus on the problem of finding an optimal design from a set of fixed designs and finding an optimal subset of a given set of sampling locations. As there are many different variables to measure, such as chemical, physical and biological measurements at each location, designs are derived from models based on different types of response variables: continuous, counts and proportions. We apply the methodology to a synthetic example and to the Lake Eacham stream network on the Atherton Tablelands in Queensland, Australia. We show that the optimal designs depend very much on the choice of utility function, varying from space-filling to clustered designs and mixtures of these, but given the utility function, designs are relatively robust to the type of response variable.
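The exchange-algorithm search can be sketched with a toy utility. Here the utility is a simple deterministic space-filling criterion (minimum pairwise distance) standing in for the paper's Monte Carlo-estimated utilities, and the candidate sites are random points rather than stream locations; both are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
sites = rng.uniform(0, 1, (30, 2))     # hypothetical candidate sampling locations

def utility(idx):
    """Toy utility: minimum pairwise distance (a space-filling criterion)."""
    pts = sites[list(idx)]
    d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    return d[np.triu_indices(len(pts), 1)].min()

def exchange(k, iters=200):
    """Point-exchange search: swap one site in for one out, keep improvements."""
    design = set(int(i) for i in rng.choice(len(sites), k, replace=False))
    best = utility(design)
    for _ in range(iters):
        out = int(rng.choice(list(design)))
        cand = int(rng.choice([i for i in range(len(sites)) if i not in design]))
        trial = (design - {out}) | {cand}
        u = utility(trial)
        if u > best:                   # accept only swaps that raise the utility
            design, best = trial, u
    return design, best

design, best = exchange(6)
```

Swapping the utility (prediction variance, parameter information, etc.) is what moves the optimum between space-filling and clustered designs, which is the paper's central observation.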

Abstract:

The application of Bluetooth (BT) technology to transportation has enabled researchers to make accurate travel time observations on freeway and arterial roads. Bluetooth traffic data are generally incomplete, as they relate only to those vehicles that are equipped with Bluetooth devices and that are detected by the Bluetooth sensors of the road network. The fraction of detected vehicles versus the total number of transiting vehicles is often referred to as the Bluetooth Penetration Rate (BTPR). The aim of this study is to precisely define the spatio-temporal relationship between the quantities that become available through the partial, noisy BT observations and the hidden variables that describe the actual dynamics of vehicular traffic. To do so, we propose to incorporate a multi-class traffic model into a sequential Monte Carlo estimation algorithm. Our framework has been applied to empirical travel time investigations in the Brisbane metropolitan region.
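The sequential Monte Carlo idea can be sketched as a bootstrap particle filter tracking a hidden mean travel time from noisy, partial observations. The random-walk dynamics, noise levels and particle count below are invented stand-ins, not the paper's multi-class traffic model.

```python
import numpy as np

rng = np.random.default_rng(3)
T, n_part = 50, 500
true = np.cumsum(rng.normal(0, 0.5, T)) + 60    # hidden mean travel time (s)
obs = true + rng.normal(0, 2.0, T)              # noisy Bluetooth travel times

particles = rng.normal(60, 5, n_part)           # prior ensemble
est = np.empty(T)
for t in range(T):
    particles += rng.normal(0, 0.5, n_part)     # propagate (random-walk model)
    w = np.exp(-0.5 * ((obs[t] - particles) / 2.0) ** 2)   # Gaussian likelihood
    w /= w.sum()
    est[t] = np.sum(w * particles)              # filtered posterior mean
    idx = rng.choice(n_part, n_part, p=w)       # multinomial resampling
    particles = particles[idx]
```

The filter's estimate should track the hidden state more closely than the raw observations do; in the paper the same propagate-weight-resample loop runs over a traffic model with per-class states, with the BTPR governing how informative each observation is.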

Abstract:

Hot spot identification (HSID) aims to identify potential sites (roadway segments, intersections, crosswalks, interchanges, ramps, etc.) with disproportionately high crash risk relative to similar sites. An inefficient HSID methodology might either identify a safe site as high risk (false positive) or a high-risk site as safe (false negative), and consequently lead to misuse of available public funds, poor investment decisions, and inefficient risk management practice. Current HSID methods suffer from issues such as underreporting of minor injury and property damage only (PDO) crashes, the challenge of incorporating crash severity into the methodology, and the selection of a proper safety performance function to model crash data that are often heavily skewed by a preponderance of zeros. Addressing these challenges, this paper proposes a combination of a PDO equivalency calculation and a quantile regression technique to identify hot spots in a transportation network. In particular, issues related to underreporting and crash severity are tackled by incorporating equivalent PDO crashes, whilst concerns related to the non-count nature of equivalent PDO crashes and the skewness of crash data are addressed by the non-parametric quantile regression technique. The proposed method identifies covariate effects on various quantiles of a population, rather than on the population mean as in most methods in practice, which corresponds more closely with how black spots are identified in the field. The methodology is illustrated using rural road segment data from Korea and compared against the traditional empirical Bayes (EB) method with negative binomial regression. Applying a quantile regression model to equivalent PDO crashes identifies a set of high-risk sites that reflect the true safety costs to society, reduces the influence of under-reported PDO and minor injury crashes, and overcomes the limitations of the traditional negative binomial model in dealing with a preponderance of zeros and right-skewed data.
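Quantile regression fits a conditional quantile, rather than the conditional mean, by minimising the pinball (check) loss. The sketch below invents a right-skewed "equivalent PDO" response against a hypothetical exposure covariate, fits the 0.5 and 0.9 quantile lines, and flags sites above the upper quantile; all names and numbers are assumptions, not the paper's data.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 2000
aadt = rng.uniform(0, 10, n)                   # hypothetical exposure covariate
# Right-skewed response whose spread grows with exposure (heteroskedastic)
y = 1.0 + 0.5 * aadt + rng.gamma(2.0, 0.5 + 0.2 * aadt)

def pinball(params, tau):
    """Check loss: tau-quantile regression objective for a linear predictor."""
    b0, b1 = params
    r = y - (b0 + b1 * aadt)
    return np.sum(np.maximum(tau * r, (tau - 1) * r))

q50 = minimize(pinball, [1.0, 1.0], args=(0.5,), method="Nelder-Mead").x
q90 = minimize(pinball, [1.0, 1.0], args=(0.9,), method="Nelder-Mead").x

# Segments above the fitted 0.9 quantile line would be flagged as hot spots
flagged = np.mean(y > q90[0] + q90[1] * aadt)
```

With skewed, heteroskedastic data the upper-quantile slope exceeds the median slope, so the hot-spot threshold adapts to exposure rather than flagging a fixed share of an assumed-mean model's residuals.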

Abstract:

This research identifies roadway, traffic, and environmental factors that influence the injury severity of road traffic crashes in Dhaka. Dhaka provides a rather unusual driving risk environment to study, since virtually anyone can obtain a driver's license and traffic rules are rarely enforced or violations fined. To examine this city with presumed heightened crash severity risk, police-reported crash data from 2007 to 2011, containing about 2714 road traffic crashes, were collected. The injury severity of traffic crashes (recorded as fatal, serious injury, or property damage only) was modeled using an ordered probit model. Significant factors increasing the probability of fatal injuries include crashes along highways (65%), absence of a road divider (80%), crashes during night time (54%), and vehicle-pedestrian collisions (367%); whereas a two-way traffic configuration (21%) and traffic police controlled schemes (41%) decrease the probability of fatalities. Both similarities and differences between crash risk in Dhaka and in developed countries are discussed in policy-relevant terms.

Abstract:

A predictive model of terrorist activity is developed by examining the daily number of terrorist attacks in Indonesia from 1994 through 2007. The dynamic model employs a shot noise process to capture the self-exciting nature of terrorist activities, estimating the probability of future attacks as a function of the times since past attacks. In addition, the excess of non-attack days, coupled with the presence of multiple coordinated attacks on the same day, motivated the use of hurdle models to jointly model the probability of an attack day and the corresponding number of attacks. A power law distribution with a shot noise driven parameter best modeled the number of attacks on an attack day. Interpretation of the model parameters is discussed and the predictive performance of the models is evaluated.
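A shot noise intensity of the self-exciting kind described adds an exponentially decaying bump after every past attack on top of a baseline rate. The parameters and attack dates below are invented to show the mechanic only.

```python
import numpy as np

# Shot-noise intensity: lambda(t) = mu + sum_i a * exp(-b * (t - t_i))
mu, a, b = 0.05, 0.3, 0.2              # assumed baseline, jump size, decay rate
attack_days = np.array([10, 11, 30])   # invented past attack days

def intensity(t):
    past = attack_days[attack_days < t]
    return mu + np.sum(a * np.exp(-b * (t - past)))

# Hurdle stage 1: probability that day t is an attack day at all
p_attack_day = 1 - np.exp(-intensity(12))
# Stage 2 (not shown) would model the number of attacks given an attack day
```

Just after the clustered attacks on days 10 and 11 the intensity is elevated, and it decays back toward the baseline as time passes without attacks, which is the self-exciting behaviour the model is built to capture.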

Abstract:

Objectives: The goal of this article is to examine whether the results of the Queensland Community Engagement Trial (QCET), a randomized controlled trial that tested the impact of procedural justice policing on citizen attitudes toward police, were affected by different types of nonresponse bias. Method: We use two methods (the Cochrane and Elffers methods) to explore nonresponse bias. First, we assess the impact of the low response rate by examining the effects of nonresponse group differences between the experimental and control conditions and the pooled variance under different scenarios. Second, we assess the degree to which item response rates are influenced by the control and experimental conditions. Results: Our analysis of the QCET data suggests that our substantive findings are not influenced by the low response rate in the trial. The results are robust even under extreme conditions; the statistical significance of the results would be compromised only if the pooled variance were much larger for the nonresponse group and the difference between the experimental and control conditions were greatly diminished. We also find no biases in the item response rates across the experimental and control conditions. Conclusion: RCTs that involve field survey responses, like QCET, are potentially compromised by low response rates and by the possibility that item response rates are influenced by the control or experimental conditions. Our results show that the QCET results were not sensitive to the overall low response rate, and item response rates were not significantly different across the experimental and control groups. Overall, our analysis suggests that the results of QCET are robust and that any biases in the survey responses do not significantly influence the main experimental findings.
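The pooled-variance scenario analysis described can be sketched numerically. All numbers below are invented; the point is only the mechanic of checking whether a two-group z statistic survives as the nonresponse group's assumed variance grows.

```python
import numpy as np

# Toy sensitivity check: does an observed treatment-control difference stay
# significant if nonresponders are assumed to be increasingly more variable?
diff, n1, n2 = 0.30, 200, 200          # invented mean difference and group sizes
s_resp = 1.0                           # responders' standard deviation

zs = []
for s_nonresp in [1.0, 1.5, 2.0, 3.0]: # scenarios for the nonresponse group
    pooled_var = (s_resp**2 + s_nonresp**2) / 2
    se = np.sqrt(pooled_var * (1 / n1 + 1 / n2))
    zs.append(diff / se)               # significance at 5% holds while z > 1.96
```

Under these toy numbers the effect survives mild variance inflation and is lost only in the extreme scenario, mirroring the article's conclusion that significance would be compromised only when the nonresponse variance is much larger.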

Abstract:

Research suggests that the length and quality of police-citizen encounters affect policing outcomes. The Koper Curve, for example, shows that the optimal length of police presence in a hot spot is between 14 and 15 minutes, with diminishing returns observed thereafter. Our study, using data from the Queensland Community Engagement Trial (QCET), examines the impact of encounter length on citizen perceptions of police performance. QCET was a randomised field trial in which 60 random breath test (RBT) traffic stop operations were randomly allocated to either an experimental condition involving a procedurally just encounter or a business-as-usual control condition. Our results show that the optimal length for procedurally just encounters during RBT traffic stops is just under 2 minutes. We show, therefore, that it is important to encourage and facilitate positive police-citizen encounters during RBT traffic stops, while ensuring that the length of these interactions does not pass the point of diminishing returns.
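A diminishing-returns optimum like the one reported is commonly located by fitting a quadratic and taking the vertex. The sketch below simulates toy ratings whose peak is placed near 2 minutes (an assumption echoing the finding, not the study's data) and recovers the optimum.

```python
import numpy as np

rng = np.random.default_rng(5)
minutes = rng.uniform(0.5, 4.0, 300)           # invented encounter lengths
# Invented quadratic response with its true peak at 1.6 / (2 * 0.4) = 2 minutes
rating = 3.0 + 1.6 * minutes - 0.4 * minutes**2 + rng.normal(0, 0.2, 300)

b0, b1, b2 = np.polyfit(minutes, rating, 2)[::-1]   # [const, linear, quadratic]
optimum = -b1 / (2 * b2)                       # vertex of the fitted parabola
```

A negative quadratic coefficient is the diminishing-returns signature, and the vertex gives the encounter length beyond which longer stops stop paying off.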

Abstract:

The use of graphical processing unit (GPU) parallel processing is becoming part of mainstream statistical practice. Because Bayesian statistics relies on Markov chain Monte Carlo (MCMC) methods, the applicability of parallel processing is not immediately obvious. We illustrate that there are substantial gains in computational time for MCMC and other methods of evaluation when the likelihood is computed using GPU parallel processing. Examples use data from the Global Terrorism Database to model terrorist activity in Colombia from 2000 through 2010, with a likelihood based on the explicit convolution of two negative binomial processes. Results show decreases in computational time by a factor of over 200. Factors influencing these improvements and guidelines for programming parallel implementations of the likelihood are discussed.
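The convolution likelihood mentioned above is a sum of mutually independent terms, which is precisely what makes it parallelise well: each term can be evaluated by a separate thread. A CPU sketch of one pmf evaluation (parameter values invented), with a sanity check against the known closed form when the two negative binomials share a success probability:

```python
import numpy as np
from scipy.stats import nbinom

def conv_nb_pmf(x, r1, p1, r2, p2):
    """P(X1 + X2 = x) for independent negative binomials via explicit
    convolution; the per-j terms are independent of one another, so on a GPU
    each term (or block of terms) can be assigned to its own thread."""
    j = np.arange(x + 1)
    return float(np.sum(nbinom.pmf(j, r1, p1) * nbinom.pmf(x - j, r2, p2)))

# Sanity check: with a shared p, the convolution collapses to NB(r1 + r2, p)
approx = conv_nb_pmf(7, 2.0, 0.5, 3.0, 0.5)
exact = float(nbinom.pmf(7, 5.0, 0.5))
```

In an MCMC run this pmf is evaluated for every observation at every iteration, so offloading the inner sum-reduction is where the reported speed-ups of over 200x come from.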