477 results for causality probability
Abstract:
The combination of alcohol and driving is a major health and economic burden to most communities in industrialised countries. The total cost of crashes for Australia in 1996 was estimated at approximately 15 billion dollars, and the costs for fatal crashes were about 3 billion dollars (BTE, 2000). According to the Bureau of Infrastructure, Transport and Regional Development and Local Government (2009; BITRDLG), the overall cost of road fatality crashes for 2006 was $3.87 billion, with a single fatal crash costing an estimated $2.67 million. A major contributing factor to crashes involving serious injury is alcohol intoxication while driving. It is well documented that consumption of liquor impairs judgment of speed and distance and increases involvement in higher-risk behaviours (Waller, Hansen, Stutts, & Popkin, 1986a; Waller et al., 1986b). Waller et al. (1986a; b) assert that liquor impairs psychomotor function and therefore renders the driver impaired in a crisis situation. This impairment includes degraded vision, slowed information processing, steering, and performing two tasks at once in congested traffic (Moskowitz & Burns, 1990). As BAC levels increase, the risks of crashing and of fatality increase exponentially (Department of Transport and Main Roads, 2009; DTMR). According to Compton et al. (2002), as cited in the Department of Transport and Main Roads (2009), the probability of crashing is five times higher at a BAC of 0.10 than at a BAC of 0.00. The injury patterns sustained also tend to be more severe when liquor is involved, especially injuries to the brain (Waller et al., 1986b). Single and Rohl (1997) reported that 30% of all fatal crashes in Australia where alcohol involvement was known were associated with a Blood Alcohol Concentration (BAC) above the legal limit of 0.05 g/100 mL. Alcohol-related crashes therefore contribute a third of the total cost of fatal crashes (i.e. $1 billion annually), and crashes where alcohol is involved are more likely to result in death or serious injury (ARRB Transport Research, 1999). It is a major concern that a drug capable of such impairment, alcohol, is the most available and popular drug in Australia (Australian Institute of Health and Welfare, 2007; AIHW). According to the AIHW (2007), 89.9% of the approximately 25,000 Australians over the age of 14 surveyed had consumed alcohol at some point in time, and 82.9% had consumed liquor in the previous year. This study found that 12.1% of individuals admitted to driving a motor vehicle whilst intoxicated. In general, males consumed more liquor in all age groups. In Queensland there were 21,503 road crashes in 2001, involving 324 fatalities, and the largest contributing factor was alcohol and/or drugs (Road Traffic Report, 2001). There were 23,438 road crashes in 2004, involving 289 fatalities, and again the largest contributing factor was alcohol and/or drugs (DTMR, 2009). Although a number of measures, such as random breath testing, have been effective in reducing the road toll (Watson, Fraine & Mitchell, 1995), the recidivist drink driver remains a serious problem. These findings were later supported by research by Leal, King, and Lewis (2006). This Queensland study found that of the 24,661 drink drivers intercepted in 2004, 3,679 (14.9%) were recidivists with multiple drink driving convictions within the three-year period covered (Leal et al., 2006). The legal definition of the term “recidivist” is consistent with the Transport Operations (Road Use Management) Act (1995) and is assigned to individuals who have been charged with multiple drink driving offences in the previous five years. In Australia relatively little attention has been given to prevention programs that target high-risk repeat drink drivers. However, over the last ten years a rehabilitation program specifically designed to reduce recidivism among repeat drink drivers has been operating in Queensland.
The program, formally known as the “Under the Limit” drink driving rehabilitation program (UTL), was designed and implemented by the research team at the Centre for Accident Research and Road Safety in Queensland with funding from the Federal Office of Road Safety and the Institute of Criminology (see Sheehan, Schonfeld & Davey, 1995). By 2009 over 8,500 drink driving offenders had been referred to the program (Australian Institute of Crime, 2009).
Abstract:
Numerous econometric models have been proposed for forecasting property market performance, but limited success has been achieved in finding a reliable and consistent model to predict property market movements over a five to ten year timeframe. This research focuses on office rental growth forecasts and reviews many of the office rent models that have evolved over the past 20 years. A model by DiPasquale and Wheaton is selected for testing in the Brisbane, Australia office market. The adaptation of this model did not yield explanatory variables that could assist in developing a reliable, predictive model of office rental growth. In light of this result, the paper suggests a system dynamics framework that includes an econometric model based on historical data as well as user input guidance for the primary variables. The rent forecast outputs would be assessed with regard to market expectations, and probability profiling would be undertaken for use in simulation exercises. The paper concludes with ideas for ongoing research.
Abstract:
Spectrum sensing optimisation techniques maximise the efficiency of spectrum sensing while satisfying a number of constraints. Many optimisation models consider the possibility of the primary user changing activity state during the secondary user's transmission period. However, most ignore the possibility of activity change during the sensing period. The observed primary user signal during sensing can exhibit a duty cycle which has been shown to severely degrade detection performance. This paper shows that (a) the probability of state change during sensing cannot be neglected and (b) the true detection performance obtained when incorporating the duty cycle of the primary user signal can deviate significantly from the results expected with the assumption of no such duty cycle.
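To make the first point concrete, here is a toy calculation (my own sketch with assumed parameter values, not the paper's model) for a primary user whose activity periods are exponentially distributed, hence memoryless: the probability that the activity state changes within a sensing window of length T is 1 − e^(−λT), which is far from negligible for realistic window lengths.

```python
import math
import random

def p_state_change(rate, t_sense):
    """Analytic probability that an exponentially distributed activity
    period ends within a sensing window of length t_sense."""
    return 1.0 - math.exp(-rate * t_sense)

def simulate_p_change(rate, t_sense, trials=100_000, seed=0):
    """Monte Carlo check: draw residual holding times and count how
    often the state flips before the sensing window closes."""
    rng = random.Random(seed)
    flips = sum(rng.expovariate(rate) < t_sense for _ in range(trials))
    return flips / trials

if __name__ == "__main__":
    rate, t_sense = 2.0, 0.1   # mean activity period 0.5 s, 0.1 s sensing window
    print(p_state_change(rate, t_sense))    # about 0.18 (hardly negligible)
    print(simulate_p_change(rate, t_sense))
```

Even with a sensing window only one fifth of the mean activity period, the state changes mid-sensing almost one time in five, producing the duty-cycled observed signal the abstract describes.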
Abstract:
In this paper we consider large cooperative communication systems in which terminals use the slotted amplify-and-forward protocol to aid the source in its transmission. Using the perturbation expansion method of resolvents and large deviation techniques, we obtain an expression for the Stieltjes transform of the asymptotic eigenvalue distribution of a sample covariance random matrix of the type HH†, where H is the channel matrix of the transmission model for the protocol we consider. We prove that the resulting expression is similar to the Stieltjes transform, in its quadratic-equation form, of the Marchenko–Pastur distribution.
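For reference, the quadratic-equation form referred to here is standard for the Marchenko–Pastur law: with aspect ratio c and unit variance, its Stieltjes transform m(z) = ∫ (λ − z)⁻¹ dF(λ) satisfies the textbook identity below (the paper's expression for the slotted amplify-and-forward channel generalises this):

```latex
% Stieltjes transform of the Marchenko--Pastur law, ratio c, unit variance
c \, z \, m(z)^{2} + (z + c - 1)\, m(z) + 1 = 0
```

Solving this quadratic and taking the branch with positive imaginary part on the upper half-plane recovers the familiar closed form of the Marchenko–Pastur density.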
Abstract:
We study the regret of optimal strategies for online convex optimization games. Using von Neumann's minimax theorem, we show that the optimal regret in this adversarial setting is closely related to the behavior of the empirical minimization algorithm in a stochastic process setting: it is equal to the maximum, over joint distributions of the adversary's action sequence, of the difference between a sum of minimal expected losses and the minimal empirical loss. We show that the optimal regret has a natural geometric interpretation, since it can be viewed as the gap in Jensen's inequality for a concave functional (the minimizer over the player's actions of expected loss) defined on a set of probability distributions. We use this expression to obtain upper and lower bounds on the regret of an optimal strategy for a variety of online learning problems. Our method provides upper bounds without the need to construct a learning algorithm; the lower bounds provide explicit optimal strategies for the adversary.
Peter L. Bartlett, Alexander Rakhlin
Abstract:
International support is capable of making the difference between the successful defense of democracy and its ignominious defeat. Indeed, the perceived probability of both support for democratically chosen leaders and opposition to their attackers can fundamentally shift the balance in the domestic struggle between them. Nevertheless, although changes to international law and international relations justify a greater international role in preventing and deterring coups and erosions, not all responsibility for protecting democracy should be assigned to the international community. Indeed, the first line of defense should be a democracy’s own domestic initiatives, with the main role of the international community being to support a domestic response to threats to democracy.
Abstract:
Maximum-likelihood estimates of the parameters of stochastic differential equations are consistent and asymptotically efficient, but unfortunately difficult to obtain if a closed-form expression for the transitional probability density function of the process is not available. As a result, a large number of competing estimation procedures have been proposed. This article provides a critical evaluation of the various estimation techniques. Special attention is given to the ease of implementation and comparative performance of the procedures when estimating the parameters of the Cox–Ingersoll–Ross and Ornstein–Uhlenbeck equations respectively.
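As a concrete illustration of the tractable case (a hedged sketch with invented parameter values; the function names are mine): the Ornstein–Uhlenbeck equation dXₜ = θ(μ − Xₜ)dt + σ dWₜ has a Gaussian transition density, so its conditional maximum-likelihood estimates reduce to an exact AR(1) regression.

```python
import math
import random

def simulate_ou(theta, mu, sigma, x0, dt, n, seed=0):
    """Simulate an Ornstein-Uhlenbeck path using the exact Gaussian
    transition density (no discretization error)."""
    rng = random.Random(seed)
    b = math.exp(-theta * dt)
    sd = sigma * math.sqrt((1 - b * b) / (2 * theta))
    xs = [x0]
    for _ in range(n):
        xs.append(mu + (xs[-1] - mu) * b + rng.gauss(0.0, sd))
    return xs

def fit_ou(xs, dt):
    """Closed-form conditional ML estimates via the exact AR(1) form
    X_{t+dt} = a + b*X_t + eps, with b = exp(-theta*dt), a = mu*(1-b)."""
    x, y = xs[:-1], xs[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                      # ML slope (Gaussian AR(1))
    a = my - b * mx
    theta = -math.log(b) / dt
    mu = a / (1 - b)
    resid_var = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y)) / n
    sigma = math.sqrt(resid_var * 2 * theta / (1 - b * b))
    return theta, mu, sigma
```

With, say, dt = 0.01 over 50,000 steps, `fit_ou(simulate_ou(1.0, 0.5, 0.3, 0.0, 0.01, 50000), 0.01)` recovers estimates close to the true (θ, μ, σ); it is exactly when no such closed-form transition density exists that the competing procedures surveyed in the article become necessary.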
Abstract:
Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of the pest-absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimated probability that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
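The core of such an observation model can be sketched with a minimal Bayes update (my own toy example with assumed numbers, not the paper's hierarchical model): given a prior probability that a district is infested and an assumed per-survey detection probability, each negative survey shrinks the posterior probability of infestation.

```python
def posterior_infested(prior, detect_prob, n_negative_surveys):
    """Posterior probability a district is infested after repeated
    negative surveys, assuming independent surveys that each detect
    a present infestation with probability detect_prob."""
    miss = (1.0 - detect_prob) ** n_negative_surveys  # P(all surveys miss | infested)
    num = prior * miss
    return num / (num + (1.0 - prior))

# Example: 5% prior, 30% per-survey detection, three negative surveys.
print(posterior_infested(0.05, 0.3, 3))  # posterior drops below the 5% prior
```

This captures, in miniature, why absence data from imperfect surveys still carries value: every negative result moves the estimate down, even though none of them proves absence.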
Abstract:
This paper discusses the statistical analyses used to derive bridge live load models for Hong Kong from 10 years of weigh-in-motion (WIM) data. The statistical concepts required and the terminology adopted in the development of bridge live load models are introduced. This paper includes studies of representative vehicles drawn from the large amount of WIM data in Hong Kong. Load-affecting parameters such as gross vehicle weights, axle weights, axle spacings, and average daily numbers of trucks are first analyzed by various stochastic processes in order to obtain the mathematical distributions of these parameters. As a prerequisite to determining accurate bridge design loadings in Hong Kong, this study not only takes advantage of code formulation methods used internationally but also presents a new method for modelling collected WIM data using a statistical approach.
Abstract:
Overweight and obesity are a significant cause of poor health worldwide, particularly in conjunction with low levels of physical activity (PA). PA is health-protective and essential for the physical growth and development of children, promoting physical and psychological health while simultaneously increasing the probability of remaining active as an adult. However, many obese children and adolescents face a unique set of physiological, biomechanical, and neuromuscular barriers to PA that they must overcome. It is essential to understand the influence of these barriers on an obese child's motivation to exercise, and to tailor exercise programs to the special needs of this population.
Chapter Outline:
• Introduction
• Defining Physical Activity, Exercise, and Physical Fitness
• Physical Activity, Physical Fitness, and Motor Competence in Obese Children
• Physical Activity and Obesity in Children
• Physical Fitness in Obese Children
• Balance and Gait in Obese Children
• Motor Competence in Obese Children
• Physical Activity Guidelines for Obese Children
• Clinical Assessment of the Obese Child
• Physical Activity Characteristics: Mode
• Physical Activity Characteristics: Intensity
• Physical Activity Characteristics: Frequency
• Physical Activity Characteristics: Duration
• Conclusion
Abstract:
Sample complexity results from computational learning theory, when applied to neural network learning for pattern classification problems, suggest that for good generalization performance the number of training examples should grow at least linearly with the number of adjustable parameters in the network. Results in this paper show that if a large neural network is used for a pattern classification problem and the learning algorithm finds a network with small weights that has small squared error on the training patterns, then the generalization performance depends on the size of the weights rather than the number of weights. For example, consider a two-layer feedforward network of sigmoid units, in which the sum of the magnitudes of the weights associated with each unit is bounded by A and the input dimension is n. We show that the misclassification probability is no more than a certain error estimate (that is related to squared error on the training set) plus A³√((log n)/m) (ignoring log A and log m factors), where m is the number of training patterns. This may explain the generalization performance of neural networks, particularly when the number of training examples is considerably smaller than the number of weights. It also supports heuristics (such as weight decay and early stopping) that attempt to keep the weights small during training. The proof techniques appear to be useful for the analysis of other pattern classifiers: when the input domain is a totally bounded metric space, we use the same approach to give upper bounds on misclassification probability for classifiers with decision boundaries that are far from the training examples.
Abstract:
We investigate the behavior of the empirical minimization algorithm using various methods. We first analyze it by comparing the empirical (random) structure on the class with the original one, either in an additive sense, via the uniform law of large numbers, or in a multiplicative sense, using isomorphic coordinate projections. We then show that a direct analysis of the empirical minimization algorithm yields a significantly better bound, and that the estimates we obtain are essentially sharp. The method of proof we use is based on Talagrand's concentration inequality for empirical processes.
Abstract:
We study sample-based estimates of the expectation of the function produced by the empirical minimization algorithm. We investigate the extent to which one can estimate the rate of convergence of the empirical minimizer in a data dependent manner. We establish three main results. First, we provide an algorithm that upper bounds the expectation of the empirical minimizer in a completely data-dependent manner. This bound is based on a structural result due to Bartlett and Mendelson, which relates expectations to sample averages. Second, we show that these structural upper bounds can be loose, compared to previous bounds. In particular, we demonstrate a class for which the expectation of the empirical minimizer decreases as O(1/n) for sample size n, although the upper bound based on structural properties is Ω(1). Third, we show that this looseness of the bound is inevitable: we present an example that shows that a sharp bound cannot be universally recovered from empirical data.
Abstract:
We consider the problem of binary classification where the classifier can, for a particular cost, choose not to classify an observation. Just as in the conventional classification problem, minimization of the sample average of the cost is a difficult optimization problem. As an alternative, we propose the optimization of a certain convex loss function φ, analogous to the hinge loss used in support vector machines (SVMs). Its convexity ensures that the sample average of this surrogate loss can be efficiently minimized. We study its statistical properties. We show that minimizing the expected surrogate loss—the φ-risk—also minimizes the risk. We also study the rate at which the φ-risk approaches its minimum value. We show that fast rates are possible when the conditional probability P(Y=1|X) is unlikely to be close to certain critical values.
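For background (a sketch of the classical decision rule underlying this problem, not the paper's surrogate-loss method): when abstaining costs d < 1/2, the risk-minimizing classifier is Chow's rule, which rejects exactly when the conditional probability η(x) = P(Y=1|X=x) is too close to 1/2, that is, when min(η, 1 − η) > d.

```python
def chow_rule(eta, d):
    """Bayes-optimal decision with a reject option (rejection cost d < 1/2).

    Predict the likelier label unless eta is so close to 1/2 that paying
    the fixed rejection cost d beats the expected misclassification cost
    min(eta, 1 - eta)."""
    if min(eta, 1.0 - eta) > d:
        return "reject"
    return 1 if eta >= 0.5 else 0

# Near-certain cases are classified; ambiguous ones are rejected.
print(chow_rule(0.95, 0.2))  # 1
print(chow_rule(0.10, 0.2))  # 0
print(chow_rule(0.55, 0.2))  # reject
```

The critical values in the abstract's final sentence correspond to these decision thresholds d and 1 − d, where the optimal action switches between predicting and rejecting.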