966 results for Crowd density estimation


Relevance: 20.00%

Abstract:

Many traffic situations require drivers to cross or merge into a stream with higher priority. Gap acceptance theory enables us to model such processes to analyse traffic operation. This discussion demonstrated that a numerical search, fine-tuned by statistical analysis, can be used to determine the most likely critical gap for a sample of drivers, based on their largest rejected gap and accepted gap. This method shares some common features with the Maximum Likelihood Estimation technique (Troutbeck 1992) but lends itself well to contemporary analysis tools such as spreadsheets and is particularly transparent analytically. The method is considered not to bias the critical gap estimate on account of very small or very large rejected gaps. However, it requires a sample large enough that largest rejected gap/accepted gap pairs are reasonably represented within a fairly narrow highest-likelihood search band.
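The likelihood search the abstract describes can be illustrated with a small maximum-likelihood sketch. Everything below is a hypothetical reconstruction, not the paper's actual procedure: critical gaps are assumed log-normal, each driver's critical gap is taken to lie between their largest rejected gap and their accepted gap, and synthetic data stand in for field observations.

```python
import numpy as np
from scipy.stats import lognorm
from scipy.optimize import minimize

def neg_log_likelihood(params, rejected, accepted):
    """Each driver's critical gap is bracketed by their largest rejected
    gap and their accepted gap; critical gaps assumed log-normal."""
    mu, sigma = params
    if sigma <= 0:
        return np.inf
    probs = (lognorm.cdf(accepted, s=sigma, scale=np.exp(mu))
             - lognorm.cdf(rejected, s=sigma, scale=np.exp(mu)))
    if np.any(probs <= 0):
        return np.inf
    return -np.sum(np.log(probs))

# Synthetic stand-in for field data: (largest rejected gap, accepted gap), in seconds.
rng = np.random.default_rng(0)
true_crit = rng.lognormal(mean=1.4, sigma=0.25, size=200)   # ~4 s typical gap
rejected = true_crit * rng.uniform(0.6, 0.95, 200)
accepted = true_crit * rng.uniform(1.05, 1.6, 200)

res = minimize(neg_log_likelihood, x0=[1.0, 0.5],
               args=(rejected, accepted), method="Nelder-Mead")
mu_hat, sigma_hat = res.x
mean_critical_gap = np.exp(mu_hat + sigma_hat**2 / 2)
print(f"estimated mean critical gap: {mean_critical_gap:.2f} s")
```

A spreadsheet version of the same idea tabulates the likelihood over a grid of (mu, sigma) values and reads off the maximum, which is what makes the approach analytically transparent.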

Relevance: 20.00%

Abstract:

Markov chain Monte Carlo (MCMC) estimation provides a solution to the complex integration problems faced in the Bayesian analysis of statistical problems. The implementation of MCMC algorithms is, however, code-intensive and time-consuming. We have developed a Python package, PyMCMC, that aids in the construction of MCMC samplers, helps to substantially reduce the likelihood of coding error, and minimises repetitive code. PyMCMC contains classes for Gibbs, Metropolis-Hastings, independent Metropolis-Hastings, random-walk Metropolis-Hastings, orientational-bias Monte Carlo and slice samplers, as well as specific modules for common models, such as a module for Bayesian regression analysis. PyMCMC is straightforward to optimise, taking advantage of the Python libraries Numpy and Scipy, and is readily extensible with C or Fortran.
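PyMCMC's own API is not shown in the abstract, so the sketch below uses plain NumPy to illustrate the kind of sampler such a package wraps: a random-walk Metropolis-Hastings chain targeting a standard normal posterior. The function name and defaults are illustrative, not PyMCMC's.

```python
import numpy as np

def random_walk_metropolis(log_post, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: propose x' = x + N(0, step^2),
    accept with probability min(1, p(x')/p(x))."""
    rng = np.random.default_rng(seed)
    x, lp = x0, log_post(x0)
    samples = np.empty(n_samples)
    for i in range(n_samples):
        prop = x + step * rng.standard_normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # MH acceptance test
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Target: standard normal posterior, log p(x) = -x^2/2 (up to a constant).
draws = random_walk_metropolis(lambda x: -0.5 * x**2, x0=0.0, n_samples=20000)
print(draws[5000:].mean(), draws[5000:].std())   # near 0 and 1 after burn-in
```

A library such as PyMCMC layers bookkeeping (burn-in, storage, diagnostics, model modules) on top of this core accept/reject loop, which is why it can cut so much repetitive code.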

Relevance: 20.00%

Abstract:

BACKGROUND - High-density lipoprotein (HDL) protects against arterial atherothrombosis, but it is unknown whether it protects against recurrent venous thromboembolism. METHODS AND RESULTS - We studied 772 patients after a first spontaneous venous thromboembolism (average follow-up 48 months) and recorded the end point of symptomatic recurrent venous thromboembolism, which developed in 100 of the 772 patients. The relationship between plasma lipoprotein parameters and recurrence was evaluated. Plasma apolipoproteins AI and B were measured by immunoassays for all subjects. Compared with those without recurrence, patients with recurrence had lower mean (±SD) levels of apolipoprotein AI (1.12±0.22 versus 1.23±0.27 mg/mL, P<0.001) but similar apolipoprotein B levels. The relative risk of recurrence was 0.87 (95% CI, 0.80 to 0.94) for each increase of 0.1 mg/mL in plasma apolipoprotein AI. Compared with patients with apolipoprotein AI levels in the lowest tertile (<1.07 mg/mL), the relative risk of recurrence was 0.46 (95% CI, 0.27 to 0.77) for the highest-tertile patients (apolipoprotein AI >1.30 mg/mL) and 0.78 (95% CI, 0.50 to 1.22) for midtertile patients (apolipoprotein AI of 1.07 to 1.30 mg/mL). Using nuclear magnetic resonance, we determined the levels of 10 major lipoprotein subclasses and HDL cholesterol for 71 patients with recurrence and 142 matched patients without recurrence. We found a strong trend for association between recurrence and low levels of HDL particles and HDL cholesterol. CONCLUSIONS - Patients with high levels of apolipoprotein AI and HDL have a decreased risk of recurrent venous thromboembolism. © 2007 American Heart Association, Inc.

Relevance: 20.00%

Abstract:

Background-Although dyslipoproteinemia is associated with arterial atherothrombosis, little is known about plasma lipoproteins in venous thrombosis patients. Methods and Results-We determined plasma lipoprotein subclass concentrations using nuclear magnetic resonance spectroscopy and antigenic levels of apolipoproteins AI and B in blood samples from 49 male venous thrombosis patients and matched controls aged <55 years. Venous thrombosis patients had significantly lower levels of HDL particles, large HDL particles, HDL cholesterol, and apolipoprotein AI and significantly higher levels of LDL particles and small LDL particles. The quartile-based odds ratios for decreased HDL particle and apolipoprotein AI levels in patients compared with controls were 6.5 and 6.0 (95% CI, 2.3 to 19 and 2.1 to 17), respectively. Odds ratios for apolipoprotein B/apolipoprotein AI ratio and LDL cholesterol/HDL cholesterol ratio were 6.3 and 2.7 (95% CI, 1.9 to 21 and 1.1 to 6.5), respectively. When polymorphisms in genes for hepatic lipase, endothelial lipase, and cholesteryl ester transfer protein were analyzed, patients differed significantly from controls in the allelic frequency for the TaqI B1/B2 polymorphism in cholesteryl ester transfer protein, consistent with the observed pattern of lower HDL and higher LDL. Conclusions-Venous thrombosis in men aged <55 years is associated with dyslipoproteinemia involving lower levels of HDL particles, elevated levels of small LDL particles, and an elevated ratio of apolipoprotein B/apolipoprotein AI. This dyslipoproteinemia appears to be associated with a related cholesteryl ester transfer protein genotype difference. © 2005 American Heart Association, Inc.

Relevance: 20.00%

Abstract:

Bus Rapid Transit (BRT), because of its operational flexibility and simplicity, is rapidly gaining popularity with urban designers and transit planners. Earlier BRT forms were shared bus lanes or bus-only lanes, which share the roadway with general and other forms of traffic. More recently, more sophisticated BRT designs have emerged, such as the busway, which has a separate carriageway for buses and provides very high physical separation of buses from general traffic. Line capacities of a busway depend predominantly on the bus capacity of its stations. Despite new developments in BRT design, the methodology of capacity analysis is still based on the traditional principles of kerbside bus stops on bus-only lanes. Consequently, the traditional methodology fails to account for various dimensions of busway station operation, such as passenger crowding, passenger walking time, and bus lost time along the long busway station platform. This research developed a purpose-made bus capacity analysis methodology for busway stations. Extensive observations of kerbside bus stops and busway stations in Brisbane, Australia were made and the differences in their operation were studied. A large-scale data collection was conducted using video recording at the Mater Hill Busway Station on the South East Busway in Brisbane. This research identified new parameters of busway station operation and, through detailed analysis, the elements and processes that influence bus dwell time at a busway station platform. A new variable, bus lost time, was defined and quantitatively described. Based on these findings and analyses, a busway station platform bus capacity methodology was developed, comprising new models for busway station lost time, busway station dwell time, busway station loading area bus capacity, and busway station platform bus capacity.
The new methodology accounts not only for passenger boarding and alighting, but also for platform crowding and bus lost time in station platform bus capacity estimation. The applicability of the methodology was shown through demonstrative examples, which also illustrated the significance of the bus lost time variable in determining station capacities.
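The thesis's own models are not reproduced in the abstract. As a rough illustration of the kind of calculation involved, the sketch below applies a standard TCQSM-style loading-area capacity formula, with a bus lost time term folded into the clearance time as an assumption; all parameter values are hypothetical.

```python
# Illustrative only: a TCQSM-style loading-area capacity calculation,
# with "bus lost time" added to the clearance term as an assumption.
# The thesis's actual busway-station models are not reproduced here.

def loading_area_capacity(dwell=30.0, clearance=10.0, lost_time=5.0,
                          g_over_c=1.0, z=1.28, cv=0.6):
    """Buses per hour that one loading area can serve.
    dwell: mean dwell time (s); clearance: time for a bus to pull out (s);
    lost_time: assumed extra positioning/walking time on a long platform (s);
    g_over_c: effective green ratio (1.0 for an uninterrupted busway);
    z: one-tail normal variate for the design failure rate (1.28 ~ 10%);
    cv: coefficient of variation of dwell times."""
    t_c = clearance + lost_time
    return 3600.0 * g_over_c / (t_c + dwell * g_over_c + z * cv * dwell)

cap = loading_area_capacity()
print(f"{cap:.0f} buses/h per loading area")
```

The effect the abstract highlights is visible directly: every second of lost time inflates the denominator, so long platforms with substantial walking and repositioning time serve measurably fewer buses per hour than an otherwise identical kerbside stop.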

Relevance: 20.00%

Abstract:

We study model selection strategies based on penalized empirical loss minimization. We point out a tight relationship between error estimation and data-based complexity penalization: any good error estimate may be converted into a data-based penalty function and the performance of the estimate is governed by the quality of the error estimate. We consider several penalty functions, involving error estimates on independent test data, empirical VC dimension, empirical VC entropy, and margin-based quantities. We also consider the maximal difference between the error on the first half of the training data and the second half, and the expected maximal discrepancy, a closely related capacity estimate that can be calculated by Monte Carlo integration. Maximal discrepancy penalty functions are appealing for pattern classification problems, since their computation is equivalent to empirical risk minimization over the training data with some labels flipped.
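The maximal-discrepancy penalty can be sketched directly from its definition: split the training data into halves and maximise, over the class, the difference between the two empirical errors. The 1-D threshold-classifier class and data below are hypothetical illustrations, not the paper's setting.

```python
import numpy as np

def maximal_discrepancy(X, y, classifiers):
    """Max over f of err(first half) - err(second half). Computing this is
    equivalent (up to a constant) to empirical risk minimization over the
    training data with the second half's labels flipped."""
    n = len(y) // 2
    best = -np.inf
    for f in classifiers:
        pred = f(X)
        e1 = np.mean(pred[:n] != y[:n])        # error on first half
        e2 = np.mean(pred[n:2 * n] != y[n:2 * n])  # error on second half
        best = max(best, e1 - e2)
    return best

# Hypothetical 1-D data and a small class of threshold classifiers.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 200)
y = (X > 0.5).astype(int)
classifiers = [lambda X, t=t: (X > t).astype(int)
               for t in np.linspace(0, 1, 21)]
penalty = maximal_discrepancy(X, y, classifiers)
print(f"maximal discrepancy penalty: {penalty:.3f}")
```

In a penalized selection rule this quantity plays the role of the data-based complexity term: richer classes can drive the two half-sample errors further apart, so they pay a larger penalty.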

Relevance: 20.00%

Abstract:

Gradient-based approaches to direct policy search in reinforcement learning have received much recent attention as a means to solve problems of partial observability and to avoid some of the problems associated with policy degradation in value-function methods. In this paper we introduce GPOMDP, a simulation-based algorithm for generating a biased estimate of the gradient of the average reward in Partially Observable Markov Decision Processes (POMDPs) controlled by parameterized stochastic policies. A similar algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The algorithm's chief advantages are that it requires storage of only twice the number of policy parameters, uses one free parameter β ∈ [0,1) (which has a natural interpretation in terms of bias-variance trade-off), and requires no knowledge of the underlying state. We prove convergence of GPOMDP, and show how the correct choice of the parameter β is related to the mixing time of the controlled POMDP. We briefly describe extensions of GPOMDP to controlled Markov chains, continuous state, observation and control spaces, multiple-agents, higher-order derivatives, and a version for training stochastic policies with internal states. In a companion paper (Baxter, Bartlett, & Weaver, 2001) we show how the gradient estimates generated by GPOMDP can be used in both a traditional stochastic gradient algorithm and a conjugate-gradient procedure to find local optima of the average reward. ©2001 AI Access Foundation and Morgan Kaufmann Publishers. All rights reserved.
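The algorithm's core recursion (an eligibility trace of log-policy gradients, discounted by β, multiplied by the observed reward) can be sketched on a two-armed bandit, where the true gradient of the average reward is known in closed form. This is an illustrative reduction, not the paper's POMDP setting; the reward structure and step counts are assumptions.

```python
import numpy as np

def gpomdp_gradient(theta, n_steps=50000, beta=0.9, seed=0):
    """GPOMDP-style estimate: eligibility trace z <- beta*z + grad log pi(a),
    gradient estimate is the running average of reward * z. Storage is just
    two vectors the size of the policy parameters."""
    rng = np.random.default_rng(seed)
    z = np.zeros_like(theta)
    delta = np.zeros_like(theta)
    for t in range(n_steps):
        p = np.exp(theta - theta.max())
        p /= p.sum()                         # softmax policy over actions
        a = rng.choice(len(theta), p=p)
        grad_log_pi = -p.copy()
        grad_log_pi[a] += 1.0                # grad of log softmax at action a
        r = 1.0 if a == 1 else 0.0           # assumed reward: only arm 1 pays
        z = beta * z + grad_log_pi
        delta += (r * z - delta) / (t + 1)   # running average of r * z
    return delta

grad = gpomdp_gradient(np.zeros(2))
print(grad)   # should point toward increasing theta[1]
```

For this memoryless problem the exact gradient at theta = 0 is (-0.25, 0.25), so the estimate can be checked directly; in a genuine POMDP the trade-off the paper analyses appears, with larger β reducing bias at the cost of variance.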

Relevance: 20.00%

Abstract:

We consider complexity penalization methods for model selection. These methods aim to choose a model to optimally trade off estimation and approximation errors by minimizing the sum of an empirical risk term and a complexity penalty. It is well known that if we use a bound on the maximal deviation between empirical and true risks as a complexity penalty, then the risk of our choice is no more than the approximation error plus twice the complexity penalty. There are many cases, however, where complexity penalties like this give loose upper bounds on the estimation error. In particular, if we choose a function from a suitably simple convex function class with a strictly convex loss function, then the estimation error (the difference between the risk of the empirical risk minimizer and the minimal risk in the class) approaches zero at a faster rate than the maximal deviation between empirical and true risks. In this paper, we address the question of whether it is possible to design a complexity penalized model selection method for these situations. We show that, provided the sequence of models is ordered by inclusion, in these cases we can use tight upper bounds on estimation error as a complexity penalty. Surprisingly, this is the case even in situations when the difference between the empirical risk and true risk (and indeed the error of any estimate of the approximation error) decreases much more slowly than the complexity penalty. We give an oracle inequality showing that the resulting model selection method chooses a function with risk no more than the approximation error plus a constant times the complexity penalty.
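A minimal sketch of complexity-penalized selection over a nested, inclusion-ordered sequence of models, as the paper requires: polynomial classes of increasing degree, scored by empirical risk plus a penalty. The penalty here is an illustrative proxy proportional to parameter count, not the paper's tight estimation-error bound.

```python
import numpy as np

def select_model(X, y, max_degree=8, c=2.0):
    """Pick the polynomial degree minimizing empirical risk plus a
    complexity penalty c*(d+1)/n. The classes are nested by inclusion:
    degree d is contained in degree d+1."""
    n = len(y)
    best_d, best_score = 0, np.inf
    for d in range(max_degree + 1):
        coef = np.polyfit(X, y, d)                      # ERM within class d
        risk = np.mean((np.polyval(coef, X) - y) ** 2)  # empirical risk
        score = risk + c * (d + 1) / n
        if score < best_score:
            best_d, best_score = d, score
    return best_d

# Hypothetical data from a quadratic truth with small noise.
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, 200)
y = 1.0 + 2.0 * X - 1.5 * X**2 + 0.1 * rng.standard_normal(200)
print("chosen degree:", select_model(X, y))
```

Because empirical risk only decreases along the nested sequence, it is the penalty that arrests the selection; the paper's contribution is showing that a much smaller, estimation-error-sized penalty than the usual maximal-deviation bound still yields an oracle inequality.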

Relevance: 20.00%

Abstract:

We present a technique for estimating the 6DOF pose of a PTZ camera by tracking a single moving target in the image with known 3D position. This is useful in situations where it is not practical to measure the camera pose directly. Our application domain is estimating the pose of a PTZ camera so that it can be used for automated GPS-based tracking and filming of UAV flight trials. We present results which show that the technique is able to localize a PTZ camera after a short vision-tracked flight, and that the estimated pose is sufficiently accurate for the PTZ camera to then actively track a UAV based on GPS position data.

Relevance: 20.00%

Abstract:

This paper contributes to the recent debate about the role of referees in the home advantage phenomenon. Specifically, it aims to provide a convincing answer to the newly posed question of the existence of individual differences among referees in terms of the home advantage (Boyko, Boyko, & Boyko, 2007; Johnston, 2008). Using multilevel modelling on a large and representative dataset we find that (1) the home advantage effect differs significantly among referees, and (2) this relationship is moderated by the size of the crowd. These new results suggest that a part of the home advantage is due to the effect of the crowd on the referees, and that some referees are more prone to be influenced by the crowd than others. This provides strong evidence to indicate that referees are a significant contributing factor to the home advantage. The implications of these findings are discussed both in terms of the relevant social psychological research, and with respect to the selection, assessment, and training of referees.

Relevance: 20.00%

Abstract:

Estimates of the half-life to convergence of prices across a panel of cities are subject to bias from three potential sources: inappropriate cross-sectional aggregation of heterogeneous coefficients, presence of lagged dependent variables in a model with individual fixed effects, and time aggregation of commodity prices. This paper finds no evidence of heterogeneity bias in annual CPI data for 17 U.S. cities from 1918 to 2006, but correcting for the “Nickell bias” and time aggregation bias produces a half-life of 7.5 years, shorter than estimates from previous studies.
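Under an AR(1) view of city-level price deviations, the half-life maps to the persistence coefficient via hl = ln(0.5)/ln(rho); the paper's bias-corrected 7.5-year estimate then corresponds to annual persistence of roughly 0.91. The sketch below makes that arithmetic explicit; the AR(1) framing is a standard simplification, not a claim about the paper's exact estimator.

```python
import math

def half_life(rho):
    """Years for a deviation to halve in an AR(1) process with coefficient rho."""
    return math.log(0.5) / math.log(rho)

def rho_from_half_life(hl):
    """Annual AR(1) coefficient implied by a given half-life in years."""
    return 0.5 ** (1.0 / hl)

rho = rho_from_half_life(7.5)   # persistence implied by a 7.5-year half-life
print(f"rho = {rho:.3f}, half-life = {half_life(rho):.1f} years")
```

The mapping is steep near rho = 1, which is why the Nickell and time-aggregation corrections (small changes in the estimated coefficient) shorten the implied half-life so noticeably.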