863 results for Traffic Forecasting
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. Despite the extent of recent work on collision risk analysis in port waters, little is known about the factors influencing that risk. This paper develops a technique for modeling collision risks in port waterways in order to examine the associations between the risks and the geometric, traffic, and regulatory control characteristics of waterways. A binomial logistic model, which accounts for the correlations in the risks of a particular fairway at different time periods, is derived from traffic conflicts and calibrated for the Singapore port fairways. Estimation results show that fairways attached to the shoreline, to traffic intersections and to international fairways carry higher risks, whereas those attached to confined waters and local fairways carry lower risks. Higher risks are also found in fairways featuring a higher degree of bend, lower water depth, higher numbers of cardinal and isolated danger marks, higher densities of moving ships and lower operating speeds. Risks are also higher in night-time conditions.
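As a rough illustration of the modelling idea, the sketch below fits a binomial logistic model that accounts for within-fairway correlation across time periods using generalized estimating equations; the input file and covariate names (bend_degree, water_depth, and so on) are hypothetical stand-ins for the paper's variables, not its actual specification.

```python
# Sketch: binomial logistic model with within-fairway correlation across
# time periods, via GEE with an exchangeable working correlation structure.
# All column names and the input file are hypothetical assumptions.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("fairway_conflicts.csv")  # hypothetical input file

model = smf.gee(
    "conflict ~ bend_degree + water_depth + danger_marks + ship_density"
    " + operating_speed + C(night_time)",
    groups="fairway_id",                      # repeated observations per fairway
    data=df,
    family=sm.families.Binomial(),            # binomial logistic link
    cov_struct=sm.cov_struct.Exchangeable(),  # correlation across time periods
)
result = model.fit()
print(result.summary())
```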
Abstract:
This paper investigates the relationship between traffic conditions and crash occurrence likelihood (COL) using the I-880 data. To remedy the data limitations and methodological shortcomings of previous studies, a multiresolution data processing method is proposed and implemented, upon which binary logistic models are developed. The major findings of this paper are: 1) traffic conditions have significant impacts on COL at the study site; specifically, COL in congested (transitioning) traffic flow is about 6 (1.6) times that in free-flow conditions; 2) speed variance alone is not sufficient to capture the impact of traffic dynamics on COL; a traffic chaos indicator that integrates speed, speed variance, and flow is proposed and shows promising performance; 3) models based on aggregated data should be interpreted with caution; in general, conclusions obtained from such models should not be generalized to individual vehicles (drivers) without further evidence from high-resolution data, and it is dubious to either claim or disclaim that "speed kills" on the basis of aggregated data.
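The abstract does not give the functional form of the chaos indicator, so the sketch below uses a hypothetical composite (relative speed dispersion scaled by flow) purely to show how such an indicator could enter a binary logistic model for COL; the input file and columns are placeholders.

```python
# Sketch: a composite "traffic chaos" indicator feeding a binary logistic
# model of crash occurrence likelihood (COL). The coefficient-of-variation-
# times-flow form below is a hypothetical placeholder, not the paper's formula.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("i880_5min_aggregates.csv")  # hypothetical input file

# Hypothetical chaos indicator: relative speed dispersion scaled by flow.
df["chaos"] = (np.sqrt(df["speed_var"]) / df["speed_mean"]) * df["flow"]

result = smf.logit("crash ~ chaos", data=df).fit()
print(result.summary())
```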
Abstract:
Most crash severity studies have ignored severity correlations between driver-vehicle units involved in the same crash. Models that do not account for these within-crash correlations produce biased estimates of factor effects. This study developed a Bayesian hierarchical binomial logistic model to identify the significant factors affecting the severity of driver injury and vehicle damage in traffic crashes at signalized intersections. Crash data from Singapore were employed to calibrate the model. Model fitness assessment and comparison using the Intra-class Correlation Coefficient (ICC) and the Deviance Information Criterion (DIC) confirmed the suitability of introducing crash-level random effects. Crashes occurring at peak times, under good street lighting, or involving pedestrian injuries are associated with lower severity, while those occurring at night, at T/Y-type intersections, in the right-most lane, or at intersections equipped with red-light cameras have greater odds of being severe. Moreover, heavy vehicles offer better protection against severe outcomes, while crashes involving two-wheeled vehicles, young or aged drivers, or an offending party are more likely to result in severe injuries.
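A minimal sketch of the modelling idea, using PyMC with a crash-level random intercept to capture within-crash correlation; the toy data, priors and covariates are illustrative assumptions, not the paper's calibrated model.

```python
# Sketch: Bayesian hierarchical binomial logistic model with a crash-level
# random intercept shared by the driver-vehicle units of the same crash.
import numpy as np
import pymc as pm

# y[i]: 1 if driver-vehicle unit i had a severe outcome (toy data).
# crash_idx[i]: index of the crash that unit i belongs to.
y = np.array([1, 0, 1, 1, 0, 0])
X = np.random.default_rng(0).normal(size=(6, 3))  # unit-level covariates
crash_idx = np.array([0, 0, 1, 1, 2, 2])
n_crashes = 3

with pm.Model() as model:
    beta = pm.Normal("beta", 0.0, 10.0, shape=X.shape[1])
    sigma_u = pm.HalfNormal("sigma_u", 1.0)
    u = pm.Normal("u", 0.0, sigma_u, shape=n_crashes)  # crash-level effect
    eta = pm.math.dot(X, beta) + u[crash_idx]
    pm.Bernoulli("y", logit_p=eta, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```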
Abstract:
The Poisson distribution has often been used for count data such as accident counts, and the Negative Binomial (NB) distribution has been adopted to address the over-dispersion problem. However, the Poisson and NB distributions cannot account for some unobserved heterogeneities arising from spatial and temporal effects in accident data. To overcome this problem, random effect models have been developed. Another challenge for existing traffic accident prediction models is the excess of zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model can handle the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that motivate the need for random effect models in the first place. This paper proposes an effective way of fitting a ZIP model with location-specific random effects; Bayesian analysis is recommended for model calibration and assessment.
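A minimal sketch of the proposed combination, fitting a zero-inflated Poisson with location-specific random effects on the Poisson mean in a Bayesian framework with PyMC; the toy counts and priors are assumptions for illustration, not the paper's calibration.

```python
# Sketch: zero-inflated Poisson with location-specific random effects.
import numpy as np
import pymc as pm

y = np.array([0, 0, 3, 0, 1, 7, 0, 2])       # accident counts (toy data)
loc_idx = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # site of each observation
n_locs = 4

with pm.Model() as zip_model:
    psi = pm.Beta("psi", 1.0, 1.0)                # prob. of the Poisson state
    b0 = pm.Normal("b0", 0.0, 10.0)
    sigma = pm.HalfNormal("sigma", 1.0)
    u = pm.Normal("u", 0.0, sigma, shape=n_locs)  # location random effects
    mu = pm.math.exp(b0 + u[loc_idx])
    pm.ZeroInflatedPoisson("y", psi=psi, mu=mu, observed=y)
    idata = pm.sample(1000, tune=1000)
```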
Abstract:
Navigational collisions are one of the major safety concerns in many seaports. To address this concern, a comprehensive and structured method of collision risk management is necessary. Traditionally, management of port water collision risks has relied on historical collision data. However, this collision-data-based approach is hampered by several shortcomings: the randomness and rarity of collision occurrences yield too few samples for sound statistical analysis, collision records explain little about collision causation, and the approach to safety is reactive. A promising alternative that overcomes these shortcomings is the navigational traffic conflict technique, which uses traffic conflicts in place of collision data. This paper proposes a collision risk management method built on the principles of this technique. The method allows safety analysts to diagnose safety deficiencies proactively and consequently has great potential for managing collision risks in a fast, reliable and efficient manner.
Abstract:
Forecasts generated by time series models traditionally place greater weight on more recent observations. This paper develops an alternative semi-parametric method for forecasting that does not rely on this convention and applies it to the problem of forecasting asset return volatility. In this approach, a forecast is a weighted average of historical volatility, with the greatest weight given to periods that exhibit similar market conditions to the time at which the forecast is being formed. Weighting is determined by comparing short-term trends in volatility across time (as a measure of market conditions) by means of a multivariate kernel scheme. It is found that the semi-parametric method produces forecasts that are significantly more accurate than a number of competing approaches at both short and long forecast horizons.
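A stylised sketch of the weighting scheme described above, assuming the short-term volatility trends are compared with a Gaussian product kernel; the window length and bandwidth are illustrative choices, not the paper's tuned values.

```python
# Sketch: forecast volatility as a kernel-weighted average of historical
# volatility, weighting past dates whose recent trend resembles today's.
import numpy as np

def kernel_vol_forecast(vol, trend_len=5, bandwidth=1.0):
    """vol: 1-D array of historical volatility; returns a one-step forecast."""
    diffs = np.diff(vol)                  # day-to-day volatility changes
    current = diffs[-trend_len:]          # today's short-term trend
    weights, targets = [], []
    for t in range(trend_len, len(diffs)):
        past = diffs[t - trend_len:t]     # trend ending at date t
        # Multivariate Gaussian product kernel on the trend vectors.
        w = np.exp(-0.5 * np.sum(((past - current) / bandwidth) ** 2))
        weights.append(w)
        targets.append(vol[t + 1])        # volatility that followed that trend
    weights = np.asarray(weights)
    return np.dot(weights, targets) / weights.sum()
```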
Abstract:
Particles emitted by vehicles are known to cause detrimental health effects, with their size and oxidative potential among the main factors responsible. Therefore, understanding the relationship between traffic composition and both the physical characteristics and oxidative potential of particles is critical. To contribute to the limited knowledge base in this area, we investigated this relationship in a 4.5 km road tunnel in Brisbane, Australia. On-road concentrations of ultrafine particles (<100 nm, UFPs), fine particles (PM2.5), CO, CO2 and particle associated reactive oxygen species (ROS) were measured using vehicle-based mobile sampling. UFPs were measured using a condensation particle counter and PM2.5 with a DustTrak aerosol photometer. A new profluorescent nitroxide probe, BPEAnit, was used to determine ROS levels. Comparative measurements were also performed on an above-ground road to assess the role of emission dilution on the parameters measured. The profile of UFP and PM2.5 concentration with distance through the tunnel was determined, and demonstrated relationships with both road gradient and tunnel ventilation. ROS levels in the tunnel were found to be high compared to an open road with similar traffic characteristics, which was attributed to the substantial difference in estimated emission dilution ratios on the two roadways. Principal component analysis (PCA) revealed that the levels of pollutants and ROS were generally better correlated with total traffic count, rather than the traffic composition (i.e. diesel and gasoline-powered vehicles). A possible reason for the lack of correlation with HDV, which has previously been shown to be strongly associated with UFPs especially, was the low absolute numbers encountered during the sampling. This may have made their contribution to in-tunnel pollution largely indistinguishable from the total vehicle volume. For ROS, the stronger association observed with HDV and gasoline vehicles when combined (total traffic count) compared to when considered individually may signal a role for the interaction of their emissions as a determinant of on-road ROS in this pilot study. If further validated, this should not be overlooked in studies of on- or near-road particle exposure and its potential health effects.
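For readers unfamiliar with the technique, the sketch below shows a PCA of the kind described, relating pollutant and ROS levels to traffic counts; the input file and column names are hypothetical placeholders for the study's measurements.

```python
# Sketch: PCA relating pollutant/ROS levels to traffic variables.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("tunnel_runs.csv")  # hypothetical per-run averages
cols = ["ufp", "pm25", "co", "ros", "total_traffic", "hdv", "gasoline"]

X = StandardScaler().fit_transform(df[cols])  # standardise before PCA
pca = PCA(n_components=3).fit(X)

# Loadings show which traffic variables co-vary with pollutants and ROS.
loadings = pd.DataFrame(pca.components_.T, index=cols,
                        columns=["PC1", "PC2", "PC3"])
print(pca.explained_variance_ratio_)
print(loadings)
```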
Abstract:
Decision tables and decision rules play an important role in rough set based data analysis, which compresses databases into granules and describes the associations between granules. Granule mining was proposed to interpret decision rules in terms of association rules and a multi-tier structure. In this paper, we further extend granule mining to describe the relationships between granules not only by traditional support and confidence, but also by diversity and condition diversity. Diversity measures how diversely a granule is associated with other granules; it provides a novel kind of knowledge in databases. Experiments were conducted to test the proposed concepts on a real network traffic data collection, and the results show that the proposed concepts are promising.
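The paper's precise definition of diversity is not reproduced above, so the sketch below adopts one plausible reading (the normalised number of distinct decision granules a condition granule associates with) alongside standard support and confidence; treat the diversity formula as an assumption.

```python
# Sketch: support, confidence, and an assumed diversity measure computed
# over granules of a decision table. Attribute names are hypothetical.
import pandas as pd

df = pd.read_csv("traffic_flows.csv")         # hypothetical decision table
cond, dec = ["proto", "src_port"], ["label"]  # hypothetical attribute tiers

n = len(df)
n_dec_granules = df.groupby(dec).ngroups

for cg, rows in df.groupby(cond):
    support = len(rows) / n
    assoc = rows.groupby(dec).size()
    confidence = assoc.max() / len(rows)      # strongest decision association
    diversity = len(assoc) / n_dec_granules   # spread across decision granules
    print(cg, round(support, 3), round(confidence, 3), round(diversity, 3))
```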
Abstract:
Traffic safety studies demand more than current micro-simulation models can provide, as these models presume that all drivers exhibit safe behaviors. Every microscopic traffic simulation model includes a car-following model. This paper highlights the limitations of the Gipps car-following model's ability to emulate driver behavior for safety study purposes. A safety-adapted car-following model based on the Gipps model is proposed to simulate unsafe vehicle movements, with safety indicators below critical thresholds. The modifications are based on observations of driver behavior in real data and on psychophysical notions. NGSIM vehicle trajectory data are used to evaluate the new model, with short following headways and Time To Collision employed to identify critical safety events within the traffic flow; risky events extracted from the available NGSIM data serve as the evaluation benchmark. Simulation tests show that the proposed model predicts these safety metrics better than the generic Gipps model. The outcome of this paper can potentially facilitate assessing and predicting traffic safety using microscopic simulation.
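For context, the sketch below implements one update step of the generic Gipps model that the proposed safety-adapted model modifies; the parameter defaults are illustrative values, not calibrated ones.

```python
# Sketch: one speed update of the Gipps car-following model. The new speed
# is the minimum of an acceleration (free-flow) term and a safe-braking term.
import math

def gipps_speed(v, v_lead, gap, a=1.7, b=3.0, b_hat=3.0,
                V=30.0, tau=0.7, s=6.5):
    """Speed at t + tau. v, v_lead in m/s; gap = front-to-front spacing (m);
    s = effective leader size incl. standstill margin (m); b, b_hat > 0."""
    # Free-flow branch: accelerate toward the desired speed V.
    v_acc = v + 2.5 * a * tau * (1 - v / V) * math.sqrt(0.025 + v / V)
    # Safe branch: highest speed from which the follower can still stop
    # behind the leader if the leader brakes at an estimated rate b_hat.
    disc = b**2 * tau**2 + b * (2 * (gap - s) - v * tau + v_lead**2 / b_hat)
    v_safe = -b * tau + math.sqrt(max(disc, 0.0))
    return max(0.0, min(v_acc, v_safe))
```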
Abstract:
The purpose of traffic law enforcement is to encourage compliant driver behaviour. That is, the threat of an undesirable sanction encourages drivers to comply with traffic laws. However, not all traffic law violations are considered equal. For example, while drink driving is generally seen as socially unacceptable, behaviours such as speeding are arguably less so, and speed enforcement is often portrayed in the popular media as a means of "revenue raising". The perceived legitimacy of traffic law enforcement has received limited research attention to date. Perceived legitimacy of traffic law enforcement may influence (or be influenced by) attitudes toward illegal driving behaviours, and both of these factors are likely to influence on-road driving behaviour. This study used self-reported data from a large sample of drivers to explore attitudes toward a number of illegal driving behaviours and the traffic law enforcement approaches that typically target them. The results of this research can be used to inform further research in this area, as well as the content of public education and advertising campaigns designed to influence attitudes toward illegal driving behaviours and the perceived legitimacy of traffic law enforcement.
Abstract:
Most unsignalised intersection capacity calculation procedures are based on gap acceptance models, so the accuracy of critical gap estimation affects the accuracy of capacity and delay estimation. Several methods have been published to estimate drivers' sample mean critical gap, with the Maximum Likelihood Estimation (MLE) technique regarded as the most accurate. This study assesses three novel methods against MLE for their fidelity in rendering true sample mean critical gaps: the Average Central Gap (ACG) method, the Strength Weighted Central Gap (SWCG) method, and the Mode Central Gap (MCG) method. A Monte Carlo event-based simulation model was used to draw the maximum rejected gap and the accepted gap for each of a sample of 300 drivers across 32 simulation runs. The simulation mean critical gap is varied between 3 s and 8 s, while the offered gap rate is varied between 0.05 veh/s and 0.55 veh/s. This study affirms that MLE provides a close-to-perfect fit to simulation mean critical gaps across a broad range of conditions. The MCG method also provides an almost perfect fit and has superior computational simplicity and efficiency to MLE. The SWCG method performs robustly under high flows but poorly under low to moderate flows. Further research is recommended using field traffic data, under a variety of minor-stream and major-stream flow conditions and minor-stream movement types, to compare critical gap estimates from MLE and MCG. Should the MCG method prove as robust as MLE, serious consideration should be given to its adoption for estimating critical gap parameters in guidelines.
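A minimal sketch of the MLE baseline, assuming log-normally distributed critical gaps bounded for each driver by the largest rejected gap and the accepted gap (the usual assumption in this literature); the toy data are illustrative.

```python
# Sketch: MLE of the mean critical gap from each driver's largest rejected
# gap and accepted gap, assuming log-normal critical gaps.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import lognorm

def neg_log_lik(params, rejected, accepted):
    mu, sigma = params[0], abs(params[1]) + 1e-9
    F = lambda x: lognorm.cdf(x, s=sigma, scale=np.exp(mu))
    # Each driver's critical gap lies between max rejected and accepted gap.
    p = np.clip(F(accepted) - F(rejected), 1e-12, None)
    return -np.sum(np.log(p))

rejected = np.array([2.1, 3.4, 2.8, 4.0])  # toy data (seconds)
accepted = np.array([4.2, 5.1, 4.4, 6.3])

res = minimize(neg_log_lik, x0=[np.log(4.0), 0.3], args=(rejected, accepted))
mu, sigma = res.x[0], abs(res.x[1])
mean_critical_gap = np.exp(mu + sigma**2 / 2)  # mean of a log-normal
print(round(mean_critical_gap, 2), "s")
```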
Abstract:
Forecasts of volatility and correlation are important inputs into many practical financial problems. Broadly speaking, there are two ways of generating forecasts of these variables. First, time-series models apply a statistical weighting scheme to historical measurements of the variable of interest. The alternative methodology extracts forecasts from the market-traded values of option contracts. An efficient options market should be able to produce superior forecasts, as it utilises a larger information set comprising not only historical information but also the market equilibrium expectations of options market participants. While much research has been conducted into the relative merits of these approaches, this thesis extends the literature along several lines through three empirical studies. First, it is demonstrated that there are statistically significant benefits to adjusting implied volatility for the volatility risk premium for the purposes of univariate volatility forecasting. Second, high-frequency option-implied measures are shown to lead to superior forecasts of the stochastic component of intraday volatility, and these in turn lead to superior forecasts of total intraday volatility. Finally, realised and option-implied measures of equicorrelation are shown to dominate measures based on daily returns.
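As a hedged illustration of how such forecasts are typically compared (not the thesis's exact methodology), the sketch below runs a Mincer-Zarnowitz-style encompassing regression of realised volatility on an option-implied forecast and a historical EWMA benchmark; the series names are placeholders.

```python
# Sketch: forecast-encompassing regression of realised volatility on an
# implied-volatility forecast and a RiskMetrics-style EWMA benchmark.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("vol_forecasts.csv")  # hypothetical: rv, implied, returns

# Historical benchmark: EWMA of squared returns.
df["ewma"] = (df["returns"] ** 2).ewm(alpha=0.06).mean()

# If implied subsumes the EWMA forecast, the ewma coefficient should be ~0.
result = smf.ols("rv ~ implied + ewma", data=df).fit()
print(result.summary())
```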
Abstract:
The serviceability and safety of bridges are crucial to people's daily lives and to the national economy. Every effort should be taken to ensure that bridges function safely and properly, as any damage or fault during the service life can lead to transport paralysis, catastrophic loss of property or even casualties. Nonetheless, aggressive environmental conditions, ever-increasing and changing traffic loads, and aging can all contribute to bridge deterioration. With often constrained budgets, it is important to identify the bridges and bridge elements that should be given higher priority for maintenance, rehabilitation or replacement, and to select the optimal strategy. Bridge health prediction is an essential underpinning science for bridge maintenance optimization, since the effectiveness of an optimal maintenance decision depends largely on the forecasting accuracy of bridge health performance. Current approaches for bridge health prediction can be categorised into two groups: condition-ratings-based and structural-reliability-based. A comprehensive literature review revealed the following limitations of current modelling approaches: (1) no integrated approaches are evident in the literature to date for modelling both serviceability and safety aspects so that both performance criteria can be evaluated coherently; (2) complex system modelling approaches have not been successfully applied to bridge deterioration modelling, even though a bridge is a complex system composed of many inter-related elements; (3) multiple deterioration factors, such as deterioration dependencies among different bridge elements, observed information, maintenance actions and environmental effects, have not been considered jointly; (4) existing approaches lack the Bayesian updating ability to incorporate a variety of event information; (5) the assumption of series and/or parallel relationships for bridge-level reliability is held throughout structural reliability estimation of bridge systems.

To address these deficiencies, this research proposes three novel models based on the Dynamic Object Oriented Bayesian Networks (DOOBNs) approach. Model I addresses bridge deterioration in serviceability using condition ratings as the health index. Bridge deterioration is represented in a hierarchical relationship, in accordance with the physical structure, so that the contribution of each bridge element to overall deterioration can be tracked. A discrete-time Markov process is employed to model the deterioration of bridge elements over time. Model II addresses bridge deterioration in terms of safety. The structural reliability of bridge systems is estimated from the bridge elements up to the entire bridge. By means of conditional probability tables (CPTs), not only series-parallel relationships but also complex probabilistic relationships in bridge systems can be effectively modelled. The structural reliability of each bridge element is evaluated from its limit state functions, considering the probability distributions of resistance and applied load. Both Models I and II are designed in three steps: modelling consideration, DOOBN development and parameter estimation. Model III integrates Models I and II to address bridge health performance in both serviceability and safety aspects jointly. The modelling of bridge ratings is modified so that every basic modelling unit denotes one physical bridge element, and, according to the specific materials used, the integration of condition ratings and structural reliability is implemented through critical failure modes.

Three case studies were conducted to validate the proposed models. Carefully selected data and knowledge from bridge experts, the National Bridge Inventory (NBI) and the existing literature were utilised for model validation. In addition, event information was generated by simulation to demonstrate the Bayesian updating ability of the proposed models. The prediction results for condition ratings and structural reliability were presented and interpreted for basic bridge elements and the whole bridge system, and the results obtained from Model II were compared with those from traditional structural reliability methods. Overall, the prediction results demonstrate the feasibility of the proposed modelling approach for bridge health prediction and underpin the assertion that the three models can be used separately or in an integrated manner, and that they are more effective than current bridge deterioration modelling approaches. The primary contribution of this work is to enhance knowledge in the field of bridge health prediction, where more comprehensive health performance covering both serviceability and safety is addressed jointly. The proposed models, characterised by a probabilistic, hierarchical representation of bridge deterioration, demonstrate the effectiveness and promise of the DOOBN approach for bridge health management. They also have significant potential for bridge maintenance optimization: working together with advanced monitoring and inspection techniques and a comprehensive bridge inventory, the proposed models can be used by bridge practitioners to achieve increased serviceability and safety as well as maintenance cost-effectiveness.
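To make the Model I machinery concrete, the sketch below propagates a condition-rating distribution through a discrete-time Markov deterioration process; the transition matrix is illustrative, not a calibrated one from the thesis.

```python
# Sketch: discrete-time Markov deterioration of a bridge element's
# condition rating. Rows of P give next-year transition probabilities.
import numpy as np

# States: condition ratings 1 (best) .. 4 (worst).
P = np.array([[0.90, 0.10, 0.00, 0.00],
              [0.00, 0.85, 0.15, 0.00],
              [0.00, 0.00, 0.80, 0.20],
              [0.00, 0.00, 0.00, 1.00]])  # worst state is absorbing

state = np.array([1.0, 0.0, 0.0, 0.0])    # element starts in rating 1
for year in range(1, 31):
    state = state @ P                     # one year of deterioration
    if year % 10 == 0:
        print(f"year {year}: {np.round(state, 3)}")
```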
Abstract:
Our aim is to develop a set of leading performance indicators that enable managers of large projects to forecast, during project execution, how various stakeholders will perceive success months or even years into the operation of the project's output. Large projects have many stakeholders with different objectives for the project, its output, and the business benefits it will deliver. The output of a large project may have a lifetime of years or even decades, and ultimate impacts that go beyond its immediate operation. How different stakeholders perceive success can change with time, so the project manager needs leading performance indicators that go beyond the traditional triple constraint to forecast how key stakeholders will perceive success months or even years later. In this article, we develop a model for project success that identifies how project stakeholders might perceive success in the months and years following a project. We identify success or failure factors that will facilitate or militate against the achievement of those success criteria, and a set of potential leading performance indicators that forecast how stakeholders will perceive success during the life of the project's output. We conducted a scale development study with 152 managers of large projects and identified two project success factor scales and seven stakeholder satisfaction scales that project managers can use to predict stakeholder satisfaction on projects and thus as a basis for project control.