893 results for Traffic congestion
Abstract:
Assessing and prioritising cost-effective strategies to mitigate the impacts of traffic incidents and accidents on non-recurrent congestion on major roads represents a significant challenge for road network managers. This research examines the influence of numerous factors associated with incidents of various types on their duration. It presents a comprehensive traffic incident data mining and analysis effort, developing an incident duration model based on twelve months of incident data obtained from the Australian freeway network. Parametric accelerated failure time (AFT) survival models of incident duration were developed, including log-logistic, lognormal, and Weibull, considering both fixed and random parameters, as well as a Weibull model with gamma heterogeneity. The Weibull AFT models with random parameters were appropriate for modelling incident durations arising from crashes and hazards, while a Weibull model with gamma heterogeneity was most suitable for modelling the durations of stationary-vehicle incidents. Significant variables affecting incident duration include characteristics of the incident (severity, type, towing requirements, etc.) as well as its location, time of day, and traffic characteristics. Moreover, the findings reveal no significant effects of infrastructure and weather on incident duration. A significant and unique contribution of this paper is the finding that the durations of each incident type are uniquely different and respond to different factors. The results of this study are useful for traffic incident management agencies implementing strategies to reduce incident duration, thereby reducing congestion, secondary incidents, and the associated human and economic losses.
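The AFT setup described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' model: it simulates incident durations from a Weibull AFT process with one hypothetical covariate (whether towing was required) and recovers the parameters by maximum likelihood with SciPy.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical covariate: 1 if towing was required, else 0 (illustrative only)
n = 2000
towing = rng.integers(0, 2, n)
beta_true = np.array([3.0, 0.5])   # intercept and towing effect on log-duration
shape_true = 2.0                   # Weibull shape k = 1/sigma

# Weibull AFT: T = exp(b0 + b1*x) * W, where W is a standard Weibull(k) draw
scale_true = np.exp(beta_true[0] + beta_true[1] * towing)
t = scale_true * rng.weibull(shape_true, n)

def neg_loglik(params):
    b0, b1, log_k = params
    k = np.exp(log_k)                      # keep the shape parameter positive
    scale = np.exp(b0 + b1 * towing)
    z = t / scale
    # Weibull log-density: log k - log scale + (k-1) log z - z^k
    ll = np.log(k) - np.log(scale) + (k - 1) * np.log(z) - z**k
    return -ll.sum()

init = [np.log(t.mean()), 0.0, 0.0]
res = minimize(neg_loglik, init, method="Nelder-Mead",
               options={"maxiter": 5000})
b0_hat, b1_hat, k_hat = res.x[0], res.x[1], np.exp(res.x[2])
# exp(b1) is the "time ratio": the factor by which towing stretches duration
print(round(b1_hat, 2), round(np.exp(b1_hat), 2))
```

In an AFT model a coefficient b multiplies survival time by exp(b), so a recovered b1 near 0.5 means towing-type incidents last roughly 65% longer in this synthetic setup.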
Abstract:
The IEEE Wireless LAN standard has been a true success story, enabling convenient, efficient and low-cost access to broadband networks for both private and professional use. However, the increasing density and uncoordinated operation of wireless access points, combined with constantly growing traffic demands, have started hurting the users' quality of experience. At the same time, the emerging ubiquity of wireless access has placed it at the center of attention for network attacks, which not only raises users' concerns about security but also indirectly affects connection quality due to proactive measures against security attacks. In this work, we introduce an integrated solution to the congestion avoidance and attack mitigation problems through cooperation among wireless access points. The proposed solution implements a Partially Observable Markov Decision Process (POMDP) as an intelligent distributed control system. By successfully differentiating resource-hampering attacks from overload cases, the control system takes an appropriate action in each detected anomaly case without disturbing the quality of service for end users. The proposed solution is fully implemented on a small-scale testbed, on which we present our observations and demonstrate the effectiveness of the system in detecting and alleviating both attack and congestion situations.
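At the core of a POMDP controller like the one described is a Bayesian belief update over hidden states. The toy three-state model below (normal / congested / under attack) and all its probabilities are invented for illustration; only the update rule itself is standard.

```python
import numpy as np

# Toy model for one access point (hypothetical numbers):
# states: 0 = normal, 1 = congested, 2 = under attack
# T[a][s, s'] : transition probabilities under action a (0 = no-op, 1 = rate-limit)
T = np.array([
    [[0.90, 0.07, 0.03],
     [0.30, 0.60, 0.10],
     [0.05, 0.10, 0.85]],
    [[0.95, 0.04, 0.01],
     [0.60, 0.35, 0.05],
     [0.10, 0.10, 0.80]],
])
# O[s', o] : probability of observation o (0 = low load, 1 = high load) in state s'
O = np.array([[0.85, 0.15],
              [0.20, 0.80],
              [0.25, 0.75]])

def belief_update(b, a, o):
    """Bayes filter at the heart of a POMDP controller:
    b'(s') ∝ O(o | s') * sum_s T(s' | s, a) * b(s)."""
    b_pred = b @ T[a]           # predict: marginalise over the current state
    b_new = O[:, o] * b_pred    # correct: weight by the observation likelihood
    return b_new / b_new.sum()  # normalise to a probability vector

b = np.array([1.0, 0.0, 0.0])       # start certain the AP is in 'normal'
b = belief_update(b, a=0, o=1)      # observe high load after a no-op
print(np.round(b, 3))
```

The controller would then pick the action that maximises expected value under this belief, which is how it can treat "probably congested" differently from "probably under attack".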
Abstract:
Crashes on motorways contribute a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing crashes will help address congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists, and that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with the traffic flow data of the hour prior to each crash using an incident detection algorithm. Traffic flow trends (traffic speed/occupancy time series) revealed that crashes could be clustered with regard to the dominant traffic flow pattern prior to the crash. Using the k-means clustering method allowed the crashes to be clustered based on their flow trends rather than their physical distance. Four major trends were found in the clustering results. Based on these findings, crash likelihood estimation algorithms can be fine-tuned to the monitored traffic flow conditions with a sliding window of 60 minutes to increase the accuracy of the results and minimize false alarms.
Abstract:
Crashes that occur on motorways contribute a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Crash likelihood estimation studies commonly focus on traffic conditions in a short time window around the time of a crash, while longer-term pre-crash traffic flow trends are neglected. In this paper we show, through data mining techniques, that a relationship between pre-crash traffic flow patterns and crash occurrence on motorways exists. We compare these patterns with normal traffic trends and show that this knowledge has the potential to improve the accuracy of existing models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash. Using the K-Means clustering method with a Euclidean distance function allowed the crashes to be clustered. Then, normal-situation data were extracted based on the time distribution of crashes and clustered for comparison with the “high risk” clusters. Five major trends were found in the clustering results for both high-risk and normal conditions, and the identified traffic regimes differed in their speed trends. Based on these findings, crash likelihood estimation models can be fine-tuned to the monitored traffic conditions with a sliding window of 30 minutes to increase the accuracy of the results and minimize false alarms.
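The clustering step described above — K-Means with Euclidean distance over speed time series — can be sketched with synthetic profiles. The decaying and stable shapes below are illustrative stand-ins, not the expressway data.

```python
import numpy as np

rng = np.random.default_rng(1)

def kmeans(X, k, iters=50):
    """Plain K-Means (Lloyd's algorithm) with Euclidean distance, the setup
    the abstract describes for clustering pre-crash speed time series."""
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # distance of every series to every centroid, shape (n, k)
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Synthetic stand-ins for pre-crash speed profiles (12 x 5-minute bins):
# one group decays toward congestion, the other stays at free flow.
t = np.arange(12)
decaying = 90 - 4 * t + rng.normal(0, 3, (40, 12))   # speed drops before crash
stable   = 90 + rng.normal(0, 3, (40, 12))           # near-constant free flow
X = np.vstack([decaying, stable])

labels, cents = kmeans(X, k=2)
# The two recovered centroids should separate the falling and flat trends.
print(sorted(np.round(cents[:, -1]).tolist()))
```

In the study's setting, each recovered centroid would correspond to one "dominant pre-crash trend", and the normal-situation series would be clustered the same way for comparison.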
Abstract:
Crashes that occur on motorways contribute a significant proportion (40-50%) of non-recurrent motorway congestion. Hence, reducing the frequency of crashes assists in addressing congestion issues (Meyer, 2008). Analysing traffic conditions and discovering risky traffic trends and patterns are essential foundations of crash likelihood estimation studies and still require more attention and investigation. In this paper we show, through data mining techniques, that there is a relationship between pre-crash traffic flow patterns and crash occurrence on motorways, compare these patterns with normal traffic trends, and demonstrate that this knowledge has the potential to improve the accuracy of existing crash likelihood estimation models and opens the path for new development approaches. The data for the analysis were extracted from records collected between 2007 and 2009 on the Shibuya and Shinjuku lines of the Tokyo Metropolitan Expressway in Japan. The dataset includes a total of 824 rear-end and sideswipe crashes that were matched with their corresponding traffic flow data using an incident detection algorithm. Traffic trends (traffic speed time series) revealed that crashes can be clustered with regard to the dominant traffic patterns prior to the crash occurrence. A K-Means clustering algorithm was applied to determine the dominant pre-crash traffic patterns. In the first phase of this research, traffic regimes were identified by analysing crashes and normal traffic situations using half an hour of speed data at locations upstream of crashes. The second phase then investigated different combinations of speed-based risk indicators to distinguish crashes from normal traffic situations more precisely. Five major trends were found in the first phase for both high-risk and normal conditions, and the identified traffic regimes differed in their speed trends.
Moreover, the second phase shows that the spatiotemporal difference of speed is the best risk indicator among the combinations of speed-related risk indicators tested. Based on these findings, crash likelihood estimation models can be fine-tuned to increase the accuracy of estimations and minimize false alarms.
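One way a spatiotemporal speed-difference indicator can be formed is sketched below. The abstract does not give the exact formulation, so the combination of a spatial gap and a temporal drop here is purely hypothetical, as are the detector readings.

```python
import numpy as np

# Synthetic 5-minute speed bins at two hypothetical detectors (km/h)
up   = np.array([88, 86, 80, 70, 55, 40], float)   # upstream slows: queue grows
down = np.array([90, 89, 90, 88, 89, 90], float)   # downstream stays free-flow

def spatiotemporal_diff(up, down):
    """Illustrative indicator: largest speed gap between the two locations,
    combined with the within-location speed drop over the window."""
    spatial = np.abs(up - down).max()    # gap between upstream and downstream
    temporal = up[0] - up[-1]            # drop at the upstream detector
    return spatial + temporal

risk = spatiotemporal_diff(up, down)
print(risk)   # a large value flags a high-risk, pre-crash-like regime
```

The intuition matches the abstract's finding: a crash-prone regime shows speed diverging both across space (a queue forming upstream of free flow) and over time, which a purely temporal indicator would partially miss.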
Abstract:
This research investigated strategies for motorway congestion management from a different angle: how to quickly recover a motorway from congestion at the end of peak hours, given that congestion cannot be eliminated due to excessive demand during today's long peak periods. The project developed a zone recovery strategy using ramp metering for rapid congestion recovery, and a series of traffic simulation investigations was conducted to evaluate the developed strategy. The results, from both microscopic and macroscopic simulation, demonstrated the effectiveness of the zone recovery strategy.
Abstract:
Recurrent congestion caused by high commuter traffic is an irritation to motorway users. Ramp metering (RM) is the most effective motorway control measure (Papageorgiou & Kotsialos, 2002) for significantly reducing motorway congestion. However, given field constraints (e.g. limited ramp space and maximum ramp waiting time), RM cannot eliminate recurrent congestion during today's extended peak hours. This paper therefore focuses on rapid congestion recovery to further improve RM systems: that is, on quickly clearing congestion during recovery periods. The feasibility of using RM for recovery is analyzed, and a zone recovery strategy (ZRS) for RM is proposed. Note that this study assumes no incidents and no demand management, i.e. no re-routing behavior or strategy is considered. The strategy is modeled, calibrated and tested on the northbound model of the Pacific Motorway, Brisbane, Australia in a micro-simulation environment for a recurrent congestion scenario, and the evaluation results justify its effectiveness.
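Local ramp metering of the kind a zone strategy builds on is commonly realised with a feedback law such as the classic ALINEA controller. The sketch below uses illustrative (uncalibrated) gain, target occupancy, and rate bounds, not values from this study.

```python
# Minimal ALINEA-style local ramp-metering law:
# r(k) = r(k-1) + K_R * (o_target - o(k)), clipped to feasible metering rates.
# Rates in veh/h, occupancy as a fraction; all constants are illustrative.

def alinea(rate_prev, occ_measured, occ_target=0.18, K_R=70.0,
           r_min=200.0, r_max=1800.0):
    rate = rate_prev + K_R * (occ_target - occ_measured)
    return min(max(rate, r_min), r_max)   # respect ramp hardware limits

# Mainline occupancy above target -> the metering rate is tightened each step
r = 900.0
for occ in [0.25, 0.24, 0.22]:            # occupancy easing back toward target
    r = alinea(r, occ)
print(round(r, 1))                         # → 888.1
```

The integral-like feedback drives measured occupancy toward the target; a zone recovery strategy can coordinate several such local controllers to clear a congested stretch faster once demand subsides.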
Abstract:
Traffic incidents are key contributors to non-recurrent congestion, potentially generating significant delay. It is important to understand the factors that influence the duration of incidents so that effective mitigation strategies can be implemented. To identify and quantify the effects of influential factors, a methodology for studying total incident duration based on historical data from an ‘integrated database’ is proposed. Incident duration models are developed using a selected freeway segment of the network in Southeast Queensland, Australia. The models include incident detection and recovery time as components of incident duration. A hazard-based duration modelling approach is applied to model incident duration as a function of a variety of factors that influence traffic incident duration. Parametric accelerated failure time survival models are developed to capture heterogeneity as a function of explanatory variables, with both fixed and random parameter specifications. The analysis reveals that factors affecting incident duration include incident characteristics (severity, type, injury, medical requirements, etc.), infrastructure characteristics (roadway shoulder availability), time of day, and traffic characteristics. The results indicate that durations differ uniquely by event type, thus requiring different responses to effectively clear the events. Furthermore, the results highlight the presence of unobserved incident duration heterogeneity as captured by the random parameter models, suggesting that additional factors need to be considered in future modelling efforts.
Abstract:
This thesis presents a novel adaptive prioritized cross-layer design (APCLD) control algorithm to achieve comprehensive channel congestion control for vehicular safety communication based on DSRC technology. An appropriate evaluation metric and two control parameters have been established. Simulation studies have evaluated DSRC network performance in different traffic scenarios and under different channel conditions, and the APCLD algorithm is derived from the results of this simulation analysis.
Abstract:
Traffic incidents are recognised as one of the key sources of non-recurrent congestion, which often leads to a reduction in travel time reliability (TTR), a key metric of roadway performance. A method is proposed here to quantify the impacts of traffic incidents on TTR on freeways. The method uses historical data to establish recurrent speed profiles and identifies non-recurrent congestion based on its negative impacts on speeds. The locations and times of incidents are used to identify incident-induced events among non-recurrent congestion events. Buffer time is employed to measure TTR, and extra buffer time is defined as the extra delay caused by traffic incidents. This reliability measure indicates how much extra travel time travellers require to arrive at their destination on time with 95% certainty in the case of an incident, over and above the travel time that would have been required under recurrent conditions. An extra buffer time index (EBTI) is defined as the ratio of extra buffer time to recurrent travel time, with zero being the best case (no delay). A Tobit model is used to identify and quantify factors that affect EBTI using a selected freeway segment of the network in Southeast Queensland, Australia. Both fixed and random parameter Tobit specifications are tested. The estimation results reveal that models with random parameters offer a superior statistical fit for all types of incidents, suggesting the presence of unobserved heterogeneity across segments. Which factors influence EBTI depends on the type of incident. In addition, changes in TTR as a result of traffic incidents are related to the characteristics of the incidents (multiple vehicles involved, incident duration, major incidents, etc.) and to traffic characteristics.
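The reliability measures defined above can be illustrated numerically. The travel-time samples below are made up, and buffer time is taken as the 95th-percentile travel time minus the mean, a common operationalisation.

```python
import numpy as np

# Made-up travel times (minutes) for one freeway segment over repeated trips
recurrent = np.array([10, 11, 10, 12, 11, 10, 11, 12, 10, 11], float)
incident  = np.array([10, 12, 18, 25, 30, 24, 15, 12, 11, 10], float)

def buffer_time(tt):
    """Buffer time: 95th-percentile travel time minus mean travel time."""
    return np.percentile(tt, 95) - tt.mean()

bt_rec = buffer_time(recurrent)       # buffer needed under recurrent conditions
bt_inc = buffer_time(incident)        # buffer needed when incidents occur
extra_buffer_time = bt_inc - bt_rec   # extra delay attributable to incidents

# EBTI: extra buffer time relative to the recurrent (mean) travel time
ebti = extra_buffer_time / recurrent.mean()
print(round(ebti, 3))
```

An EBTI of zero would mean incidents add no extra buffer; the Tobit model in the paper then relates this left-censored-at-zero quantity to incident and traffic characteristics.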
Abstract:
Wireless access is expected to play a crucial role in the future of the Internet. The demands of the wireless environment are not always compatible with the assumptions that were made in the era of wired links. At the same time, new services that take advantage of advances in many areas of technology are being invented. These services include the delivery of mass media such as television and radio, Internet phone calls, and video conferencing. The network must be able to deliver these services with acceptable performance and quality to the end user. This thesis presents an experimental study measuring the performance of bulk data TCP transfers, streaming audio flows, and HTTP transfers that compete for the limited bandwidth of a GPRS/UMTS-like wireless link. The wireless link characteristics are modeled with a wireless network emulator. We analyze how different competing workload types behave with regular TCP and how active queue management, Differentiated Services (DiffServ), and a combination of TCP enhancements affect performance and quality of service. We test four link types, including an error-free link and links with different Automatic Repeat reQuest (ARQ) persistency. The analysis consists of comparing the resulting performance in different configurations based on defined metrics. We observed that DiffServ and Random Early Detection (RED) with Explicit Congestion Notification (ECN) are useful, and in some conditions necessary, for quality of service and fairness, because long queuing delays and congestion-related packet losses cause problems without them. However, we observed situations where there is still room for significant improvement if the link level is aware of the quality of service. Only a very error-prone link diminishes the benefits to nil. The combination of TCP enhancements improves performance. These include an initial window of four, Control Block Interdependence (CBI) and Forward RTO recovery (F-RTO).
An initial window of four helps a later-starting TCP flow to start faster but generates congestion under some conditions. CBI prevents slow-start overshoot and balances slow start in the presence of error drops, and F-RTO successfully reduces unnecessary retransmissions.
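The RED behaviour discussed above follows a simple rule: track a moving average of the queue length and begin dropping (or, with ECN, marking) packets probabilistically once it exceeds a minimum threshold. A minimal sketch with made-up threshold and weight values:

```python
import random

class Red:
    """Illustrative RED queue logic (thresholds and weight are made-up values);
    real deployments can ECN-mark instead of dropping when endpoints negotiate it."""
    def __init__(self, min_th=5, max_th=15, max_p=0.1, w=0.2):
        self.min_th, self.max_th, self.max_p, self.w = min_th, max_th, max_p, w
        self.avg = 0.0

    def on_arrival(self, queue_len, rnd=random.random):
        # Exponentially weighted moving average of the instantaneous queue
        self.avg = (1 - self.w) * self.avg + self.w * queue_len
        if self.avg < self.min_th:
            return "enqueue"
        if self.avg >= self.max_th:
            return "drop"
        # Drop probability ramps linearly between the two thresholds
        p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
        return "drop" if rnd() < p else "enqueue"

red = Red()
first = red.on_arrival(2)           # short queue: below min_th, always enqueued
for q in 12 * [20]:                 # sustained overload pushes the average up
    last = red.on_arrival(q)
print(first, last)
```

Signalling congestion early, before the buffer overflows, is what keeps queuing delay short and avoids the bursts of tail-drop losses the thesis observed without RED and DiffServ.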
Abstract:
We propose, for the first time, a reinforcement learning (RL) algorithm with function approximation for traffic signal control. Our algorithm incorporates state-action features and is easily implementable in high-dimensional settings. Prior work, e.g. that of Abdulhai et al., on the application of RL to traffic signal control requires full-state representations and cannot be implemented even in moderate-sized road networks, because the computational complexity grows exponentially in the number of lanes and junctions. We tackle this curse of dimensionality by effectively using feature-based state representations that broadly characterize the level of congestion as low, medium, or high. One advantage of our algorithm is that, unlike prior RL-based work, it does not require precise information on queue lengths and elapsed times at each lane but instead works with the aforementioned features. The number of features our algorithm requires is linear in the number of signaled lanes, leading to a reduction in computational complexity of several orders of magnitude. We implement our algorithm in various settings and compare its performance with other algorithms in the literature, including the works of Abdulhai et al. and Cools et al., as well as the fixed-timing and longest-queue algorithms. For comparison, we also develop an RL algorithm that uses a full-state representation and incorporates prioritization of traffic, unlike the work of Abdulhai et al. We observe that our algorithm outperforms all the other algorithms on all the road network settings that we consider.
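The core idea — Q-learning with a linear value function over coarse congestion features — can be sketched in a few lines. The toy two-approach junction, its dynamics, and all constants below are illustrative assumptions, not the paper's setup; only the feature-based update rule reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(2)

# State: congestion level per approach in {0: low, 1: medium, 2: high};
# action: which approach gets the green.
N_LANES, N_LEVELS, N_ACTIONS = 2, 3, 2

def features(state, action):
    """One-hot (lane, level, action) features: linear in the number of lanes."""
    phi = np.zeros(N_LANES * N_LEVELS * N_ACTIONS)
    for lane, level in enumerate(state):
        phi[(lane * N_LEVELS + level) * N_ACTIONS + action] = 1.0
    return phi

def step(state, action):
    """Toy dynamics: the green lane tends to clear, the red lane tends to fill."""
    nxt = list(state)
    nxt[action] = max(0, state[action] - 1)
    other = 1 - action
    nxt[other] = min(2, state[other] + (rng.random() < 0.5))
    return tuple(nxt), -sum(nxt)          # reward penalises total congestion

w = np.zeros(N_LANES * N_LEVELS * N_ACTIONS)
alpha, gamma, eps = 0.1, 0.9, 0.1
state = (2, 2)
for _ in range(3000):
    qs = [w @ features(state, a) for a in range(N_ACTIONS)]
    a = int(rng.integers(N_ACTIONS)) if rng.random() < eps else int(np.argmax(qs))
    nxt, r = step(state, a)
    q_next = max(w @ features(nxt, b) for b in range(N_ACTIONS))
    td = r + gamma * q_next - w @ features(state, a)   # semi-gradient TD error
    w += alpha * td * features(state, a)
    state = nxt

# With approach 0 highly congested and approach 1 empty, the learned
# Q-values should favour giving approach 0 the green (action 0).
q0 = w @ features((2, 0), 0)
q1 = w @ features((2, 0), 1)
print(q0 > q1)
```

The weight vector has one entry per (lane, level, action) triple, so its size grows linearly with the number of lanes, which is the scalability argument the abstract makes against full-state tabular methods.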
Abstract:
Mobile WiMAX is a burgeoning network technology with diverse applications, one of which is VANETs. Performance metrics such as mean throughput and packet loss ratio for VANETs adopting 802.16e are computed through simulation. We then evaluate the same performance metrics for VANETs employing 802.11p, also known as WAVE (Wireless Access in Vehicular Environment). The proposed simulation model is close to reality, as we generated mobility traces for both cases using a traffic simulator (SUMO) and fed them into a network simulator (NS2), based on their operation in a typical urban VANET scenario. Subsequently, a VANET application called `Street Congestion Alert' is developed to assess the performance of the two technologies. For this application, TraCI is used to couple SUMO and NS2 in a feedback loop to set up a realistic simulation scenario. Our results show that Mobile WiMAX performs better than WAVE for larger network sizes.
Abstract:
Existing approaches to multirate multicast congestion control are either friendly to TCP only over large time scales or introduce unfortunate side effects, such as significant control traffic, wasted bandwidth, or the need for modifications to existing routers. We advocate a layered multicast approach in which steady-state receiver reception rates emulate the classical TCP sawtooth derived from additive-increase, multiplicative-decrease (AIMD) principles. Our approach introduces the concept of dynamic stair layers to simulate various rates of additive increase for receivers with heterogeneous round-trip times (RTTs), facilitated by a minimal amount of IGMP control traffic. We employ a mix of cumulative and non-cumulative layering to minimize the amount of excess bandwidth consumed by receivers operating asynchronously behind a shared bottleneck. We integrate these techniques into a congestion control scheme called STAIR, which is amenable to multicast applications that can make effective use of arbitrary and time-varying subscription levels.
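The steady-state behaviour STAIR emulates is the generic TCP sawtooth. A toy AIMD trace (not the STAIR protocol itself, just the target dynamics its layer subscriptions approximate) can be generated as follows:

```python
# Generic AIMD rate evolution: additive increase of one unit per round,
# multiplicative decrease by half on loss. Parameters are illustrative.

def aimd(rounds, loss_rounds, incr=1.0, decr=0.5, start=1.0):
    rate, trace = start, []
    for k in range(rounds):
        rate = rate * decr if k in loss_rounds else rate + incr
        trace.append(rate)
    return trace

# One loss event at round 6 produces the classic sawtooth shape
trace = aimd(rounds=12, loss_rounds={6}, start=1.0)
print(trace)
```

A layered-multicast receiver cannot change its rate continuously; it can only join or leave layers, so schemes like STAIR arrange layer bandwidths (the "stairs") so that the subscription level traces out this sawtooth over time.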
Abstract:
Internet measurements show that the size distribution of Web-based transactions is usually very skewed: a few large requests constitute most of the total traffic. Motivated by the advantages of scheduling algorithms that favor short jobs, we propose to perform differentiated control over Web-based transactions to give preferential service to short web requests. The control is realized through service semantics provided by Internet Traffic Managers, a DiffServ-like architecture. To evaluate the performance of such a control system, a fast but accurate analytical method is necessary. To this end, we model the Internet as a time-shared system and propose a numerical approach that utilizes Kleinrock's conservation law to solve the model. The numerical results are shown to match well with those obtained by packet-level simulation, which runs orders of magnitude slower than our numerical method.
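Kleinrock's conservation law says that for any work-conserving, non-preemptive M/G/1 scheduling discipline, the load-weighted sum of mean waiting times, Σᵢ ρᵢWᵢ, is invariant: favouring short requests lowers their wait only by raising the wait of long ones. A quick numerical check with arbitrary example parameters (none of these numbers come from the paper):

```python
# Two job classes: class 0 = short requests (given priority), class 1 = long.
# Per class: arrival rate, mean service time, second moment of service time.
lam = [0.3, 0.4]
es  = [1.0, 0.8]
es2 = [2.0, 1.5]

rho = [l * s for l, s in zip(lam, es)]            # per-class utilisation
rho_tot = sum(rho)                                # 0.62 < 1, so stable
w0 = sum(l * m2 for l, m2 in zip(lam, es2)) / 2   # mean residual service work

# Non-preemptive priority (class 0 high): standard M/G/1 priority formulas
w_hi = w0 / (1 - rho[0])
w_lo = w0 / ((1 - rho[0]) * (1 - rho_tot))
weighted_prio = rho[0] * w_hi + rho[1] * w_lo

# FCFS: every class sees the same mean wait W = w0 / (1 - rho)
w_fcfs = w0 / (1 - rho_tot)
weighted_fcfs = rho_tot * w_fcfs

# Conservation law: the two load-weighted sums agree
print(round(weighted_prio, 6), round(weighted_fcfs, 6))
```

This invariance is what makes the law useful as a solution device: it pins down one equation relating the per-class waits, so a differentiated-service model like the paper's can be solved numerically instead of simulated packet by packet.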