959 results for Traffic flow breakdown
Abstract:
We consider the dynamics of cargo driven by a collection of interacting molecular motors in the context of an asymmetric simple exclusion process (ASEP). The model is formulated to account for (i) excluded-volume interactions, (ii) the observed asymmetry of the stochastic movement of individual motors and (iii) interactions between motors and cargo. Items (i) and (ii) form the basis of ASEP models and have already been considered to study the behavior of the motor density profile [A. Parmeggiani, T. Franosch, E. Frey, Phase coexistence in driven one-dimensional transport, Phys. Rev. Lett. 90 (2003) 086601-1-086601-4]. Item (iii) is new. It is introduced here as an attempt to describe explicitly the dependence of cargo movement on the dynamics of motors in this context. The steady-state solutions of the model indicate that the system undergoes a phase transition of condensation type as the motor density varies. We study the consequences of this transition for the behavior of the average cargo velocity. (C) 2009 Elsevier B.V. All rights reserved.
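To make ingredients (i) and (ii) concrete, the following is a minimal Monte Carlo sketch of an ASEP with biased hopping and excluded volume on a ring; the motor-cargo coupling (iii) and the open-boundary kinetics of the cited model are omitted, and all parameter values are illustrative.

```python
import random

def asep_sweep(lattice, bias=0.9):
    """One random-sequential ASEP sweep on a ring: pick a site at random,
    and if occupied attempt a biased hop; the move succeeds only if the
    target site is empty (excluded-volume constraint)."""
    n = len(lattice)
    for _ in range(n):
        i = random.randrange(n)
        if lattice[i] == 0:
            continue
        # asymmetric stochastic movement: hop right with probability `bias`
        j = (i + 1) % n if random.random() < bias else (i - 1) % n
        if lattice[j] == 0:
            lattice[i], lattice[j] = 0, 1

# Ring of 200 sites at density 0.3; density is conserved under the dynamics.
lattice = [1 if random.random() < 0.3 else 0 for _ in range(200)]
for _ in range(2000):
    asep_sweep(lattice)
print("occupied fraction:", sum(lattice) / len(lattice))
```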
Abstract:
Accurate speed prediction is a crucial step in the development of a dynamic vehicle-activated sign (VAS). A previous study showed that the optimal trigger speed of such signs needs to be pre-determined according to the nature of the site and the traffic conditions. The objective of this paper is to find an accurate predictive model, based on historical traffic speed data, to derive the optimal trigger speed for such signs. Adaptive neuro-fuzzy inference system (ANFIS), classification and regression tree (CART) and random forest (RF) models were developed to predict one-step-ahead speed at all times of the day. The developed models were evaluated and compared with the results obtained from an artificial neural network (ANN), multiple linear regression (MLR) and naïve prediction, using traffic speed data collected at four sites in Sweden. The data were aggregated into two periods: a short-term period (5 min) and a long-term period (1 hour). The results of this study showed that RF is a promising method for predicting mean speed over the two proposed periods. It is concluded that, in terms of performance and computational complexity, a simplistic set of input features to the predictive model gave a marked improvement in the response time of the model while still delivering a low prediction error.
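As a rough illustration of the one-step-ahead setup described above, here is a sketch using scikit-learn's RandomForestRegressor on a synthetic aggregated speed series; the lag design, the series and the parameters are illustrative assumptions, not those of the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_lagged(speeds, n_lags=3):
    """Build (X, y) for one-step-ahead prediction from an aggregated
    mean-speed series (e.g., 5-min or 1-hour means)."""
    X = np.column_stack([speeds[i:len(speeds) - n_lags + i] for i in range(n_lags)])
    return X, speeds[n_lags:]

# Synthetic stand-in for a site's aggregated speed series (km/h);
# 288 five-minute intervals per day give the daily cycle.
rng = np.random.default_rng(1)
t = np.arange(2000)
speeds = 70 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

X, y = make_lagged(speeds, n_lags=3)
split = int(0.8 * len(y))                  # chronological train/test split
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
print(f"one-step-ahead RMSE: {rmse:.2f} km/h")
```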
Abstract:
The current system of controlling oil spills involves a complex relationship of international, federal and state law, which has not proven to be very effective. The multiple layers of regulation often leave shipowners unsure of the laws facing them. Furthermore, nations have had difficulty enforcing these legal requirements. This thesis deals with the role marine insurance can play within the existing system of legislation to provide a strong preventative influence that is simple and cost-effective to enforce. In principle, insurance has two ways of enforcing higher safety standards and limiting the risk of an accident occurring. The first is through the use of insurance premiums that are based on the level of care taken by the insured. This means that a person engaging in riskier behavior faces a higher insurance premium, because their actions increase the probability of an accident occurring. The second method, available to the insurer, is collectively known as cancellation provisions or underwriting clauses. These are clauses written into an insurance contract that invalidate the agreement when certain conditions are not met by the insured. The problem has been that obtaining information about the behavior of an insured party requires monitoring, and that incurs a cost to the insurer. The application of these principles proves to be a more complicated matter. The modern marine insurance industry is a complicated system of multiple contracts, through different insurers, that covers the many facets of oil transportation. Its business practices have resulted in policy packages that cross the neat bounds of individual, specific insurance coverage. This paper shows that insurance can improve safety standards in three general areas: crew training; hull and equipment construction and maintenance; and routing schemes and exclusionary zones. With crew, hull and equipment, underwriting clauses can be used to ensure that minimum standards are met by the insured. Premiums can then be structured to reflect the additional care taken by the insured above and beyond these minimum standards. Routing schemes are traffic flow systems applied to congested waterways, such as the entrance to New York harbor. Using natural obstacles or man-made dividers, ships are separated into two lanes of opposing traffic, similar to a road. Exclusionary zones are marine areas designated off limits to tanker traffic, either because of a sensitive ecosystem or because local knowledge of the region is required to ensure safe navigation. Underwriting clauses can be used to nullify an insurance contract when a tanker is not in compliance with established exclusionary zones or routing schemes.
Abstract:
Noise pollution degrades the quality of the environment and is one of the most common environmental problems in large cities. The study of a complex acoustic scenario such as an urban environment needs to consider the contribution of various noise sources. Computational models for mapping and predicting the acoustic scene therefore become important, because they enable calculations, analyses and reports that allow a sound interpretation of the results. The study area is the neighborhood of Lagoa Nova, a central area of the city of Natal, which will undergo major changes in its urban space due to the urban mobility projects planned for the area around the stadium and the consequent changes in urban form and traffic. This study therefore aims to evaluate the noise impact caused by the road and morphological changes around the Arena das Dunas stadium in the Lagoa Nova neighborhood, through on-site measurements and mapping with the computational model SoundPLAN, covering the year 2012 and the predicted evolution of the acoustic scenario for the year 2017. For this analysis, a first acoustic map was built from the current acoustic diagnosis of the neighborhood, comprising physical mapping, classified vehicle counts and measurements of sound pressure levels; to build the noise prediction, the planned changes in traffic, urban form and mobility works were taken into account. The study concludes that the sound pressure levels for both 2012 and 2017 exceed current legislation. The prediction shows numerous changes in the acoustic scene, in which the planned urban mobility works will improve traffic flow and thus reduce the sound pressure level where interventions are expected.
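Since the comparison against legislation rests on equivalent continuous sound levels, the following small sketch shows the energetic (not arithmetic) averaging involved; the sample values and the limit are hypothetical.

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level Leq from short-term sound
    pressure level samples, in dB: an energetic mean of the samples."""
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db) / len(levels_db))

samples = [62.1, 65.4, 70.2, 68.8, 63.0]   # hypothetical 5-min LAeq samples
limit = 60.0                                # hypothetical daytime limit, dB(A)
print(f"Leq = {leq(samples):.1f} dB(A); exceeds limit: {leq(samples) > limit}")
```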
Abstract:
This paper presents a tool that combines two kinds of Petri net analysis to determine the fastest routes for a vehicle in a bounded urban traffic area. The first analysis consists of discovering the possible routes in a state space generated from an IOPT Petri net model, given the initial marking as the vehicle position. The second analysis takes the routes found in the first analysis and computes the state equations over the incidence matrix created from a High Level Petri net model to define the fastest route for each vehicle arriving on the roads. Vehicle-to-infrastructure (V2I) information exchange is assumed in order to obtain the position and speed of all vehicles and support the analyses. From the results obtained, we conclude that it is possible to optimize urban traffic flow if this tool is applied to all vehicles in a bounded urban area. © 2012 IEEE.
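The second analysis relies on the standard Petri net state equation M' = M0 + C·σ, with C the incidence matrix (places by transitions) and σ the firing count vector of a candidate route. A minimal sketch follows; the net, marking and firing vector are hypothetical, not from the paper.

```python
import numpy as np

# State equation: M' = M0 + C @ sigma. A nonnegative result is a necessary
# (though not sufficient) condition for the candidate firing sequence.
C = np.array([[-1,  0,  1],     # place p0: consumed by t0, produced by t2
              [ 1, -1,  0],     # place p1: produced by t0, consumed by t1
              [ 0,  1, -1]])    # place p2: produced by t1, consumed by t2
M0 = np.array([1, 0, 0])        # initial marking: vehicle at p0

def fire(M0, C, sigma):
    M = M0 + C @ sigma
    if np.any(M < 0):
        raise ValueError("firing sequence not enabled from this marking")
    return M

sigma = np.array([1, 1, 0])     # candidate route: fire t0 once, then t1 once
print(fire(M0, C, sigma))       # -> [0 0 1]: vehicle has reached p2
```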
Abstract:
The development of medium-sized cities in recent decades, caused partly by the process of industrial deconcentration, generated, beyond benefits, several problems for the population of these cities. Their rapid, unplanned growth, together with the capitalist model of production, contributed to the increase of socioeconomic problems in these locations. Urban mobility became one of these problems, constraining citizens' lives, especially in downtown areas. The State therefore began looking for solutions to improve the urban mobility of the population, contributing to their quality of life and also adapting the city to new market demands. In this work, we analyze the situation of the downtown areas of Brazilian medium-sized cities, as well as their growth process, taking as an example the case of the city of Rio Claro, SP, and its Public Administration's proposal to improve traffic flow and urban mobility on a particular street in the town's commercial centre.
Abstract:
Topic: quantification of rockfall risks on roads. Assessing an existing rockfall risk along transport routes in mountain and low-mountain regions has always been a task approached with a wide variety of methods and levels of effort. In the present study, the decisive parameters for describing a slope are recorded and evaluated. A worksheet was developed in which defined parameters are captured on the computer, partly via checkboxes and partly via data entry. The worksheet covers four subject areas: general data, slope geometry, traffic, and rock and rock mass. A computer program, built on Microsoft Excel, assigns rating points after data entry (rating step 1). Sums are formed and the subject areas are rated (rating step 2); each subject area has three rating classes. Combining the ratings of the subject areas slope geometry and rock and rock mass constitutes the actual risk assessment (rating step 3). There are three classifications describing the risk: traffic is at very low risk from rockfall; traffic is at low risk from rockfall, but a detailed check must be carried out since a hazard cannot be ruled out; traffic is endangered, with a high rockfall risk. The user may additionally draw on the ratings and notes for the subject areas general data and traffic at his or her own discretion. The final risk assessment is made by the user or by an expert.
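A schematic re-implementation of the three-step rating logic may clarify its structure; the point values, class boundaries and the rule for combining the two partial classes are invented placeholders, not those of the original worksheet.

```python
# Hypothetical re-implementation of the three rating steps described above.

def classify(score, bounds):
    """Step 2: map a summed score to class 1..3 using two thresholds."""
    low, high = bounds
    return 1 if score <= low else 2 if score <= high else 3

geometry_score = 14          # step 1: points summed from slope-geometry inputs
rock_mass_score = 22         # step 1: points summed from rock/rock-mass inputs

geometry_class = classify(geometry_score, bounds=(10, 20))
rock_mass_class = classify(rock_mass_score, bounds=(15, 25))

# Step 3: combining the two partial classes yields one of three risk levels
# (taking the worse class is an assumed combination rule, for illustration).
RISK = {1: "very low rockfall hazard to traffic",
        2: "low hazard; a detailed check is required",
        3: "traffic endangered: high rockfall risk"}
print(RISK[max(geometry_class, rock_mass_class)])
```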
Abstract:
The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One application that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the solution found. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed version of the Gauss-Newton method based on consensus.
Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases in order to add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity. This may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we derive distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios through the introduction of an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
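As an illustration of the refinement step, here is a centralized sketch of a damped Gauss-Newton local search for the RSSI log-distance model; the thesis runs the equivalent iteration distributedly via consensus, and all anchors, parameters and readings below are hypothetical.

```python
import numpy as np

# Log-distance path-loss model: r_i = P0 - 10*eta*log10(||x - a_i||) + noise.
# Under independent Gaussian noise, ML estimation is nonlinear least squares,
# refined here by a damped Gauss-Newton iteration.
P0, eta = -40.0, 3.0                             # assumed known parameters
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
rssi = np.array([-62.1, -55.3, -58.7, -49.9])    # hypothetical readings (dBm)

def gauss_newton(x, n_iter=30, step=0.5):
    for _ in range(n_iter):
        d = np.linalg.norm(anchors - x, axis=1)           # anchor distances
        residual = rssi - (P0 - 10 * eta * np.log10(d))   # data minus model
        # Jacobian of the model wrt x: -10*eta/ln(10) * (x - a_i) / d_i^2
        J = (-10 * eta / np.log(10)) * (x - anchors) / d[:, None] ** 2
        x = x + step * np.linalg.lstsq(J, residual, rcond=None)[0]
    return x

print(gauss_newton(np.array([5.0, 5.0])))   # refine a coarse initial estimate
```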
Abstract:
Several studies conducted in urban areas have pointed out that road dust resuspension contributes significantly to PM concentration levels. Street washing is one of the methods proposed to reduce the contribution of resuspended road dust to ambient PM concentrations. As resuspended particles are mainly found in the coarse mode, published studies investigating the effects of street washing have focused on the PM10 size fraction. Since the PM2.5 mass fraction of particles originating from mechanical abrasion processes may still be significant, we conducted a study to evaluate the effect of street washing on mitigating the resuspension of fine particles. The PM2.5 mass concentration data were examined and integrated with the occurrence of street washing activities. In addition, the effect of meteorological variability, traffic flow and street washing activities on ambient PM2.5 levels was evaluated by means of a multivariate regression model. The results revealed that traffic flow is the most important factor controlling PM2.5 hourly concentrations, while street washing activities did not influence fine particle mass levels.
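A schematic of the kind of multivariate regression described, fitted by ordinary least squares on synthetic data; the variables, coefficients and data are illustrative only.

```python
import numpy as np

# Synthetic hourly data: PM2.5 driven by traffic and wind, with no true
# street-washing effect, mirroring the qualitative finding reported above.
rng = np.random.default_rng(0)
n = 500
wind = rng.gamma(2.0, 1.5, n)               # wind speed, m/s
traffic = rng.poisson(800, n)               # vehicles/hour
washing = rng.integers(0, 2, n)             # 1 = street washed that hour
pm25 = 12 + 0.01 * traffic - 1.5 * wind + rng.normal(0, 2, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), traffic, wind, washing])
beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
for name, b in zip(["intercept", "traffic", "wind", "washing"], beta):
    print(f"{name:>9}: {b:+.4f}")
# Expect a clearly positive traffic coefficient and a washing coefficient
# near zero.
```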
Abstract:
Global demand for mobility is increasing, and the environmental impact of transport has become an important issue in transportation network planning and decision-making, as well as in the operational management phase. Suitable methods are required to assess emissions and fuel consumption reduction strategies that seek to improve energy efficiency and further decarbonization. This study describes the development and application of an improved modeling framework, the HERA (Highway EneRgy Assessment) methodology, which makes it possible to assess and compare the energy and carbon footprint of different highways and traffic flow scenarios. HERA incorporates an average-speed consumption model adjusted with a correction factor that takes the road gradient into account, providing a more comprehensive method for estimating the footprint of particular highway segments under specific traffic conditions. The methodology is applied to the Spanish highway network for validation. Finally, a case study shows the benefits of using this methodology and how to integrate the objective of carbon footprint reduction into highway design, operation and scenario comparison.
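The following sketch illustrates the shape of an average-speed consumption model with a multiplicative gradient correction; the coefficients and correction factors are invented placeholders, not HERA's calibrated values.

```python
# Hypothetical average-speed consumption model with gradient correction.

def fuel_per_km(v_kmh, a=1.5e-4, b=-2.0e-2, c=4.5, d=120.0):
    """Consumption (L/100 km) as a convex function of average speed:
    high at crawl speeds, minimal at moderate speeds, rising again at
    highway speeds (the usual average-speed model shape)."""
    return a * v_kmh**2 + b * v_kmh + c + d / v_kmh

def gradient_factor(grade_pct):
    """Hypothetical multiplicative correction for road gradient:
    uphill raises consumption, downhill lowers it slightly."""
    return 1.0 + 0.08 * max(grade_pct, 0.0) - 0.03 * max(-grade_pct, 0.0)

def segment_fuel(length_km, v_kmh, grade_pct, veh_per_day):
    """Daily fuel use (litres) of one highway segment at given traffic."""
    per_vehicle = fuel_per_km(v_kmh) * gradient_factor(grade_pct) * length_km / 100
    return per_vehicle * veh_per_day

# Example: 12 km segment, 95 km/h average speed, 2.5% grade, 30,000 veh/day.
print(f"{segment_fuel(12.0, 95.0, 2.5, 30000):.0f} L/day")
```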
Abstract:
The network-on-chip (NoC) paradigm emerged to allow a high degree of integration among the many cores of systems-on-chip (SoCs), whose communication is traditionally based on buses. NoCs are defined as a structure of switches and point-to-point channels that interconnect the intellectual property (IP) cores of an SoC, providing a communication platform among them. Wireless networks-on-chip (WiNoCs) are an evolutionary approach to the network-on-chip (NoC) concept that makes it possible to adopt NoC routing mechanisms together with wireless technologies, aiming to optimize traffic flows, reduce wiring and operate alongside traditional NoCs, reducing the load on the buses. The use of dynamic routing within wireless networks-on-chip allows parts of the hardware to be selectively switched off, which reduces energy consumption. However, choosing where to place a wireless link in a NoC is a complex task, given that the nodes are traffic bridges which cannot be switched off without potentially breaking a pre-established route. Besides providing an overview of NoC architectures and of the state of the art of the emerging WiNoC paradigm, this work also proposes an evaluation method based on the well-established ns-2 simulator, whose goal is to test hybrid NoC and WiNoC scenarios. With this approach it is possible to evaluate different WiNoC parameters related to routing, application and the number of nodes involved in hierarchical networks. By analyzing such simulations it is also possible to investigate which routing strategy is most suitable for a given usage scenario, which is relevant when choosing the spatial arrangement of the nodes in a NoC. The experiments carried out comprise a study of the dynamics of ad hoc wireless routing protocols in a hierarchical WiNoC topology, followed by an analysis of network size and traffic patterns in the WiNoC.
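As a toy illustration of why wireless link placement matters, the sketch below (using networkx, with hypothetical hub positions) compares the average hop count of a plain mesh NoC with that of the same mesh plus one long-range wireless shortcut.

```python
import networkx as nx

# Baseline: a plain 8x8 mesh NoC, unweighted, so path length = hop count.
mesh = nx.grid_2d_graph(8, 8)
print(f"mesh only:     {nx.average_shortest_path_length(mesh):.2f} hops")

# Same mesh with one hypothetical wireless link between two hub routers.
winoc = mesh.copy()
winoc.add_edge((1, 1), (6, 6))
print(f"with wireless: {nx.average_shortest_path_length(winoc):.2f} hops")

# Where to place such links is exactly the nontrivial choice discussed
# above: hub routers carry through-traffic and cannot simply be powered
# down without breaking pre-established routes.
```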
Abstract:
Federal Highway Administration, Office of Safety and Traffic Operations, Washington, D.C.
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Transportation Department, Office of University Research, Washington, D.C.
Abstract:
Federal Highway Administration, Washington, D.C.