969 results for traffic flow stability


Relevance: 80.00%

Abstract:

We consider the dynamics of cargo driven by a collection of interacting molecular motors in the context of an asymmetric simple exclusion process (ASEP). The model is formulated to account for (i) excluded-volume interactions, (ii) the observed asymmetry of the stochastic movement of individual motors and (iii) interactions between motors and cargo. Items (i) and (ii) form the basis of ASEP models and have already been considered in studies of the motor density profile [A. Parmeggiani, T. Franosch, E. Frey, Phase coexistence in driven one-dimensional transport, Phys. Rev. Lett. 90 (2003) 086601-1-086601-4]. Item (iii) is new; it is introduced here as an attempt to describe explicitly the dependence of cargo movement on the dynamics of the motors in this context. The steady-state solutions of the model indicate that the system undergoes a phase transition of condensation type as the motor density varies. We study the consequences of this transition for the behavior of the average cargo velocity. (C) 2009 Elsevier B.V. All rights reserved.
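As a rough illustration of the ASEP dynamics underlying items (i) and (ii), the sketch below simulates an open-boundary exclusion process with biased hopping; the lattice size, hopping and boundary rates are illustrative assumptions, and the cargo-motor coupling of item (iii) is not modeled.

    import numpy as np

    # Minimal open-boundary ASEP sketch: motors hop right with probability p and
    # left with probability q < p, subject to excluded volume. All rates and the
    # lattice size are illustrative; the paper's cargo coupling is not included.
    rng = np.random.default_rng(0)
    L, p, q, alpha, beta = 200, 0.9, 0.1, 0.3, 0.3   # sites, hop, entry, exit rates
    lattice = np.zeros(L, dtype=int)

    for _ in range(200_000):
        i = rng.integers(-1, L)                      # -1 selects the entry boundary
        if i == -1:
            if lattice[0] == 0 and rng.random() < alpha:
                lattice[0] = 1                       # injection at the left boundary
        elif i == L - 1:
            if lattice[-1] == 1 and rng.random() < beta:
                lattice[-1] = 0                      # extraction at the right boundary
        elif lattice[i] == 1 and lattice[i + 1] == 0 and rng.random() < p:
            lattice[i], lattice[i + 1] = 0, 1        # biased forward hop
        elif lattice[i] == 0 and lattice[i + 1] == 1 and rng.random() < q:
            lattice[i], lattice[i + 1] = 1, 0        # occasional backward hop

    print("bulk motor density:", lattice[L // 4: 3 * L // 4].mean())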

Relevance: 80.00%

Abstract:

Accurate speed prediction is a crucial step in the development of a dynamic vehicle-activated sign (VAS). A previous study showed that the optimal trigger speed of such signs needs to be pre-determined according to the nature of the site and the traffic conditions. The objective of this paper is to find an accurate predictive model, based on historical traffic speed data, from which to derive the optimal trigger speed for such signs. Adaptive neuro-fuzzy inference system (ANFIS), classification and regression tree (CART) and random forest (RF) models were developed to predict one-step-ahead speed at all times of the day. The developed models were evaluated and compared with the results obtained from an artificial neural network (ANN), multiple linear regression (MLR) and naïve prediction, using traffic speed data collected at four sites located in Sweden. The data were aggregated into two periods, a short-term period (5 min) and a long-term period (1 hour). The results of this study show that RF is a promising method for predicting mean speed in the two proposed periods. It is concluded that, in terms of performance and computational complexity, a simple set of input features to the predictive model markedly improved the model's response time while still delivering a low prediction error.
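A minimal sketch of the one-step-ahead set-up described above, assuming scikit-learn and a synthetic 5-min mean-speed series; the lag construction and hyperparameters are illustrative, not those used in the study.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic 5-min mean-speed series with a daily cycle (illustrative data only).
    rng = np.random.default_rng(0)
    t = np.arange(30 * 288)                             # 30 days of 5-min periods
    speed = 70 + 10 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 3, t.size)

    lags = 6                                            # previous half hour as inputs
    X = np.column_stack([speed[i:i - lags] for i in range(lags)])
    y = speed[lags:]
    split = int(0.8 * len(y))

    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    print("one-step-ahead MAE:", np.abs(pred - y[split:]).mean())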

Relevance: 80.00%

Abstract:

The current system of controlling oil spills involves a complex relationship of international, federal and state law, which has not proven to be very effective. The multiple layers of regulation often leave shipowners unsure of the laws facing them. Furthermore, nations have had difficulty enforcing these legal requirements. This thesis deals with the role marine insurance can play within the existing system of legislation to provide a strong preventative influence that is simple and cost-effective to enforce. In principle, insurance has two ways of enforcing higher safety standards and limiting the risk of an accident occurring. The first is through insurance premiums based on the level of care taken by the insured: a person engaging in riskier behavior faces a higher premium, because their actions increase the probability of an accident occurring. The second method, available to the insurer, is collectively known as cancellation provisions or underwriting clauses. These are clauses written into an insurance contract that invalidate the agreement when certain conditions are not met by the insured. The problem has been that obtaining information about the behavior of an insured party requires monitoring, and that incurs a cost to the insurer. The application of these principles proves to be a more complicated matter. The modern marine insurance industry is a complicated system of multiple contracts, through different insurers, that covers the many facets of oil transportation. Its business practices have resulted in policy packages that cross the neat bounds of individual, specific insurance coverage. This paper shows that insurance can improve safety standards in three general areas: crew training; hull and equipment construction and maintenance; and routing schemes and exclusionary zones. With crew, hull and equipment, underwriting clauses can be used to ensure that minimum standards are met by the insured. Premiums can then be structured to reflect the additional care taken by the insured above and beyond these minimum standards. Routing schemes are traffic flow systems applied to congested waterways, such as the entrance to New York harbor. Using natural obstacles or manmade dividers, ships are separated into two lanes of opposing traffic, similar to a road. Exclusionary zones are marine areas designated off limits to tanker traffic, either because of a sensitive ecosystem or because local knowledge of the region is required to ensure safe navigation. Underwriting clauses can be used to nullify an insurance contract when a tanker is not in compliance with established exclusionary zones or routing schemes.

Relevance: 80.00%

Abstract:

This paper presents techniques for analysing human behaviour via video surveillance. In known scenes under surveillance, common paths of movement between entry and exit points are obtained and classified. These are used, together with a priori velocity data, as a model of normal traffic flow in the scene. Surveillance sequences are then processed to extract and track the movement of people in the scene, which is compared with the models to enable detection of abnormal movement.
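A hypothetical sketch of the comparison step: a tracked path is resampled and scored against learned normal paths, with a simple velocity check; the path representation and thresholds are assumptions, not the paper's method.

    import numpy as np

    # Resample a path to a fixed number of points along its arc length so that
    # paths of different lengths can be compared point by point.
    def resample(path, n=20):
        path = np.asarray(path, dtype=float)
        d = np.r_[0, np.cumsum(np.linalg.norm(np.diff(path, axis=0), axis=1))]
        s = np.linspace(0, d[-1], n)
        return np.column_stack([np.interp(s, d, path[:, 0]),
                                np.interp(s, d, path[:, 1])])

    # Flag a track as abnormal if it stays far from every normal path or if its
    # mean speed falls outside the a priori range; both thresholds are assumed.
    def is_abnormal(track, normal_paths, mean_speed,
                    speed_range=(0.5, 2.5), dist_thresh=3.0):
        r = resample(track)
        best = min(np.linalg.norm(r - resample(p), axis=1).mean()
                   for p in normal_paths)
        return best > dist_thresh or not (speed_range[0] <= mean_speed <= speed_range[1])

    normal_paths = [np.array([[0, 0], [5, 0], [10, 0]])]       # one learned path
    print(is_abnormal([[0, 0], [5, 4], [10, 8]], normal_paths, mean_speed=1.2))  # True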

Relevance: 80.00%

Abstract:

Noise pollution degrades environmental quality and is one of the most common environmental problems in large cities. The acoustic study of a complex urban environment must consider the contributions of multiple noise sources. Computational models for mapping and predicting the acoustic scene therefore become important, because they enable calculations, analyses and reports that support a satisfactory interpretation of the results. The study area is the neighborhood of Lagoa Nova, a central district of the city of Natal, which will undergo major changes in its urban space due to the urban mobility projects planned for the area around the stadium and the consequent changes in urban form and traffic. This study therefore aims to evaluate the noise impact caused by road and morphological changes around the Arena das Dunas stadium in the Lagoa Nova neighborhood, through on-site measurements and mapping with the computational model SoundPLAN for the year 2012 and a predicted acoustic scenario for the year 2017. The analysis was based on an acoustic mapping built from the current acoustic diagnosis of the neighborhood, physical mapping, classified vehicle counts and sound pressure level measurements; for the noise prediction, the planned changes in traffic, urban form and mobility works were taken into account. The study concludes that the sound pressure levels for both 2012 and 2017 exceed current legislation. The noise prediction shows numerous changes in the acoustic scene: the planned urban mobility works will improve traffic flow and thus reduce the sound pressure level where interventions are expected.
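For reference, noise mapping tools combine the contributions of several sources energetically; this standard acoustics relation (not specific to SoundPLAN or to this study) gives the total equivalent level of N sources:

    L_{eq,total} = 10 \log_{10} \sum_{i=1}^{N} 10^{L_i/10}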

Relevance: 80.00%

Abstract:

This paper describes a methodology for efficiently solving sparse network equations on multiprocessor computers. The methodology is based on the matrix inverse factors (W-matrix) approach to the direct solution phase of Ax = b systems. A partitioning scheme for the W-matrix, based on the leaf nodes of the factorization path tree, is proposed. The methodology allows all updating operations on vector b to be performed in parallel within each partition, using row-oriented processing. The approach takes advantage of the processing power of the individual processors. Performance results are presented and discussed.
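A minimal dense sketch of the W-matrix idea for a symmetric positive definite A: with A = L D L^T and W = L^{-1}, the solution x = W^T D^{-1} W b reduces to matrix-vector products whose partial updates on b can be carried out in parallel. The factorization below is illustrative and ignores sparsity, partitioning and the factorization path tree.

    import numpy as np

    # Dense LDL^T factorization without pivoting (fine for a small SPD example).
    def ldl(A):
        n = A.shape[0]
        L, D = np.eye(n), np.zeros(n)
        for j in range(n):
            D[j] = A[j, j] - L[j, :j] ** 2 @ D[:j]
            for i in range(j + 1, n):
                L[i, j] = (A[i, j] - L[i, :j] * L[j, :j] @ D[:j]) / D[j]
        return L, D

    A = np.array([[4.0, 1, 0], [1, 3, 1], [0, 1, 2]])
    b = np.array([1.0, 2, 3])
    L, D = ldl(A)
    W = np.linalg.inv(L)           # W-matrix: inverse of the unit lower factor
    x = W.T @ ((W @ b) / D)        # x = W^T D^{-1} W b; products parallelize by rows
    assert np.allclose(A @ x, b)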

Relevance: 80.00%

Abstract:

This paper presents a tool that combines two kinds of Petri net analyses to determine the fastest routes for a vehicle in a bounded urban traffic area. The first analysis discovers possible routes in a state space generated from an IOPT Petri net model, given the initial marking as the vehicle position. The second analysis takes the routes found in the first analysis and evaluates the state equation, using the incidence matrix derived from the high-level Petri net model, to define the fastest route for each vehicle that arrives on the roads. Vehicle-to-infrastructure (V2I) information exchange was considered in order to obtain the position and speed of all vehicles and support the analyses. From the results obtained, we conclude that it is possible to optimize urban traffic flow if this tool is applied to all vehicles in a bounded urban area. © 2012 IEEE.
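For reference, the second analysis relies on the standard Petri net state equation M' = M0 + C sigma; the tiny net below is a hypothetical illustration, not the authors' IOPT or high-level model.

    import numpy as np

    # Standard Petri net state equation: M' = M0 + C @ sigma, where C is the
    # incidence matrix (places x transitions) and sigma counts transition firings.
    # This is a necessary (not sufficient) condition for reachability.
    C = np.array([[-1,  0],        # hypothetical 3-place, 2-transition net
                  [ 1, -1],
                  [ 0,  1]])
    M0 = np.array([1, 0, 0])       # initial marking: vehicle at the first node
    sigma = np.array([1, 1])       # fire t1 and t2 once each
    print(M0 + C @ sigma)          # -> [0 0 1]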

Relevance: 80.00%

Abstract:

The development of medium-sized cities in recent decades, caused in part by the process of industrial deconcentration, has generated not only benefits but also several problems for the population of these cities. The unplanned rapid growth of these cities, together with the capitalist model of production, has contributed to the growth of socioeconomic problems in these locations. Urban mobility has become one of these problems, hindering citizens' daily lives, especially in downtown areas. The State has therefore begun looking for solutions to improve the urban mobility of the population, contributing to their quality of life and also adapting the city to new market demands. In this work, we analyze the situation of the downtown areas of Brazilian medium-sized cities, as well as their growth process, taking as an example the case of the city of Rio Claro - SP and its Public Administration's proposal to improve flow and urban mobility on a particular street in the town's commercial centre.

Relevance: 80.00%

Abstract:

Topic: quantification of rockfall risk on roads. Assessing the rockfall risk along transport routes in mountain and low-mountain regions has always been a task approached with a wide variety of methods and levels of effort. In the present study, the decisive parameters for describing a slope are recorded and evaluated. A worksheet was developed in which defined parameters are entered on the computer, partly via checkboxes and partly as numerical data. The worksheet covers four subject areas: general data, slope geometry, traffic data, and data on the rock and rock mass. A computer program, built on Microsoft Excel, assigns rating points after data entry (evaluation step 1). Sums are then formed and the subject areas are rated (evaluation step 2); each subject area has three rating classes. Linking the ratings of the areas slope geometry and rock/rock mass constitutes the actual risk assessment (evaluation step 3). Three classifications describe the risk: traffic is at very low risk from rockfall; traffic is at low risk from rockfall, but a detailed inspection must be carried out because a hazard cannot be ruled out; traffic is endangered and there is a high rockfall risk. The ratings and notes for the areas general data and traffic data can additionally be used at the user's discretion. The final risk assessment is made by the user or by an expert.
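A hypothetical sketch of the three-step rating logic described above; the point thresholds and the rule combining the geometry and rock-mass classes are assumptions, not the tool's actual Excel implementation.

    # Each assessment area gets a class of 1-3 from its point sum; the geometry
    # and rock-mass classes are then combined into one of three risk levels.
    # Thresholds and the combination rule are illustrative assumptions.
    def rate(points, thresholds=(10, 20)):
        return 1 if points <= thresholds[0] else 2 if points <= thresholds[1] else 3

    RISK = {1: "very low rockfall risk",
            2: "low risk - detailed inspection required",
            3: "high rockfall risk"}

    def assess(geometry_points, rock_points):
        return RISK[max(rate(geometry_points), rate(rock_points))]

    print(assess(geometry_points=8, rock_points=23))   # -> high rockfall risk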

Relevance: 80.00%

Abstract:

Space-based (satellite, scientific probe, space station, etc.) and millimeter-to-microscale (such as are used in high-power electronics cooling, weapons cooling in aircraft, etc.) condensers and boilers are shear/pressure driven. They are of increasing interest to system engineers for thermal management because flow boilers and flow condensers offer both high fluid flow-rate-specific heat transfer capacity and very low thermal resistance between the fluid and the heat exchange surface, so large amounts of heat may be removed using reasonably sized devices without the need for excessive temperature differences. However, flow stability issues and degradation of performance of shear/pressure driven condensers and boilers due to non-desirable flow morphology over large portions of their lengths have mostly prevented their use in these applications. This research is part of an ongoing investigation seeking to close the gap between science and engineering by analyzing two key innovations which could help address these problems. First, it is recommended that the condenser and boiler be operated in an innovative flow configuration which provides a non-participating core vapor stream to stabilize the annular flow regime throughout the device length, accomplished in an energy-efficient manner by means of ducted vapor re-circulation. This is demonstrated experimentally. Second, suitable pulsations applied to the vapor entering the condenser or boiler (from the re-circulating vapor stream) greatly reduce the thermal resistance of the already effective annular flow regime. For the experiments reported here, application of pulsations increased time-averaged heat flux by up to 900% at a location within the flow condenser and by up to 200% at a location within the flow boiler, measured at the heat-exchange surface. Traditional fully condensing flows, reported here for comparison purposes, show similar heat-flux enhancements due to imposed pulsations over a range of frequencies. Shear/pressure driven condensing and boiling flow experiments are carried out in horizontal mm-scale channels with heat exchange through the bottom surface. The sides and top of the flow channel are insulated. The fluid is FC-72 from 3M Corporation.

Relevance: 80.00%

Abstract:

The physical processes controlling the mixed layer salinity (MLS) seasonal budget in the tropical Atlantic Ocean are investigated using a regional configuration of an ocean general circulation model. The analysis reveals that the MLS cycle is generally weak in comparison with the individual physical processes entering the budget, because of strong compensation among them. In evaporative regions, around the surface salinity maxima, the ocean acts to freshen the mixed layer against the action of evaporation. Poleward of the southern sea surface salinity (SSS) maximum, the freshening is ensured by geostrophic advection, vertical salinity diffusion and, during winter, a dominant contribution from convective entrainment. On the equatorward flanks of the SSS maxima, Ekman transport mainly contributes by supplying freshwater from the ITCZ regions, while vertical salinity diffusion adds to the effect of evaporation. All these terms are phase-locked through the effect of the wind. Under the seasonal march of the ITCZ and in coastal areas affected by river runoff (7°S-15°N), the upper-ocean freshening by precipitation and/or runoff is attenuated by vertical salinity diffusion. In the eastern equatorial regions, the seasonal cycle of wind-forced surface currents advects freshwater, which is mixed with saline subsurface water by strong vertical turbulent diffusion. In all these regions, vertical diffusion makes an important contribution to the MLS budget by providing, in general, an upward flux of salinity; it is generally driven by the vertical salinity gradient and wind-induced mixing. Furthermore, at the equator, where the vertical shear associated with the surface horizontal currents is well developed, the diffusion also depends on the stability of the sheared flow.
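A generic mixed-layer salinity budget of the kind analyzed here decomposes the MLS tendency into the terms discussed above (horizontal advection, surface freshwater forcing, entrainment and vertical diffusion); the exact formulation used in the regional model configuration may differ:

    \frac{\partial S_m}{\partial t} = -\mathbf{u}_m \cdot \nabla S_m + \frac{(E - P - R)\, S_m}{h} - \frac{w_e (S_m - S_{-h})}{h} + \frac{1}{h} \left( \kappa_z \frac{\partial S}{\partial z} \right)_{z=-h}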

Relevance: 80.00%

Abstract:

The proliferation of wireless sensor networks and the variety of envisioned applications associated with them have motivated the development of distributed algorithms for collaborative processing over networked systems. One of the applications that has attracted the attention of researchers is target localization, where the nodes of the network try to estimate the position of an unknown target that lies within their coverage area. Particularly challenging is the problem of estimating the target's position from the received signal strength indicator (RSSI), due to the nonlinear relationship between the measured signal and the true position of the target. Many of the existing approaches suffer either from high computational complexity (e.g., particle filters) or from lack of accuracy. Further, many of the proposed solutions are centralized, which makes their application to a sensor network questionable. Depending on the application at hand, and from a practical perspective, it can be convenient to find a balance between localization accuracy and complexity. In this direction, we approach the maximum likelihood location estimation problem by solving a suboptimal (and more tractable) problem. One of the main advantages of the proposed scheme is that it allows for a decentralized implementation using distributed processing tools (e.g., consensus and convex optimization) and is therefore very suitable for implementation in real sensor networks. If further accuracy is needed, an additional refinement step can be performed around the found solution. Under the assumption of independent noise among the nodes, such a local search can be done in a fully distributed way using a distributed, consensus-based version of the Gauss-Newton method. Regardless of the underlying application or function of the sensor network, it is always necessary to have a mechanism for data reporting. While some approaches use a special kind of node (called a sink node) for data harvesting and forwarding to the outside world, there are some scenarios where such an approach is impractical or even impossible to deploy. Further, such sink nodes become a bottleneck in terms of traffic flow and power consumption. To overcome these issues, instead of using sink nodes for data reporting, one can use collaborative beamforming techniques to forward the generated data directly to a base station or gateway to the outside world. In a distributed environment like a sensor network, nodes cooperate in order to form a virtual antenna array that can exploit the benefits of multi-antenna communications. In collaborative beamforming, nodes synchronize their phases so that their transmissions add constructively at the receiver. One of the inconveniences associated with collaborative beamforming techniques is that there is no control over the radiation pattern, since it is treated as a random quantity; this may cause interference to other coexisting systems and fast battery depletion at the nodes. Since energy efficiency is a major design issue, we consider the development of a distributed collaborative beamforming scheme that maximizes the network lifetime while meeting a quality of service (QoS) requirement at the receiver side. Using local information about battery status and channel conditions, we find distributed algorithms that converge to the optimal centralized beamformer.
While in the first part we consider only battery depletion due to communications beamforming, we then extend the model to account for more realistic scenarios by introducing an additional random energy consumption. It is shown how the new problem generalizes the original one and under which conditions it is easily solvable. By formulating the problem from the energy-efficiency perspective, the network's lifetime is significantly improved.
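A minimal centralized sketch of the RSSI-based maximum likelihood refinement described above, using a log-distance path-loss model and Gauss-Newton iterations; the anchor layout, P0 and the path-loss exponent are illustrative assumptions, and the thesis develops a distributed, consensus-based version of this step.

    import numpy as np

    # Log-distance RSSI model with illustrative parameters; the distributed,
    # consensus-based variant of this refinement is not shown here.
    anchors = np.array([[0.0, 0.0], [10, 0], [0, 10], [10, 10]])   # node positions
    P0, n = -40.0, 3.0                     # dBm at 1 m, path-loss exponent (assumed)
    target = np.array([3.0, 7.0])
    rng = np.random.default_rng(0)
    d_true = np.linalg.norm(anchors - target, axis=1)
    rssi = P0 - 10 * n * np.log10(d_true) + rng.normal(0, 1.0, len(anchors))

    x = np.array([5.0, 5.0])               # initial guess (e.g., network centroid)
    for _ in range(20):                    # Gauss-Newton refinement
        diff = x - anchors
        d = np.linalg.norm(diff, axis=1)
        r = rssi - (P0 - 10 * n * np.log10(d))               # residuals
        J = (10 * n / np.log(10)) * diff / d[:, None] ** 2   # Jacobian of residuals
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    print(x)                               # close to the true position [3, 7]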

Relevance: 80.00%

Abstract:

Several studies conducted in urban areas have pointed out that road dust resuspension contributes significantly to PM concentration levels. Street washing is one of the methods proposed to reduce resuspended road dust contributions to ambient PM concentrations. As resuspended particles are mainly found in the coarse mode, published studies investigating the effects of street washing have focused on the PM10 size fraction. Because the PM2.5 mass fraction of particles originating from mechanical abrasion processes may still be significant, we conducted a study to evaluate the effects of street washing on the mitigation of resuspension of fine particles. The PM2.5 mass concentration data were examined and integrated with the occurrence of street washing activities. In addition, the effect of meteorological variability, traffic flow and street washing activities on ambient PM2.5 levels was evaluated by means of a multivariate regression model. The results revealed that traffic flow is the most important factor controlling PM2.5 hourly concentrations, while street washing activities did not influence fine particle mass levels.
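A hypothetical sketch of the kind of multivariate regression described above: hourly PM2.5 regressed on traffic flow, a meteorological variable and a street-washing indicator. The variable set and the synthetic data are assumptions, not the study's data.

    import numpy as np

    # Synthetic hourly data; the washing coefficient is set to zero so that the
    # fitted value near zero mirrors the paper's finding of no washing effect.
    rng = np.random.default_rng(1)
    hours = 500
    traffic = rng.uniform(100, 2000, hours)        # vehicles per hour
    wind = rng.uniform(0.5, 8.0, hours)            # wind speed, m/s
    washing = rng.integers(0, 2, hours)            # 1 = street washed that hour
    pm25 = 5 + 0.01 * traffic - 1.2 * wind + 0.0 * washing + rng.normal(0, 2, hours)

    X = np.column_stack([np.ones(hours), traffic, wind, washing])
    beta, *_ = np.linalg.lstsq(X, pm25, rcond=None)
    print(dict(zip(["intercept", "traffic", "wind", "washing"], beta.round(3))))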

Relevance: 80.00%

Abstract:

Global demand for mobility is increasing, and the environmental impact of transport has become an important issue in transportation network planning and decision-making, as well as in the operational management phase. Suitable methods are required to assess emissions and fuel consumption reduction strategies that seek to improve energy efficiency and further decarbonization. This study describes the development and application of an improved modeling framework – the HERA (Highway EneRgy Assessment) methodology – that enables the energy and carbon footprint of different highways and traffic flow scenarios to be assessed and compared. HERA incorporates an average-speed consumption model adjusted with a correction factor that takes into account the road gradient. It provides a more comprehensive method for estimating the footprint of particular highway segments under specific traffic conditions. The methodology is applied to the Spanish highway network to validate it. Finally, a case study shows the benefits of using this methodology and how to integrate the objective of carbon footprint reduction into highway design, operation and scenario comparison.
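A generic sketch of an average-speed consumption model scaled by a road-gradient correction factor, in the spirit of the approach described above; the polynomial coefficients and the gradient factors are purely illustrative, not HERA's values.

    # Illustrative average-speed fuel-consumption curve (g/km) with an assumed
    # gradient correction factor; none of the numbers come from HERA.
    def fuel_per_km(speed_kmh, gradient_pct):
        a, b, c, d = 120.0, -2.0, 0.015, 900.0
        base = a + b * speed_kmh + c * speed_kmh ** 2 + d / speed_kmh
        correction = {-4: 0.7, -2: 0.85, 0: 1.0, 2: 1.25, 4: 1.55}
        return base * correction[gradient_pct]

    segment_km, veh_per_day = 12.5, 30000
    grams = fuel_per_km(speed_kmh=90, gradient_pct=2) * segment_km * veh_per_day
    print(f"{grams / 1e6:.1f} tonnes of fuel per day on this segment")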

Relevance: 80.00%

Abstract:

The subseafloor at the mid-ocean ridge is predicted to be an excellent microbial habitat, because there is abundant space, fluid flow, and geochemical energy in the porous, hydrothermally influenced oceanic crust. These characteristics also make it a good analog for potential subsurface extraterrestrial habitats. Subseafloor environments created by the mixing of hot hydrothermal fluids and seawater are predicted to be particularly energy-rich, and hyperthermophilic microorganisms that broadly reflect such predictions are ejected from these systems in low-temperature (≈15°C), basalt-hosted diffuse effluents. Seven hyperthermophilic heterotrophs isolated from low-temperature diffuse fluids exiting the basaltic crust in and near two hydrothermal vent fields on the Endeavour Segment, Juan de Fuca Ridge, were compared phylogenetically and physiologically to six similarly enriched hyperthermophiles from samples associated with seafloor metal sulfide structures. The 13 organisms fell into four distinct groups: one group of two organisms corresponding to the genus Pyrococcus and three groups corresponding to the genus Thermococcus. Of these three groups, one was composed solely of sulfide-derived organisms, and the other two related groups were composed of subseafloor organisms. There was no evidence of restricted exchange of organisms between sulfide and subseafloor habitats, and therefore this phylogenetic distinction indicates a selective force operating between the two habitats. Hypotheses regarding the habitat differences were generated through comparison of the physiology of the two groups of hyperthermophiles; some potential differences between these habitats include fluid flow stability, metal ion concentrations, and sources of complex organic matter.