990 results for Travel Demand Modeling
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
In this thesis, we extend some ideas of statistical physics to describe the properties of human mobility. Using a database of GPS measurements of individual paths (position, velocity and covered distance at a spatial scale of 2 km or a time scale of 30 s), covering 2% of the private vehicles in Italy, we determine several empirical statistical laws that point to "universal" characteristics of human mobility. By developing simple stochastic models that suggest possible explanations of the empirical observations, we are able to indicate which key quantities and cognitive features rule individuals' mobility. To understand the features of individual dynamics, we have studied different aspects of urban mobility from a physical point of view. We discuss the implications of Benford's law, which emerges from the distribution of times elapsed between successive trips. We observe how the daily travel-time budget is related to many aspects of the urban environment, and describe how the daily mobility budget is spent. We link the scaling properties of individual mobility networks to the inhomogeneous average durations of the activities that are performed, and those of the networks describing people's common use of space to the fractal dimension of the urban territory. We study entropy measures of individual mobility patterns, showing that they carry almost the same information as the related mobility networks but are also influenced by a hierarchy among the activities performed. We find that Wardrop's principles are violated, as drivers have only incomplete information on the traffic state and therefore rely on knowledge of average travel times. We propose an assimilation model that resolves the intrinsic scatter of GPS data on the street network, permitting the real-time reconstruction of the traffic state at an urban scale.
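A minimal Python sketch (not the thesis code) of the kind of first-digit check behind the Benford's law observation: compare the leading-digit frequencies of inter-trip times against log10(1 + 1/d). The synthetic lognormal sample stands in for the actual GPS-derived times.

```python
import numpy as np

def benford_expected(digits=np.arange(1, 10)):
    """Benford's law: P(d) = log10(1 + 1/d) for leading digit d."""
    return np.log10(1.0 + 1.0 / digits)

def leading_digit(x):
    """Leading (most significant) decimal digit of each positive value."""
    x = np.asarray(x, dtype=float)
    exponent = np.floor(np.log10(x))
    return (x / 10.0 ** exponent).astype(int)

# Illustrative stand-in for the inter-trip times (seconds) extracted from GPS data.
rng = np.random.default_rng(0)
inter_trip_times = rng.lognormal(mean=8.0, sigma=1.5, size=100_000)

digits = leading_digit(inter_trip_times)
observed = np.bincount(digits, minlength=10)[1:10] / len(digits)

for d, (obs, exp) in enumerate(zip(observed, benford_expected()), start=1):
    print(f"digit {d}: observed {obs:.3f}  Benford {exp:.3f}")
```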
Abstract:
The formation of a market price for an asset can be understood as a superposition of the individual actions of the market participants, which cumulatively generate supply and demand. In statistical physics this is comparable to the emergence of macroscopic properties from microscopic interactions among the system components involved. The distribution of price changes in financial markets differs markedly from a Gaussian distribution. This leads to empirical peculiarities of the price process, which include, besides the scaling behavior, non-trivial correlation functions and temporally clustered volatility. The present work focuses on the analysis of financial market time series and the correlations they contain. A new method for quantifying pattern-based complex correlations in a time series is developed. With this methodology, significant evidence is found that typical behavioral patterns of financial market participants manifest themselves on short time scales, i.e. that the reaction to a given price history is not purely random; rather, similar price histories provoke similar reactions. Starting from the investigation of complex correlations in financial market time series, the question is addressed of which properties change at the transition from a positive to a negative trend. An empirical quantification by means of rescaling yields the result that, independently of the time scale considered, new price extrema are accompanied by an increase in transaction volume and a reduction of the time intervals between transactions. These dependencies exhibit characteristics that are also found in other complex systems in nature, and in physical systems in particular. Over nine orders of magnitude in time, these properties are also independent of the analyzed market: trends that persist only for seconds show the same characteristics as trends on time scales of months. This opens up the possibility of learning more about financial market bubbles and their collapses, since trends on small time scales occur much more frequently. In addition, a Monte Carlo based simulation of the financial market is analyzed and extended in order to reproduce the empirical properties and to gain insight into their origins, which are to be sought partly in the financial market microstructure and partly in the risk aversion of the trading participants. For the computationally intensive methods, a substantial reduction in computing time is achieved through parallelization on a graphics card architecture. To demonstrate the wide range of applications of graphics cards, a standard model of statistical physics, the Ising model, is also ported to the graphics card with significant runtime advantages. Partial results of this work are published in [PGPS07, PPS08, Pre11, PVPS09b, PVPS09a, PS09, PS10a, SBF+10, BVP10, Pre10, PS10b, PSS10, SBF+11, PB10].
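The pattern-based correlation idea can be illustrated with a toy conditional-sign test, which is only a sketch under simplifying assumptions and not the method developed in the thesis: encode each window of k price-change signs as a pattern and estimate the probability that the next change is positive given that pattern. For a purely random walk every conditional probability should stay near 0.5.

```python
import numpy as np
from collections import defaultdict

def pattern_bias(returns, k=3):
    """For every sign pattern of length k, estimate P(next return > 0 | pattern)."""
    signs = np.sign(returns)
    counts = defaultdict(lambda: [0, 0])  # pattern -> [n_up, n_total]
    for t in range(k, len(returns)):
        pattern = tuple(int(s) for s in signs[t - k:t])
        counts[pattern][1] += 1
        if returns[t] > 0:
            counts[pattern][0] += 1
    return {p: up / total for p, (up, total) in counts.items() if total > 0}

# Illustrative synthetic price changes; real input would be tick-by-tick returns.
rng = np.random.default_rng(1)
returns = rng.standard_normal(50_000)

for pattern, p_up in sorted(pattern_bias(returns, k=3).items()):
    print(pattern, f"P(up | pattern) = {p_up:.3f}")
```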
Abstract:
The aim of this thesis, carried out within the THESEUS project, is the development of a two-phase 2DV mathematical model, based on the existing IH-2VOF code developed by the University of Cantabria, able to represent both the overtopping phenomenon and sediment transport. Several numerical simulations were carried out in order to analyze the flow characteristics on a dike crest. The results show that the seaward/landward slope does not affect the evolution of the flow depth and velocity over the dike crest; the most important parameter is the relative submergence. Wave heights decrease and flow velocities increase as waves travel over the crest. In particular, with increasing submergence, the wave height decay and the velocity increase are less marked. In addition, an appropriate curve fitting the variation of wave height and velocity over the dike crest was found. For both wave height and velocity, different fitting coefficients were determined on the basis of the submergence and of the significant wave height, and an equation describing the trend of the dimensionless coefficient c_h for the wave height was derived. These conclusions could be taken into consideration for design criteria and for the upgrade of existing structures. In the second part of the thesis, new equations for the representation of sediment transport were introduced into the IH-2VOF model in order to represent beach erosion while waves run up and overtop sea banks during storms. The new model allows the calculation of sediment fluxes in the water column together with the sediment concentration, and it is also possible to model the evolution of the bed profile. Different tests were performed under low-intensity regular waves with a homogeneous layer of sand on the bottom of a channel in order to analyze the erosion-deposition patterns and verify the model results.
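The fitted decay of the wave height along the crest is summarized only qualitatively above. A minimal sketch, assuming an exponential form H(x) = H0 · exp(-c_h · x/B) with crest width B and the dimensionless coefficient c_h mentioned in the abstract, shows how such a coefficient could be obtained by least-squares fitting; both the functional form and the sample data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def wave_height_decay(x_over_B, H0, c_h):
    """Assumed exponential decay of wave height along the dike crest."""
    return H0 * np.exp(-c_h * x_over_B)

# Illustrative measurements: normalized position x/B along the crest and wave height (m).
x_over_B = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
H = np.array([0.50, 0.46, 0.43, 0.40, 0.38, 0.36])

(H0_fit, c_h_fit), _ = curve_fit(wave_height_decay, x_over_B, H, p0=(0.5, 0.3))
print(f"H0 = {H0_fit:.3f} m, c_h = {c_h_fit:.3f}")
```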
Abstract:
The hydraulic fracturing of the Marcellus Formation creates a byproduct known as frac water. Five frac water samples were collected in Bradford County, PA. Inorganic chemical analysis, field parameter analysis, alkalinity titrations, total dissolved solids (TDS), total suspended solids (TSS), biological oxygen demand (BOD), and chemical oxygen demand (COD) were determined for each sample to characterize frac water. A database of frac water chemistry results from across the state of Pennsylvania, drawn from multiple sources, was compiled in order to provide the public and research community with an accurate characterization of frac water. Four geochemical models were created to model the reactions between frac water and the Marcellus Formation, the Purcell Limestone, and the oil field brines presumed present in the formations. The average concentrations of chloride and TDS in the five frac water samples were 1.1 ± 0.5 × 10^5 mg/L (5.5 times average seawater) and 140,000 mg/L (4 times average seawater). BOD values for frac water immediately upon flowback were over 10 times greater than the BOD of typical wastewater, but decreased into the range of typical wastewater after a short period of time. The COD of frac water decreases dramatically with increasing time elapsed since flowback, but remains considerably higher than that of typical wastewater. Different alkalinity calculation methods produced a range of alkalinity values for frac water; this result is most likely due to high concentrations of aliphatic acid anions present in the samples. Laboratory analyses indicate that frac water composition is quite variable depending on the companies from which the water was collected, the geology of the local area, and the number of fracturing jobs in which the frac water was used, but that it will require more treatment than typical wastewater regardless of the precise composition of each sample. The geochemical models suggest that the presence of organic complexes in an oil field brine and the Marcellus Formation aids the dissolution of ions such as barium and strontium into solution. Although equilibration reactions between the Marcellus Formation and the slickwater account for some of the final frac water composition, the predominant control on frac water composition appears to be the mixing ratio between the oil field brine and the slickwater. The high concentration of barium in the frac water is likely due to the abundance of barite nodules in the Purcell Limestone, and the lack of sulfate in the frac water samples is due to the reducing, anoxic conditions in the subsurface that allow for the degassing of H2S(g).
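If the brine/slickwater mixing ratio is indeed the dominant control, the expected concentration of a conservative ion follows from a simple two-endmember mass balance. The sketch below is illustrative only; the endmember concentrations are placeholders, not values measured in this study.

```python
# Conservative two-endmember mixing: if the brine/slickwater ratio controls frac
# water composition, the chloride of the mix follows a simple mass balance.
# All concentrations below are illustrative placeholders, not measured values.

cl_brine = 150_000.0      # mg/L chloride in the oil field brine (assumed)
cl_slickwater = 200.0     # mg/L chloride in the injected slickwater (assumed)

def mixed_concentration(f_brine, c_brine, c_fresh):
    """Concentration of a conservative ion for a brine fraction f_brine (0-1)."""
    return f_brine * c_brine + (1.0 - f_brine) * c_fresh

for f in (0.2, 0.5, 0.8):
    cl_mix = mixed_concentration(f, cl_brine, cl_slickwater)
    print(f"brine fraction {f:.1f}: Cl ~ {cl_mix:,.0f} mg/L")
```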
Abstract:
Pumped-storage (PS) systems are used to store electric energy as potential energy for release during peak demand. We investigate the impacts of a planned 1000 MW PS scheme connecting Lago Bianco with Lago di Poschiavo (Switzerland) on temperature and particle mass concentration in both basins. The upper (turbid) basin is a reservoir receiving large amounts of fine particles from the partially glaciated watershed, while the lower basin is a much clearer natural lake. Stratification, temperature and particle concentrations in the two basins were simulated with and without PS for four different hydrological conditions and 27 years of meteorological forcing using the software CE-QUAL-W2. The simulations showed that the PS operations lead to an increase in temperature in both basins during most of the year. The increase is most pronounced (up to 4°C) in the upper hypolimnion of the natural lake toward the end of summer stratification and is partially due to frictional losses in the penstocks, pumps and turbines. The remainder of the warming is from intense coupling to the atmosphere while water resides in the shallower upper reservoir. These impacts are most pronounced during warm and dry years, when the upper reservoir is strongly heated and the effects are least concealed by floods. The exchange of water between the two basins relocates particles from the upper reservoir to the lower lake, where they accumulate during summer in the upper hypolimnion (10 to 20 mg/L) but also to some extent decrease light availability in the trophic surface layer.
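The frictional contribution to the warming can be bounded with a simple energy balance: dissipating a head loss h_f heats the water by ΔT = g·h_f/c_p. The sketch below uses an assumed head loss, not a value from the study.

```python
# Back-of-the-envelope estimate of warming from frictional losses in the
# penstocks, pumps and turbines: the dissipated potential energy g*h_f per unit
# mass of water is converted to heat, so dT = g * h_f / c_p.
g = 9.81          # m/s^2
c_p = 4186.0      # J/(kg K), specific heat of water
h_f = 50.0        # m, assumed cumulative head loss per pump-turbine cycle

dT = g * h_f / c_p
print(f"Warming per cycle: {dT:.3f} K")   # ~0.12 K; repeated cycling accumulates
```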
Abstract:
Private car ownership in China increased greatly from 1980 to 2009 with the development of the economy. To explain the relationship between car ownership and economic and social changes, an ordinary least squares linear regression model is developed using car ownership per capita as the dependent variable and GDP, savings deposits and highway mileage per capita as the independent variables. The model is tested and corrected for econometric problems such as spurious correlation and cointegration. Finally, the regression model is used to project oil consumption by the Chinese transportation sector through 2015. The results show that private cars will consume about 2.0 million barrels of oil per day in the conservative scenario and about 2.6 million barrels per day in the high-case scenario in 2015; both figures are much higher than the 2009 consumption level of 1.9 million barrels per day. The results also show that oil demand by transportation grows at 2.7% to 3.1% per year in the conservative scenario and 6.9% to 7.3% per year in the high-case scenario from 2010 to 2015. As a result, measures such as improving fuel efficiency need to be taken to deal with the challenges of the increasing demand for oil.
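A minimal sketch of the kind of regression described here, using statsmodels with synthetic placeholder data; the coefficients, units and 2015 regressor values are illustrative, not the thesis estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic placeholder data standing in for 1980-2009 Chinese annual series.
rng = np.random.default_rng(2)
years = np.arange(1980, 2010)
df = pd.DataFrame({
    "gdp_pc": np.linspace(300, 26000, len(years)) * rng.uniform(0.95, 1.05, len(years)),
    "savings_pc": np.linspace(100, 20000, len(years)) * rng.uniform(0.95, 1.05, len(years)),
    "highway_pc": np.linspace(0.5, 3.0, len(years)) * rng.uniform(0.95, 1.05, len(years)),
}, index=years)
df["cars_pc"] = (0.002 * df["gdp_pc"] + 0.001 * df["savings_pc"]
                 + 2.0 * df["highway_pc"] + rng.normal(0, 3, len(years)))

# OLS: car ownership per capita on GDP, savings deposits and highway mileage per capita.
X = sm.add_constant(df[["gdp_pc", "savings_pc", "highway_pc"]])
model = sm.OLS(df["cars_pc"], X).fit()
print(model.summary())

# Projection for 2015 under assumed values of the regressors.
x_2015 = pd.DataFrame({"const": [1.0], "gdp_pc": [40000.0],
                       "savings_pc": [32000.0], "highway_pc": [3.5]})
pred_2015 = np.asarray(model.predict(x_2015))[0]
print("Projected car ownership per capita in 2015:", pred_2015)
```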
Abstract:
In recent times, the demand for electrical energy storage has grown rapidly for both stationary applications and portable electronics, driving substantial improvements in battery systems; Li-ion batteries have proven to offer the highest energy storage density among rechargeable batteries. However, major breakthroughs are required to meet the demand for higher energy density at lower cost and to penetrate new markets. The graphite anode, with its limited capacity, has become a bottleneck in the development of next-generation batteries and can be replaced by higher-capacity materials such as silicon. In the present study we focus on the mechanical behavior of the Si thin-film anode under various operating conditions. A numerical model is developed to simulate the intercalation-induced stress and the failure mechanism of the complex anode structure. The effects of various physical phenomena, such as diffusion-induced stress, plasticity and crack propagation, are investigated in order to predict performance parameters for an improved design.
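A commonly used first-order estimate of intercalation-induced stress in a thin film bonded to a rigid substrate is the biaxial relation σ = -E·Ω·Δc / (3(1-ν)). The sketch below evaluates it with assumed material parameters; it is not the numerical model developed in the study, which additionally treats plasticity and crack propagation.

```python
# First-order estimate of diffusion-induced (intercalation) stress in a thin
# film bonded to a rigid substrate: sigma = -E * Omega * dc / (3 * (1 - nu)),
# where Omega is the partial molar volume of the inserted species and dc the
# concentration change. All parameter values below are assumptions.
E = 90e9            # Pa, Young's modulus of the Si film (assumed)
nu = 0.28           # Poisson's ratio (assumed)
Omega = 9.0e-6      # m^3/mol, partial molar volume of Li in Si (assumed)
delta_c = 5.0e3     # mol/m^3, change in Li concentration (assumed)

sigma = -E * Omega * delta_c / (3.0 * (1.0 - nu))
print(f"Biaxial stress: {sigma / 1e9:.2f} GPa (compressive during insertion)")
```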
Abstract:
With the economic development of China, the demand for electricity generation is rapidly increasing. To explain electricity generation, we use GDP, the ratio of urban to rural population, the average per capita income of urban residents, the electricity price for industry in Beijing, and the policy shift that took place in China. Ordinary least squares (OLS) regression is used to develop a model for the 1979-2009 period, and econometric methods are used to test and refine the model. The final model is used to forecast total electricity generation and to assess the possible role of photovoltaic generation. Owing to the high demand for resources and serious environmental problems, China is pushing to develop the photovoltaic industry; the system price of PV is falling, so photovoltaics may become competitive in the future.
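One simple way a policy shift can enter an OLS model is as an indicator (dummy) variable that captures a level change after the break. The sketch below uses synthetic data and an illustrative break year; it is not the thesis specification.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the 1979-2009 annual series; the break year is illustrative.
years = np.arange(1979, 2010)
rng = np.random.default_rng(3)
gdp = np.linspace(0.4, 34.0, len(years))                      # trillion yuan (placeholder)
policy = (years >= 2002).astype(float)                        # 1 after the assumed policy shift
generation = 200 + 95 * gdp + 300 * policy + rng.normal(0, 40, len(years))  # TWh (placeholder)

X = sm.add_constant(pd.DataFrame({"gdp": gdp, "policy": policy}, index=years))
fit = sm.OLS(pd.Series(generation, index=years), X).fit()
print(fit.params)   # the 'policy' coefficient estimates the level shift after the break
```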
Abstract:
PMR-15 polyimide is a polymer used as a matrix in composites. Composites with PMR-15 matrices belong to the class of advanced polymer matrix composites, which are widely used in the aerospace and electronics industries because of their high-temperature resistance. In addition to withstanding high temperatures, PMR-15 composites display good thermal-oxidative stability, mechanical properties, processability and low cost, which makes them suitable for manufacturing aircraft structures. PMR-15 crosslinks via the reverse Diels-Alder (RDA) reaction, which underlies its distinctive thermal stability and its use-temperature range of 280-300 degrees Celsius. Despite these desirable properties, the material has a number of limitations that restrict its large-scale application. PMR-15 composites are known to be very vulnerable to inter- and intra-laminar micro-cracking. The major factor hindering demand, however, is PMR-15's carcinogenic constituent, methylene dianiline (MDA), which is also a liver toxin. The need to provide a safe working environment during production adds to the cost of the material. In this study, molecular dynamics and energy minimization techniques are used to simulate a PMR-15 structure at a density of 1.324 g/cc, in an attempt to model the polyimide computationally, reduce the amount of experimental testing, and thereby limit both the health hazards and the costs involved in its production. Although this study does not validate any mechanical properties of the model, it could be used in the future for such validation and for further testing of properties such as aging, micro-cracking and creep.
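Energy minimization itself can be illustrated, far more simply than for a full PMR-15 structure, by relaxing a small Lennard-Jones cluster with a general-purpose optimizer. This is a toy sketch in reduced units, not the force field or software used in the study.

```python
import numpy as np
from scipy.optimize import minimize

def lj_energy(flat_coords, epsilon=1.0, sigma=1.0):
    """Total Lennard-Jones energy of a small atomic cluster (reduced units)."""
    pos = flat_coords.reshape(-1, 3)
    energy = 0.0
    for i in range(len(pos) - 1):
        for j in range(i + 1, len(pos)):
            r = np.linalg.norm(pos[i] - pos[j])
            energy += 4.0 * epsilon * ((sigma / r) ** 12 - (sigma / r) ** 6)
    return energy

# Start six atoms on a loose grid with a small random perturbation to break symmetry.
rng = np.random.default_rng(4)
grid = np.array([[i, j, 0.0] for i in range(2) for j in range(3)], dtype=float) * 1.2
x0 = (grid + rng.normal(0.0, 0.05, grid.shape)).ravel()

result = minimize(lj_energy, x0, method="L-BFGS-B")
print("Minimized energy (reduced units):", result.fun)
```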
Abstract:
Background: Access to health care can be described along four dimensions: geographic accessibility, availability, financial accessibility and acceptability. Geographic accessibility measures how physically accessible resources are for the population, while availability reflects what resources are available and in what amount. Combining these two types of measure into a single index provides a measure of geographic (or spatial) coverage, which is an important measure for assessing the degree of accessibility of a health care network. Results: This paper describes the latest version of AccessMod, an extension to the Geographical Information System ArcView 3.x, and provides an example of application of this tool. AccessMod 3 allows one to compute geographic coverage to health care using terrain information and population distribution. Four major types of analysis are available in AccessMod: (1) modeling the coverage of catchment areas linked to an existing health facility network based on travel time, to provide a measure of physical accessibility to health care; (2) modeling geographic coverage according to the availability of services; (3) projecting the coverage of a scaling-up of an existing network; (4) providing information for cost-effectiveness analysis when little information about the existing network is available. In addition to integrating travel time, population distribution and the population coverage capacity specific to each health facility in the network, AccessMod can incorporate the influence of landscape components (e.g. topography, river and road networks, vegetation) that affect travel time to and from facilities. Topographical constraints can be taken into account through an anisotropic analysis that considers the direction of movement. We provide an example of the application of AccessMod in the southern part of Malawi that shows the influences of the landscape constraints and of the modes of transportation on geographic coverage. Conclusion: By incorporating the demand (population) and the supply (capacities of health care centers), AccessMod provides a unifying tool to efficiently assess the geographic coverage of a network of health care facilities. This tool should be of particular interest to developing countries that have relatively good geographic information on population distribution, terrain, and health facility locations.
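Leaving aside AccessMod's anisotropic correction, the core catchment computation amounts to a cumulative least-cost travel-time surface over a friction raster from the facility cells. A minimal isotropic sketch with scikit-image, using an illustrative friction surface and facility location (not AccessMod's implementation):

```python
import numpy as np
from skimage.graph import MCP_Geometric

# Friction surface: minutes needed to cross one cell (values are illustrative;
# higher friction for off-road terrain, lower along a road).
friction = np.full((100, 100), 12.0)     # walking off-road
friction[50, :] = 1.0                    # a road crossing the raster

# Cumulative travel time (minutes) from a single health facility cell.
facility = [(50, 10)]
mcp = MCP_Geometric(friction)
travel_time, _ = mcp.find_costs(starts=facility)

# Catchment: cells reachable within a 60-minute threshold.
catchment = travel_time <= 60.0
print("Cells within 60 min:", int(catchment.sum()))
```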
Abstract:
In terms of atmospheric impact, the volcanic eruption of Mt. Pinatubo (1991) is the best characterized large eruption on record. We investigate here the model-derived stratospheric warming following the Pinatubo eruption, using SAGE II extinction data that include recent improvements in the processing algorithm. This method, termed SAGE_4λ, makes use of the four wavelengths (385, 452, 525 and 1024 nm) of the SAGE II data when available, and uses a data-filling procedure in the opacity-induced "gap" regions. Using SAGE_4λ, we derived aerosol size distributions that properly reproduce extinction coefficients also at much longer wavelengths. This provides a good basis for calculating the absorption of terrestrial infrared radiation and the resulting stratospheric heating. However, we also show that the use of this data set in a global chemistry–climate model (CCM) still leads to stronger aerosol-induced stratospheric heating than observed, with temperatures in places even higher than the already excessive values found by many models in recent general circulation model (GCM) and CCM intercomparisons. This suggests that the overestimation of the stratospheric warming after the Pinatubo eruption should not be ascribed to an insufficient observational database but rather to the use of outdated data sets, to deficiencies in the implementation of the forcing data, or to radiative or dynamical model artifacts. Conversely, the SAGE_4λ approach reduces the infrared absorption in the tropical tropopause region, resulting in significantly better agreement with the post-volcanic temperature record at these altitudes.
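The qualitative link between multi-wavelength extinction and particle size can be illustrated with the Ångström exponent, α = -ln(k1/k2)/ln(λ1/λ2); large post-volcanic particles push α toward small values. The sketch below uses two of the SAGE II wavelengths with illustrative extinction values and is not part of the SAGE_4λ algorithm.

```python
import numpy as np

def angstrom_exponent(k1, k2, lambda1, lambda2):
    """Angstrom exponent from extinction coefficients k at two wavelengths (same units)."""
    return -np.log(k1 / k2) / np.log(lambda1 / lambda2)

# Illustrative aerosol extinction coefficients (km^-1) at two SAGE II wavelengths.
k_525, k_1024 = 2.0e-3, 1.5e-3
alpha = angstrom_exponent(k_525, k_1024, 525.0, 1024.0)
print(f"Angstrom exponent: {alpha:.2f}")   # values near 0-1 point to coarse volcanic aerosol
```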
Abstract:
Both climate change and socio-economic development will significantly modify the supply and consumption of water in the future. Consequently, regional development has to face the aggravation of existing conflicts of interest or the emergence of new ones. In this context, transdisciplinary co-production of knowledge is considered an important means of coping with these challenges. Accordingly, the MontanAqua project aims at developing strategies for more sustainable water management in the Crans-Montana-Sierre study area (Switzerland) in a transdisciplinary way. It strives to co-produce system, target and transformation knowledge among researchers, policy makers, public administration and civil society organizations. The research process consisted of the following steps. First, the current water situation in the study region was investigated: How much water is available? How much water is being used? How are decisions on water distribution and use taken? Second, participatory scenario workshops were conducted in order to identify the stakeholders' visions of regional development. Third, the water situation in 2050 was simulated by modeling the evolution of water resources and water use and by reflecting on the institutional aspects. These steps laid the ground for jointly assessing the consequences of the stakeholders' visions of development in view of scientific data on the governance, availability and use of water in the region, and for developing the necessary transformation knowledge. Throughout these steps, the researchers collaborated with stakeholders in the RegiEau support group. The RegiEau group consists of key representatives of owners, managers, users, and pressure groups related to water and landscape: representatives of the communes (mostly the presidents), the canton (administration and parliament), water management associations, agriculture, viticulture, hydropower, tourism, and landscape protection. The aim of the talk is to explore the potentials and constraints of scientific modeling of water availability and use within the process of transdisciplinary co-production of strategies for more sustainable water governance.
Modeling the effects of land use and climate changes on hydrology in the Ursern Valley: final report
Volcanic forcing for climate modeling: a new microphysics-based data set covering years 1600–present
Abstract:
As the understanding and representation of the impacts of volcanic eruptions on climate have improved over the last decades, uncertainties in the stratospheric aerosol forcing from large eruptions are now linked not only to visible optical depth estimates on a global scale but also to details of the size, latitude and altitude distributions of the stratospheric aerosols. Based on our understanding of these uncertainties, we propose a new model-based approach to generating a volcanic forcing for general circulation model (GCM) and chemistry–climate model (CCM) simulations. This new volcanic forcing, covering the 1600–present period, uses an aerosol microphysical model to provide a realistic, physically consistent treatment of the stratospheric sulfate aerosols. Twenty-six eruptions were modeled individually using the latest available ice-core aerosol mass estimates and historical data on the latitude and date of the eruptions. The evolution of the aerosol spatial and size distributions after the sulfur dioxide discharge is thus characterized for each volcanic eruption. Large variations are seen in hemispheric partitioning and size distributions in relation to the location/date of the eruptions and the injected SO2 masses. Results for recent eruptions show reasonable agreement with observations. By providing these new estimates of the spatial distributions of shortwave and long-wave radiative perturbations, this volcanic forcing may help to better constrain climate model responses to volcanic eruptions over the 1600–present period. The final data set consists of 3-D values (with constant longitude) of spectrally resolved extinction coefficients, single scattering albedos and asymmetry factors calculated for different wavelength bands upon request. Surface area densities for heterogeneous chemistry are also provided.
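A quantity that ties the size distributions in such a data set to the radiative calculations is the effective radius of a lognormal mode, r_eff = r_m·exp(2.5·ln²σ_g) for median radius r_m and geometric standard deviation σ_g. The sketch below evaluates it for illustrative parameters, not values from the data set.

```python
import numpy as np

def effective_radius(r_median, sigma_g):
    """Effective radius of a lognormal number size distribution:
    r_eff = <r^3>/<r^2> = r_median * exp(2.5 * ln(sigma_g)**2)."""
    return r_median * np.exp(2.5 * np.log(sigma_g) ** 2)

# Illustrative post-eruption stratospheric aerosol parameters (not data-set values).
r_median = 0.2    # micrometres
sigma_g = 1.8
print(f"r_eff = {effective_radius(r_median, sigma_g):.2f} um")
```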