933 results for macroscopic traffic flow models
Abstract:
This study mainly aims to provide an inter-industry analysis through the subdivision of various industries in flow of funds (FOF) accounts. Combined with the Financial Statement Analysis data from 2004 and 2005, the Korean FOF accounts are reconstructed to form "from-whom-to-whom" basis FOF tables, which are composed of 115 institutional sectors and correspond to the tables and techniques of input–output (I–O) analysis. First, power of dispersion indices are obtained by applying the I–O analysis method. Most service and IT industries, construction, and light industries in manufacturing fall in the first quadrant group, whereas heavy and chemical industries are placed in the fourth quadrant, since their power indices in the asset-oriented system are comparatively smaller than those of other institutional sectors. Second, the investments and savings induced by the central bank are calculated for monetary policy evaluation. Industries are bifurcated into two groups to compare their features: the first group comprises industries whose power of dispersion in the asset-oriented system is greater than 1, whereas the second group comprises those whose index is less than 1. We found that the net induced investments (NII)–total liabilities ratios of the first group are about half those of the second group, since the former's induced savings are markedly greater than the latter's.
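The power-of-dispersion index follows standard input–output practice: the column sums of the Leontief inverse, normalized by their average. A minimal sketch with a hypothetical 3-sector coefficient matrix (not the study's 115-sector FOF data):

```python
import numpy as np

# Hypothetical 3-sector coefficient matrix A (illustrative only,
# not the study's reconstructed FOF data).
A = np.array([[0.1, 0.2, 0.0],
              [0.3, 0.1, 0.2],
              [0.0, 0.2, 0.1]])

n = A.shape[0]
B = np.linalg.inv(np.eye(n) - A)      # Leontief inverse (I - A)^-1
col_sums = B.sum(axis=0)              # backward linkage of each sector
power_of_dispersion = col_sums / (B.sum() / n)  # normalize by average
# Sectors with an index > 1 have above-average dispersion power,
# matching the "first group / second group" split described above.
```

By construction the indices average to 1, so "greater than 1" always means above the economy-wide mean.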
Abstract:
In this thesis we present a mathematical formulation of the interaction between microorganisms, such as bacteria or amoebae, and chemicals, often produced by the organisms themselves. This interaction is called chemotaxis and leads to cellular aggregation. We derive several models to describe chemotaxis. The first is the pioneering Keller-Segel parabolic-parabolic model, which we derive within two different frameworks: a macroscopic perspective and a microscopic perspective, in which we start from a stochastic differential equation and perform a mean-field approximation. This parabolic model may be generalized by introducing a degenerate diffusion coefficient, which depends on the density itself via a power law. We then derive a model for chemotaxis based on Cattaneo's law of heat propagation with finite speed, which is a hyperbolic model. The last model proposed here is a hydrodynamic model, which takes into account the inertia of the system through a friction force. In the limit of strong friction the model reduces to the parabolic model, whereas in the limit of weak friction we recover a hyperbolic model. Finally, we analyze the instability condition, i.e. the condition that leads to aggregation, and we describe the different kinds of aggregates we may obtain: the parabolic models lead to clusters or peaks, whereas the hyperbolic models lead to the formation of network patterns or filaments. Moreover, we discuss the analogy between bacterial colonies and self-gravitating systems by comparing the chemotactic collapse and the gravitational collapse (Jeans instability).
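For reference, the parabolic-parabolic Keller-Segel system mentioned above is commonly written (in generic notation, not necessarily the thesis's) as

```latex
\begin{aligned}
\partial_t \rho &= \nabla \cdot \left( D\, \nabla \rho - \chi\, \rho\, \nabla c \right),\\
\partial_t c &= D_c\, \Delta c + a\, \rho - b\, c,
\end{aligned}
```

where \rho is the cell density, c the chemoattractant concentration, \chi the chemotactic sensitivity, and a, b the production and degradation rates of the chemical. The degenerate-diffusion generalization mentioned in the abstract replaces the linear flux D\,\nabla\rho with a density-dependent term of power-law type, such as D\,\nabla\rho^{m}.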
Abstract:
The objective of this study is to identify optimal designs of converging-diverging supersonic and hypersonic nozzles that achieve maximum uniformity of thermodynamic and flow-field properties with respect to their average values at the nozzle exit. Since this is a multi-objective design optimization problem, the design variables used are parameters defining the shape of the nozzle, and this work shows how varying these parameters influences the non-uniformities of the nozzle exit flow. A Computational Fluid Dynamics (CFD) software package, ANSYS FLUENT, was used to simulate the compressible, viscous gas flow-field in forty nozzle shapes, including a heat transfer analysis. The results of two turbulence models, k-ε and k-ω, were computed and compared. With the analysis results obtained, the Response Surface Methodology (RSM) was applied to perform the multi-objective optimization. The optimization was carried out with the modeFRONTIER software package using Kriging and Radial Basis Function (RBF) response surfaces. The final Pareto-optimal nozzle shapes were then analyzed with ANSYS FLUENT to confirm the accuracy of the optimization process.
Abstract:
In 2010, the American Association of State Highway and Transportation Officials (AASHTO) released a safety analysis software system known as SafetyAnalyst. SafetyAnalyst implements the empirical Bayes (EB) method, which requires the use of Safety Performance Functions (SPFs). The system is equipped with a set of national default SPFs, and the software calibrates the default SPFs to represent the agency’s safety performance. However, it is recommended that agencies generate agency-specific SPFs whenever possible. Many investigators support the view that the agency-specific SPFs represent the agency data better than the national default SPFs calibrated to agency data. Furthermore, it is believed that the crash trends in Florida are different from the states whose data were used to develop the national default SPFs. In this dissertation, Florida-specific SPFs were developed using the 2008 Roadway Characteristics Inventory (RCI) data and crash and traffic data from 2007-2010 for both total and fatal and injury (FI) crashes. The data were randomly divided into two sets, one for calibration (70% of the data) and another for validation (30% of the data). The negative binomial (NB) model was used to develop the Florida-specific SPFs for each of the subtypes of roadway segments, intersections and ramps, using the calibration data. Statistical goodness-of-fit tests were performed on the calibrated models, which were then validated using the validation data set. The results were compared in order to assess the transferability of the Florida-specific SPF models. The default SafetyAnalyst SPFs were calibrated to Florida data by adjusting the national default SPFs with local calibration factors. The performance of the Florida-specific SPFs and SafetyAnalyst default SPFs calibrated to Florida data were then compared using a number of methods, including visual plots and statistical goodness-of-fit tests. 
The plots of SPFs against the observed crash data were used to compare the prediction performance of the two models. Three goodness-of-fit tests, represented by the mean absolute deviance (MAD), the mean square prediction error (MSPE), and Freeman-Tukey R2 (R2FT), were also used for comparison in order to identify the better-fitting model. The results showed that Florida-specific SPFs yielded better prediction performance than the national default SPFs calibrated to Florida data. The performance of Florida-specific SPFs was further compared with that of the full SPFs, which include both traffic and geometric variables, in two major applications of SPFs, i.e., crash prediction and identification of high crash locations. The results showed that both SPF models yielded very similar performance in both applications. These empirical results support the use of the flow-only SPF models adopted in SafetyAnalyst, which require much less effort to develop compared to full SPFs.
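The three goodness-of-fit measures used for the comparison can be sketched in a few lines. The arrays below are hypothetical crash counts, not the Florida data; the Freeman-Tukey R2 is computed with the usual variance-stabilizing transforms f_i = sqrt(y_i) + sqrt(y_i + 1) and fhat_i = sqrt(4*mu_i + 1):

```python
import numpy as np

# Hypothetical observed and predicted crash counts (illustrative only).
observed  = np.array([3.0, 0.0, 5.0, 2.0, 1.0])
predicted = np.array([2.5, 0.5, 4.0, 2.5, 1.5])

# Mean absolute deviance (MAD) and mean square prediction error (MSPE).
mad  = np.mean(np.abs(observed - predicted))
mspe = np.mean((observed - predicted) ** 2)

# Freeman-Tukey R^2: compare variance-stabilized observations against
# their expected transform under the model prediction.
f     = np.sqrt(observed) + np.sqrt(observed + 1.0)
fhat  = np.sqrt(4.0 * predicted + 1.0)
r2_ft = 1.0 - np.sum((f - fhat) ** 2) / np.sum((f - f.mean()) ** 2)
# Lower MAD/MSPE and higher R^2_FT indicate the better-fitting SPF.
```

In a model comparison such as the one above, these statistics are computed on the validation set for each candidate SPF and compared side by side.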
Abstract:
Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the utilization of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning near capacity, so capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support the effective utilization of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated; these components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions.
With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in different stages of the modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as a proper definition of performance measures, results in a calibrated and stable model which closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
Abstract:
Power flow calculations are one of the most important tools for power system planning and operation. The need to account for uncertainties when performing power flow studies led, among other methods, to the development of the fuzzy power flow (FPF). This kind of model is especially interesting when information is scarce, which is a common situation in liberalized power systems (where the generation and commercialization of electricity are market activities). In this framework, the symmetric/constrained fuzzy power flow (SFPF/CFPF) was proposed in order to avoid some of the problems of the original FPF model. The SFPF/CFPF models are suitable for quantifying the adequacy of a transmission network to satisfy "reasonable demands for the transmission of electricity" as defined, for instance, in the European Directive 2009/72/EC. This work illustrates how the SFPF/CFPF may be used to evaluate the impact on the adequacy of a transmission system of specific investments in new network elements.
Abstract:
In recent years, several models have been presented that attempt to obtain lithosphere and Moho thickness in the Iberian Peninsula, using data related to geoid elevation and topography, gravity, seismicity and thermal analysis. The results obtained show a decrease in the thickness of the crust and the lithosphere in the SW part of the Iberian Peninsula; density anomalies in the crust are also reported. Data obtained in the region were collected, and deviations from the average values commonly used were detected. In this work, models were built taking into account the specific characteristics of the region. Heat flow, thermal conductivity, heat production, topography, gravity, seismic and geological data available for the region were used to adjust the model. The results show that this region is different from other parts of the Iberian Peninsula and that special attention must be given to it. This work shows the importance of trying to know and understand the thermal structure of the region.
Abstract:
In recent years, several models have been presented that attempt to obtain lithosphere and Moho thickness in the Iberian Peninsula, using data related to geoid elevation and topography, gravity, seismicity and thermal analysis. The results obtained show a decrease in the thickness of the crust and the lithosphere in the SW part of the Iberian Peninsula; density anomalies in the crust are also reported. The work presented here concerns the south of the Ossa Morena Zone, the South Portuguese Zone and the Algarve, in the south of Portugal. Data obtained in the region were collected, and deviations from the average values commonly used were detected. Models were built taking into account the specific characteristics of the region. Heat flow, thermal conductivity, heat production, topography, gravity, seismic and geological data available for the region are used to adapt the models. Special attention is given to the spatial variation of heat flow values and to the Moho depth in the region. The results show that this region is different from other parts of the Iberian Peninsula and that special attention must be given to it. The different values obtained using seismic, gravity, and geoid height data, and the results obtained with models using thermal data, show the importance of trying to know and understand the thermal structure of these regions. Problems related to the use of average values are also discussed.
Abstract:
Ice ages are known to be the most dominant palaeoclimatic feature occurring on Earth, producing severe climatic oscillations and consequently shaping the distribution and population structure of several species. Lampreys constitute excellent models for studying the colonization of freshwater systems, as they commonly appear in pairs of closely related species with anadromous versus freshwater-resident adults, thus having the ability to colonize new habitats through the anadromous species and establish freshwater-resident derivatives. We used 10 microsatellite loci to investigate the spatial structure, patterns of gene flow and migration routes of Lampetra populations in Europe. We sampled 11 populations including the migratory L. fluviatilis and four resident species: L. planeri, L. alavariensis, L. auremensis and L. lusitanica, the last three endemic to the Iberian Peninsula. In this southern glacial refugium almost all sampled populations represent a distinct genetic cluster, showing high levels of allopatric differentiation and reflecting long periods of isolation. As a result of their more recent common ancestor, populations from northern Europe are less divergent from one another, are represented by fewer genetic clusters, and show evidence of strong recent gene flow among populations. These previously glaciated areas of northern Europe may have been colonized by lampreys expanding out of the Iberian refugia. The pair L. fluviatilis/L. planeri is apparently at different stages of speciation in different locations, showing evidence of high reproductive isolation in the southern refugium and low differentiation in the north.
Abstract:
A possible future scenario for the water injection (WI) application has been explored as an advanced strategy for modern GDI engines. The aim is to verify whether the PWI (Port Water Injection) and DWI (Direct Water Injection) architectures can replace current fuel enrichment strategies for limiting turbine inlet temperature (TiT) and engine knock propensity. In this way, it might be possible to extend the stoichiometric mixture condition over the entire engine map, meeting possible future restrictions on the use of AES (Auxiliary Emission Strategies) and future emission limits. The research was first addressed through a comprehensive assessment of the state of the art of the technology and of the main effects of the chemical-physical properties of water. Then, detailed chemical kinetics simulations were performed in order to compute the effects of WI on combustion development and auto-ignition; the latter represents an important methodological step for accurate numerical combustion simulations. Water injection was then analysed in detail for a PWI system, through an experimental campaign for macroscopic and microscopic injector characterization inside a test chamber. The collected data were used to perform a numerical validation of the spray models, obtaining excellent agreement in terms of particle size and droplet velocity distributions. Finally, a wide range of three-dimensional CFD simulations of a virtual high-bmep engine were carried out and compared, also exploring different engine designs and water/fuel injection strategies under non-reacting and reacting flow conditions. These showed that, thanks to the introduction of water, for both PWI and DWI systems it could be possible to obtain an increase in the target performance and an optimization of the bsfc (Brake Specific Fuel Consumption), lowering the engine knock risk at the same time, whereas the TiT target was barely achieved, and only for one DWI configuration.
Abstract:
In this study, lubrication theory is used to model flow in geological fractures and analyse the compound effect of medium heterogeneity and complex fluid rheology. Such studies are warranted because Newtonian rheology is adopted in most numerical models for its ease of use, despite non-Newtonian fluids being ubiquitous in subsurface applications. Past studies on Newtonian and non-Newtonian flow in single rock fractures are summarized in Chapter 1. Chapter 2 presents analytical and semi-analytical conceptual models for the flow of a shear-thinning fluid in rock fractures having a simplified geometry, providing a first insight into their permeability. In Chapter 3, a lubrication-based 2-D numerical model is implemented to solve the flow of an Ellis fluid in rough fractures; the finite-volume model developed is more computationally efficient than full 3-D simulations, and introduces an acceptable approximation as long as the flow is laminar and the fracture walls are relatively smooth. The compound effect of the shear-thinning nature of the fluid and fracture heterogeneity promotes flow localization, which in turn affects the performance of industrial activities and remediation techniques. In Chapter 4, a Monte Carlo framework is adopted to produce multiple realizations of synthetic fractures and analyze their ensemble flow statistics for a variety of real non-Newtonian fluids, with the Newtonian case used as a benchmark. In Chapters 5 and 6, a conceptual model of the hydro-mechanical aspects of backflow occurring in the last phase of hydraulic fracturing is proposed and experimentally validated, quantifying the effects of the relaxation induced by the flow.
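For context, the Ellis constitutive law referred to above is commonly written (generic notation) as

```latex
\frac{\eta_0}{\eta(\tau)} = 1 + \left( \frac{\tau}{\tau_{1/2}} \right)^{\alpha - 1}
```

where \eta_0 is the zero-shear-rate viscosity, \tau the shear stress, \tau_{1/2} the stress at which the viscosity drops to \eta_0/2, and \alpha > 1 the shear-thinning index; the law recovers Newtonian behaviour for \tau \ll \tau_{1/2} and power-law behaviour for \tau \gg \tau_{1/2}.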
Abstract:
The growing interest in constellations of small, inexpensive satellites is bringing space junk and traffic management to the attention of the space community. At the same time, the continuous quest for more efficient propulsion systems puts the spotlight on electric (low-thrust) propulsion as an appealing solution for collision avoidance. Starting with an overview of the current techniques for conjunction assessment and avoidance, we highlight the problems that can arise when low-thrust propulsion is used. The need for an accurate propagation model emerges from the simulations conducted. Thus, aiming at propagation models with a low computational burden, we study the models available in the literature and propose an analytical alternative to improve propagation accuracy. The model is then tested in the particular case of a tangential maneuver. Results show that the proposed solution significantly improves on state-of-the-art methods and is a good candidate for use in collision avoidance operations, for instance to propagate satellite uncertainty or to optimize avoidance maneuvers when a conjunction occurs within a few (3-4) orbits of the measurement time.
Abstract:
Bioelectronic interfaces have advanced significantly in recent years, offering potential treatments for vision impairments, spinal cord injuries, and neurodegenerative diseases. However, the classical neurocentric vision drives technological development toward neurons, while emerging evidence highlights the critical role of glial cells in the nervous system. Among them, astrocytes significantly influence neuronal networks throughout life and are implicated in several neuropathological states. Although incapable of firing action potentials, astrocytes communicate through diverse calcium (Ca2+) signalling pathways, crucial for cognitive functions and the regulation of brain blood flow. Current bioelectronic devices are primarily designed to interface with neurons and are unsuitable for studying astrocytes. Graphene, with its unique electrical, mechanical and biocompatibility properties, has emerged as a promising neural interface material; however, its use as an electrode interface to modulate astrocyte functionality remains unexplored. The aim of this PhD work was to exploit graphene oxide (GO)- and reduced GO (rGO)-coated electrodes to control Ca2+ signalling in astrocytes by electrical stimulation. We discovered that distinct Ca2+ dynamics can be evoked in astrocytes, in vitro and in brain slices, depending on the conductive/insulating properties of the rGO/GO electrodes. Stimulation by rGO electrodes induces an intracellular Ca2+ response with sharp peaks of oscillations ("P-type"), due exclusively to Ca2+ release from intracellular stores. Conversely, astrocytes stimulated by GO electrodes show a slower and sustained Ca2+ response ("S-type"), largely mediated by external Ca2+ influx through specific ion channels. Astrocytes respond faster than neurons and activate distinct G-Protein Coupled Receptor intracellular signalling pathways.
We propose a resistive/insulating model, hypothesizing that the different conductivity of the substrate influences the electric field at the cell/electrolyte or cell/material interfaces, favouring, respectively, the Ca2+ release from intracellular stores or the extracellular Ca2+ influx. This research provides a simple tool to selectively control distinct Ca2+ signals in brain astrocytes in neuroscience and bioelectronic medicine.
Abstract:
In this thesis, the viability of Dynamic Mode Decomposition (DMD) as a technique to analyze and model complex dynamic real-world systems is presented. This method derives, directly from data, computationally efficient reduced-order models (ROMs) which can replace high-fidelity physics-based models that are too onerous or unavailable. Optimizations and extensions to the standard implementation of the methodology are proposed, investigating diverse case studies related to the decoding of complex flow phenomena. The flexibility of this data-driven technique allows its application to high-fidelity fluid dynamics simulations, as well as to time series of observations of real systems. The resulting ROMs are tested against two tasks: (i) reduction of the storage requirements of high-fidelity simulations or observations; (ii) interpolation and extrapolation of missing data. The capabilities of DMD can also be exploited to alleviate the cost of onerous studies that require many simulations, such as uncertainty quantification analysis, especially when dealing with complex high-dimensional systems. In this context, a novel approach is proposed to address parameter variability when modeling systems with space- and time-variant responses. Specifically, DMD is merged with another model-reduction technique, the Polynomial Chaos Expansion, for uncertainty quantification purposes. Useful guidelines for DMD deployment result from the study, together with a demonstration of its potential to ease diagnosis and scenario analysis when complex flow processes are involved.
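The standard (exact) DMD algorithm underlying such studies fits a best-fit linear operator between successive snapshots via a truncated SVD. A minimal sketch on synthetic data from a toy linear system (illustrative only, not the thesis's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Generate snapshots of a toy linear system x_{k+1} = A_true @ x_k.
A_true = np.array([[0.9, -0.1],
                   [0.0,  0.8]])
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for k in range(49):
    X[:, k + 1] = A_true @ X[:, k]

# Exact DMD: pair the snapshot matrices, reduce with a truncated SVD,
# and diagonalize the projected linear operator.
X1, X2 = X[:, :-1], X[:, 1:]
U, s, Vh = np.linalg.svd(X1, full_matrices=False)
r = 2                                   # truncation rank
U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
Atilde = U.conj().T @ X2 @ Vh.conj().T @ np.diag(1.0 / s)
eigvals, W = np.linalg.eig(Atilde)      # DMD eigenvalues (system dynamics)
modes = X2 @ Vh.conj().T @ np.diag(1.0 / s) @ W  # DMD modes
```

Because the toy data are exactly linear, the DMD eigenvalues recover the eigenvalues of A_true (0.9 and 0.8); on real flow data the truncation rank r trades storage reduction against reconstruction accuracy.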
Abstract:
Graphite is a mineral commodity used as the anode material in lithium-ion batteries (LIBs), and its global demand is bound to increase significantly in the future due to the forecasted global market demand for electric vehicles. Currently, the graphite used to produce LIBs is a mix of synthetic and natural graphite: the former is produced by the crystallization of petroleum by-products, while the latter comes from mining, which raises concerns related to pollution, social acceptance, and health. This MSc work has the objective of determining the compositional and textural characteristics of natural, synthetic, and recycled graphite using SEM-EDS, XRF, XRD, and TEM analytical techniques, and of coupling these data with dynamic Material Flow Analysis (MFA) models, which aim to predict the future global use of graphite in order to test the hypothesis that natural graphite will no longer be used in the LIB market globally. The mineral analyses reveal that the synthetic graphite samples contain fewer impurities than the natural graphite, which has a rolled internal structure similar to that of the recycled graphite. Recycled graphite, however, shows fractures and discontinuities of the graphene layers caused by the recycling process, although its rolled internal structure can help the migration of Li-ions through the fractures. Three dynamic MFA studies were conducted to test distinct scenarios that include graphite recycling over the period 2022-2050, and it emerges that, irrespective of the scenario considered, there will be an increase in synthetic graphite demand, caused by the limited stock of battery scrap available. Hence, I conclude that both natural and recycled graphite will continue to be used in the LIB market, at least until 2050, when the stock of recycled graphite will be sufficient to supersede natural graphite. In addition, some improvements in the dismantling and recycling processes are necessary to improve the quality of recycled graphite.
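The mechanism behind the scenario result above, that recycled supply lags demand because scrap only returns after the battery lifetime, can be sketched with a toy stock-and-flow loop. All numbers and parameters below are hypothetical, not the thesis's calibrated MFA scenarios:

```python
# Toy dynamic material-flow sketch: each year's anode demand is met
# first from recycled scrap returning after a fixed battery lifetime,
# with the remainder supplied by primary (synthetic + natural) graphite.
years = list(range(2022, 2051))
demand = {y: 100.0 * 1.1 ** (y - 2022) for y in years}  # kt/yr, assumed growth
lifetime = 8     # assumed battery lifetime in years
recovery = 0.5   # assumed recycling recovery rate

primary, recycled = {}, {}
for y in years:
    # Scrap available now comes from batteries placed `lifetime` years ago.
    recycled[y] = recovery * demand.get(y - lifetime, 0.0)
    primary[y] = demand[y] - recycled[y]
```

With growing demand, the scrap returning each year reflects the smaller market of `lifetime` years earlier, so primary demand keeps rising; this is the qualitative effect the abstract attributes to "the limited stocks of battery scrap available".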