957 results for Dynamic Emission Models


Relevance:

30.00%

Publisher:

Abstract:

Stochastic methods based on time-series modeling combined with geostatistics can be useful tools to describe the variability of water-table levels in time and space and to account for uncertainty. Water-level monitoring networks can give information about the dynamics of the aquifer in both dimensions. Time-series modeling is an elegant way to treat monitoring data without the complexity of physical mechanistic models. Time-series model predictions can be interpolated spatially, with the spatial differences in water-table dynamics determined by the spatial variation in the system properties and the temporal variation driven by the dynamics of the inputs into the system. An integration of stochastic methods is presented, based on time-series modeling and geostatistics, as a framework to predict water levels for decision making in groundwater management and land-use planning. The methodology is applied in a case study of a Guarani Aquifer System (GAS) outcrop area located in the southeastern part of Brazil. Communication of results in a clear and understandable form, via simulated scenarios, is discussed as an alternative when translating scientific knowledge into applications of stochastic hydrogeology in large aquifers with limited monitoring network coverage, such as the GAS.
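As a rough illustration of the two ingredients combined here, the sketch below (hypothetical data and function names; inverse-distance weighting stands in for the geostatistical interpolation) fits a simple AR(1) recursion to a single well's record and spreads point predictions in space:

```python
import numpy as np

def fit_ar1(levels):
    """Least-squares fit of h[t+1] = a*h[t] + b to one well's record."""
    x, y = levels[:-1], levels[1:]
    A = np.vstack([x, np.ones_like(x)]).T
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    return a, b

def forecast(levels, steps, a, b):
    """Propagate the fitted recursion forward from the last observation."""
    h, out = levels[-1], []
    for _ in range(steps):
        h = a * h + b
        out.append(h)
    return np.array(out)

def idw(xy_obs, values, xy_target, power=2.0):
    """Inverse-distance weighting: a simple stand-in for kriging."""
    d = np.linalg.norm(xy_obs - xy_target, axis=1)
    if np.any(d < 1e-12):                 # target coincides with a well
        return float(values[np.argmin(d)])
    w = 1.0 / d**power
    return float(np.sum(w * values) / np.sum(w))
```

A full implementation would replace `idw` with variogram fitting and kriging, which also yields an interpolation variance for the uncertainty analysis described above.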

Relevance:

30.00%

Publisher:

Abstract:

The aim of this thesis is to test the ability of several correlative models, namely the Alpert correlations (published in 1972 and re-examined in 2011), the investigation of Heskestad and Delichatsios in 1978, and the correlations produced by Cooper in 1982, to define both the dynamic and thermal characteristics of a fire-induced ceiling-jet flow. The flow occurs when the fire plume impinges on the ceiling and develops in the radial direction from the fire axis. Both temperature and velocity predictions are decisive for positioning sprinklers, fire alarms, and heat and smoke detectors, for estimating their activation times, and for predicting back-layering. These correlative models are compared with the numerical simulation software CFAST in terms of temperature and velocity near the ceiling. The results are also compared with a Computational Fluid Dynamics (CFD) analysis using ANSYS FLUENT.
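The 1972 Alpert correlations mentioned above have a compact closed form. The sketch below writes them as commonly tabulated (heat release rate Q in kW, ceiling height H and radial distance r in m); the break points r/H = 0.18 and r/H = 0.15 separate the plume impingement zone from the ceiling-jet zone:

```python
def alpert_temperature(Q, H, r, T_amb=20.0):
    """Maximum ceiling-jet temperature (Alpert, 1972).
    Q: total heat release rate [kW], H: ceiling height above the fire [m],
    r: radial distance from the plume axis [m]. Returns T_max [degC]."""
    if r / H <= 0.18:                       # plume impingement zone
        dT = 16.9 * Q**(2 / 3) / H**(5 / 3)
    else:                                   # ceiling-jet zone
        dT = 5.38 * (Q / r)**(2 / 3) / H
    return T_amb + dT

def alpert_velocity(Q, H, r):
    """Maximum ceiling-jet velocity [m/s] (Alpert, 1972)."""
    if r / H <= 0.15:
        return 0.96 * (Q / H)**(1 / 3)
    return 0.195 * Q**(1 / 3) * H**(1 / 2) / r**(5 / 6)
```

Such closed forms are what make the correlative approach attractive for quick detector-activation estimates before resorting to zone or CFD models.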

Relevance:

30.00%

Publisher:

Abstract:

Investigation of large, destructive earthquakes is challenged by their infrequent occurrence and the remote nature of geophysical observations. This thesis sheds light on the source processes of large earthquakes from two perspectives: robust and quantitative observational constraints through Bayesian inference for earthquake source models, and physical insights on the interconnections of seismic and aseismic fault behavior from elastodynamic modeling of earthquake ruptures and aseismic processes.

To constrain the shallow deformation during megathrust events, we develop semi-analytical and numerical Bayesian approaches to explore the maximum resolution of the tsunami data, with a focus on incorporating the uncertainty in the forward modeling. These methodologies are then applied to invert for the coseismic seafloor displacement field in the 2011 Mw 9.0 Tohoku-Oki earthquake using near-field tsunami waveforms and for the coseismic fault slip models in the 2010 Mw 8.8 Maule earthquake with complementary tsunami and geodetic observations. From posterior estimates of model parameters and their uncertainties, we are able to quantitatively constrain the near-trench profiles of seafloor displacement and fault slip. Similar characteristic patterns emerge during both events, featuring the peak of uplift near the edge of the accretionary wedge with a decay toward the trench axis, with implications for fault failure and tsunamigenic mechanisms of megathrust earthquakes.
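The Bayesian machinery described here can be illustrated with a toy linear inverse problem. The sketch below (all numbers hypothetical; the matrix G stands in for tsunami Green's functions mapping slip to observations) draws posterior samples with a random-walk Metropolis algorithm and summarizes them by a posterior mean, the simplest analogue of the posterior estimates discussed above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear forward model d = G @ m + noise.
G = np.array([[1.0, 0.5], [0.3, 1.2], [0.8, 0.8]])
m_true = np.array([2.0, 1.0])
sigma = 0.05
d_obs = G @ m_true + rng.normal(0, sigma, size=3)

def log_post(m):
    """Gaussian log-likelihood with a flat prior."""
    r = d_obs - G @ m
    return -0.5 * np.sum(r**2) / sigma**2

def metropolis(n=20000, step=0.05):
    """Random-walk Metropolis sampler; returns post-burn-in samples."""
    m, lp, out = np.zeros(2), log_post(np.zeros(2)), []
    for _ in range(n):
        prop = m + rng.normal(0, step, size=2)
        lp_new = log_post(prop)
        if np.log(rng.uniform()) < lp_new - lp:   # accept/reject
            m, lp = prop, lp_new
        out.append(m.copy())
    return np.array(out[n // 2:])                 # discard burn-in

samples = metropolis()
m_mean = samples.mean(axis=0)
```

The spread of `samples` around `m_mean` plays the role of the parameter uncertainties used above to bound near-trench displacement profiles.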

To understand the behavior of earthquakes at the base of the seismogenic zone on continental strike-slip faults, we simulate the interactions of dynamic earthquake rupture, aseismic slip, and heterogeneity in rate-and-state fault models coupled with shear heating. Our study explains the long-standing enigma of seismic quiescence on major fault segments known to have hosted large earthquakes by deeper penetration of large earthquakes below the seismogenic zone, where mature faults have well-localized creeping extensions. This conclusion is supported by the simulated relationship between seismicity and large earthquakes as well as by observations from recent large events. We also use the modeling to connect the geodetic observables of fault locking with the behavior of seismicity in numerical models, investigating how a combination of interseismic geodetic and seismological estimates could constrain the locked-creeping transition of faults and potentially their co- and post-seismic behavior.

Relevance:

30.00%

Publisher:

Abstract:

Firms in China within the same industry but with different ownership and size have very different production functions and can face very different emission regulations and financial conditions. This fact has largely been ignored in most of the existing literature on climate change. Using a newly augmented Chinese input–output table in which information about firm size and ownership is explicitly reported, this paper employs a dynamic computable general equilibrium (CGE) model to analyze the impact of alternative climate policy designs, with respect to regulation and financial conditions, on heterogeneous firms. The simulation results indicate that with a business-as-usual regulatory structure, the effectiveness and economic efficiency of climate policies are significantly undermined. Expanding regulation to cover additional firms has a first-order effect of improving efficiency. However, over-investment in energy technologies in certain firms may decrease the overall efficiency of investments and dampen long-term economic growth by competing with other fixed-capital investments for financial resources. Therefore, a market-oriented arrangement for sharing the emission reduction burden and a mechanism for allocating green investment are crucial for China to achieve a more ambitious emission target in the long run.

Relevance:

30.00%

Publisher:

Abstract:

Estimating unmeasurable states is an important component of onboard diagnostics (OBD) and control strategy development in diesel exhaust aftertreatment systems. This research focuses on the development of an Extended Kalman Filter (EKF) based state estimator for two of the main components in a diesel engine aftertreatment system: the Diesel Oxidation Catalyst (DOC) and the Selective Catalytic Reduction (SCR) catalyst. One of the key areas of interest is the performance of these estimators when the catalyzed particulate filter (CPF) is being actively regenerated. In this study, model reduction techniques were developed and used to derive reduced-order models from the 1D models used to simulate the DOC and SCR. As a result of order reduction, the number of states in the estimator is reduced from 12 to 1 per element for the DOC and from 12 to 2 per element for the SCR. The reduced-order models were simulated on the experimental data and compared to the high-fidelity model and the experimental data. The results show that the effect of eliminating the heat transfer and mass transfer coefficients is not significant on the performance of the reduced-order models, as indicated by an insignificant change in the kinetic parameters between the reduced-order and 1D models when simulating the experimental data. An EKF-based estimator for the internal states of the DOC and SCR was developed. The DOC and SCR estimators were simulated on the experimental data to show that they provide improved state estimates compared to a reduced-order model alone. The results showed that using the temperature measurement at the DOC outlet improved the estimates of the CO, NO, NO2, and HC concentrations from the DOC. The SCR estimator was used to evaluate the effect of NH3 and NOX sensors on state estimation quality. Three sensor combinations were evaluated: a NOX sensor only, an NH3 sensor only, and both NOX and NH3 sensors. The NOX-only configuration had the worst performance, the NH3-only configuration was intermediate, and the combination of NOX and NH3 sensors provided the best performance.
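The predict/update structure underlying such an estimator is generic. A minimal discrete EKF step, with all models passed in as functions (nothing here is specific to the DOC or SCR chemistry), might look like:

```python
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R):
    """One predict/update cycle of a discrete Extended Kalman Filter.
    x: state estimate, P: covariance, u: input, z: measurement,
    f/h: nonlinear dynamics/measurement functions,
    F/H: their Jacobians evaluated at the current estimate,
    Q/R: process and measurement noise covariances."""
    # Predict
    x_pred = f(x, u)
    F_k = F(x, u)
    P_pred = F_k @ P @ F_k.T + Q
    # Update
    H_k = H(x_pred)
    S = H_k @ P_pred @ H_k.T + R
    K = P_pred @ H_k.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
    return x_new, P_new
```

For a catalyst application, `f` would be the reduced-order element model and `h` would map the internal states to the outlet sensor readings.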

Relevance:

30.00%

Publisher:

Abstract:

Traditional decision-making research has often focused on one's ability to choose from a set of predetermined options, ignoring the process by which decision makers generate courses of action (i.e., options) in situ (Klein, 1993). In complex and dynamic domains, this option-generation process is particularly critical to understanding how successful decisions are made (Zsambok & Klein, 1997). When generating response options for oneself to pursue (i.e., during the intervention phase of decision making), previous research has supported quick and intuitive heuristics, such as the Take-The-First heuristic (TTF; Johnson & Raab, 2003). When generating predictive options for others in the environment (i.e., during the assessment phase of decision making), previous research has supported the situational-model-building process described by Long-Term Working Memory theory (LTWM; see Ward, Ericsson, & Williams, 2013). In the first three experiments, the claims of TTF and LTWM are tested during assessment- and intervention-phase tasks in soccer. To test what other environmental constraints may dictate the use of these cognitive mechanisms, the claims of these models are also tested in the presence and absence of time pressure. In addition to understanding the option-generation process, it is important that researchers in complex and dynamic domains also develop tools that can be used by real-world professionals. For this reason, three more experiments were conducted to evaluate the effectiveness of a new online assessment of perceptual-cognitive skill in soccer. This test differentiated between skill groups, and it predicted both performance on a previously established test and option-generation behavior. The test also outperformed domain-general cognitive tests, but not a domain-specific knowledge test, when predicting skill group membership. Implications for theory and training, and future directions for the development of applied tools, are discussed.

Relevance:

30.00%

Publisher:

Abstract:

This dissertation is a collection of three essays in economics on different aspects of carbon emission trading markets. The first essay analyzes the dynamic optimal emission control strategies of two nations. With the potential to become the largest buyer under the Kyoto Protocol, the US is modeled as a monopsony, whereas, holding a large number of tradable permits, Russia is modeled as a monopoly. Optimal costs of emission control programs are estimated for both countries under four different market scenarios: non-cooperative no trade, US monopsony, Russia monopoly, and cooperative trading. The US monopsony scenario is found to be the most Pareto cost-efficient. The Pareto-efficient outcome, however, would require the US to make side payments to Russia to even out the differences in the cost savings from cooperative behavior. The second essay analyzes the price dynamics of the Chicago Climate Exchange (CCX), a voluntary emissions trading market. By examining the volatility in market returns using AR-GARCH and Markov switching models, the study associates the market price fluctuations with two different political regimes of the US government. Further, the study identifies high volatility in the returns a few months before the market collapse. Three possible regulatory and market-based forces are identified as probable causes of the market volatility and its ultimate collapse. Organizers of other voluntary markets in the US and worldwide may watch closely for these regime-switching forces in order to avoid emission market crashes. The third essay compares excess skewness and kurtosis in carbon prices between the CCX and the EU ETS (European Union Emission Trading Scheme) Phase I and II markets by examining the tail behavior when market expectations exceed a threshold level. Dynamic extreme value theory is used to find the mean price exceedance over the threshold levels and to estimate the risk of loss. The calculated risk measures suggest that the CCX and EU ETS Phase I were extremely immature markets for a risk investor, whereas EU ETS Phase II is a more stable market that could develop into a mature carbon market in future years.
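The peaks-over-threshold idea behind the third essay can be sketched with two small helpers: the empirical mean excess over a threshold, and empirical Value-at-Risk and Expected Shortfall as simple stand-ins for the GPD-based risk measures (illustrative only; a full dynamic EVT analysis would fit a generalized Pareto tail to the exceedances):

```python
import numpy as np

def mean_excess(returns, threshold):
    """Average amount by which losses exceed the threshold (POT idea)."""
    losses = -returns                     # treat losses as positive numbers
    exc = losses[losses > threshold] - threshold
    return float(exc.mean()) if exc.size else 0.0

def var_es(returns, alpha=0.95):
    """Empirical Value-at-Risk and Expected Shortfall at level alpha."""
    losses = np.sort(-returns)
    var = np.quantile(losses, alpha)
    es = losses[losses >= var].mean()
    return float(var), float(es)
```

Plotting `mean_excess` against a range of thresholds is the standard diagnostic for choosing the threshold above which the Pareto tail is fitted.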

Relevance:

30.00%

Publisher:

Abstract:

Managed lane strategies are innovative road operation schemes for addressing congestion problems. These strategies operate one or more lanes adjacent to a freeway that provide congestion-free trips to eligible users, such as transit vehicles or toll-payers. To ensure the successful implementation of managed lanes, the demand on these lanes needs to be accurately estimated. Among the different approaches for predicting this demand, the four-step demand forecasting process is the most common, with managed lane demand usually estimated at the assignment step. Therefore, the key to reliably estimating the demand is the use of effective assignment modeling processes. Managed lanes are particularly effective when the road is functioning at near-capacity, so capturing variations in demand and in network attributes and performance is crucial for their modeling, monitoring, and operation. As a result, traditional modeling approaches, such as those used in the static traffic assignment of demand forecasting models, fail to correctly predict the managed lane demand and the associated system performance. The present study demonstrates the power of the more advanced modeling approach of dynamic traffic assignment (DTA), as well as the shortcomings of conventional approaches, when used to model managed lanes in congested environments. In addition, the study develops processes to support an effective use of DTA to model managed lane operations. Static and dynamic traffic assignments consist of demand, network, and route choice model components that need to be calibrated; these components interact with each other, and an iterative method for calibrating them is needed. In this study, an effective standalone framework that combines static demand estimation and dynamic traffic assignment has been developed to replicate real-world traffic conditions. With advances in traffic surveillance technologies, collecting, archiving, and analyzing traffic data is becoming more accessible and affordable. The present study shows how data from multiple sources can be integrated, validated, and best used in the different stages of modeling and calibration of managed lanes. Extensive and careful processing of demand, traffic, and toll data, as well as proper definition of performance measures, results in a calibrated and stable model that closely replicates real-world congestion patterns and can reasonably respond to perturbations in network and demand properties.
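The gap between naive loading and congestion-sensitive assignment can be seen even in a two-route toy problem. The sketch below (hypothetical numbers) equilibrates a managed lane against a general-purpose roadway using the BPR volume-delay function and the Method of Successive Averages:

```python
def bpr(t0, v, c, alpha=0.15, beta=4):
    """BPR volume-delay function: travel time rises with volume/capacity."""
    return t0 * (1 + alpha * (v / c) ** beta)

def msa_assign(demand, t0, cap, iters=200):
    """Two-route equilibrium via the Method of Successive Averages:
    repeatedly load all demand on the currently faster route,
    then average the all-or-nothing load into the running flows."""
    flows = [demand / 2.0, demand / 2.0]
    for k in range(1, iters + 1):
        times = [bpr(t0[i], flows[i], cap[i]) for i in range(2)]
        target = [demand if times[0] < times[1] else 0.0]
        target.append(demand - target[0])
        step = 1.0 / k
        flows = [(1 - step) * flows[i] + step * target[i] for i in range(2)]
    return flows
```

With a faster but lower-capacity managed lane next to a wider general-purpose roadway, the averaged flows settle where the two travel times roughly equalize, something a single all-or-nothing load cannot capture.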

Relevance:

30.00%

Publisher:

Abstract:

There has recently been great interest in studying the flow characteristics of suspensions in different environmental and industrial applications, such as snow avalanches, debris flows, hydrotransport systems, and material casting processes. Regarding rheological aspects, the majority of these suspensions, such as fresh concrete, behave mostly as non-Newtonian fluids. Concrete is the most widely used construction material in the world. Due to the limitations of normal concrete in terms of workability and formwork filling ability, a new class of concrete was developed that is able to flow under its own weight, especially through narrow gaps in congested areas of the formwork. Accordingly, self-consolidating concrete (SCC) is a novel construction material that is gaining market acceptance in various applications. The higher fluidity of SCC enables it to be used in a number of special applications, such as densely reinforced sections. However, the higher flowability of SCC makes it more sensitive to segregation of coarse particles during flow (i.e., dynamic segregation) and thereafter at rest (i.e., static segregation). Dynamic segregation can increase when SCC flows over a long distance or in the presence of obstacles. Therefore, there is always a need to establish a trade-off between the flowability, passing ability, and stability properties of SCC suspensions. This should be taken into consideration when designing the casting process and the mixture proportioning of SCC, a task called the "workability design" of SCC. An efficient and inexpensive workability design approach consists of predicting and optimizing the workability of the concrete mixtures for the selected construction processes, such as transportation, pumping, casting, compaction, and finishing. Indeed, the mixture proportioning of SCC should ensure that the construction quality demands, such as the required levels of flowability, passing ability, filling ability, and stability (dynamic and static), are met. It is therefore necessary to develop theoretical tools to assess under what conditions the construction quality demands are satisfied. Accordingly, this thesis is dedicated to analytical and numerical simulations to predict the flow performance of SCC under different casting processes, such as pumping, tremie applications, or casting using buckets. The L-Box and T-Box set-ups can evaluate the flow performance properties of SCC (e.g., flowability, passing ability, filling ability, and shear-induced and gravitational dynamic segregation) in the casting of wall and beam elements. The specific objective of the study is to relate the numerical results of flow simulations of SCC in the L-Box and T-Box test set-ups, reported in this thesis, to the flow performance properties of SCC during casting. Accordingly, SCC is modeled as a heterogeneous material. Furthermore, an analytical model based on dam break theory is proposed to predict the flow performance of SCC in the L-Box set-up. In addition, results of the numerical simulation of SCC casting in a reinforced beam are verified against experimental free-surface profiles. The results of numerical simulations of SCC casting (modeled as a single homogeneous fluid) are used to determine the critical zones corresponding to higher risks of segregation and blocking. The effects of rheological parameters, density, particle content, distribution of reinforcing bars, and particle-bar interactions on the flow performance of SCC are evaluated using CFD simulations of SCC flow in the L-Box and T-Box test set-ups (modeled as a heterogeneous material). Two new approaches are proposed to classify SCC mixtures based on filling ability and performability, the latter combining the flowability, passing ability, and dynamic stability of SCC.

Relevance:

30.00%

Publisher:

Abstract:

The fisheries for mackerel scad, Decapterus macarellus, are particularly important in Cape Verde, constituting almost 40% of total catches at the peak of the fishery in 1997 and 1998 (3700 tonnes). Catches have been stable at a much lower level of about 2100 tonnes in recent years. Given the importance of mackerel scad in terms of catch weight and local food security, there is an urgent need for an updated assessment. A stock assessment was carried out using a Bayesian approach to biomass dynamic modelling. In order to tackle the problem of a non-informative CPUE series, the intrinsic rate of increase, r, was estimated separately, and the ratio B0/K, initial biomass relative to carrying capacity, was assumed based on available information. The results indicated that the current level of fishing is sustainable. The probability of collapse is low, particularly in the short term, and it is likely that biomass may increase further above Bmsy, indicating a healthy stock level. It would appear to be relatively safe to increase catches even up to 4000 tonnes. However, the marginal posterior of r was almost identical to the prior, indicating that there is relatively little information content in the CPUE series; this was also the case for B0/K. There have been substantial increases in fishing efficiency that have not been adequately captured by the measure used for effort (days or trips), implying that the results may be overly optimistic and should be considered preliminary. (c) 2006 Elsevier B.V. All rights reserved.
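The biomass dynamic model referred to here is typically the Schaefer surplus-production model, which is easy to project forward under a catch scenario (the parameter values below are illustrative, not those estimated for mackerel scad):

```python
def project_biomass(B0, r, K, catches):
    """Schaefer surplus-production dynamics:
    B[t+1] = B[t] + r*B[t]*(1 - B[t]/K) - C[t], floored at zero."""
    B = [B0]
    for C in catches:
        B.append(max(B[-1] + r * B[-1] * (1.0 - B[-1] / K) - C, 0.0))
    return B

def msy(r, K):
    """Maximum sustainable yield of the Schaefer model, taken at B = K/2."""
    return r * K / 4.0
```

Catching exactly the MSY while the stock sits at K/2 holds the biomass steady; pushing catches above the surplus production drives it down, which is the mechanism behind the collapse-probability statements above.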

Relevance:

30.00%

Publisher:

Abstract:

Until recently, the dynamical evolution of the interstellar medium (ISM) was simulated under collisional ionization equilibrium (CIE) conditions. However, the ISM is a dynamical system in which the plasma is naturally driven out of equilibrium by atomic and dynamic processes operating on different timescales. A step forward in the field is a multi-fluid approach that takes into account the joint thermal and dynamical evolution of the ISM gas.

Relevance:

30.00%

Publisher:

Abstract:

In this thesis, a series of numerical models for the evaluation of the seasonal performance of reversible air-to-water heat pump systems coupled to residential and non-residential buildings is presented. Exploiting the energy saving potential of heat pumps is a hard task for designers, because their energy performance is influenced by several factors, such as the variability of the external climate, the modulation capacity of the heat pump, the control strategy of the system, and the configuration of the hydronic loop. The aim of this work is to study all these aspects in detail. In the first part of this thesis, a series of models that use a temperature-class approach for the prediction of the seasonal performance of reversible air-source heat pumps is presented. An innovative methodology for the calculation of the seasonal performance of an air-to-water heat pump is proposed as an extension of the procedure reported in the European standard EN 14825. This methodology can be applied not only to air-to-water single-stage heat pumps (on-off HPs) but also to multi-stage (MSHPs) and inverter-driven units (IDHPs). In the second part, dynamic simulation is used to optimize the control systems of the heat pump and of the HVAC plant. A series of dynamic models, developed by means of TRNSYS, is presented to study the behavior of on-off HPs, MSHPs, and IDHPs. The main goal of these dynamic simulations is to show how the heat pump control strategies and the layout of the hydronic loop coupling the heat pump to the emitters affect the seasonal performance of the system. A particular focus is given to the modeling of the energy losses caused by on-off cycling.
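At its core, the temperature-class (bin) approach extended from EN 14825 reduces to a weighted ratio: the seasonal COP is the heat delivered across all temperature bins divided by the electricity drawn, with each bin weighted by its number of hours. A minimal sketch with illustrative numbers, ignoring part-load corrections and backup-heater terms:

```python
def scop_bins(hours, loads, cops):
    """Seasonal COP over temperature bins: total heat delivered divided
    by total electricity drawn, each bin weighted by its hours.
    hours: bin durations [h], loads: building load per bin [kW],
    cops: heat pump COP per bin at the corresponding source temperature."""
    heat = sum(h * q for h, q in zip(hours, loads))
    elec = sum(h * q / c for h, q, c in zip(hours, loads, cops))
    return heat / elec
```

The seasonal figure is dominated by the mild-weather bins, where most operating hours accumulate, which is why climate variability and modulation capacity matter so much for the results discussed above.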

Relevance:

30.00%

Publisher:

Abstract:

New powertrain design is strongly influenced by the CO2 and pollutant limits defined by legislation, by the demand for fuel economy in real driving conditions, and by the need for high performance at an acceptable cost. To meet the requirements coming from both end-users and legislation, several powertrain architectures and engine technologies are possible (e.g., SI or CI engines), with many new technologies, new fuels, and different degrees of electrification. The benefits and costs of the possible architectures and technology mixes must be accurately evaluated by means of objective procedures and tools in order to choose among the best alternatives. This work presents a basic design methodology and a concept-level comparison of the main powertrain architectures and technologies currently being developed, considering their technical benefits and cost effectiveness. The analysis is carried out on the basis of studies from the technical literature, integrating missing data with evaluations performed by means of simplified powertrain-vehicle models covering the most important powertrain architectures. Technology pathways for passenger cars up to 2025 and beyond have been defined. After that, with the support of more detailed models and experiments, the investigation has been focused on the more promising technologies for improving the internal combustion engine, such as water injection, low-temperature combustion, and heat recovery systems.

Relevance:

30.00%

Publisher:

Abstract:

Understanding the complex dynamics of beam-halo formation and evolution in circular particle accelerators is crucial for the design of current and future rings, particularly those using superconducting magnets, such as the CERN Large Hadron Collider (LHC), its luminosity upgrade HL-LHC, and the proposed Future Circular Hadron Collider (FCC-hh). A recent diffusive framework, which describes the evolution of the beam distribution by means of a Fokker-Planck equation with a diffusion coefficient derived from the Nekhoroshev theorem, has been proposed to describe the long-term behaviour of beam dynamics and particle losses. In this thesis, we discuss the theoretical foundations of this framework and propose an original measurement protocol, based on collimator scans, to measure the Nekhoroshev-like diffusion coefficient from beam loss data. The available LHC collimator scan data, unfortunately collected without the proposed measurement protocol, have been successfully analysed using the proposed framework. The same approach is also applied to datasets from detailed measurements of the impact of so-called long-range beam-beam compensators on beam losses, also at the LHC. Furthermore, dynamic indicators have been studied as a tool for exploring the phase-space properties of realistic accelerator lattices in single-particle tracking simulations. By first examining the performance of known and new indicators in classifying the chaotic character of initial conditions for a modulated Hénon map, and then applying this knowledge to realistic accelerator lattices, we sought to identify a connection between the presence of chaotic regions in phase space and Nekhoroshev-like diffusive behaviour, providing new tools to the accelerator physics community.
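The single-particle tracking and chaos detection described above can be sketched with the 2D accelerator Hénon map, here with a fixed (unmodulated) linear tune and a crude two-orbit divergence indicator; the parameters and thresholds are purely illustrative:

```python
import math

def henon_step(x, p, omega):
    """One turn of the accelerator Hénon map: a rotation by the linear
    tune omega applied to the kicked coordinates (x, p + x^2)."""
    c, s = math.cos(omega), math.sin(omega)
    return c * x + s * (p + x * x), -s * x + c * (p + x * x)

def divergence_indicator(x0, p0, omega=0.2, turns=1000, eps=1e-8):
    """Finite-time divergence of two nearby orbits, in decades of growth:
    large values flag chaotic initial conditions, small values regular
    ones; escaping orbits (beam loss) return infinity."""
    xa, pa = x0, p0
    xb, pb = x0 + eps, p0
    for _ in range(turns):
        xa, pa = henon_step(xa, pa, omega)
        xb, pb = henon_step(xb, pb, omega)
        if abs(xa) > 1e6:                 # orbit lost far beyond the core
            return float("inf")
    d = math.hypot(xa - xb, pa - pb)
    return math.log10(max(d / eps, 1e-300))
```

Production studies replace this crude two-orbit estimate with indicators built on the tangent map (e.g., Lyapunov-type or frequency-analysis indicators) and modulate `omega` from turn to turn, but the classification logic is the same.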

Relevance:

30.00%

Publisher:

Abstract:

The accurate representation of the Earth Radiation Budget by General Circulation Models (GCMs) is a fundamental requirement to provide reliable historical and future climate simulations. In this study, we found reasonable agreement between the integrated energy fluxes at the top of the atmosphere simulated by 34 state-of-the-art climate models and the observations provided by the Cloud and Earth Radiant Energy System (CERES) mission on a global scale, but large regional biases have been detected throughout the globe. Furthermore, we highlighted that a good agreement between simulated and observed integrated Outgoing Longwave Radiation (OLR) fluxes may result from the cancellation of opposite-in-sign systematic errors localized in different spectral ranges. To avoid this and to understand the causes of these biases, we compared the observed Earth emission spectra, measured by the Infrared Atmospheric Sounding Interferometer (IASI) in the period 2008-2016, with the synthetic radiances computed on the basis of the atmospheric fields provided by the EC-Earth GCM. For this purpose, the fast σ-IASI radiative transfer model was used, after its validation and implementation in EC-Earth. From the comparison between observed and simulated spectral radiances, a positive temperature bias in the stratosphere and a negative temperature bias in the middle troposphere, as well as a dry bias of the water vapor concentration in the upper troposphere, have been identified in the EC-Earth climate model. The analysis has been performed in clear-sky conditions, but the feasibility of its extension to the presence of clouds, whose impact on the radiation represents the greatest source of uncertainty in climate models, has also been demonstrated. Finally, the analysis of simulated and observed OLR trends indicated good agreement and provided detailed information on the spectral fingerprints of the evolution of the main climate variables.