958 results for Physical-ecological coupled model
Abstract:
This dissertation focused on developing an integrated surface–subsurface hydrologic simulation model by programming and testing the coupling of the USGS MODFLOW-2005 Groundwater Flow Process (GWF) package (USGS, 2005) with the 2D surface water routing model FLO-2D (O'Brien et al., 1993). The coupling included the procedures necessary to numerically integrate and verify both models as a single computational software system, hereafter referred to as WHIMFLO-2D (Wetlands Hydrology Integrated Model). An improved physical formulation of flow resistance through vegetation in shallow waters, based on the concept of drag force, was implemented for floodplain simulations, while the classical methods (e.g., Manning, Chezy, Darcy-Weisbach) for calculating flow resistance were retained for canals and deeper waters. A preliminary demonstration of WHIMFLO-2D at an existing field site was developed for the Loxahatchee Impoundment Landscape Assessment (LILA), an 80-acre area located at the Arthur R. Marshall Loxahatchee National Wildlife Refuge in Boynton Beach, Florida. After a number of simplifying assumptions were applied, the results illustrated the ability of the model to simulate the hydrology of a wetland. In this illustrative case, a comparison between measured and simulated stage levels showed an average error of 0.31% with a maximum error of 2.8%. A comparison of measured and simulated groundwater head levels showed an average error of 0.18% with a maximum of 2.9%.
Abstract:
Recent research indicates that characteristics of the El Niño–Southern Oscillation (ENSO) have changed over the past several decades. Here, I examined the different flavors of El Niño in the observational record and the recent changes in the character of El Niño events. The fundamental physical processes that drive ENSO were described, and the Eastern Pacific (EP) and Central Pacific (CP) types, or flavors, of El Niño were defined. Using metrics from the peer-reviewed literature, I examined several historical data sets to interpret El Niño behavior from 1950 to 2010. A Monte Carlo simulation was then applied to output from coupled model simulations to test the statistical significance of recent observations of EP and CP El Niño. The results suggest that EP and CP El Niño events have occurred in a similar fashion over the past 60 years, consistent with natural variability, with no significant increase in CP El Niño behavior.
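The significance test described above can be sketched as a simple Monte Carlo check on an event count. The event numbers and the binomial null hypothesis below are hypothetical placeholders for illustration only; the actual study tests observed EP/CP behavior against coupled-model output rather than a fixed per-event probability:

```python
import numpy as np

def cp_count_pvalue(n_events, observed_cp, p_cp, n_trials=10000, seed=0):
    """Monte Carlo p-value for seeing >= observed_cp CP-type events out of
    n_events, under a null in which each event is CP with probability p_cp."""
    rng = np.random.default_rng(seed)
    counts = rng.binomial(n_events, p_cp, size=n_trials)  # simulated CP counts
    return np.mean(counts >= observed_cp)                  # fraction as extreme

# Hypothetical numbers: 12 CP events out of 20, null probability 0.5
pval = cp_count_pvalue(n_events=20, observed_cp=12, p_cp=0.5)
```

A p-value well above the chosen significance level would, as in the abstract's conclusion, indicate no significant excess of CP events over natural variability.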
Abstract:
The role of computer modeling has recently grown to become an inseparable companion to experimental studies in the optimization of automotive engines and the development of future fuels. Traditionally, computer models rely on simplified global reaction steps to simulate combustion and pollutant formation inside the internal combustion engine. With the current interest in advanced combustion modes and injection strategies, this approach depends on arbitrary adjustment of model parameters, which can reduce the credibility of the predictions. The purpose of this study is to enhance the combustion model of KIVA, a computational fluid dynamics code, by coupling its fluid mechanics solution with detailed kinetic reactions solved by the chemistry solver CHEMKIN. To this end, an engine-friendly reaction mechanism for n-heptane was selected to simulate diesel oxidation. Each cell in the computational domain is treated as a perfectly-stirred reactor undergoing adiabatic constant-volume combustion. The model was applied to ideally-prepared homogeneous-charge compression-ignition (HCCI) combustion and direct injection (DI) diesel combustion. Ignition and combustion results show that the code successfully simulates the premixed HCCI scenario when compared to traditional combustion models. Direct injection cases, on the other hand, do not offer a reliable prediction, mainly due to the lack of a turbulent-mixing model, a limitation inherent in the perfectly-stirred reactor formulation. In addition, the model is sensitive to intake conditions and experimental uncertainties, which requires the implementation of enhanced predictive tools. It is recommended that future improvements consider turbulent-mixing effects as well as optimization techniques to accurately simulate the actual in-cylinder process at reduced computational cost. Furthermore, the model requires the extension of existing fuel oxidation mechanisms to include pollutant formation kinetics for emission control studies.
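A minimal sketch of the per-cell treatment described above: each cell evolves as an adiabatic, constant-volume, perfectly-stirred reactor. The single global Arrhenius reaction and every rate parameter below are illustrative stand-ins, not n-heptane values; the actual study couples KIVA to CHEMKIN with a detailed kinetic mechanism rather than this one-step surrogate:

```python
import numpy as np

def constant_volume_step(T, y_fuel, dt, A=1e9, Ea=1.5e5, q=4.5e7, cv=1100.0, R=8.314):
    """One explicit-Euler step of an adiabatic constant-volume
    perfectly-stirred cell with a single global Arrhenius reaction.
    All parameters are illustrative placeholders, not n-heptane kinetics."""
    rate = A * y_fuel * np.exp(-Ea / (R * T))  # fuel consumption rate [1/s]
    y_new = max(y_fuel - rate * dt, 0.0)       # updated fuel mass fraction
    T_new = T + q * (y_fuel - y_new) / cv      # heat release raises temperature
    return T_new, y_new

# March a single cell from compressed-charge conditions (placeholders)
T, y = 900.0, 0.05
for _ in range(20000):                         # 20 ms at dt = 1 microsecond
    T, y = constant_volume_step(T, y, dt=1e-6)
```

Even this toy version reproduces the qualitative two-stage behavior of the formulation: slow pre-ignition heat release followed by thermal runaway, with no turbulent-mixing term anywhere in the cell update, which is the limitation the abstract identifies for DI cases.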
Abstract:
This paper presents the general framework of an ecological model of the English Channel. The model combines a physical sub-model with a biological one. In the physical sub-model, the Channel is divided into 71 boxes and the water fluxes between them are calculated automatically. A 2-layer vertical thermohaline model was then linked with the horizontal circulation scheme. This physical sub-model exhibits thermal stratification in the western Channel during spring and summer, and haline stratification in the Bay of Seine due to high river flow rates. The biological sub-model takes 2 elements, nitrogen and silicon, into account and divides the phytoplankton into diatoms and dinoflagellates. Results from this ecological model emphasize the influence of stratification on chlorophyll a concentrations as well as on primary production. Stratified waters appear to be much less productive than well-mixed ones. Nevertheless, when simulated production values are compared with literature data, the calculated production is shown to be underestimated. This could be attributed to a lack of refinement of the 2-layer box model or to processes omitted from the biological model, such as production by nanoplankton.
Abstract:
Tsunamis are rare events, but their impact can be devastating and may extend over large geographical areas. For low-probability, high-impact events like tsunamis, it is crucial to implement all possible actions to mitigate the risk. Tsunami hazard assessment is the result of a scientific process that integrates traditional geological methods, numerical modelling, and the analysis of tsunami sources and historical records. Analysing past events and understanding how they interacted with the land is therefore the only way to inform tsunami source and propagation models and to quantitatively test forecast models such as hazard analyses. The primary objective of this thesis is to establish an explicit relationship between the macroscopic intensity, derived from historical descriptions, and the quantitative physical parameters measuring tsunami waves. This is done first by defining an approximate estimation method, based on a simplified 1D physical onshore propagation model, to convert the available observations into one reference physical metric. Wave height at the coast was chosen as the reference due to its stability and its independence from inland effects. The method was then applied to a set of well-known past events to build a homogeneous dataset containing both macroscopic intensity and wave height. By performing an orthogonal regression, a direct and invertible empirical relationship could be established between the two parameters, accounting for their relevant uncertainties. The target relationship is extensively tested and finally applied to the Italian Tsunami Effect Database (ITED), providing a homogeneous estimation of the wave height for all existing tsunami observations in Italy. This opens the way to meaningful comparisons with models and simulations, as well as to quantitatively testing tsunami hazard models for the Italian coasts and informing tsunami risk management initiatives.
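The orthogonal regression step can be sketched as a total-least-squares fit, i.e. the principal axis of the centered data, which minimizes perpendicular rather than vertical distances. The intensity/wave-height pairs below are invented for illustration, and this sketch omits the per-point uncertainty weighting the thesis accounts for:

```python
import numpy as np

def orthogonal_regression(x, y):
    """Total-least-squares (orthogonal) fit y = a*x + b: the line through
    the data centroid along the first principal axis of the centered data."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    X = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, vt = np.linalg.svd(X, full_matrices=False)  # principal directions
    vx, vy = vt[0]                                    # direction of max variance
    a = vy / vx                                       # slope
    b = y.mean() - a * x.mean()                       # intercept through centroid
    return a, b

# Hypothetical intensity / wave-height (m) pairs, for illustration only
intensity = np.array([2.0, 3.0, 4.0, 5.0, 6.0])
height = np.array([0.3, 0.6, 1.3, 2.4, 5.1])
a, b = orthogonal_regression(intensity, height)
```

Unlike ordinary least squares, this fit treats both variables symmetrically, which is what makes the resulting relationship directly invertible, as the abstract requires.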
Abstract:
Understanding how extreme opinions emerge, and in what kind of environment they might become less extreme, is a central theme in our modern globalized society. A model combining continuous opinions and observed discrete actions (CODA), capable of addressing the important issue of measuring how extreme opinions might be, has recently been proposed. In this paper I show that extreme opinions arise in a ubiquitous manner in the CODA model for a multitude of social network structures. Depending on the network details, reducing extremism seems to be possible; however, a large number of agents with extreme opinions is always observed. A significant decrease in the number of extremists can be obtained by allowing agents to change their positions in the network.
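The CODA update can be sketched in log-odds form: each observation of a neighbor's discrete action shifts the observer's log-odds by a fixed step. The random pairwise interaction scheme, population size, and the value of alpha below are illustrative simplifications, not the network structures studied in the paper:

```python
import numpy as np

def coda_update(nu, neighbor_action, alpha=0.7):
    """CODA rule in log-odds form: observing a neighbor's action (+1 or -1)
    shifts the agent's log-odds nu by a fixed step, assuming the agent
    believes a neighbor chooses correctly with probability alpha."""
    step = np.log(alpha / (1.0 - alpha))
    return nu + step * neighbor_action

rng = np.random.default_rng(1)
nu = np.zeros(100)  # 100 initially neutral agents (toy setup)
for _ in range(500):
    i, j = rng.integers(0, 100, size=2)  # random pair instead of a fixed network
    action = np.sign(nu[j]) if nu[j] != 0 else rng.choice([-1.0, 1.0])
    nu[i] = coda_update(nu[i], action)
```

Because the step size is fixed while nu is unbounded, repeated reinforcement drives |nu| to large values, which is the mechanism behind the ubiquitous extremism the paper reports.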
Abstract:
We searched for a sidereal modulation in the MINOS far detector neutrino rate. Such a signal would be a consequence of Lorentz and CPT violation as described by the standard-model extension framework, and it would also be the first detection of a perturbative effect on conventional neutrino mass oscillations. We found no evidence for this sidereal signature, and the upper limits placed on the magnitudes of the Lorentz and CPT violating coefficients describing the theory improve on the current best limits, obtained using the MINOS near detector, by factors of 20 to 510.
Abstract:
A search for a sidereal modulation in the MINOS near detector neutrino data was performed. If present, this signature could be a consequence of Lorentz and CPT violation as predicted by the effective field theory called the standard-model extension. No evidence for a sidereal signal in the data set was found, implying that there is no significant change in neutrino propagation that depends on the direction of the neutrino beam in a Sun-centered inertial frame. Upper limits on the magnitudes of the Lorentz and CPT violating terms in the standard-model extension lie between 10⁻⁴ and 10⁻² of the maximum expected, assuming a suppression of these signatures by a factor of 10⁻¹⁷.
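A search of this kind can be sketched as a least-squares fit of first-harmonic terms at the sidereal frequency to the event rate. The synthetic, signal-free rate below is purely illustrative; the actual MINOS analyses include higher harmonics and detailed detector systematics:

```python
import numpy as np

def sidereal_fit(phase, rate):
    """Least-squares fit rate ~ c0 + c1*sin(2*pi*phi) + c2*cos(2*pi*phi)
    over sidereal phase phi in [0, 1); the fitted first-harmonic amplitude
    is the sidereal-modulation signal being searched for."""
    M = np.column_stack([np.ones_like(phase),
                         np.sin(2 * np.pi * phase),
                         np.cos(2 * np.pi * phase)])
    coef, *_ = np.linalg.lstsq(M, rate, rcond=None)
    amplitude = np.hypot(coef[1], coef[2])  # modulation amplitude
    return coef, amplitude

# Synthetic flat (no-signal) rate with Gaussian noise, illustration only
rng = np.random.default_rng(0)
phase = rng.random(5000)
rate = 10.0 + rng.normal(0.0, 0.5, size=5000)
coef, amp = sidereal_fit(phase, rate)
```

With no injected signal, the recovered amplitude is consistent with zero, which is how a null result like the one above translates into upper limits on the modulation.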
Abstract:
The structure of probability currents is studied for the dynamical network obtained after consecutive contractions of two-state, nonequilibrium lattice systems. This procedure allows us to investigate the transition rates between configurations on small clusters and highlights some relevant effects of lattice symmetries on the elementary transitions responsible for entropy production. A method is suggested to estimate the entropy production at different levels of approximation (cluster sizes), as demonstrated for the two-dimensional contact process with mutation.
Abstract:
It is shown that the recently considered families of generalized matrix ensembles which give rise to an orthogonally invariant stable Lévy ensemble can be generated by the simple procedure of dividing Gaussian matrices by a random variable. The nonergodicity of this kind of disordered ensemble is investigated. It is also shown that the same procedure applied to random graphs gives rise to a family that interpolates between the Erdős–Rényi and scale-free models.
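The generating procedure can be sketched directly: draw a symmetric Gaussian (GOE-type) matrix and divide the whole matrix by a single random scale. The chi-distributed scale used below is an illustrative choice; in the ensembles above, the specific distribution of the scale determines the stability index of the resulting heavy-tailed ensemble:

```python
import numpy as np

def scaled_gaussian_ensemble(n, scale_sampler, rng):
    """Draw a symmetric Gaussian matrix and divide it by one random scale,
    the construction that generates orthogonally invariant heavy-tailed
    (Levy-like) ensembles from Gaussian ones."""
    g = rng.normal(size=(n, n))
    h = (g + g.T) / np.sqrt(2.0)   # symmetric (GOE-type) Gaussian matrix
    xi = scale_sampler(rng)        # a single scale for the whole matrix
    return h / xi

rng = np.random.default_rng(42)
# Illustrative scale: chi-distributed; other choices give other tail indices
m = scaled_gaussian_ensemble(4, lambda r: np.sqrt(r.chisquare(3) / 3.0), rng)
```

Because every entry is divided by the same scalar, orthogonal invariance of the Gaussian matrix is preserved while the entry distribution acquires heavy tails, which is the point of the construction.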
Abstract:
We have analyzed a large set of alpha + alpha elastic scattering data for bombarding energies ranging from 0.6 to 29.5 MeV. Because of the complete lack of open reaction channels, the optical interaction at these energies must have a vanishing imaginary part. Thus, this system is particularly important because the corresponding elastic scattering cross sections are very sensitive to the real part of the interaction. The data were analyzed in the context of the velocity-dependent São Paulo potential, which is a successful theoretical model for the description of heavy-ion reactions from sub-barrier to intermediate energies. We have verified that, even in this low-energy region, the velocity dependence of the model is quite important for describing the data of the alpha + alpha system.
Abstract:
The nucleus ⁴⁶Ti has been studied with the reaction ⁴²Ca(⁷Li,p2n)⁴⁶Ti at a bombarding energy of 31 MeV. Thin target foils backed with a thick Au layer were used. Five new levels of negative parity were observed, and several lifetimes were determined with the Doppler shift attenuation method. The low-lying experimental negative-parity levels are assigned to three bands with K^π = 3, 0, and 4, which are interpreted in terms of the large-scale shell model, considering particle-hole excitations from the d3/2 and s1/2 orbitals. Shell model calculations were performed using a few effective interactions; however, good agreement was not achieved in the description of either the negative- or the positive-parity low-lying levels.
Abstract:
The thermal dependence of the zero-bias conductance of the single-electron transistor is the target of two independent renormalization-group approaches, both based on the spin-degenerate Anderson impurity model. The first approach, an analytical derivation, maps the Kondo-regime conductance onto the universal conductance function for the particle-hole symmetric model; the mapping is linear and is parametrized by the Kondo temperature and the charge in the Kondo cloud. The second approach, a numerical renormalization-group computation of the conductance as a function of the temperature and the applied gate voltages, offers a comprehensive view of zero-bias charge transport through the device. The first approach is exact in the Kondo regime; the second is essentially exact throughout the parametric space of the model. For illustrative purposes, conductance curves resulting from the two approaches are compared.
Abstract:
The demands for improvement in sound quality and reduction of the noise generated by vehicles are constantly increasing, as are the penalties on the space and weight of the control solutions. A promising approach to cope with this challenge is the use of active structural-acoustic control. Usually, low frequency noise is transmitted into the vehicle's cabin through structural paths, which raises the necessity of dealing with vibro-acoustic models. This kind of model should allow the inclusion of sensor and actuator models if accurate performance indexes are to be assessed. The challenge thus resides in deriving reasonably sized models that integrate the structural, acoustic, and electrical components together with the controller algorithm. The advantage of adequate active control simulation strategies lies in the cost and time reduction in the development phase. Therefore, the aim of this paper is to present a methodology for simulating vibro-acoustic systems, including this coupled model in a closed-loop control simulation framework that also takes into account the interaction between the system and the control sensors/actuators. It is shown that neglecting the sensor/actuator dynamics can lead to inaccurate performance predictions.
Abstract:
MCM-41 samples of various pore dimensions were synthesized. Plotting the nitrogen adsorption data at 77 K versus the statistical film thickness (a comparison plot) reveals three distinct stages, characterized by two points of inflection. The steep intermediate stage is caused by capillary condensation occurring in the highly uniform mesopores. From the slopes of the sections before and after the condensation, the surface area of the mesopores is calculated. The linear portion of the last section is extrapolated to the adsorption axis of the comparison plot, and this intercept is used to obtain the volume of the mesopores. From the surface area and pore volume, the average mesopore diameter is calculated, and the value thus obtained is in good agreement with the pore dimension obtained from powder X-ray diffraction measurements. The principle of the calculation, as well as the problems associated with it, is discussed in detail.
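The slope-and-intercept analysis of the comparison plot can be sketched numerically. The two-branch synthetic data, the break positions, and the cylindrical-pore relation d = 4V/S used below are illustrative assumptions, not the paper's measured isotherms or its exact working equations:

```python
import numpy as np

def mesopore_from_comparison_plot(t, v, t_break):
    """Mesopore parameters from a comparison (t-) plot: the slope before
    the condensation step tracks the total surface, the slope after it the
    external surface, and the intercept of the post-condensation branch
    gives the mesopore volume. Units are schematic (t in nm, v in cm3/g)."""
    pre = t < t_break[0]                              # branch before condensation
    post = t > t_break[1]                             # branch after condensation
    s_total, _ = np.polyfit(t[pre], v[pre], 1)        # total-surface slope
    s_ext, v_meso = np.polyfit(t[post], v[post], 1)   # external slope, intercept
    s_meso = s_total - s_ext                          # mesopore-surface slope
    d_meso = 4.0 * v_meso / s_meso                    # cylindrical-pore diameter
    return s_meso, v_meso, d_meso

# Synthetic two-branch "isotherm" with a condensation step near t = 1.0
t = np.linspace(0.2, 2.0, 50)
v = np.where(t < 1.0, 0.50 * t, 0.05 * t + 0.60)
s, vm, d = mesopore_from_comparison_plot(t, v, (0.9, 1.1))
```

On this exactly linear toy data the fit recovers the branch parameters identically; on real isotherms the choice of the linear ranges around the condensation step is part of the "associated problems" the paper discusses.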