7 results for Inverse method
at Universidad Politécnica de Madrid
Abstract:
One of the outstanding problems in modelling temperate ice dynamics is the limited knowledge of the rheology of temperate ice and, in particular, of how the rate factor depends on the liquid water content. Although it is well known that the rate factor depends strongly on the water content, in practice the only available experimentally based relationship is that of Duval (1977), which is valid only for water contents up to 1%. However, the actual water contents found in temperate and polythermal glaciers are sometimes substantially larger.
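As context for the limitation discussed above, here is a minimal sketch of how a Duval-type linear dependence of the rate factor on water content enters Glen's flow law; the coefficients and the flow-law exponent are illustrative assumptions commonly found in the literature, not values from this work:

```python
import numpy as np

# Glen's flow law: strain rate = A(W) * tau^n, with n = 3 assumed for ice.
GLEN_N = 3

def rate_factor_temperate(water_content_pct):
    """Rate factor A (Pa^-3 s^-1) of temperate ice as a linear function
    of liquid water content W (in %), in the spirit of Duval (1977).

    Both coefficients below are assumptions for illustration; the fit is
    experimentally supported only up to W = 1%, which is exactly the
    limitation this abstract discusses.
    """
    A0 = 2.4e-24       # assumed rate factor of water-free ice at 0 degC
    slope = 1.8125     # assumed relative increase per % of water content
    W = np.asarray(water_content_pct, dtype=float)
    if np.any(W > 1.0):
        # Beyond 1% the linear relationship is an unvalidated extrapolation.
        print("warning: extrapolating Duval-type fit beyond W = 1%")
    return A0 * (1.0 + slope * W)

def strain_rate(tau_pa, water_content_pct):
    """Effective shear strain rate (s^-1) for a given shear stress (Pa)."""
    return rate_factor_temperate(water_content_pct) * tau_pa**GLEN_N

print(strain_rate(1.0e5, 0.5))   # within the experimentally validated range
print(strain_rate(1.0e5, 3.0))   # extrapolation: the open problem
```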
Abstract:
This work presents the development of a methodology for building a universe of Green's functions, together with the corresponding algorithm, to estimate tsunami wave heights along the west coast of Mexico as a function of the seismic moment and of the extent of the rupture area of interplate earthquakes located between the coast and the Middle America Trench. Taking as a case study the earthquake that occurred on 9 October 1995 off the coast of Jalisco-Colima, the hydrodynamic effects of the resulting tsunami on the Port of Manzanillo, Mexico, were studied with a methodology comprising the following steps. The first step applied the tsunami inverse method to constrain the parameters of the seismic source through the construction of a universe of Green's functions for the west coast of Mexico. The seismic moment and the location and extent of the earthquake rupture area are prescribed on fault-plane segments of 30 × 30 km. To each fault-plane segment corresponds a set of Green's functions located at the 100 m isobath, for 172 locations along the coast separated on average by 12 km.
The second step studied the hydrodynamics caused by the tsunami (current speeds and directions and sea levels inside the port, and the runup on Las Brisas beach), using both a fixed-bed hydraulic model and a numerical model; a synthetic tsunami at a depth of 34 m was used as the initial condition and propagated to the coast as a solitary wave signal. Based on the resulting hydrodynamics of the Port of Manzanillo, a risk analysis was carried out to define the operating conditions of the port in terms of the current velocities inside and outside it and, starting from the initial conditions of the 1995 Manzanillo earthquake and tsunami, the limiting operating conditions for ships inside and outside the port were defined.
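A hedged sketch of the linear-superposition idea behind such a universe of Green's functions follows; all array shapes, names, rate constants and the linear scaling of slip with seismic moment are assumptions for illustration, not the actual algorithm of this work:

```python
import numpy as np

# Assumed setup: precomputed Green's functions G[s, l, t] give the sea-level
# response at coastal location l (of 172, on the 100 m isobath) and time t
# to unit slip on fault-plane segment s (30 x 30 km each).
N_SEGMENTS, N_LOCATIONS, N_TIMES = 40, 172, 600
rng = np.random.default_rng(0)
G = rng.normal(scale=0.01, size=(N_SEGMENTS, N_LOCATIONS, N_TIMES))  # stand-in

RIGIDITY = 3.0e10        # Pa, assumed crustal rigidity
SEG_AREA = 30e3 * 30e3   # m^2, segment area from the abstract

def tsunami_waveforms(seg_moments):
    """Superpose segment responses, scaling each segment's Green's functions
    by the slip implied by its seismic moment (M0 = mu * A * slip)."""
    slip = np.asarray(seg_moments) / (RIGIDITY * SEG_AREA)   # slip in metres
    return np.tensordot(slip, G, axes=1)     # -> (N_LOCATIONS, N_TIMES)

# Hypothetical rupture: moment released on five adjacent segments only.
m0 = np.zeros(N_SEGMENTS)
m0[10:15] = 2.0e20                           # N m per segment (illustrative)
eta = tsunami_waveforms(m0)
peak_height = np.abs(eta).max(axis=1)        # peak amplitude per location
print(peak_height.shape, peak_height.max())
```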
Abstract:
There is general agreement within the scientific community that Biology is the science with the greatest potential for development in the 21st century. This is due to several reasons, but probably the most important one is the state of development of the other experimental and technological sciences. In this context, there is a very rich variety of mathematical tools, physical techniques and computing resources that make possible biological experiments that were unthinkable only a few years ago. Biology is nowadays taking advantage of all these newly developed technologies, which are being applied to the life sciences, opening new research fields and providing new insights into many biological problems. Consequently, biologists have greatly improved their knowledge in many key areas such as human function and human disease. However, there is one human organ that is still barely understood compared with the rest: the human brain. Understanding the human brain is one of the main challenges of the 21st century, and it is considered a strategic research field by both the European Union and the USA. There is therefore great interest in applying new experimental techniques to the study of brain function. Magnetoencephalography (MEG) is one of these novel techniques currently applied to mapping brain activity [1]. This technique has important advantages over metabolism-based brain imaging techniques such as functional Magnetic Resonance Imaging (fMRI) [2]. The main advantage is that MEG has a higher time resolution than fMRI. Another benefit of MEG is that it is a patient-friendly clinical technique: the measurement is performed with a wireless set-up and the patient is not exposed to any radiation. Although MEG is widely applied in clinical studies, there are still open issues regarding data analysis. The present work deals with the solution of the inverse problem in MEG, which is the most controversial and uncertain part of the analysis process [3]. This question is addressed using several variations of a new solving algorithm based on a heuristic method. The performance of these methods is analysed by applying them to several test cases with known solutions and comparing those solutions with the ones provided by our methods.
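The abstract does not describe the algorithm itself, so the sketch below only illustrates the general shape of a heuristic approach to the MEG inverse problem: a single equivalent dipole fitted to simulated sensor data by plain simulated annealing. The forward model, sensor layout, noise level and cooling schedule are all assumptions; in particular, a point magnetic dipole stands in for the realistic current-dipole-in-conductor model so the example stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sensor positions on a hemisphere of radius 0.12 m above the head.
n_sensors = 64
theta = rng.uniform(0, np.pi / 2, n_sensors)
phi = rng.uniform(0, 2 * np.pi, n_sensors)
sensors = 0.12 * np.stack([np.sin(theta) * np.cos(phi),
                           np.sin(theta) * np.sin(phi),
                           np.cos(theta)], axis=1)

def forward(pos, moment):
    """Toy forward model: field of a point magnetic dipole, projected on
    the moment axis. Real MEG forward models are far more elaborate."""
    r = sensors - pos                        # (n_sensors, 3)
    d = np.linalg.norm(r, axis=1, keepdims=True)
    rhat = r / d
    B = (3 * (rhat @ moment)[:, None] * rhat - moment) / d**3
    return B @ moment / np.linalg.norm(moment)

# Synthetic "measured" data from a known source (a test case with a
# known solution, as used in the abstract to benchmark the methods).
true_pos = np.array([0.02, -0.01, 0.05])
true_mom = np.array([0.0, 1.0, 0.5])
b_meas = forward(true_pos, true_mom) + rng.normal(scale=50.0, size=n_sensors)

def misfit(pos):
    # Moment assumed known here to keep the heuristic search 3-D.
    return np.sum((b_meas - forward(pos, true_mom)) ** 2)

# Plain simulated annealing over the dipole position (heuristic search).
pos, T = np.zeros(3), 1.0
for step in range(5000):
    cand = pos + rng.normal(scale=0.005, size=3)
    delta = misfit(cand) - misfit(pos)
    if delta < 0 or rng.random() < np.exp(-delta / T):
        pos = cand                           # accept downhill or by chance
    T *= 0.999                               # geometric cooling schedule
print("recovered position:", pos, "true:", true_pos)
```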
Abstract:
Stochastic model updating must be considered for quantifying the uncertainties inherently present in real-world engineering structures. By this means, the statistical properties of structural parameters, instead of deterministic values, can be sought, indicating the parameter variability. However, the implementation of stochastic model updating is much more complicated than that of deterministic methods, particularly with regard to theoretical complexity and computational cost. This study proposes a simple and cost-efficient method that decomposes the stochastic updating process into a series of deterministic ones with the aid of response surface models and Monte Carlo simulation. The response surface models are used as surrogates for the original FE models in the interest of simpler programming, fast response computation and easy inverse optimization. Monte Carlo simulation is adopted to generate samples from the assumed or measured probability distributions of the responses. Each sample gives rise to an individual deterministic inverse process that predicts deterministic values of the parameters. The parameter means and variances can then be estimated statistically from the parameter predictions over all samples. Meanwhile, the analysis-of-variance approach is employed to evaluate the significance of the parameter variability. The proposed method was demonstrated first on a numerical beam and then on a set of nominally identical steel plates tested in the laboratory. It is found that, compared with existing stochastic model updating methods, the proposed method presents similar accuracy, while its primary merits are its simple implementation and its cost efficiency in response computation and inverse optimization.
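A minimal sketch of the decomposition described above, under heavy assumptions: a toy two-parameter, two-response "FE model", a quadratic response surface fitted as its surrogate, Monte Carlo samples drawn from an assumed response distribution, and one cheap deterministic inverse per sample. None of the numbers or model forms come from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)

# Stand-in for an expensive FE model: two responses (e.g. natural
# frequencies) as a function of two structural parameters (assumed).
def fe_model(p):
    return np.array([p[0] ** 0.5 + 0.1 * p[1], 2.0 * p[0] + p[1] ** 2])

# 1) Fit quadratic response surfaces on a small design of experiments.
design = rng.uniform(0.5, 1.5, size=(30, 2))
Y = np.array([fe_model(p) for p in design])
def basis(p):
    return np.array([1, p[0], p[1], p[0]**2, p[1]**2, p[0]*p[1]])
X = np.array([basis(p) for p in design])
coef, *_ = np.linalg.lstsq(X, Y, rcond=None)   # (6, 2) surrogate coefficients

def surrogate(p):
    return basis(p) @ coef

# 2) Monte Carlo: sample responses from their assumed distribution.
resp_mean, resp_std = fe_model([1.0, 1.0]), np.array([0.02, 0.05])
samples = rng.normal(resp_mean, resp_std, size=(500, 2))

# 3) One cheap deterministic inverse problem per sample, on the surrogate.
estimates = np.array([
    least_squares(lambda p, y=y: surrogate(p) - y, x0=[1.0, 1.0],
                  bounds=([0.5, 0.5], [1.5, 1.5])).x
    for y in samples
])

# 4) Statistics of the identified parameters.
print("parameter means:", estimates.mean(axis=0))
print("parameter std devs:", estimates.std(axis=0))
```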
Abstract:
Penguin colonies represent some of the most concentrated sources of ammonia emissions to the atmosphere in the world. The ammonia emitted into the atmosphere can have a large influence on the nitrogen cycling of ecosystems near the colonies. However, despite the ecological importance of the emissions, no measurements of ammonia emissions from penguin colonies have been made. The objective of this work was to determine the ammonia emission rate of a penguin colony using inverse-dispersion modelling and gradient methods. We measured meteorological variables and mean atmospheric concentrations of ammonia at seven locations near a colony of Adélie penguins in Antarctica to provide input data for inverse-dispersion modelling. Three different atmospheric dispersion models (ADMS, LADD and a Lagrangian stochastic model) were used to provide a robust emission estimate. The Lagrangian stochastic model was applied both in ‘forwards’ and ‘backwards’ mode to compare the difference between the two approaches. In addition, the aerodynamic gradient method was applied using vertical profiles of mean ammonia concentrations measured near the centre of the colony. The emission estimates derived from the simulations of the three dispersion models and the aerodynamic gradient method agreed quite well, giving a mean emission of 1.1 g ammonia per breeding pair per day (95% confidence interval: 0.4–2.5 g ammonia per breeding pair per day). This emission rate represents a volatilisation of 1.9% of the estimated nitrogen excretion of the penguins, which agrees well with that estimated from a temperature-dependent bioenergetics model. We found that, in this study, the Lagrangian stochastic model seemed to give more reliable emission estimates in ‘forwards’ mode than in ‘backwards’ mode due to the assumptions made.
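As a minimal illustration of the aerodynamic gradient method mentioned above, in its simplest neutral-stratification form (the heights, concentrations, friction velocity and the neglect of Monin-Obukhov stability corrections are all assumptions for illustration):

```python
import numpy as np

VON_KARMAN = 0.4

def aerodynamic_gradient_flux(c1, c2, z1, z2, u_star):
    """Surface flux (positive = emission) from mean concentrations c1, c2
    (ug m^-3) measured at heights z1 < z2 (m), assuming neutral
    stratification so the stability correction terms can be dropped:

        F = -k * u_star * (c2 - c1) / ln(z2 / z1)
    """
    return -VON_KARMAN * u_star * (c2 - c1) / np.log(z2 / z1)

# Hypothetical profile over the colony: concentration decreasing with
# height indicates an upward (emission) flux.
flux = aerodynamic_gradient_flux(c1=45.0, c2=30.0, z1=0.5, z2=2.0,
                                 u_star=0.3)
print(f"NH3 flux: {flux:.2f} ug m^-2 s^-1")
```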
Abstract:
An inverse optimization strategy was developed to determine single-crystal properties from experimental results on the mechanical behavior of polycrystals. The polycrystal behavior was obtained by means of the finite element simulation of a representative volume element of the microstructure, in which the dominant slip and twinning systems were included in the constitutive equation of each grain. The inverse problem was solved by means of the Levenberg-Marquardt method, which provided an excellent fit to the experimental results. The iterative optimization process followed a hierarchical scheme in which simple representative volume elements were used initially, followed by more realistic ones to reach the final optimum solution, leading to important reductions in computing time. The new strategy was applied to identify the initial and saturation critical resolved shear stresses and the hardening modulus of the active slip systems and of extension twinning in a textured AZ31 Mg alloy. The results were in general agreement with the data in the literature but also showed some differences. These were partially explained by the higher accuracy of the new optimization strategy, but it was also shown that the number of independent experimental stress-strain curves used as input is critical for reaching an accurate solution to the inverse optimization problem. It was concluded that at least three independent stress-strain curves are necessary to determine the single-crystal behavior from polycrystal tests in the case of highly textured Mg alloys.
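A hedged sketch of a Levenberg-Marquardt identification of the three quantities named above (initial CRSS, saturation CRSS, hardening modulus). A Voce-type hardening law stands in for the full crystal-plasticity FE model, and the synthetic "experimental" curves and all numbers are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def voce_stress(params, strain):
    """Voce-type hardening law, a stand-in for the constitutive model.
    params = (tau0, tau_s, theta0): initial CRSS, saturation CRSS and
    initial hardening modulus (all values here are illustrative)."""
    tau0, tau_s, theta0 = params
    return tau_s - (tau_s - tau0) * np.exp(-theta0 * strain / (tau_s - tau0))

# Three synthetic "experimental" stress-strain curves: the abstract's
# minimum number for an accurate solution of the inverse problem.
strain = np.linspace(0.0, 0.12, 50)
true = np.array([35.0, 110.0, 1500.0])       # MPa, MPa, MPa (assumed)
curves = [voce_stress(true, strain) + rng.normal(scale=1.0, size=strain.size)
          for _ in range(3)]

def residuals(params):
    # Stack residuals of all curves so the LM fit sees them simultaneously.
    return np.concatenate([voce_stress(params, strain) - c for c in curves])

fit = least_squares(residuals, x0=[20.0, 80.0, 1000.0], method="lm")
print("identified (tau0, tau_s, theta0):", fit.x)
```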
Abstract:
The CENTURY soil organic matter model was adapted to the modular format of DSSAT (Decision Support System for Agrotechnology Transfer) in order to better simulate the dynamics of soil organic nutrient processes (Gijsman et al., 2002). The CENTURY model divides soil organic carbon (SOC) into three hypothetical pools: microbial or active material (SOC1), intermediate material (SOC2) and largely inert, stable material (SOC3) (Jones et al., 2003). At the beginning of a simulation, the CENTURY model needs a value of SOC3 for each soil layer, which can be estimated by the model (based on soil texture and management history) or given as an input. The model then assigns about 5% and 95% of the remaining SOC to SOC1 and SOC2, respectively. The model's performance when simulating SOC and nitrogen (N) dynamics depends strongly on this initialization process. The common methods used to initialize the SOC pools (e.g. Basso et al., 2011) deal mostly with carbon (C) mineralization processes and less with N. The dynamics of SOM, SOC and soil organic N are linked in the CENTURY-DSSAT model through the C/N ratio of the decomposing material, which determines either mineralization or immobilization of N (Gijsman et al., 2002). The aim of this study was to evaluate an alternative method for initializing the SOC pools in the DSSAT-CENTURY model from field measurements of apparent soil N mineralization (Napmin), using automatic inverse calibration (simulated annealing). The results were compared with those obtained by the iterative initialization procedure developed by Basso et al. (2011).
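A hedged sketch of the inverse-calibration idea: a toy first-order decomposition model stands in for DSSAT-CENTURY, and an annealing-style optimizer (scipy's dual_annealing) searches for the SOC3 fraction that best reproduces measured Napmin. All rate constants, N-release ratios and "measurements" below are invented for illustration:

```python
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(4)

# Toy stand-in for DSSAT-CENTURY: first-order decomposition of the three
# SOC pools, each releasing mineral N in proportion to its carbon turnover.
TOTAL_SOC = 40_000.0                      # kg C / ha in the layer (assumed)
K = np.array([0.20, 0.02, 0.0005])        # yearly turnover of SOC1-3 (assumed)
N_RELEASE = np.array([0.12, 0.09, 0.08])  # kg N mineralized per kg C lost

def simulated_napmin(soc3_frac, years=5):
    """Cumulative apparent N mineralization for a given SOC3 fraction;
    the remaining SOC is split 5% / 95% into SOC1 / SOC2 as in CENTURY."""
    soc3 = soc3_frac * TOTAL_SOC
    rest = TOTAL_SOC - soc3
    pools = np.array([0.05 * rest, 0.95 * rest, soc3])
    t = np.arange(1, years + 1)[:, None]
    c_lost = pools * (1 - np.exp(-K * t))          # (years, 3)
    return (c_lost * N_RELEASE).sum(axis=1)        # kg N / ha, cumulative

# Hypothetical field measurements of Napmin over five years.
napmin_obs = simulated_napmin(0.55) + rng.normal(scale=3.0, size=5)

def cost(x):
    return np.sum((simulated_napmin(x[0]) - napmin_obs) ** 2)

# Inverse calibration of the SOC3 fraction by simulated annealing.
result = dual_annealing(cost, bounds=[(0.3, 0.9)], seed=42)
print("identified SOC3 fraction:", result.x[0])
```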