986 results for Inverse methods


Relevance: 100.00%

Abstract:

The weak-constraint inverse for nonlinear dynamical models is discussed and derived in terms of a probabilistic formulation. The well-known result that, for Gaussian error statistics, the minimum of the weak-constraint inverse equals the maximum-likelihood estimate is rederived. Several methods based on ensemble statistics that can be used to find the smoother (as opposed to the filter) solution are then introduced and compared to traditional methods. A strong point of the new methods is that they avoid the integration of adjoint equations, which is a complex task for real oceanographic or atmospheric applications. They also avoid iterative searches in a Hilbert space, and error estimates can be obtained without much additional computational effort. The feasibility of the new methods is illustrated in a two-layer quasigeostrophic model.
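The ensemble-statistics idea — estimating covariances from a cloud of model states rather than integrating adjoint equations — can be sketched with a stochastic ensemble analysis step. Everything below (state size, observation operator, error levels) is illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: state dimension n, ensemble size N, one observed component.
n, N = 3, 50
ensemble = rng.normal(0.0, 1.0, size=(n, N))   # forecast ensemble (columns = members)
H = np.array([[1.0, 0.0, 0.0]])                # observation operator
obs = np.array([0.8])                          # observed value
obs_err = 0.2                                  # observation error std dev

# Covariances are estimated from the spread of the members, so no
# adjoint model is ever integrated.
anom = ensemble - ensemble.mean(axis=1, keepdims=True)
P = anom @ anom.T / (N - 1)                    # sample forecast covariance

# Kalman gain built from the ensemble covariance.
S = H @ P @ H.T + obs_err**2 * np.eye(1)
K = (P @ H.T) / S                              # S is 1x1 here

# Update every member against a perturbed observation; the ensemble
# spread afterwards provides an error estimate essentially for free.
perturbed = obs[:, None] + obs_err * rng.normal(size=(1, N))
analysis = ensemble + K @ (perturbed - H @ ensemble)

print(analysis.mean(axis=1))
```

In the smoother setting the same update is applied to states at earlier times through their ensemble cross-covariances with the observed quantities, which is what distinguishes it from the filter.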


Relevance: 70.00%

Abstract:

Inverse methods are widely used in various fields of atmospheric science. However, such methods are not commonly used within the boundary-layer community, where robust observations of surface fluxes are a particular concern. We present a new technique for deriving surface sensible heat fluxes from boundary-layer turbulence observations using an inverse method. Doppler lidar observations of vertical velocity variance are combined with two well-known mixed-layer scaling forward models for a convective boundary layer (CBL). The inverse method is validated using large-eddy simulations of a CBL with increasing wind speed. The majority of the estimated heat fluxes agree within error with the prescribed heat flux across all wind speeds tested. The method is then applied to Doppler lidar data from the Chilbolton Observatory, UK. Heat fluxes are compared with those from a mast-mounted sonic anemometer. Errors in estimated heat fluxes are on average 18%, an improvement on previous techniques. However, a significant negative bias is observed (on average −63%) that is more pronounced in the morning. Results improve for the fully developed CBL later in the day, which suggests that the bias is largely related to the choice of forward model, which is kept deliberately simple for this study. Overall, the inverse method provides reasonable flux estimates for the simple case of a CBL. The results demonstrate that this method has promise for deriving surface fluxes from ground-based remote sensing. Extension of the method is relatively straightforward and could include more complex forward models or other measurements.
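The structure of such an inverse method can be seen in a toy retrieval: a mixed-layer scaling profile of vertical-velocity variance (a Lenschow-type form is assumed here; the paper's actual forward models may differ) is fit to a noisy variance profile, and the surface sensible heat flux is recovered from the fitted convective velocity scale w*:

```python
import numpy as np

# Assumed mixed-layer scaling forward model (Lenschow-type profile):
#   sigma_w^2 / w*^2 = 1.8 (z/zi)^(2/3) (1 - 0.8 z/zi)^2
def forward_variance(wstar, z, zi):
    zn = z / zi
    return wstar**2 * 1.8 * zn**(2.0 / 3.0) * (1.0 - 0.8 * zn)**2

g, theta0, rho_cp = 9.81, 300.0, 1.2 * 1004.0   # illustrative constants
zi = 1000.0                                      # boundary-layer depth [m]
z = np.linspace(100.0, 800.0, 15)                # lidar range gates [m]

# Synthetic "observations" generated from a known heat flux, plus noise.
H_true = 150.0                                   # W m^-2
wstar_true = (g / theta0 * H_true / rho_cp * zi)**(1.0 / 3.0)
rng = np.random.default_rng(1)
obs = forward_variance(wstar_true, z, zi) + rng.normal(0.0, 0.02, z.size)

# Inverse step: the model is linear in w*^2, so the least-squares fit of
# the observed variance profile reduces to a one-parameter regression.
basis = 1.8 * (z / zi)**(2.0 / 3.0) * (1.0 - 0.8 * z / zi)**2
wstar2_est = np.dot(basis, obs) / np.dot(basis, basis)
H_est = theta0 * rho_cp * max(wstar2_est, 0.0)**1.5 / (g * zi)

print(H_est)
```

With a more complex forward model the fit becomes a general nonlinear least-squares problem, but the pattern — forward model plus misfit minimization — is the same.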

Relevance: 60.00%

Abstract:

A large proportion of our knowledge about the surfaces of atmosphereless solar-system bodies is obtained through remote-sensing measurements, carried out either as ground-based telescopic observations or as space-based observations from orbiting spacecraft. In both cases, the measurement geometry normally varies during the observations due to the orbital motion of the target body, the spacecraft, and so on. As a result, the data are acquired over a variety of viewing and illumination angles. Surfaces of planetary bodies are usually covered with a layer of loose, broken-up rock material called the regolith, whose physical properties affect the directional dependence of remotely sensed measurements. Understanding the processes behind this alteration is of utmost importance for the correct interpretation of remotely sensed data. In the thesis, the multi-angular effects that the physical properties of the regolith have on remote-sensing measurements are studied in two regimes of electromagnetic radiation, visible to near-infrared and soft X-rays. These effects are here termed generally the regolith effects in remote sensing. Although the physical mechanisms that are important in these regions are largely different, notable similarities arise in the methodology used to study the regolith effects, including the characterization of the regolith both in experimental studies and in numerical simulations. Several novel experimental setups were constructed for the thesis. Alongside the experimental work, theoretical modelling was carried out, and results from both approaches are presented. Modelling of the directional behaviour of light scattered from a regolith is utilized to obtain shape and spin-state information for several asteroids from telescopic observations and to assess the surface roughness and single-scattering properties of lunar maria from spacecraft observations.
One of the main conclusions is that the azimuthal direction is an important factor in detailed studies of planetary surfaces. In addition, even a single parameter, such as porosity, can alter the light-scattering properties of a regolith significantly. Surface roughness of the regolith is found to alter the elemental fluorescence line ratios obtained through planetary soft X-ray spectrometry; the results presented in the thesis are among the first to report this phenomenon. Regolith effects need to be taken into account in the analysis of remotely sensed data, providing opportunities for retrieving physical parameters of the surface through inverse methods.

Relevance: 60.00%

Abstract:

The search for reliable proxies of past deep ocean temperature and salinity has proved difficult, thereby limiting our ability to understand the coupling of ocean circulation and climate over glacial-interglacial timescales. Previous inferences of deep ocean temperature and salinity from sediment pore fluid oxygen isotopes and chlorinity indicate that the deep ocean density structure at the Last Glacial Maximum (LGM, approximately 20,000 years BP) was set by salinity, and that the density contrast between northern and southern sourced deep waters was markedly greater than in the modern ocean. Strong density stratification could help explain the marked contrast in carbon isotope distribution recorded in the LGM ocean relative to what we observe today, but what made the ocean's density structure so different at the LGM? How did it evolve from one state to another? Further, given the sparsity of the LGM temperature and salinity data set, what else can we learn by increasing the spatial density of proxy records?

We investigate the cause and feasibility of a strongly salinity-stratified deep ocean at the LGM, and we work to increase the amount of information we can glean about the past ocean from pore fluid profiles of oxygen isotopes and chloride. Using a coupled ocean–sea ice–ice shelf cavity model, we test whether the deep ocean density structure at the LGM can be explained by ice–ocean interactions over the Antarctic continental shelves, and show that a large part of the LGM salinity stratification can be explained by lower ocean temperature. To extract the maximum information from pore fluid profiles of oxygen isotopes and chloride, we evaluate several inverse methods for ill-posed problems and their ability to recover bottom water histories from sediment pore fluid profiles. We demonstrate that Bayesian Markov Chain Monte Carlo parameter estimation techniques enable us to robustly recover the full solution space of bottom water histories, not only at the LGM but through the most recent deglaciation and the Holocene up to the present. Finally, we evaluate a non-destructive pore fluid sampling technique, Rhizon samplers, against traditional squeezing methods and show that, despite their promise, Rhizons are unlikely to be a good sampling tool for pore fluid measurements of oxygen isotopes and chloride.
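The Bayesian MCMC idea can be sketched with a deliberately minimal stand-in problem: a random-walk Metropolis sampler recovering a single bottom-water value from noisy data. The real application would use a diffusion forward model of the sediment pore fluid column and many more parameters; only the sampling machinery is illustrated here:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in for the ill-posed problem: infer one bottom-water value m
# from noisy observations under a Gaussian likelihood and flat prior.
truth, sigma = 1.0, 0.3
data = truth + sigma * rng.normal(size=20)

def log_post(m):
    return -0.5 * np.sum((data - m)**2) / sigma**2

# Random-walk Metropolis sampler: the collected samples characterise
# the full posterior (solution space), not just a single best estimate.
samples, m = [], 0.0
lp = log_post(m)
for _ in range(5000):
    prop = m + 0.2 * rng.normal()
    lp_prop = log_post(prop)
    if np.log(rng.uniform() + 1e-300) < lp_prop - lp:
        m, lp = prop, lp_prop
    samples.append(m)

post = np.array(samples[1000:])   # discard burn-in
print(post.mean(), post.std())
```

The posterior spread is what makes the method attractive for ill-posed histories: it reports how well (or poorly) each epoch of the bottom water history is constrained, rather than a single smoothed curve.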

Relevance: 60.00%

Abstract:

In this dissertation we present a class of preconditioners based on a sparse approximation of the inverse of the coefficient matrix, for solving large sparse linear systems by iterative methods, specifically Krylov methods. For a Krylov method to be efficient, the use of preconditioners is essential. In the current context, where computers with hybrid architectures are increasingly common, there is a growing demand for parallelizable preconditioners. The approximate inverse methods described here are well suited to parallel application, since they depend only on a matrix-vector product, which is highly parallelizable. Moreover, some of the methods can also be constructed in parallel. The main idea is to present an alternative to the traditional preconditioners based on approximate LU factors, which, although robust, are difficult to parallelize.

Relevance: 60.00%

Abstract:

Geophysical inversion is the theory of transforming observational data into corresponding geophysical models. The goal of seismic inversion is not only wave-velocity models but also the fine structure and dynamic processes of the Earth's interior, extending to further parameters such as density, anisotropy, and viscosity. Inversion theory divides into linear and nonlinear branches. Over the past 40 years, linear inversion theory has matured into a complete and systematic theory with extensive practical applications, whereas many urgent problems remain in nonlinear inversion theory and practice. Based on the wave equation, this dissertation is mainly concerned with the theory of several nonlinear inversion methods: waveform inversion, traveltime inversion, and their joint inversion. The objective of gradient waveform inversion is to find a geologic model whose synthetic seismograms best fit the observed seismograms. Compared with other inverse methods, waveform inversion uses all characteristics of the waveform and has high resolution. But waveform inversion is an interface-by-interface method: an artificial parameter limit must be provided at each inversion iteration. In addition, waveform inversion tends to get stuck in local minima if the starting model is too far from the actual model. Building on the velocity scanning of traditional seismic data processing, a layer-by-layer waveform inversion method is developed in this dissertation to address these weaknesses. In wave-equation traveltime inversion (WT), the wave equation is used to calculate the traveltime and its derivative (the perturbation of traveltime with respect to velocity). Unlike traditional ray-based traveltime inversion, WT has many advantages.
No ray tracing or traveltime picking is required, no high-frequency assumption is made, and good results can be obtained even when the starting model is far from the real model. Compared with waveform inversion, however, WT has low resolution. Waveform inversion and WT thus have complementary advantages and similar algorithms, which makes their joint inversion a better inversion method. Another key point this dissertation emphasizes is how to exploit these complementary advantages fully without increasing storage requirements or the amount of computation. Numerical tests confirm the feasibility of the inversion methods presented here. For gradient waveform inversion in particular, field data acquired by our group in Wali park and the Shunyi district are inverted. Processing these real data exposes many problems for waveform inversion, chiefly matching the synthetic to the observed seismograms and cancelling noise. In conclusion, building on earlier experience, this dissertation implements waveform inversion based on both the acoustic and the elastic wave equation, traveltime inversion based on the acoustic wave equation, and traditional combined waveform-traveltime inversion. Beyond the traditional analysis of inversion theory, it offers two innovations: layer-by-layer inversion of seismic reflection data and a fast method for acoustic wave-equation joint inversion.
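The gradient waveform inversion loop can be sketched on a one-trace toy problem: a single velocity parameter is updated to minimize the waveform misfit between synthetic and observed seismograms (Ricker wavelet, finite-difference gradient, all values illustrative). The starting model must be close enough to the truth to avoid cycle skipping, the local-minimum trap noted above:

```python
import numpy as np

# Toy "seismogram": a Ricker wavelet arriving at t = d / v.
def ricker(t, f=10.0):
    a = (np.pi * f * t)**2
    return (1.0 - 2.0 * a) * np.exp(-a)

t = np.linspace(0.0, 1.0, 500)
d = 600.0                       # source-receiver distance [m]
v_true = 2000.0                 # true velocity [m/s]
obs = ricker(t - d / v_true)    # observed trace

def misfit(v):
    # Least-squares waveform misfit between synthetic and observed data.
    return 0.5 * np.sum((ricker(t - d / v) - obs)**2)

# Gradient descent with a finite-difference gradient.  The starting
# model (1900 m/s) is within half a wavelet period of the truth; a much
# worse start would land in a cycle-skipped local minimum.
v, step, dv = 1900.0, 500.0, 1.0
for _ in range(200):
    grad = (misfit(v + dv) - misfit(v - dv)) / (2.0 * dv)
    v -= step * grad

print(v)
```

In a full implementation the gradient would come from an adjoint (or, as in WT, from wave-equation traveltime sensitivities) rather than finite differences, and the model would be a velocity field instead of a scalar; the update loop is the same.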

Relevance: 60.00%

Abstract:

The primary means by which we understand the inner properties of the Earth and the distribution of mineral resources are surface geological surveys and the inversion and interpretation of geophysical/geochemical data. The purpose of seismic inversion is to extract, from seismic waves, information on subsurface structural geometry and the distribution of material properties, for use in resource prospecting and exploitation and in the study of the Earth's interior structure and dynamic processes. Although the study of seismic parameter inversion has achieved much since the 1950s, problems persist in real-data applications owing to nonlinearity and ill-posedness. Most methods used to invert geophysical parameters are iterative and depend heavily on the initial model and constraint conditions. It is difficult to obtain a believable result once factors such as environmental and equipment noise, present during seismic-wave excitation, propagation, and acquisition, are taken into account. Seismic inversion with real data is a typically nonlinear problem whose objective functions mostly have multiple minima. This makes them hard to solve with commonly used methods such as generalized-linearization and quasi-linearization inversion, which converge only locally. Global nonlinear search methods, which do not rely heavily on the initial model, are more promising, but the amount of computation they require for real-data processing is unacceptable. To address these problems, this dissertation develops a global nonlinear inversion approach that brings the Quantum Monte Carlo (QMC) method into geophysical inverse problems. QMC is an effective numerical method for studying quantum many-body systems, which are typically governed by the Schrödinger equation; it can be categorized into zero-temperature and finite-temperature methods.
This dissertation is subdivided into four parts. In the first, we briefly review the theory of the QMC method, identify its connections with geophysical nonlinear inversion, and give a flow chart of the algorithm. In the second part, we apply four QMC inverse methods to 1-D wave-equation impedance inversion and compare their results in terms of convergence rate and accuracy. The feasibility, stability, and noise tolerance of the algorithms are also discussed in this chapter. Numerical results demonstrate that geophysical nonlinear inversion and other nonlinear optimization problems can be solved by means of the QMC method. They also show that Green's function Monte Carlo (GFMC) and diffusion Monte Carlo (DMC) are more applicable to real data than Path Integral Monte Carlo (PIMC) and Variational Monte Carlo (VMC). The third part provides parallel versions of the serial QMC algorithms, applies them to 2-D acoustic velocity inversion and real seismic data processing, and further discusses their globality and noise tolerance. The inverted results show the robustness of these algorithms, making them feasible for 2-D inversion and real data processing; the parallel inversion algorithms of this chapter are also applicable to other optimization problems. Finally, some useful conclusions are drawn in the last section. The analysis and comparison of the results indicate that bringing QMC into geophysical inversion is successful: QMC is a nonlinear inversion method that offers stability, efficiency, and noise tolerance. Its most appealing property is that it does not rely heavily on the initial model and suits nonlinear, multi-minimum geophysical inverse problems. The method can also be used in other fields involving nonlinear optimization.
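The global, initial-model-insensitive behaviour described above can be illustrated with a classical finite-temperature Metropolis search on a multi-minimum toy misfit. This is a plain simulated-annealing-style sketch in the spirit of the finite-temperature methods, not an actual GFMC/DMC implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Multi-minimum objective standing in for a seismic misfit function:
# global minimum near x = 3*pi/5 + pi ~ 1.885, local minima elsewhere.
def misfit(x):
    return (x - 2.0)**2 + 2.0 * np.sin(5.0 * x)**2

# Finite-temperature Metropolis search with slow cooling.  At high
# temperature, uphill moves are accepted, so the walker can escape
# local minima regardless of the (deliberately poor) starting model.
x, best = -4.0, -4.0
for k in range(20000):
    T = 2.0 * 0.9995**k                       # cooling schedule
    prop = x + 0.5 * rng.normal()             # random-walk proposal
    if np.log(rng.uniform() + 1e-300) < (misfit(x) - misfit(prop)) / T:
        x = prop
        if misfit(x) < misfit(best):
            best = x

print(best, misfit(best))
```

The QMC methods in the dissertation replace this classical acceptance rule with quantum-inspired projection or sampling operators, but the key practical property illustrated here — escaping local minima without a good initial model — is the same.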

Relevance: 60.00%

Abstract:

The dynamic prediction of complex reservoir development is one of the important research topics in the dynamic analysis of oil and gas development. As development proceeds, the permeabilities and porosities of reservoirs, and the permeability of a block reservoir at its boundaries, change dynamically. Tracking these dynamic changes and determining the boundary permeability of a block reservoir is an important practical problem. The key to the dynamic prediction of complex reservoir development is the inversion of permeability and porosity, and realizing this inversion first requires fast forward and inverse methods for 3-D reservoir simulation. Although inversion has been widely applied in exploration and well logging, it has not been applied to 3-D reservoir simulation, so the study of fast forward and inverse methods for 3-D reservoir simulation is a cutting-edge problem of real practical significance and application value. In this dissertation, the 2-D and 3-D equations of fluid flow in porous media are discretized by finite differences, the inner boundary conditions are satisfied through Peaceman's equations, and a successive over-relaxation (SOR) iteration for the 3-D flow equations is given together with a dimensional analysis. Several commonly used equation-solving methods are compared and their convergence and convergence rates analyzed. The 2-D alternating-direction implicit procedure is extended to an SOR iteration of the alternating-direction implicit procedure for the 3-D flow equations in porous media, which offers fast computation, small memory requirements, good adaptability to heterogeneous media, and a fast convergence rate.
A geological model of a channel-sand reservoir, whose channel cross sections are parabolic, is generated using stochastic simulation. This method honors the hard data well and is well suited to geological modeling of reservoirs containing complex boundary surfaces. To verify the reliability of the method, theoretical and numerical solutions are compared for a simplified model of the 3-D flow equations in porous media; the only difference between the two pressure curves is that the numerical solution is lower than the theoretical one at the wellbore. This proves that solving the porous-media flow equations by finite differences is reliable. As numerical examples, pressure distributions are computed for single-well and multi-well 3-D heterogeneous reservoirs; the distributions differ clearly when the permeabilities differ by more than one order of magnitude, and otherwise show no clear difference. As an application, the pressure distribution of the channel-sand reservoir is computed, showing that the spatial distribution of pressure depends strongly on the direction of permeability and is sensitive to its spatial distribution. In this dissertation, Peaceman's equations are also modified to handle vertical and horizontal wells simultaneously. A 3-D layered reservoir containing both vertical and horizontal wells is solved iteratively, and for a channel-sand reservoir with both well types a 3-D transient heterogeneous flow equation is discretized and the spatial pressure distribution computed iteratively. The results accord with expectations, confirming that the modification of Peaceman's equation is correct.
This solves the problem for spaces containing both vertical and horizontal wells. The dissertation then studies three upscaling methods on nonuniform grids: a permeability-integration method, a 2-D flow-rate method, and a 3-D flow-rate method. All greatly increase computing speed; the 3-D flow-rate method is both faster and more accurate than the 2-D one, and the upscaled solutions closely approximate those obtained on fine grids. Four fast adaptive nonuniform-grid upscaling methods for the 3-D flow equations in porous media are then proposed and applied to 3-D heterogeneous and channel-sand reservoirs. The adaptive solutions closely match fine-grid solutions in regions of anomalous permeability or porosity and coarse-grid solutions elsewhere, while running about 100 times faster than the fine-grid method. Formulas for the sensitivity coefficients are derived from initial-boundary-value problems of the flow equations using Green's reciprocity principle. The sensitivity coefficients of wellbore pressure with respect to permeability are given by Peaceman's equation, calculated by a numerical method for the 3-D transient anisotropic flow equation in porous media, and verified against the direct method; the excellent agreement demonstrates the feasibility of the approach.
Calculated examples are also given for a 3-D reservoir, a channel-sand reservoir, and a 3-D multi-well reservoir. The results indicate that near the wellbore the sensitivity coefficients of both permeability and porosity are large, but those of porosity are much smaller than those of permeability, so the permeability sensitivities matter far more for the inversion of reservoir parameters. Because computing the sensitivity coefficients requires two calls to the reservoir-simulation program per iteration, parameter inversion must be supported by the fast forward method. Using the sensitivity coefficients of permeability and porosity, conditioned on observed valley erosion thickness in wells (hard data), the permeabilities and porosities of a homogeneous reservoir, a reservoir homogeneous only along a certain direction, and a block reservoir are inverted by the Gauss-Newton and conjugate-gradient methods. The inverted values closely approximate the true permeability and porosity, and the conjugate-gradient method converges much faster than the Gauss-Newton method.
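The sensitivity-coefficient-driven Gauss-Newton step can be sketched on a one-parameter toy problem: wellbore pressures from a steady radial-flow model (all constants illustrative) are inverted for permeability, with the analytic sensitivity dp/dk playing the role of the Jacobian. As in the full problem, a reasonable starting model is needed:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model: steady radial Darcy flow, wellbore pressure as a
# function of permeability k.  q_mu collects rate*viscosity terms and
# c the geometric factor ln(r_e/r_w)/(2 pi h); values are illustrative.
q_mu = np.array([1.0, 2.0, 3.0])
c = np.log(500.0 / 0.1) / (2.0 * np.pi * 10.0)

def forward(k):
    return 20.0 - q_mu * c / k            # wellbore pressures [MPa]

k_true = 0.05
obs = forward(k_true) + rng.normal(0.0, 0.01, 3)   # noisy observations

# Gauss-Newton: the sensitivity coefficients dp/dk form the Jacobian,
# and each iteration solves the normal equations for the update.
k = 0.08                                   # starting model (must be sane)
for _ in range(20):
    r = obs - forward(k)                   # residuals
    J = q_mu * c / k**2                    # analytic sensitivity dp/dk
    k = k + np.dot(J, r) / np.dot(J, J)    # normal-equations update

print(k)
```

In the dissertation's setting the Jacobian entries come from two reservoir-simulation runs per iteration (hence the need for the fast forward method), but the update formula is the same normal-equations step shown here.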

Relevance: 60.00%

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates, consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuel combustion and cement production (EFF) are based on energy statistics and cement production data, respectively, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands.
All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2004–2013), EFF was 8.9 ± 0.4 GtC yr−1, ELUC 0.9 ± 0.5 GtC yr−1, GATM 4.3 ± 0.1 GtC yr−1, SOCEAN 2.6 ± 0.5 GtC yr−1, and SLAND 2.9 ± 0.8 GtC yr−1. For the year 2013 alone, EFF grew to 9.9 ± 0.5 GtC yr−1, 2.3% above 2012, continuing the growth trend in these emissions, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 5.4 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 2.5 ± 0.9 GtC yr−1. GATM was high in 2013, reflecting a steady increase in EFF and smaller and opposite changes between SOCEAN and SLAND compared to the past decade (2004–2013). The global atmospheric CO2 concentration reached 395.31 ± 0.10 ppm averaged over 2013. We estimate that EFF will increase by 2.5% (1.3–3.5%) to 10.1 ± 0.6 GtC in 2014 (37.0 ± 2.2 GtCO2 yr−1), 65% above emissions in 1990, based on projections of world gross domestic product and recent changes in the carbon intensity of the global economy. From this projection of EFF and assumed constant ELUC for 2014, cumulative emissions of CO2 will reach about 545 ± 55 GtC (2000 ± 200 GtCO2) for 1870–2014, about 75% from EFF and 25% from ELUC. This paper documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this living data set (Le Quéré et al., 2013, 2014). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2014).
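The residual land-sink bookkeeping reduces to simple budget arithmetic; using the 2013 values quoted above (GtC per year):

```python
# Budget closure: the land sink is the residual of the other terms,
# S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN (2013 values from the text).
E_FF, E_LUC = 9.9, 0.9
G_ATM, S_OCEAN = 5.4, 2.9
S_LAND = E_FF + E_LUC - G_ATM - S_OCEAN
print(S_LAND)   # 2.5 GtC/yr, matching the reported value
```

Because SLAND is defined as this residual, it inherits the combined uncertainty of the four measured or modeled terms, which is why its quoted error (±0.9) is the largest in the budget.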

Relevance: 60.00%

Abstract:

Accurate assessment of anthropogenic carbon dioxide (CO2) emissions and their redistribution among the atmosphere, ocean, and terrestrial biosphere is important to better understand the global carbon cycle, support the development of climate policies, and project future climate change. Here we describe data sets and a methodology to quantify all major components of the global carbon budget, including their uncertainties, based on the combination of a range of data, algorithms, statistics, and model estimates and their interpretation by a broad scientific community. We discuss changes compared to previous estimates as well as consistency within and among components, alongside methodology and data limitations. CO2 emissions from fossil fuels and industry (EFF) are based on energy statistics and cement production data, while emissions from land-use change (ELUC), mainly deforestation, are based on combined evidence from land-cover-change data, fire activity associated with deforestation, and models. The global atmospheric CO2 concentration is measured directly and its rate of growth (GATM) is computed from the annual changes in concentration. The mean ocean CO2 sink (SOCEAN) is based on observations from the 1990s, while the annual anomalies and trends are estimated with ocean models. The variability in SOCEAN is evaluated with data products based on surveys of ocean CO2 measurements. The global residual terrestrial CO2 sink (SLAND) is estimated by the difference of the other terms of the global carbon budget and compared to results of independent dynamic global vegetation models forced by observed climate, CO2, and land-cover change (some including nitrogen–carbon interactions). We compare the mean land and ocean fluxes and their variability to estimates from three atmospheric inverse methods for three broad latitude bands. 
All uncertainties are reported as ±1σ, reflecting the current capacity to characterise the annual estimates of each component of the global carbon budget. For the last decade available (2005–2014), EFF was 9.0 ± 0.5 GtC yr−1, ELUC was 0.9 ± 0.5 GtC yr−1, GATM was 4.4 ± 0.1 GtC yr−1, SOCEAN was 2.6 ± 0.5 GtC yr−1, and SLAND was 3.0 ± 0.8 GtC yr−1. For the year 2014 alone, EFF grew to 9.8 ± 0.5 GtC yr−1, 0.6 % above 2013, continuing the growth trend in these emissions, albeit at a slower rate compared to the average growth of 2.2 % yr−1 that took place during 2005–2014. Also, for 2014, ELUC was 1.1 ± 0.5 GtC yr−1, GATM was 3.9 ± 0.2 GtC yr−1, SOCEAN was 2.9 ± 0.5 GtC yr−1, and SLAND was 4.1 ± 0.9 GtC yr−1. GATM was lower in 2014 compared to the past decade (2005–2014), reflecting a larger SLAND for that year. The global atmospheric CO2 concentration reached 397.15 ± 0.10 ppm averaged over 2014. For 2015, preliminary data indicate that the growth in EFF will be near or slightly below zero, with a projection of −0.6 [range of −1.6 to +0.5] %, based on national emissions projections for China and the USA, and projections of gross domestic product corrected for recent changes in the carbon intensity of the global economy for the rest of the world. From this projection of EFF and assumed constant ELUC for 2015, cumulative emissions of CO2 will reach about 555 ± 55 GtC (2035 ± 205 GtCO2) for 1870–2015, about 75 % from EFF and 25 % from ELUC. This living data update documents changes in the methods and data sets used in this new carbon budget compared with previous publications of this data set (Le Quéré et al., 2015, 2014, 2013). All observations presented here can be downloaded from the Carbon Dioxide Information Analysis Center (doi:10.3334/CDIAC/GCP_2015).

Relevance: 60.00%

Abstract:

Although estimation of turbulent transport parameters using inverse methods is not new, there is little evaluation of the method in the literature. Here, it is shown that extended observation of the broad-scale hydrography by Argo provides a path to improved estimates of regional turbulent transport rates. Results from a 20-year ocean state estimate produced with the ECCO v4 non-linear inverse modeling framework provide supporting evidence. Turbulent transport parameter maps are estimated under the constraints of fitting the extensive collection of Argo profiles collected through 2011. The adjusted parameters dramatically reduce misfits to in situ profiles as compared with earlier ECCO solutions. They also yield a clear reduction in the model drift away from observations over multi-century simulations, both for assimilated variables (temperature and salinity) and independent variables (bio-geochemical tracers). Despite the minimal constraints imposed specifically on the estimated parameters, their geography is physically plausible and exhibits close connections with the upper-ocean stratification as observed by Argo. The estimated parameter adjustments furthermore have first-order impacts on upper-ocean stratification and mixed layer depths over 20 years. These results identify the constraint of fitting Argo profiles as an effective observational basis for regional turbulent transport rates. Uncertainties and further improvements of the method are discussed.

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Aging societies suffer from an increasing incidence of bone fractures. Bone strength depends on the amount of mineral measured by clinical densitometry, but also on the micromechanical properties of the bone hierarchical organization. A good understanding has been reached for elastic properties on several length scales, but up to now there is a lack of reliable postyield data on the lower length scales. In order to be able to describe the behavior of bone at the microscale, an anisotropic elastic-viscoplastic damage model was developed using an eccentric generalized Hill criterion and nonlinear isotropic hardening. The model was implemented as a user subroutine in Abaqus and verified using single element tests. A FE simulation of microindentation in lamellar bone was finally performed, showing that the new constitutive model can capture the main characteristics of the indentation response of bone. As the generalized Hill criterion is limited to elliptical and cylindrical yield surfaces and the correct shape for bone is not known, a new yield surface was developed that takes any convex quadratic shape. The main advantage is that in the case of material identification the shape of the yield surface does not have to be anticipated but a minimization results in the optimal shape among all convex quadrics. The generality of the formulation was demonstrated by showing its degeneration to classical yield surfaces. Also, existing yield criteria for bone at multiple length scales were converted to the quadric formulation. Then, a computational study to determine the influence of yield surface shape and damage on the indentation response of bone using spherical and conical tips was performed. The constitutive model was adapted to the quadric criterion and yield surface shape and critical damage were varied. They were shown to have a major impact on the indentation curves.
Their influence on indentation modulus, hardness, their ratio, as well as the elastic to total work ratio, was found to be very well described by multilinear regressions for both tip shapes. For conical tips, indentation depth was not a significant factor, while for spherical tips damage was insignificant. All inverse methods based on microindentation suffer from a lack of uniqueness of the found material properties in the case of nonlinear material behavior. Therefore, monotonic and cyclic micropillar compression tests in a scanning electron microscope, allowing a straightforward interpretation and complemented by microindentation and macroscopic uniaxial compression tests, were performed on dry ovine bone to identify modulus, yield stress, plastic deformation, damage accumulation and failure mechanisms. While the elastic properties were highly consistent, the postyield deformation and failure mechanisms differed between the two length scales. A majority of the micropillars showed a ductile behavior with strain hardening until failure by localization in a slip plane, while the macroscopic samples failed in a quasi-brittle fashion with microcracks coalescing into macroscopic failure surfaces. In agreement with a proposed rheological model, these experiments illustrate a transition from a ductile mechanical behavior of bone at the microscale to a quasi-brittle response driven by the growth of preexisting cracks along interfaces or in the vicinity of pores at the macroscale. Subsequently, a study was undertaken to quantify the topological variability of indentations in bone and examine its relationship with mechanical properties. Indentations were performed in dry human and ovine bone in axial and transverse directions and their topography measured by AFM. Statistical shape modeling of the residual imprint made it possible to define a mean shape and describe the variability with 21 principal components related to imprint depth, surface curvature and roughness.
The indentation profile of bone was highly consistent and free of any pile up. A few of the topological parameters, in particular depth, showed significant correlations to variations in mechanical properties, but the correlations were not very strong or consistent. We could thus verify that bone is rather homogeneous in its micromechanical properties and that indentation results are not strongly influenced by small deviations from the ideal case. As the uniaxial properties measured by micropillar compression are in conflict with the current literature on bone indentation, another dissipative mechanism has to be present. The elastic-viscoplastic damage model was therefore extended to viscoelasticity. The viscoelastic properties were identified from macroscopic experiments, while the quasistatic postelastic properties were extracted from micropillar data. It was found that viscoelasticity governed by macroscale properties has very little influence on the indentation curve and results in a clear underestimation of the creep deformation. Adding viscoplasticity leads to increased creep, but hardness is still highly overestimated. It was possible to obtain a reasonable fit with experimental indentation curves for both Berkovich and spherical indentation when abandoning the assumption of shear strength being governed by an isotropy condition. These results remain to be verified by independent tests probing the micromechanical strength properties in tension and shear. In conclusion, in this thesis several tools were developed to describe the complex behavior of bone on the microscale and experiments were performed to identify its material properties. Micropillar compression highlighted a size effect in bone due to the presence of preexisting cracks and pores or interfaces like cement lines.
It was possible to get a reasonable fit between experimental indentation curves using different tips and simulations using the constitutive model and uniaxial properties measured by micropillar compression. Additional experimental work is necessary to identify the exact nature of the size effect and the mechanical role of interfaces in bone. Deciphering the micromechanical behavior of lamellar bone, its evolution with age, disease and treatment, and its failure mechanisms on several length scales will help prevent fractures in the elderly.
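The convex-quadric yield criterion described in the abstract can be written in Voigt notation as f(σ) = σᵀFσ + fᵀσ − 1, with convexity equivalent to F being positive semidefinite. A minimal numerical sketch of evaluating such a criterion follows; the von Mises-type matrix and the 100 MPa yield stress are illustrative placeholders, not values or bone-specific parameters from the thesis:

```python
import numpy as np

def yield_function(sigma, F, f):
    """Quadric yield criterion in Voigt notation: f(s) = s^T F s + f^T s - 1.
    Yielding occurs when the value reaches zero."""
    return sigma @ F @ sigma + f @ sigma - 1.0

def is_convex(F, tol=1e-12):
    """A quadric yield surface is convex iff F is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(F) >= -tol))

# Illustrative isotropic example: von Mises quadratic form in Voigt
# notation, scaled by a (made-up) 100 MPa uniaxial yield stress.
s_y = 100.0
P = np.array([[ 1.0, -0.5, -0.5, 0, 0, 0],
              [-0.5,  1.0, -0.5, 0, 0, 0],
              [-0.5, -0.5,  1.0, 0, 0, 0],
              [ 0.0,  0.0,  0.0, 3, 0, 0],
              [ 0.0,  0.0,  0.0, 0, 3, 0],
              [ 0.0,  0.0,  0.0, 0, 0, 3]])
F = P / s_y ** 2
f = np.zeros(6)  # the eccentric term is zero for this symmetric example

sigma = np.array([s_y, 0, 0, 0, 0, 0])  # uniaxial stress at the yield stress
print(is_convex(F), yield_function(sigma, F, f))  # True, ~0.0
```

In a material-identification setting, F and f become the unknowns, and constraining F to be positive semidefinite during the minimization is what guarantees an optimal shape "among all convex quadrics".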

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Much advancement has been made in recent years in field data assimilation, remote sensing and ecosystem modeling, yet our global view of phytoplankton biogeography beyond chlorophyll biomass is still a cursory taxonomic picture with vast areas of the open ocean requiring field validations. High performance liquid chromatography (HPLC) pigment data combined with inverse methods offer an advantage over many other phytoplankton quantification measures by way of providing an immediate perspective of the whole phytoplankton community in a sample as a function of chlorophyll biomass. Historically, such chemotaxonomic analysis has been conducted mainly at local spatial and temporal scales in the ocean. Here, we apply a widely tested inverse approach, CHEMTAX, to a global climatology of pigment observations from HPLC. This study marks the first systematic and objective global application of CHEMTAX, yielding a seasonal climatology comprised of ~1500 1°x1° global grid points of the major phytoplankton pigment types in the ocean characterizing cyanobacteria, haptophytes, chlorophytes, cryptophytes, dinoflagellates, and diatoms, with results validated against prior regional studies where possible. Key findings from this new global view of specific phytoplankton abundances from pigments are a) the large global proportion of marine haptophytes (comprising 32 ± 5% of total chlorophyll), whose biogeochemical functional roles are relatively unknown, and b) the contrasting spatial scales of complexity in global community structure that can be explained in part by regional oceanographic conditions. These publicly accessible results will guide future parameterizations of marine ecosystem models exploring the link between phytoplankton community structure and marine biogeochemical cycles.
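The inner unmixing step of a CHEMTAX-style analysis can be sketched as a non-negative least-squares problem: given a matrix of pigment:chl a ratios per class and an observed pigment vector, solve for the chlorophyll contribution of each class. The ratio values and the sample below are made up for illustration, and the full CHEMTAX algorithm additionally iterates on the ratio matrix itself rather than treating it as fixed:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical pigment:chl-a ratio matrix (rows: classes, cols: pigments).
pigments = ["chl_a", "fuco", "hex_fuco", "zea"]
classes = ["diatoms", "haptophytes", "cyanobacteria"]
R = np.array([
    [1.0, 0.75, 0.00, 0.00],   # diatoms: rich in fucoxanthin
    [1.0, 0.10, 0.90, 0.00],   # haptophytes: 19'-hex-fucoxanthin marker
    [1.0, 0.00, 0.00, 0.35],   # cyanobacteria: zeaxanthin marker
])

# Observed pigment concentrations for one sample (made-up values).
p = np.array([1.00, 0.40, 0.30, 0.10])

# Solve min ||R^T x - p|| subject to x >= 0:
# x gives chl a attributed to each class.
x, residual = nnls(R.T, p)
for name, chl in zip(classes, x):
    print(f"{name}: {chl:.2f} (chl a units)")
```

The non-negativity constraint is what keeps the decomposition physically interpretable as class abundances expressed in chlorophyll biomass.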

Relevância:

60.00% 60.00%

Publicador:

Resumo:

Methods of solving the neuro-electromagnetic inverse problem are examined and developed, with specific reference to the human visual cortex. The anatomy, physiology and function of the human visual system are first reviewed. Mechanisms by which the visual cortex gives rise to external electric and magnetic fields are then discussed, and the forward problem is described mathematically for the case of an isotropic, piecewise homogeneous volume conductor, and then for an anisotropic, concentric, spherical volume conductor. Methods of solving the inverse problem are reviewed, before a new technique is presented. This technique combines prior anatomical information gained from stereotaxic studies, with a probabilistic distributed-source algorithm to yield accurate, realistic inverse solutions. The solution accuracy is enhanced by using both visual evoked electric and magnetic responses simultaneously. The numerical algorithm is then modified to perform equivalent current dipole fitting and minimum norm estimation, and these three techniques are implemented on a transputer array for fast computation. Due to the linear nature of the techniques, they can be executed on up to 22 transputers with close to linear speedup. The latter part of the thesis describes the application of the inverse methods to the analysis of visual evoked electric and magnetic responses. The CIIm peak of the pattern onset evoked magnetic response is deduced to be a product of current flowing away from the surface areas 17, 18 and 19, while the pattern reversal P100m response originates in the same areas, but from oppositely directed current. Cortical retinotopy is examined using sectorial stimuli; the CI and CIm peaks of the pattern onset electric and magnetic responses are found to originate from areas V1 and V2 simultaneously, and they therefore do not conform to a simple cruciform model of primary visual cortex.
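Of the three techniques implemented, minimum norm estimation has a compact closed form: J = Lᵀ(LLᵀ + λI)⁻¹b, the smallest-norm source distribution consistent with the measurements b given a lead field L. A toy sketch with a random lead field follows; the dimensions, random seed and Tikhonov parameter λ are arbitrary choices for illustration, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lead field: 8 sensors, 50 distributed sources (underdetermined).
n_sensors, n_sources = 8, 50
L = rng.standard_normal((n_sensors, n_sources))

def minimum_norm(L, b, lam=1e-2):
    """Regularized minimum-norm estimate J = L^T (L L^T + lam I)^{-1} b:
    the smallest-norm source distribution reproducing the measurements."""
    G = L @ L.T + lam * np.eye(L.shape[0])
    return L.T @ np.linalg.solve(G, b)

# Synthetic measurement generated by a single focal source.
j_true = np.zeros(n_sources)
j_true[10] = 1.0
b = L @ j_true

j_hat = minimum_norm(L, b)
print(f"forward fit: ||L j_hat - b|| = {np.linalg.norm(L @ j_hat - b):.3f}")
```

Because the problem is underdetermined, the estimate reproduces the measurements but spreads the focal source over many locations; this smearing is exactly why the thesis combines the distributed-source approach with prior anatomical constraints.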